
University of Missouri–St. Louis, IRL @ UMSL

Open Educational Resources Collection

11-13-2018

An Introduction to Psychological Statistics

Garett C. Foster, University of Missouri–St. Louis, fostergc@umsl.edu

David Lane, Rice University, lane@rice.edu

David Scott, Rice University

Mikki Hebl, Rice University

Rudy Guerra, Rice University




Recommended Citation
Foster, Garett C.; Lane, David; Scott, David; Hebl, Mikki; Guerra, Rudy; Osherson, Dan; and Zimmer, Heidi, "An Introduction to Psychological Statistics" (2018). Open Educational Resources Collection. 4. https://irl.umsl.edu/oer/4

This Textbook is brought to you for free and open access by the Open Educational Resources at IRL @ UMSL. It has been accepted for inclusion in the Open Educational Resources Collection by an authorized administrator of IRL @ UMSL. For more information, please contact marvinh@umsl.edu.

Authors: Garett C. Foster, David Lane, David Scott, Mikki Hebl, Rudy Guerra, Dan Osherson, and Heidi Zimmer

    This textbook is available at IRL @ UMSL: https://irl.umsl.edu/oer/4

Department of Psychological Sciences, University of Missouri–St. Louis

An Introduction to Psychological Statistics

This work was created as part of the University of Missouri’s Affordable and Open Access Educational Resources Initiative (https://www.umsystem.edu/ums/aa/oer).

    The contents of this work have been adapted from the following Open Access Resources:

Online Statistics Education: A Multimedia Course of Study (http://onlinestatbook.com/). Project Leader: David M. Lane, Rice University.

    Changes to the original works were made by Dr. Garett C. Foster in the Department of Psychological Sciences to tailor the text to fit the needs of the introductory statistics course for psychology majors at the University of Missouri – St. Louis. Materials from the original sources have been combined, reorganized, and added to by the current author, and any conceptual, mathematical, or typographical errors are the responsibility of the current author.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


    Prologue: A letter to my students

    Dear Students:

    I get it.

Please believe me when I say that I completely understand, from firsthand experience, that statistics is rough. I was forced to take an introductory statistics course as part of my education, and I went into it with dread. To be honest, for that first semester, I hated statistics. I was fortunate enough to have a wonderful professor who was knowledgeable and passionate about the subject. Nevertheless, I didn’t understand what was going on, why I was required to take the course, or why any of it mattered to my major or my life.

Now, almost ten years later, I am deeply in love with statistics. Once I understood the logic behind statistics (and I promise, it is there, even if you don’t see it at first), everything became crystal clear. More importantly, it enabled me to use that same logic not only on numerical data but also in my everyday life.

We are constantly bombarded by information, and finding a way to filter that information in an objective way is crucial to surviving this onslaught with your sanity intact. This is what statistics, and the logic we use in it, enables us to do. Through the lens of statistics, we learn to find the signal hidden in the noise when it is there and to know when an apparent trend or pattern is really just randomness.

    I understand that this is a foreign language to most people, and it was for me as well. I also understand that it can quickly become esoteric, complicated, and overwhelming. I encourage you to persist. Eventually, a lightbulb will turn on, and your life will be illuminated in a way it never has before.

I say all this to communicate to you that I am on your side. I have been in your seat, and I have agonized over these same concepts. Everything in this text has been put together not only to convey formulae for manipulating numbers but also to make connections across different chapters, topics, and methods, and to demonstrate that all of it is useful and important.

    So I say again: I get it. I am on your side, and together, we will learn to do some amazing things.

    Garett C. Foster, Ph.D.


    Table of Contents

Prologue: A letter to my students

Chapter 1: Introduction
    What are statistics?
    Why do we study statistics?
    Types of Data and How to Collect Them
    Collecting Data
    Types of Research Designs
    Types of Statistical Analyses
    Mathematical Notation
    Exercises – Ch. 1
    Answers to Odd-Numbered Exercises – Ch. 1

Chapter 2: Describing Data using Distributions and Graphs
    Graphing Qualitative Variables
    Graphing Quantitative Variables
    Exercises – Ch. 2
    Answers to Odd-Numbered Exercises – Ch. 2

Chapter 3: Measures of Central Tendency and Spread
    What is Central Tendency?
    Measures of Central Tendency
    Spread and Variability
    Exercises – Ch. 3
    Answers to Odd-Numbered Exercises – Ch. 3

Chapter 4: z-scores and the Standard Normal Distribution
    Normal Distributions
    z-scores
    z-scores and the Area under the Curve
    Exercises – Ch. 4
    Answers to Odd-Numbered Exercises – Ch. 4

Chapter 5: Probability
    What is probability?
    Probability in Graphs and Distributions
    Probability: The Bigger Picture
    Exercises – Ch. 5
    Answers to Odd-Numbered Exercises – Ch. 5

Chapter 6: Sampling Distributions
    People, Samples, and Populations
    The Sampling Distribution of Sample Means
    Using Standard Error for Probability
    Sampling Distribution, Probability and Inference
    Exercises – Ch. 6
    Answers to Odd-Numbered Exercises – Ch. 6

Chapter 7: Introduction to Hypothesis Testing
    Logic and Purpose of Hypothesis Testing
    The Probability Value
    The Null Hypothesis
    The Alternative Hypothesis
    Critical values, p-values, and significance level
    Steps of the Hypothesis Testing Process
    Example: Movie Popcorn
    Effect Size
    Example: Office Temperature
    Example: Different Significance Level
    Other Considerations in Hypothesis Testing
    Exercises – Ch. 7
    Answers to Odd-Numbered Exercises – Ch. 7

Chapter 8: Introduction to t-tests
    The t-statistic
    Hypothesis Testing with t
    Confidence Intervals
    Exercises – Ch. 8
    Answers to Odd-Numbered Exercises – Ch. 8

Chapter 9: Repeated Measures
    Change and Differences
    Hypotheses of Change and Differences
    Example: Increasing Satisfaction at Work
    Example: Bad Press
    Exercises – Ch. 9
    Answers to Odd-Numbered Exercises – Ch. 9

Chapter 10: Independent Samples
    Difference of Means
    Research Questions about Independent Means
    Hypotheses and Decision Criteria
    Independent Samples t-statistic
    Standard Error and Pooled Variance
    Example: Movies and Mood
    Effect Sizes and Confidence Intervals
    Homogeneity of Variance
    Exercises – Ch. 10
    Answers to Odd-Numbered Exercises – Ch. 10

Chapter 11: Analysis of Variance
    Observing and Interpreting Variability
    Sources of Variance
    ANOVA Table
    ANOVA and Type I Error
    Hypotheses in ANOVA
    Example: Scores on Job Application Tests
    Effect Size: Variance Explained
    Post Hoc Tests
    Other ANOVA Designs
    Exercises – Ch. 11
    Answers to Odd-Numbered Exercises – Ch. 11

Chapter 12: Correlations
    Variability and Covariance
    Visualizing Relations
    Three Characteristics
    Pearson’s r
    Example: Anxiety and Depression
    Effect Size
    Correlation versus Causation
    Final Considerations
    Exercises – Ch. 12
    Answers to Odd-Numbered Exercises – Ch. 12

Chapter 13: Linear Regression
    Line of Best Fit
    Prediction
    ANOVA Table
    Hypothesis Testing in Regression
    Example: Happiness and Well-Being
    Multiple Regression and Other Extensions
    Exercises – Ch. 13
    Answers to Odd-Numbered Exercises – Ch. 13

Chapter 14: Chi-square
    Categories and Frequency Tables
    Goodness-of-Fit
    χ² Statistic
    Goodness-of-Fit Example: Pineapple on Pizza
    Contingency Tables for Two Variables
    Test for Independence
    Example: College Sports
    Exercises – Ch. 14
    Answers to Odd-Numbered Exercises – Ch. 14

Epilogue: A Brave New World


    Unit 1 – Fundamentals of Statistics

The first unit in this course will introduce you to the principles of statistics and why we study and use them in the behavioral sciences. It covers the basic terminology and notation used in statistics, as well as how the behavioral sciences think about, use, interpret, and communicate information and data. The unit concludes with a brief introduction to the concepts in probability that underlie how scientists perform data analysis. The material in this unit will serve as the building blocks for the logic and application of hypothesis testing, which is introduced in Unit 2 and comprises the rest of the material in the course.

    Chapter 1: Introduction

    This chapter provides an overview of statistics as a field of study and presents terminology that will be used throughout the course.

    What are statistics?

    Statistics include numerical facts and figures. For instance:

    • The largest earthquake measured 9.2 on the Richter scale.

    • Men are at least 10 times more likely than women to commit murder.

    • One in every 8 South Africans is HIV positive.

    • By the year 2020, there will be 15 people aged 65 and over for every new baby born.

    The study of statistics involves math and relies upon calculations of numbers. But it also relies heavily on how the numbers are chosen and how the statistics are interpreted. For example, consider the following three scenarios and the interpretations based upon the presented statistics. You will find that the numbers may be right, but the interpretation may be wrong. Try to identify a major flaw with each interpretation before we describe it.

    1) A new advertisement for Ben and Jerry's ice cream introduced in late May of last year resulted in a 30% increase in ice cream sales for the following three months. Thus, the advertisement was effective.

    A major flaw is that ice cream consumption generally increases in the months of June, July, and August regardless of advertisements. This effect is called a history effect and leads people to interpret outcomes as the result of one variable when another variable (in this case, one having to do with the passage of time) is actually responsible.

    2) The more churches in a city, the more crime there is. Thus, churches lead to crime.

A major flaw is that both the increased number of churches and the increased crime rate can be explained by larger populations. In bigger cities, there are both more churches and more crime. This problem, which we will discuss in more detail in Chapter 6, is known as the third-variable problem: a third variable can cause both outcomes, but people erroneously believe that there is a causal relationship between the two primary variables rather than recognizing that a third variable causes both.

    3) 75% more interracial marriages are occurring this year than 25 years ago. Thus, our society accepts interracial marriages.

    A major flaw is that we don't have the information that we need. What is the rate at which marriages are occurring? Suppose only 1% of marriages 25 years ago were interracial and so now 1.75% of marriages are interracial (1.75 is 75% higher than 1). But this latter number is hardly evidence suggesting the acceptability of interracial marriages. In addition, the statistic provided does not rule out the possibility that the number of interracial marriages has seen dramatic fluctuations over the years and this year is not the highest. Again, there is simply not enough information to understand fully the impact of the statistics.
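To make the arithmetic concrete, here is a quick check of those numbers in Python (the 1% baseline is the hypothetical figure from the example, not real data):

    # Hypothetical baseline from the example: 1% of marriages 25 years ago
    old_rate = 0.01

    # A 75% increase over that baseline
    new_rate = old_rate * 1.75

    print(new_rate)  # 0.0175, i.e., 1.75% of marriages -- still a small share

A large relative increase over a tiny base rate can still leave a tiny rate, which is exactly why the headline figure alone tells us so little.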

As a whole, these examples show that statistics are not only facts and figures; they are something more than that. In the broadest sense, “statistics” refers to a range of techniques and procedures for analyzing, interpreting, displaying, and making decisions based on data.

Statistics is the language of science and data. The ability to understand and communicate using statistics enables researchers from different labs, different languages, and different fields to articulate to one another exactly what they have found in their work. It is an objective, precise, and powerful tool in science and in everyday life.

What statistics are not. Many psychology students dread the idea of taking a statistics course, and more than a few have changed majors upon learning that it is a requirement. That is because many students view statistics as a math class, which it is not. While many of you will not believe this or agree with it, statistics isn’t math. Although math is a central component of it, statistics is a broader way of organizing, interpreting, and communicating information in an objective manner. Indeed, great care has been taken to eliminate as much math from this course as possible (students who do not believe this are welcome to ask the professor what matrix algebra is). Statistics is a way of viewing the reality around us that we otherwise could not see.

    Why do we study statistics? Virtually every student of the behavioral sciences takes some form of statistics class. This is because statistics is how we communicate in science. It serves as the link between a research idea and usable conclusions. Without statistics, we would be unable to interpret the massive amounts of information contained in data. Even small datasets contain hundreds – if not thousands – of numbers, each representing a specific observation we made. Without a way to organize these numbers into a more interpretable form, we would be lost, having wasted the time and money of our participants, ourselves, and the communities we serve.

    Beyond its use in science, however, there is a more personal reason to study statistics. Like most people, you probably feel that it is important to “take control of your life.” But what does this mean? Partly, it means being able to properly evaluate the data and claims that bombard you every day. If you cannot distinguish good from faulty reasoning, then you are vulnerable to manipulation and to decisions that are not in your best interest. Statistics provides tools that you need in order to react intelligently to information you hear or read. In this sense, statistics is one of the most important things that you can study.

    To be more specific, here are some claims that we have heard on several occasions. (We are not saying that each one of these claims is true!)

    • 4 out of 5 dentists recommend Dentine.

    • Almost 85% of lung cancers in men and 45% in women are tobacco-related.

    • Condoms are effective 94% of the time.

    • People tend to be more persuasive when they look others directly in the eye and speak loudly and quickly.

    • Women make 75 cents to every dollar a man makes when they work the same job.

    • A surprising new study shows that eating egg whites can increase one's life span.

• People predict that it is very unlikely there will ever be another baseball player with a batting average over .400.

• There is an 80% chance that in a room full of 30 people, at least two people will share the same birthday.

    • 79.48% of all statistics are made up on the spot.

    All of these claims are statistical in character. We suspect that some of them sound familiar; if not, we bet that you have heard other claims like them. Notice how diverse the examples are. They come from psychology, health, law, sports, business, etc. Indeed, data and data interpretation show up in discourse from virtually every facet of contemporary life.

Statistics are often presented in an effort to add credibility to an argument or advice. You can see this by paying attention to television advertisements. Many of the numbers thrown about in this way do not represent careful statistical analysis. They can be misleading and push you into decisions that you might find cause to regret. For these reasons, learning about statistics is a long step towards taking control of your life. (It is not, of course, the only step needed for this purpose.) The purpose of this course, beyond preparing you for a career in psychology, is to help you learn statistical essentials. It will make you into an intelligent consumer of statistical claims.

    You can take the first step right away. To be an intelligent consumer of statistics, your first reflex must be to question the statistics that you encounter. The British Prime Minister Benjamin Disraeli is quoted by Mark Twain as having said, “There are three kinds of lies -- lies, damned lies, and statistics.” This quote reminds us why it is so important to understand statistics. So let us invite you to reform your statistical habits from now on. No longer will you blindly accept numbers or findings. Instead, you will begin to think about the numbers, their sources, and most importantly, the procedures used to generate them.

The above section emphasizes defending ourselves against fraudulent claims wrapped up as statistics, but let us end on a more positive note. Just as important as detecting the deceptive use of statistics is appreciating the proper use of statistics. You must also learn to recognize statistical evidence that supports a stated conclusion. Statistics are all around you, sometimes used well, sometimes not. We must learn how to distinguish the two cases. If you do, statistics may well be the course you use most in your day-to-day life, even if you never run a formal analysis again.

    Types of Data and How to Collect Them

    In order to use statistics, we need data to analyze. Data come in an amazingly diverse range of formats, and each type gives us a unique type of information. In virtually any form, data represent the measured value of variables. A variable is simply a characteristic or feature of the thing we are interested in understanding. In psychology, we are interested in people, so we might get a group of people together and measure their levels of stress (one variable), anxiety (a second variable), and their physical health (a third variable). Once we have data on these three variables, we can use statistics to understand if and how they are related. Before we do so, we need to understand the nature of our data: what they represent and where they came from.

    Types of Variables When conducting research, experimenters often manipulate variables. For example, an experimenter might compare the effectiveness of four types of antidepressants. In this case, the variable is “type of antidepressant.” When a variable is manipulated by an experimenter, it is called an independent variable. The experiment seeks to determine the effect of the independent variable on relief from depression. In this example, relief from depression is called a dependent variable. In general, the independent variable is manipulated by the experimenter and its effects on the dependent variable are measured.

Example #1: Can blueberries slow down aging? A study indicates that antioxidants found in blueberries may slow down the process of aging. In this study, 19-month-old rats (equivalent to 60-year-old humans) were fed either their standard diet or a diet supplemented by either blueberry, strawberry, or spinach powder. After eight weeks, the rats were given memory and motor skills tests. Although all supplemented rats showed improvement, those supplemented with blueberry powder showed the most notable improvement.

1. What is the independent variable? (dietary supplement: none, blueberry, strawberry, and spinach)
2. What are the dependent variables? (memory test and motor skills test)

Example #2: Does beta-carotene protect against cancer? Beta-carotene supplements have been thought to protect against cancer. However, a study published in the Journal of the National Cancer Institute suggests this is false. The study was conducted with 39,000 women aged 45 and up. These women were randomly assigned to receive a beta-carotene supplement or a placebo, and their health was studied over their lifetime. Cancer rates for women taking the beta-carotene supplement did not differ systematically from the cancer rates of those women taking the placebo.

1. What is the independent variable? (supplements: beta-carotene or placebo)
2. What is the dependent variable? (occurrence of cancer)

    Example #3: How bright is right? An automobile manufacturer wants to know how bright brake lights should be in order to minimize the time required for the driver of a following car to realize that the car in front is stopping and to hit the brakes.

1. What is the independent variable? (brightness of brake lights)
2. What is the dependent variable? (time to hit brakes)

    Levels of an Independent Variable If an experiment compares an experimental treatment with a control treatment, then the independent variable (type of treatment) has two levels: experimental and control. If an experiment were comparing five types of diets, then the independent variable (type of diet) would have 5 levels. In general, the number of levels of an independent variable is the number of experimental conditions.

    Qualitative and Quantitative Variables An important distinction between variables is between qualitative variables and quantitative variables. Qualitative variables are those that express a qualitative attribute such as hair color, eye color, religion, favorite movie, gender, and so on. The values of a qualitative variable do not imply a numerical ordering. Values of the variable “religion” differ qualitatively; no ordering of religions is implied. Qualitative variables are sometimes referred to as categorical variables. Quantitative variables are those variables that are measured in terms of numbers. Some examples of quantitative variables are height, weight, and shoe size.

    In the study on the effect of diet discussed previously, the independent variable was type of supplement: none, strawberry, blueberry, and spinach. The variable “type of supplement” is a qualitative variable; there is nothing quantitative about it. In contrast, the dependent variable “memory test” is a quantitative variable since memory performance was measured on a quantitative scale (number correct).

    Discrete and Continuous Variables Variables such as number of children in a household are called discrete variables since the possible scores are discrete points on the scale. For example, a household could have three children or six children, but not 4.53 children. Other variables such as “time to respond to a question” are continuous variables since the scale is continuous and not made up of discrete steps. The response time could be 1.64 seconds, or it could be 1.64237123922121 seconds. Of course, the practicalities of measurement preclude most measured variables from being truly continuous.

Levels of Measurement Before we can conduct a statistical analysis, we need to measure our dependent variable. Exactly how the measurement is carried out depends on the type of variable involved in the analysis. Different types are measured differently. To measure the time taken to respond to a stimulus, you might use a stopwatch. Stopwatches are of no use, of course, when it comes to measuring someone's attitude towards a political candidate. A rating scale is more appropriate in this case (with labels like “very favorable,” “somewhat favorable,” etc.). For a dependent variable such as “favorite color,” you can simply note the color-word (like “red”) that the subject offers.

    Although procedures for measurement differ in many ways, they can be classified using a few fundamental categories. In a given category, all of the procedures share some properties that are important for you to know about. The categories are called “scale types,” or just “scales,” and are described in this section.

    Nominal scales When measuring using a nominal scale, one simply names or categorizes responses. Gender, handedness, favorite color, and religion are examples of variables measured on a nominal scale. The essential point about nominal scales is that they do not imply any ordering among the responses. For example, when classifying people according to their favorite color, there is no sense in which green is placed “ahead of” blue. Responses are merely categorized. Nominal scales embody the lowest level of measurement.

    Ordinal scales A researcher wishing to measure consumers' satisfaction with their microwave ovens might ask them to specify their feelings as either “very dissatisfied,” “somewhat dissatisfied,” “somewhat satisfied,” or “very satisfied.” The items in this scale are ordered, ranging from least to most satisfied. This is what distinguishes ordinal from nominal scales. Unlike nominal scales, ordinal scales allow comparisons of the degree to which two subjects possess the dependent variable. For example, our satisfaction ordering makes it meaningful to assert that one person is more satisfied than another with their microwave ovens. Such an assertion reflects the first person's use of a verbal label that comes later in the list than the label chosen by the second person.

    On the other hand, ordinal scales fail to capture important information that will be present in the other scales we examine. In particular, the difference between two levels of an ordinal scale cannot be assumed to be the same as the difference between two other levels. In our satisfaction scale, for example, the difference between the responses “very dissatisfied” and “somewhat dissatisfied” is probably not equivalent to the difference between “somewhat dissatisfied” and “somewhat satisfied.” Nothing in our measurement procedure allows us to determine whether the two differences reflect the same difference in psychological satisfaction. Statisticians express this point by saying that the differences between adjacent scale values do not necessarily represent equal intervals on the underlying scale giving rise to the measurements. (In our case, the underlying scale is the true feeling of satisfaction, which we are trying to measure.)

    What if the researcher had measured satisfaction by asking consumers to indicate their level of satisfaction by choosing a number from one to four? Would the difference between the responses of one and two necessarily reflect the same difference in satisfaction as the difference between the responses two and three? The answer is No. Changing the response format to numbers does not change the meaning of the scale. We still are in no position to assert that the mental step from 1 to 2 (for example) is the same as the mental step from 3 to 4.

    Interval scales Interval scales are numerical scales in which intervals have the same interpretation throughout. As an example, consider the Fahrenheit scale of temperature. The difference between 30 degrees and 40 degrees represents the same temperature difference as the difference between 80 degrees and 90 degrees. This is because each 10-degree interval has the same physical meaning (in terms of the kinetic energy of molecules).

Interval scales are not perfect, however. In particular, they do not have a true zero point even if one of the scaled values happens to carry the name “zero.” The Fahrenheit scale illustrates the issue. Zero degrees Fahrenheit does not represent the complete absence of temperature (the absence of any molecular kinetic energy). In reality, the label “zero” is applied to its temperature for quite accidental reasons connected to the history of temperature measurement. Since an interval scale has no true zero point, it does not make sense to compute ratios of temperatures. For example, there is no sense in which the ratio of 40 to 20 degrees Fahrenheit is the same as the ratio of 100 to 50 degrees; no interesting physical property is preserved across the two ratios. After all, if the “zero” label were applied at the temperature that Fahrenheit happens to label as 10 degrees, the two ratios would instead be 30 to 10 and 90 to 40, no longer the same! For this reason, it does not make sense to say that 80 degrees is “twice as hot” as 40 degrees. Such a claim would depend on an arbitrary decision about where to “start” the temperature scale, namely, what temperature to call zero (whereas the claim is intended to make a more fundamental assertion about the underlying physical reality).

Ratio scales The ratio scale of measurement is the most informative scale. It is an interval scale with the additional property that its zero position indicates the absence of the quantity being measured. You can think of a ratio scale as the three earlier scales rolled up in one. Like a nominal scale, it provides a name or category for each object (the numbers serve as labels). Like an ordinal scale, the objects are ordered (in terms of the ordering of the numbers). Like an interval scale, the same difference at two places on the scale has the same meaning. And in addition, the same ratio at two places on the scale also carries the same meaning.

    The Fahrenheit scale for temperature has an arbitrary zero point and is therefore not a ratio scale. However, zero on the Kelvin scale is absolute zero. This makes the Kelvin scale a ratio scale. For example, if one temperature is twice as high as another as measured on the Kelvin scale, then it has twice the kinetic energy of the other temperature.
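A short numerical sketch makes the contrast concrete. The Python snippet below uses the temperatures from the discussion above; the Fahrenheit-to-Kelvin conversion is the standard one:

    # Two Fahrenheit readings whose ratio appears to be 2:1
    f_high, f_low = 40.0, 20.0
    print(f_high / f_low)  # 2.0 -- but this depends on where "zero" was placed

    # Relabel zero at what Fahrenheit calls 10 degrees (as in the example above)
    print((f_high - 10) / (f_low - 10))  # 3.0 -- the "ratio" changes, so it meant nothing

    def f_to_kelvin(f):
        """Convert Fahrenheit to Kelvin, a scale with a true zero point."""
        return (f - 32) * 5 / 9 + 273.15

    # On the Kelvin scale the ratio is physically meaningful (kinetic energy)
    print(f_to_kelvin(f_high) / f_to_kelvin(f_low))  # about 1.04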

    Another example of a ratio scale is the amount of money you have in your pocket right now (25 cents, 55 cents, etc.). Money is measured on a ratio scale because, in addition to having the properties of an interval scale, it has a true zero point: if you have zero money, this implies the absence of money. Since money has a true zero point, it makes sense to say that someone with 50 cents has twice as much money as someone with 25 cents (or that Bill Gates has a million times more money than you do).

What level of measurement is used for psychological variables? Rating scales are used frequently in psychological research. For example, experimental subjects may be asked to rate their level of pain, how much they like a consumer product, their attitudes about capital punishment, or their confidence in an answer to a test question. Typically, these ratings are made on a 5-point or a 7-point scale. These scales are ordinal scales since there is no assurance that a given difference represents the same thing across the range of the scale. For example, there is no way to be sure that a treatment that reduces pain from a rated pain level of 3 to a rated pain level of 2 represents the same level of relief as a treatment that reduces pain from a rated pain level of 7 to a rated pain level of 6.

    In memory experiments, the dependent variable is often the number of items correctly recalled. What scale of measurement is this? You could reasonably argue that it is a ratio scale. First, there is a true zero point; some subjects may get no items correct at all. Moreover, a difference of one represents a difference of one item recalled across the entire scale. It is certainly valid to say that someone who recalled 12 items recalled twice as many items as someone who recalled only 6 items.

    But number-of-items recalled is a more complicated case than it appears at first. Consider the following example in which subjects are asked to remember as many items as possible from a list of 10. Assume that (a) there are 5 easy items and 5 difficult items, (b) half of the subjects are able to recall all the easy items and different numbers of difficult items, while (c) the other half of the subjects are unable to recall any of the difficult items but they do remember different numbers of easy items. Some sample data are shown below.

Subject   Easy Items    Difficult Items   Score
A         0 0 1 1 0     0 0 0 0 0         2
B         1 0 1 1 0     0 0 0 0 0         3
C         1 1 1 1 1     1 1 0 0 0         7
D         1 1 1 1 1     0 1 1 0 1         8

    Let's compare (i) the difference between Subject A's score of 2 and Subject B's score of 3 and (ii) the difference between Subject C's score of 7 and Subject D's score of 8. The former difference is a difference of one easy item; the latter difference is a difference of one difficult item. Do these two differences necessarily signify the same difference in memory? We are inclined to respond “No” to this question since only a little more memory may be needed to retain the additional easy item whereas a lot more memory may be needed to retain the additional hard item. The general point is that it is often inappropriate to consider psychological measurement scales as either interval or ratio.

    Consequences of level of measurement Why are we so interested in the type of scale that measures a dependent variable? The crux of the matter is the relationship between the variable's level of measurement and the statistics that can be meaningfully computed with that variable. For example, consider a hypothetical study in which 5 children are asked to choose their favorite color from blue, red, yellow, green, and purple. The researcher codes the results as follows:

Color     Code
Blue      1
Red       2
Yellow    3
Green     4
Purple    5

This means that if a child said her favorite color was “Red,” then the choice was coded as “2”; if the child said her favorite color was “Purple,” then the response was coded as “5”; and so forth. Consider the following hypothetical data:

Subject   Color     Code
1         Blue      1
2         Blue      1
3         Green     4
4         Green     4
5         Purple    5

    Each code is a number, so nothing prevents us from computing the average code assigned to the children. The average happens to be 3, but you can see that it would be senseless to conclude that the average favorite color is yellow (the color with a code of 3). Such nonsense arises because favorite color is a nominal scale, and taking the average of its numerical labels is like counting the number of letters in the name of a snake to see how long the beast is.
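A minimal sketch of this point in Python, using the hypothetical data above:

    from statistics import mean
    from collections import Counter

    # Numerical codes the researcher assigned to each color
    codes = {"Blue": 1, "Red": 2, "Yellow": 3, "Green": 4, "Purple": 5}
    responses = ["Blue", "Blue", "Green", "Green", "Purple"]

    # Nothing stops us from averaging the codes...
    print(mean(codes[color] for color in responses))  # 3.0 -- the code for "Yellow"

    # ...but for nominal data, category counts are the meaningful summary
    print(Counter(responses).most_common())  # [('Blue', 2), ('Green', 2), ('Purple', 1)]

The average lands on “Yellow,” a color nobody chose; counting the frequency of each category is the summary that actually respects the nominal scale.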

    Does it make sense to compute the mean of numbers measured on an ordinal scale? This is a difficult question, one that statisticians have debated for decades. The prevailing (but by no means unanimous) opinion of statisticians is that for almost all practical situations, the mean of an ordinally-measured variable is a meaningful statistic. However, there are extreme situations in which computing the mean of an ordinally-measured variable can be very misleading.

    Collecting Data We are usually interested in understanding a specific group of people. This group is known as the population of interest, or simply the population. The population is the collection of all people who have some characteristic in common; it can be as broad as “all people” if we have a very general research question about human psychology, or it can be extremely narrow, such as “all freshmen psychology majors at Midwestern public universities” if we have a specific group in mind.

Populations and samples In statistics, we often rely on a sample, that is, a small subset of a larger set of data, to draw inferences about the larger set. The larger set is known as the population from which the sample is drawn.

    Example #1: You have been hired by the National Election Commission to examine how the American people feel about the fairness of the voting procedures in the U.S. Who will you ask?

    It is not practical to ask every single American how he or she feels about the fairness of the voting procedures. Instead, we query a relatively small number of Americans, and draw inferences about the entire country from their responses. The Americans actually queried constitute our sample of the larger population of all Americans.

A sample is typically a small subset of the population. In the case of voting attitudes, we would sample a few thousand Americans drawn from the hundreds of millions that make up the country. In choosing a sample, it is therefore crucial that it not over-represent one kind of citizen at the expense of others. For example, something would be wrong with our sample if it happened to be made up entirely of Florida residents. If the sample held only Floridians, it could not be used to infer the attitudes of other Americans. The same problem would arise if the sample were composed only of Republicans. Inferences from statistics are based on the assumption that sampling is representative of the population. If the sample is not representative, sampling bias occurs. Sampling bias means that our conclusions apply only to our sample and are not generalizable to the full population.

    Example #2: We are interested in examining how many math classes have been taken on average by current graduating seniors at American colleges and universities during their four years in school. Whereas our population in the last example included all US citizens, now it involves just the graduating seniors throughout the country. This is still a large set since there are thousands of colleges and universities, each enrolling many students. (New York University, for example, enrolls 48,000 students.) It would be prohibitively costly to examine the transcript of every college senior. We therefore take a sample of college seniors and then make inferences to the entire population based on what we find. To make the sample, we might first choose some public and private colleges and universities across the United States. Then we might sample 50 students from each of these institutions. Suppose that the average number of math classes taken by the people in our sample were 3.2. Then we might speculate that 3.2 approximates the number we would find if we had the resources to examine every senior in the entire population. But we must be careful about the possibility that our sample is non-representative of the population. Perhaps we chose an overabundance of math majors, or chose too many technical institutions that have heavy math requirements. Such bad sampling makes our sample unrepresentative of the population of all seniors.

    To solidify your understanding of sampling bias, consider the following example. Try to identify the population and the sample, and then reflect on whether the sample is likely to yield the information desired.

    Example #3: A substitute teacher wants to know how students in the class did on their last test. The teacher asks the 10 students sitting in the front row to state their latest test score. He concludes from their report that the class did extremely well. What is the sample? What is the population? Can you identify any problems with choosing the sample in the way that the teacher did?

    In Example #3, the population consists of all students in the class. The sample is made up of just the 10 students sitting in the front row. The sample is not likely to be representative of the population. Those who sit in the front row tend to be more interested in the class and tend to perform higher on tests. Hence, the sample may perform at a higher level than the population.

    Example #4: A coach is interested in how many cartwheels the average college freshmen at his university can do. Eight volunteers from the freshman class step forward. After observing their performance, the coach concludes that college freshmen can do an average of 16 cartwheels in a row without stopping.

    In Example #4, the population is the class of all freshmen at the coach's university. The sample is composed of the 8 volunteers. The sample is poorly chosen because volunteers are more likely to be able to do cartwheels than the average freshman; people who can't do cartwheels probably did not volunteer! In the example, we are also not told of the gender of the volunteers. Were they all women, for example? That might affect the outcome, contributing to the non-representative nature of the sample (if the school is co-ed).

    Simple Random Sampling Researchers adopt a variety of sampling strategies. The most straightforward is simple random sampling. Such sampling requires every member of the population to have an equal chance of being selected into the sample. In addition, the selection of one member must be independent of the selection of every other member. That is, picking one member from the population must not increase or decrease the probability of picking any other member (relative to the others). In this sense, we can say that simple random sampling chooses a sample by pure chance. To check your understanding of simple random sampling, consider the following example. What is the population? What is the sample? Was the sample picked by simple random sampling? Is it biased?

    Example #5: A research scientist is interested in studying the experiences of twins raised together versus those raised apart. She obtains a list of twins from the National Twin Registry, and selects two subsets of individuals for her study. First, she chooses all those in the registry whose last name begins with Z. Then she turns to all those whose last name begins with B. Because there are so many names that start with B, however, our researcher decides to incorporate only every other name into her sample. Finally, she mails out a survey and compares characteristics of twins raised apart versus together.

In Example #5, the population consists of all twins recorded in the National Twin Registry. It is important that the researcher only make statistical generalizations to the twins on this list, not to all twins in the nation or world. That is, the National Twin Registry may not be representative of all twins. Even if inferences are limited to the Registry, a number of problems affect the sampling procedure we described. For instance, choosing only twins whose last names begin with Z does not give every individual an equal chance of being selected into the sample. Moreover, such a procedure risks over-representing ethnic groups with many surnames that begin with Z. There are other reasons why choosing just the Z's may bias the sample. Perhaps such people are more patient than average because they often find themselves at the end of the line! The same problem occurs with choosing twins whose last name begins with B. An additional problem for the B's is that the “every-other-one” procedure prevented adjacent names on the B part of the list from both being selected. Just this defect alone means the sample was not formed through simple random sampling.
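By contrast, a correct simple random sample is easy to draw once the full population list is in hand. The Python sketch below is illustrative only; the registry of 10,000 names is made up:

    import random

    # A stand-in for the full population list (e.g., every name in a registry)
    population = [f"twin_{i}" for i in range(10_000)]

    # Simple random sampling: every member has an equal chance of selection,
    # and selecting one member does not affect any other member's chance
    sample = random.sample(population, k=100)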

    Sample size matters Recall that the definition of a random sample is a sample in which every member of the population has an equal chance of being selected. This means that the sampling procedure rather than the results of the procedure define what it means for a sample to be random. Random samples, especially if the sample size is small, are not necessarily representative of the entire population. For example, if a random sample of 20 subjects were taken from a population with an equal number of males and females, there would be a nontrivial probability (0.06) that 70% or more of the sample would be female. Such a sample would not be representative, although it would be drawn randomly. Only a large sample size makes it likely that our sample is close to representative of the population. For this reason, inferential statistics take into account the sample size when generalizing results from samples to populations. In later chapters, you'll see what kinds of mathematical techniques ensure this sensitivity to sample size.
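The 0.06 figure can be checked directly. Treating each of 20 independent draws from a half-female population as a fair coin flip (a close approximation when the population is large), the probability that 14 or more of the 20 (70%) are female is a binomial tail sum:

    from math import comb

    # P(14 or more females out of 20 draws), with p = 0.5 per draw
    n = 20
    p = sum(comb(n, k) for k in range(14, n + 1)) / 2**n
    print(round(p, 3))  # 0.058 -- approximately the 0.06 quoted above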

More complex sampling Sometimes it is not feasible to build a sample using simple random sampling. To see the problem, suppose that both Dallas and Houston were competing to host the 2012 Olympics. Imagine that you are hired to assess whether most Texans prefer Houston to Dallas as the host, or the reverse. Given the impracticality of obtaining the opinion of every single Texan, you must construct a sample of the Texas population. But now notice how difficult it would be to proceed by simple random sampling. For example, how will you contact those individuals who don’t vote and don’t have a phone? Even among people you find in the telephone book, how can you identify those who have just relocated to California (and had no reason to inform you of their move)? What do you do about the fact that since the beginning of the study, an additional 4,212 people took up residence in the state of Texas? As you can see, it is sometimes very difficult to develop a truly random procedure. For this reason, other kinds of sampling techniques have been devised. We now discuss two of them.

Stratified Sampling Since simple random sampling often does not ensure a representative sample, a sampling method called stratified random sampling is sometimes used to make the sample more representative of the population. This method can be used if the population has a number of distinct “strata” or groups. In stratified sampling, you first identify members of your population who belong to each group. Then you randomly sample from each of those subgroups in such a way that the sizes of the subgroups in the sample are proportional to their sizes in the population.

    Let's take an example: Suppose you were interested in views of capital punishment at an urban university. You have the time and resources to interview 200 students. The student body is diverse with respect to age; many older people work during the day and enroll in night courses (average age is 39), while younger students generally enroll in day classes (average age of 19). It is possible that night students have different views about capital punishment than day students. If 70% of the students were day students, it makes sense to ensure that 70% of the sample consisted of day students. Thus, your sample of 200 students would consist of 140 day students and 60 night students. The proportion of day students in the sample and in the population (the entire university) would be the same. Inferences to the entire population of students at the university would therefore be more secure.
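A minimal sketch of this proportional allocation in Python, using hypothetical ID lists for 700 day students and 300 night students:

import random

day_students = [f"day_{i}" for i in range(700)]      # hypothetical day-student IDs
night_students = [f"night_{i}" for i in range(300)]  # hypothetical night-student IDs

n = 200
share = len(day_students) / (len(day_students) + len(night_students))
n_day = round(n * share)  # 140 day students
sample = random.sample(day_students, n_day) + random.sample(night_students, n - n_day)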

Convenience Sampling Not all sampling methods are perfect, and sometimes that’s okay. For example, if we are beginning research into a completely unstudied area, we may sometimes take some shortcuts to quickly gather data and get a general idea of how things work before fully investing a lot of time and money into well-designed research projects with proper sampling. This is known as convenience sampling, named for its ease of use. In limited cases, such as the one just described, convenience sampling is okay because we intend to follow up with a representative sample. Unfortunately, sometimes convenience sampling is used only because it is convenient, without the intent of improving on it in future work.

Types of Research Designs Research studies come in many forms, and, just like with the different types of data we have, different types of studies tell us different things. The choice of research design is determined by the research question and the logistics involved. Though a complete understanding of different research designs is the subject for at least one full class, if not more, a basic understanding of the principles is useful here. There are three types of research designs we will discuss: experimental, quasi-experimental, and non-experimental.

    Experimental Designs If we want to know if a change in one variable causes a change in another variable, we must use a true experiment. An experiment is defined by the use of random assignment to treatment conditions and manipulation of the independent variable. To understand what this means, let’s look at an example:

A clinical researcher wants to know if a newly developed drug is effective in treating the flu. Working with collaborators at several local hospitals, she randomly samples 40 flu patients and randomly assigns each one to one of two conditions: Group A receives the new drug and Group B receives a placebo. She measures the symptoms of all participants after 1 week to see if there is a difference in symptoms between the groups.

In the example, the independent variable is the drug treatment; we manipulate it into 2 levels: new drug or placebo. Without the researcher administering the drug (i.e., manipulating the independent variable), there would be no difference between the groups. Each person, after being randomly sampled into the research, was then randomly assigned to one of the 2 groups. That is, random sampling and random assignment are not the same thing and cannot be used interchangeably. For research to be a true experiment, random assignment must be used. For research to be representative of the population, random sampling must be used. Random sampling protects against sampling bias, and random assignment helps ensure that there are no systematic differences between the groups.

    The dependent variable in the example is flu symptoms. Barring any other intervention, we would assume that people in both groups, on average, get better at roughly the same rate. Because there are no systematic differences between the 2 groups, if the researcher does find a difference in symptoms, she can confidently attribute it to the effectiveness of the new drug.
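To make the distinction concrete, here is one way the random-assignment step of the drug example might look in Python; the patient IDs are placeholders, and we assume the 40 patients have already been randomly sampled:

import random

patients = [f"patient_{i:02d}" for i in range(1, 41)]  # the 40 sampled patients
random.shuffle(patients)     # random assignment: every ordering is equally likely
group_a = patients[:20]      # receives the new drug
group_b = patients[20:]      # receives the placebo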

Quasi-Experimental Designs Quasi-experimental research involves getting as close as possible to the conditions of a true experiment when we cannot meet all requirements. Specifically, a quasi-experiment involves manipulating the independent variable but not randomly assigning people to groups. There are several reasons quasi-experimental designs might be used. First, it may be unethical to deny potential treatment to someone if there is good reason to believe it will be effective and that the person would unduly suffer if they did not receive it. Alternatively, it may be impossible to randomly assign people to groups. Consider the following example:

    A professor wants to test out a new teaching method to see if it improves student learning. Because he is teaching two sections of the same course, he decides to teach one section the traditional way and the other section using the new method. At the end of the semester, he compares the grades on the final for each class to see if there is a difference.

In this example, the professor has manipulated his teaching method, which is the independent variable, hoping to find a difference in student performance, the dependent variable. However, because students enroll in courses, he cannot randomly assign the students to a particular group, thus precluding the use of a true experiment to answer his research question. Because of this, we cannot know for sure that there are no systematic differences between the classes other than teaching style, and we therefore cannot determine causality.

    Non-Experimental Designs Finally, non-experimental research (sometimes called correlational research) involves observing things as they occur naturally and recording our observations as data. Consider this example:

A data scientist wants to know if there is a relation between how conscientious a person is and whether that person is a good employee. She hopes to use this information to predict the job performance of future employees by measuring their personality when they are still job applicants. She randomly samples volunteer employees from several different companies, measuring their conscientiousness and having their bosses rate their performance on the job. She analyzes these data to find a relation.

Here, it is not possible to manipulate conscientiousness, so the researcher must gather data from employees as they are in order to find a relation between her variables.

Although this technique cannot establish causality, it can still be quite useful. If the relation between conscientiousness and job performance is consistent, then it doesn’t necessarily matter whether conscientiousness causes good performance or whether they are both caused by something else – she can still measure conscientiousness to predict future performance. Additionally, these studies have the benefit of reflecting reality as it actually exists, since we as researchers do not change anything.

    Types of Statistical Analyses Now that we understand the nature of our data, let’s turn to the types of statistics we can use to interpret them. There are 2 types of statistics: descriptive and inferential.

    Descriptive Statistics Descriptive statistics are numbers that are used to summarize and describe data. The word “data” refers to the information that has been collected from an experiment, a survey, an historical record, etc. (By the way, “data” is plural. One piece of information is called a “datum.”) If we are analyzing birth certificates, for example, a descriptive statistic might be the percentage of certificates issued in New York State, or the average age of the mother. Any other number we choose to compute also counts as a descriptive statistic for the data from which the statistic is computed. Several descriptive statistics are often used at one time to give a full picture of the data. Descriptive statistics are just descriptive. They do not involve generalizing beyond the data at hand. Generalizing from our data to another set of cases is the business of inferential statistics, which you'll be studying in another section. Here we focus on (mere) descriptive statistics.

    Some descriptive statistics are shown in Table 1. The table shows the average salaries for various occupations in the United States in 1999.

Salary      Occupation
$112,760    pediatricians
$106,130    dentists
$100,090    podiatrists
$76,140     physicists
$53,410     architects
$49,720     school, clinical, and counseling psychologists
$47,910     flight attendants
$39,560     elementary school teachers
$38,710     police officers
$18,980     floral designers

Table 1. Average salaries for various occupations in 1999.

    Descriptive statistics like these offer insight into American society. It is interesting to note, for example, that we pay the people who educate our children and who protect our citizens a great deal less than we pay people who take care of our feet or our teeth.

    For more descriptive statistics, consider Table 2. It shows the number of unmarried men per 100 unmarried women in U.S. Metro Areas in 1990. From this table we see that men outnumber women most in Jacksonville, NC, and women outnumber men most in Sarasota, FL. You can see that descriptive statistics can be useful if we are looking for an opposite-sex partner! (These data come from the Information Please Almanac.)

Cities with mostly men                Men per 100 Women    Cities with mostly women    Men per 100 Women
1. Jacksonville, NC                   224                  1. Sarasota, FL             66
2. Killeen-Temple, TX                 123                  2. Bradenton, FL            68
3. Fayetteville, NC                   118                  3. Altoona, PA              69
4. Brazoria, TX                       117                  4. Springfield, IL          70
5. Lawton, OK                         116                  5. Jacksonville, TN         70
6. State College, PA                  113                  6. Gadsden, AL              70
7. Clarksville-Hopkinsville, TN-KY    113                  7. Wheeling, WV             70
8. Anchorage, AK                      112                  8. Charleston, WV           71
9. Salinas-Seaside-Monterey, CA       112                  9. St. Joseph, MO           71
10. Bryan-College Station, TX         111                  10. Lynchburg, VA           71

Table 2. Number of unmarried men per 100 unmarried women in U.S. Metro Areas in 1990. NOTE: Unmarried includes never-married, widowed, and divorced persons, 15 years or older.

These descriptive statistics may make us ponder why the numbers are so disparate in these cities. One potential explanation for why there are more women than men in Sarasota, for instance, is that elderly individuals tend to move to the region and women tend to outlive men. Thus, more women might live in Sarasota than men. However, in the absence of proper data, this is only speculation.


    You probably know that descriptive statistics are central to the world of sports. Every sporting event produces numerous statistics such as the shooting percentage of players on a basketball team. For the Olympic marathon (a foot race of 26.2 miles), we possess data that cover more than a century of competition. (The first modern Olympics took place in 1896.) The following table shows the winning times for both men and women (the latter have only been allowed to compete since 1984).

    Women

    Year Winner Country Time

    1984 Joan Benoit USA 2:24:52

    1988 Rosa Mota POR 2:25:40

    1992 Valentina Yegorova UT 2:32:41

    1996 Fatuma Roba ETH 2:26:05

    2000 Naoko Takahashi JPN 2:23:14

    2004 Mizuki Noguchi JPN 2:26:20

    Men

    Year Winner Country Time

    1896 Spiridon Louis GRE 2:58:50

    1900 Michel Theato FRA 2:59:45

    1904 Thomas Hicks USA 3:28:53

    1906 Billy Sherring CAN 2:51:23

    1908 Johnny Hayes USA 2:55:18

    1912 Kenneth McArthur S. Afr. 2:36:54

    1920 Hannes Kolehmainen FIN 2:32:35

1924 Albin Stenroos FIN 2:41:22

1928 Boughra El Ouafi FRA 2:32:57

    1932 Juan Carlos Zabala ARG 2:31:36

    1936 Sohn Kee-Chung JPN 2:29:19

    1948 Delfo Cabrera ARG 2:34:51

1952 Emil Zátopek CZE 2:23:03

    1956 Alain Mimoun FRA 2:25:00

    1960 Abebe Bikila ETH 2:15:16

    1964 Abebe Bikila ETH 2:12:11

    1968 Mamo Wolde ETH 2:20:26

    1972 Frank Shorter USA 2:12:19

    1976 Waldemar Cierpinski E.Ger 2:09:55

    1980 Waldemar Cierpinski E.Ger 2:11:03

    1984 Carlos Lopes POR 2:09:21

    1988 Gelindo Bordin ITA 2:10:32

    1992 Hwang Young-Cho S. Kor 2:13:23

    1996 Josia Thugwane S. Afr. 2:12:36

2000 Gezahegne Abera ETH 2:10:10

2004 Stefano Baldini ITA 2:10:55

Table 3. Winning Olympic marathon times.

There are many descriptive statistics that we can compute from the data in the table. To gain insight into the improvement in speed over the years, let us divide the men's times into two pieces, namely, the first 13 races (up to 1952) and the second 13 (starting from 1956). The mean winning time for the first 13 races is 2 hours, 44 minutes, and 22 seconds (written 2:44:22). The mean winning time for the second 13 races is 2:13:18. This is quite a difference (over half an hour). Does this prove that the fastest men are running faster? Or is the difference just due to chance, no more than what often emerges from chance differences in performance from year to year? We can't answer this question with descriptive statistics alone. All we can affirm is that the two means are “suggestive.”
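You can verify these means with a short script. The sketch below converts the first 13 winning times from the table to seconds, averages them, and formats the result:

def to_seconds(time_string):
    hours, minutes, seconds = (int(part) for part in time_string.split(":"))
    return 3600 * hours + 60 * minutes + seconds

first_13 = ["2:58:50", "2:59:45", "3:28:53", "2:51:23", "2:55:18", "2:36:54",
            "2:32:35", "2:41:22", "2:32:57", "2:31:36", "2:29:19", "2:34:51",
            "2:23:03"]
mean = round(sum(to_seconds(t) for t in first_13) / len(first_13))
print(f"{mean // 3600}:{mean % 3600 // 60:02d}:{mean % 60:02d}")  # 2:44:22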

    Examining Table 3 leads to many other questions. We note that Takahashi (the lead female runner in 2000) would have beaten the male runner in 1956 and all male runners in the first 12 marathons. This fact leads us to ask whether the gender gap will close or remain constant. When we look at the times within each gender, we also wonder how far they will decrease (if at all) in the next century of the Olympics. Might we one day witness a sub-2 hour marathon? The study of statistics can help you make reasonable guesses about the answers to these questions.

It is also important to differentiate what we use to describe populations vs. what we use to describe samples. A population is described by a parameter; the parameter is the true value of the descriptive statistic in the population, but one that we can never know for sure. For example, the Bureau of Labor Statistics reports that the average hourly wage of chefs is $23.87. However, even if this number were computed using information from every single chef in the United States (making it a parameter), it would quickly become slightly off as one chef retires and a new chef enters the job market. Additionally, as noted above, there is virtually no way to collect data from every single person in a population. In order to understand a variable, we estimate the population parameter using a sample statistic. Here, the term “statistic” refers to the specific number we compute from the data (e.g., the average), not the field of statistics. A sample statistic is an estimate of the true population parameter, and if our sample is representative of the population, then the statistic is considered to be a good estimator of the parameter.

Even the best sample will differ somewhat from the full population; as a result, there will always be some discrepancy between the parameter and the statistic we use to estimate it. This difference is known as sampling error, and, as we will see throughout the course, understanding sampling error is the key to understanding statistics. Every observation we make about a variable, be it a full research study or observing an individual’s behavior, is incapable of being completely representative of all possibilities for that variable. Knowing where to draw the line between an unusual observation and a true difference is what statistics is all about.


    Inferential Statistics Descriptive statistics are wonderful at telling us what our data look like. However, what we often want to understand is how our data behave. What variables are related to other variables? Under what conditions will the value of a variable change? Are two groups different from each other, and if so, are people within each group different or similar? These are the questions answered by inferential statistics, and inferential statistics are how we generalize from our sample back up to our population. Units 2 and 3 are all about inferential statistics, the formal analyses and tests we run to make conclusions about our data.

For example, we will learn how to use a t statistic to determine whether people change over time when enrolled in an intervention. We will also use an F statistic to determine if we can predict future values on a variable based on current known values of a variable. There are many types of inferential statistics, each allowing us insight into a different behavior of the data we collect. This course will only touch on a small subset (or a sample) of them, but the principles we learn along the way will make it easier to learn new tests, as most inferential statistics follow the same structure and format.

    Mathematical Notation As noted above, statistics is not math. It does, however, use math as a tool. Many statistical formulas involve summing numbers. Fortunately there is a convenient notation for expressing summation. This section covers the basics of this summation notation.

    Let's say we have a variable X that represents the weights (in grams) of 4 grapes:

Grape    X
1        4.6
2        5.1
3        4.9
4        4.4

We label Grape 1's weight \(X_1\), Grape 2's weight \(X_2\), etc. The following formula means to sum up the weights of the four grapes:

\[\sum_{i=1}^{4} X_i\]

The Greek letter \(\Sigma\) indicates summation. The “\(i = 1\)” at the bottom indicates that the summation is to start with \(X_1\), and the 4 at the top indicates that the summation will end with \(X_4\). The “\(X_i\)” indicates that \(X\) is the variable to be summed as \(i\) goes from 1 to 4. Therefore,

\[\sum_{i=1}^{4} X_i = 4.6 + 5.1 + 4.9 + 4.4 = 19\]

The symbol

\[\sum_{i=1}^{3} X_i\]

indicates that only the first 3 scores are to be summed. The index variable \(i\) goes from 1 to 3.

When all the scores of a variable (such as \(X\)) are to be summed, it is often convenient to use the following abbreviated notation:

\[\sum X\]

Thus, when no values of \(i\) are shown, it means to sum all the values of \(X\).

Many formulas involve squaring numbers before they are summed. This is indicated as

\[\sum X^2 = 4.6^2 + 5.1^2 + 4.9^2 + 4.4^2 = 21.16 + 26.01 + 24.01 + 19.36 = 90.54\]

Notice that

\[\left(\sum X\right)^2 \neq \sum X^2\]

because the expression on the left means to sum up all the values of \(X\) and then square the sum (\(19^2 = 361\)), whereas the expression on the right means to square the numbers and then sum the squares (90.54, as shown).

    Some formulas involve the sum of cross products. Below are the data for variables X and Y. The cross products (XY) are shown in the third column. The sum of the cross products is 3 + 4 + 21 = 28.

X    Y    XY
1    3    3
2    2    4
3    7    21

In summation notation, this is written as:

\[\sum XY = 28\]
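If the scores are stored in a list, a language such as Python expresses these sums directly; the tiny deviations from 19 and 90.54 you may see in the output are ordinary floating-point rounding:

X = [4.6, 5.1, 4.9, 4.4]                 # the grape weights
print(sum(X))                            # ΣX, approximately 19
print(sum(x ** 2 for x in X))            # ΣX², approximately 90.54
print(sum(X) ** 2)                       # (ΣX)², approximately 361
X, Y = [1, 2, 3], [3, 2, 7]              # the cross-product data above
print(sum(x * y for x, y in zip(X, Y)))  # ΣXY = 28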

Exercises – Ch. 1

1. In your own words, describe why we study statistics.
2. For each of the following, determine if the variable is continuous or discrete:
   a. Time taken to read a book chapter
   b. Favorite food
   c. Cognitive ability
   d. Temperature
   e. Letter grade received in a class
3. For each of the following, determine the level of measurement:
   a. T-shirt size
   b. Time taken to run 100 meter race
   c. First, second, and third place in 100 meter race
   d. Birthplace
   e. Temperature in Celsius
4. What is the difference between a population and a sample? Which is described by a parameter and which is described by a statistic?
5. What is sampling bias? What is sampling error?
6. What is the difference between a simple random sample and a stratified random sample?
7. What are the two key characteristics of a true experimental design?
8. When would we use a quasi-experimental design?
9. Use the following dataset for the computations below:

   X    Y
   2    8
   3    8
   7    4
   5    1
   9    4

   a. ΣX
   b. ΣY²
   c. ΣXY
   d. (ΣY)²
10. What are the most common measures of central tendency and spread?

Answers to Odd-Numbered Exercises – Ch. 1

1. Your answer could take many forms but should include information about objectively interpreting information and/or communicating results and research conclusions.
3. a. Ordinal
   b. Ratio
   c. Ordinal
   d. Nominal
   e. Interval
5. Sampling bias is the difference in demographic characteristics between a sample and the population it should represent. Sampling error is the difference between a population parameter and a sample statistic that arises by random chance in which members of the population happen to be sampled.
7. Random assignment to treatment conditions and manipulation of the independent variable.
9. a. 26
   b. 161
   c. 109
   d. 625

    Chapter 2: Describing Data using Distributions and Graphs

    Before we can understand our analyses, we must first understand our data. The first step in doing this is using tables, charts, graphs, plots, and other visual tools to see what our data look like.

    Graphing Qualitative Variables When Apple Computer introduced the iMac computer in August 1998, the company wanted to learn whether the iMac was expanding Apple’s market share. Was the iMac just attracting previous Macintosh owners? Or was it purchased by newcomers to the computer market and by previous Windows users who were switching over? To find out, 500 iMac customers were interviewed. Each customer was categorized as a previous Macintosh owner, a previous Windows owner, or a new computer purchaser.

    This section examines graphical methods for displaying the results of the interviews. We’ll learn some general lessons about how to graph data that fall into a small number of categories. A later section will consider how to graph numerical data in which each observation is represented by a number in some range. The key point about the qualitative data that occupy us in the present section is that they do not come with a pre-established ordering (the way numbers are ordered). For example, there is no natural sense in which the category of previous Windows users comes before or after the category of previous Macintosh users. This situation may be contrasted with quantitative data, such as a person’s weight. People of one weight are naturally ordered with respect to people of a different weight.

Frequency Tables All of the graphical methods shown in this section are derived from frequency tables. Table 1 shows a frequency table for the results of the iMac study; it shows the frequencies of the various response categories. It also shows the relative frequencies, which are the proportion of responses in each category. For example, the relative frequency for “none” is 85/500 = 0.17.


Previous Ownership    Frequency    Relative Frequency
None                  85           0.17
Windows               60           0.12
Macintosh             355          0.71
Total                 500          1.00

Table 1. Frequency Table for the iMac Data.
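Relative frequencies are simple to compute once the counts are in hand. A minimal sketch in Python, using the counts from Table 1:

counts = {"None": 85, "Windows": 60, "Macintosh": 355}
total = sum(counts.values())  # 500
relative = {category: count / total for category, count in counts.items()}
print(relative)  # {'None': 0.17, 'Windows': 0.12, 'Macintosh': 0.71}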

    Pie Charts The pie chart in Figure 1 shows the results of the iMac study. In a pie chart, each category is represented by a slice of the pie. The area of the slice is proportional to the percentage of responses in the category. This is simply the relative frequency multiplied by 100. Although most iMac purchasers were Macintosh owners, Apple was encouraged by the 12% of purchasers who were former Windows users, and by the 17% of purchasers who were buying a computer for the first time.

Figure 1. Pie chart of iMac purchases illustrating frequencies of previous computer ownership.

Pie charts are effective for displaying the relative frequencies of a small number of categories. They are not recommended, however, when you have a large number of categories. Pie charts can also be confusing when they are used to compare the outcomes of two different surveys or experiments. In an influential book on the use of graphs, Edward Tufte asserted, “The only worse design than a pie chart is several of them.”

Here is another important point about pie charts. If they are based on a small number of observations, it can be misleading to label the pie slices with percentages. For example, if just 5 people had been interviewed by Apple Computer, and 3 were former Windows users, it would be misleading to display a pie chart with the Windows slice showing 60%. With so few people interviewed, such a large percentage of Windows users might easily have occurred since chance can cause large errors with small samples. In this case, it is better to alert the user of the pie chart to the actual numbers involved. The slices should therefore be labeled with the actual frequencies observed (e.g., 3) instead of with percentages.

Bar charts Bar charts can also be used to represent frequencies of different categories. A bar chart of the iMac purchases is shown in Figure 2. Frequencies are shown on the Y-axis and the type of computer previously owned is shown on the X-axis. Typically, the Y-axis shows the number of observations in each category rather than the percentage of observations in each category as is typical in pie charts.

Figure 2. Bar chart of iMac purchases as a function of previous computer ownership.


    Comparing Distributions Often we need to compare the results of different surveys, or of different conditions within the same overall survey. In this case, we are comparing the “distributions” of responses between the surveys or conditions. Bar charts are often excellent for illustrating differences between two distributions. Figure 3 shows the number of people playing card games at the Yahoo web site on a Sunday and on a Wednesday in the spring of 2001. We see that there were more players overall on Wednesday compared to Sunday. The number of people playing Pinochle was nonetheless the same on these two days. In contrast, there were about twice as many people playing hearts on Wednesday as on Sunday. Facts like these emerge clearly from a well-designed bar chart.

Figure 3. A bar chart of the number of people playing different card games on Sunday and Wednesday.


    The bars in Figure 3 are oriented horizontally rather than vertically. The horizontal format is useful when you have many categories because there is more room for the category labels. We’ll have more to say about bar charts when we consider numerical quantities later in this chapter.

    Some graphical mistakes to avoid Don’t get fancy! People sometimes add features to graphs that don’t help to convey their information. For example, 3-dimensional bar charts such as the one shown in Figure 4 are usually not as effective as their two-dimensional counterparts.

    Figure 4. A three-dimensional version of Figure 2.

Here is another way that fanciness can lead to trouble. Instead of plain bars, it is tempting to substitute meaningful images. For example, Figure 5 presents the iMac data using pictures of computers. The heights of the pictures accurately represent the number of buyers, yet Figure 5 is misleading because the viewer's attention will be captured by areas. The areas can exaggerate the size differences between the groups. In terms of percentages, the ratio of previous Macintosh owners to previous Windows owners is about 6 to 1. But the ratio of the two areas in Figure 5 is about 35 to 1. A biased person wishing to hide the fact that many Windows owners purchased iMacs would be tempted to use Figure 5 instead of Figure 2! Edward Tufte coined the term “lie factor” to refer to the ratio of the size of the effect shown in a graph to the size of the effect shown in the data. He suggests that lie factors greater than 1.05 or less than 0.95 produce unacceptable distortion.

    Figure 5. A redrawing of Figure 2 with a lie factor greater than 8.

Another distortion in bar charts results from setting the baseline to a value other than zero. The baseline is the bottom of the Y-axis, representing the least number of cases that could have occurred in a category. Normally, but not always, this number should be zero. Figure 6 shows the iMac data with a baseline of 50. Once again, the differences in areas suggest a different story than the true differences in percentages. The number of Windows-switchers seems minuscule compared to its true value of 12%.


    Figure 6. A redrawing of Figure 2 with a baseline of 50.

    Finally, we note that it is a serious mistake to use a line graph when the X-axis contains merely qualitative variables. A line graph is essentially a bar graph with the tops of the bars represented by points joined by lines (the rest of the bar is suppressed). Figure 7 inappropriately shows a line graph of the card game data from Yahoo. The drawback to Figure 7 is that it gives the false impression that the games are naturally ordered in a numerical way when, in fact, they are ordered alphabetically.


Figure 7. A line graph used inappropriately to depict the number of people playing different card games on Sunday and Wednesday.

    Summary Pie charts and bar charts can both be effective methods of portraying qualitative data. Bar charts are better when there are more than just a few categories and for comparing two or more distributions. Be careful to avoid creating misleading graphs.

    Graphing Quantitative Variables

As discussed in the section on variables in Chapter 1, quantitative variables are variables measured on a numeric scale. Height, weight, response time, subjective rating of pain, temperature, and score on an exam are all examples of quantitative variables. Quantitative variables are distinguished from categorical (sometimes called qualitative) variables such as favorite color, religion, city of birth, and favorite sport, for which there is no ordering or measuring involved.

There are many types of graphs that can be used to portray distributions of quantitative variables. The upcoming sections cover the following types of graphs: (1) stem and leaf displays, (2) histograms, (3) frequency polygons, (4) box plots, (5) bar charts, (6) line graphs, (7) dot plots, and (8) scatter plots (discussed in a different chapter). Some graph types such as stem and leaf displays are best-suited for small to moderate amounts of data, whereas others such as histograms are best-suited for large amounts of data. Graph types such as box plots are good at depicting differences between distributions. Scatter plots are used to show the relationship between two variables.

    Stem and Leaf Displays A stem and leaf display is a graphical method of displaying data. It is particularly useful when your data are not too numerous. In this section, we will explain how to construct and interpret this kind of graph.

    As usual, we will start with an example. Consider Table 2 that shows the number of touchdown passes (TD passes) thrown by each of the 31 teams in the National Football League in the 2000 season.

    37, 33, 33, 32, 29, 28, 28, 23, 22, 22, 22, 21, 21, 21, 20, 20, 19, 19, 18, 18, 18, 18, 16, 15, 14, 14, 14, 12, 12, 9, 6

    Table 2. Number of touchdown passes.

A stem and leaf display of the data is shown in Figure 7. The left portion of Figure 7 contains the stems. They are the numbers 3, 2, 1, and 0, arranged as a column to the left of the bars. Think of these numbers as 10’s digits. A stem of 3, for example, can be used to represent the 10’s digit in any of the numbers from 30 to 39. The numbers to the right of the bar are leaves, and they represent the 1’s digits. Every leaf in the graph therefore stands for the result of adding the leaf to 10 times its stem.

3|2337
2|001112223889
1|2244456888899
0|69

Figure 7. Stem and leaf display of the number of touchdown passes.

To make this clear, let us examine Figure 7 more closely. In the top row, the four leaves to the right of stem 3 are 2, 3, 3, and 7. Combined with the stem, these leaves represent the numbers 32, 33, 33, and 37, which are the numbers of TD passes for the first four teams in Table 2. The next row has a stem of 2 and 12 leaves. Together, they represent 12 data points, namely, two occurrences of 20 TD passes, three occurrences of 21 TD passes, three occurrences of 22 TD passes, one occurrence of 23 TD passes, two occurrences of 28 TD passes, and one occurrence of 29 TD passes. We leave it to you to figure out what the third row represents. The fourth row has a stem of 0 and two leaves. It stands for the last two entries in Table 2, namely 9 TD passes and 6 TD passes. (The latter two numbers may be thought of as 09 and 06.)

One purpose of a stem and leaf display is to clarify the shape of the distribution. You can see many facts about TD passes more easily in Figure 7 than in Table 2. For example, by looking at the stems and the shape of the plot, you can tell that most of the teams had between 10 and 29 passing TD's, with a few having more and a few having less. The precise numbers of TD passes can be determined by examining the leaves.
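The construction is mechanical enough to automate. The following Python sketch reproduces Figure 7 from the data in Table 2 by grouping each value's 10's digit (the stem) with its 1's digit (the leaf):

from collections import defaultdict

data = [37, 33, 33, 32, 29, 28, 28, 23, 22, 22, 22, 21, 21, 21, 20, 20, 19, 19,
        18, 18, 18, 18, 16, 15, 14, 14, 14, 12, 12, 9, 6]

stems = defaultdict(list)
for value in sorted(data):
    stems[value // 10].append(str(value % 10))  # stem = 10's digit, leaf = 1's digit
for stem in sorted(stems, reverse=True):
    print(f"{stem}|{''.join(stems[stem])}")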

We can make our figure even more revealing by splitting each stem into two parts. Figure 8 shows how to do this. The top row is reserved for numbers from 35 to 39 and holds only the 37 TD passes made by the first team in Table 2. The second row is reserved for the numbers from 30 to 34 and holds the 32, 33, and 33 TD passes made by the next three teams in the table. You can see for yourself what the other rows represent.

3|7
3|233
2|889
2|001112223
1|56888899
1|22444
0|69

Figure 8. Stem and leaf display with the stems split in two.

    Figure 8 is more revealing than Figure 7 because the latter figure lumps too many values into a single row. Whether you should split stems in a display depends on the exact form of your data. If rows get too long with single stems, you might try splitting them into two or more parts.

There is a variation of stem and leaf displays that is useful for comparing distributions. The two distributions are placed back to back along a common column of stems. The result is a “back-to-back stem and leaf display.” Figure 9 shows such a graph. It compares the numbers of TD passes in the 1998 and 2000 seasons. The stems are in the middle, the leaves to the left are for the 1998 data, and the leaves to the right are for the 2000 data. For example, the second-to-last row shows that in 1998 there were teams with 11, 12, and 13 TD passes, and in 2000 there were two teams with 12 and three teams with 14 TD passes.

       11 | 4 |
          | 3 | 7
      332 | 3 | 233
     8865 | 2 | 889
 44331110 | 2 | 001112223
987776665 | 1 | 56888899
      321 | 1 | 22444
        7 | 0 | 69

Figure 9. Back-to-back stem and leaf display. The left side shows the 1998 TD data and the right side shows the 2000 TD data.

    Figure 9 helps us see that the two seasons were similar, but that only in 1998 did any teams throw more than 40 TD passes.

    There are two things about the football data that make them easy to graph with stems and leaves. First, the data are limited to whole numbers that can be represented with a one-digit stem and a one-digit leaf. Second, all the numbers are positive. If the data include numbers with three or more digits, or contain decimals, they can be rounded to two-digit accuracy. Negative values are also easily handled. Let us look at another example.

    Table 3 shows data from the case study Weapons and Aggression. Each value is the mean difference over a series of trials between the times it took an experimental subject to name aggressive words (like “punch”) under two conditions. In one condition, the words were preceded by a non-weapon word such as “bug.” In the second condition, the same words were preceded by a weapon word such as “gun” or “knife.” The issue addressed by the experiment was whether a preceding weapon word would speed up (or prime) pronunciation of the aggressive word compared to a non-weapon priming word. A positive difference implies greater priming of the aggressive word by the weapon word. Negative differences imply that the priming by the weapon word was less than for a neutral word.


    43.2, 42.9, 35.6, 25.6, 25.4, 23.6, 20.5, 19.9, 14.4, 12.7, 11.3, 10.2, 10.0, 9.1, 7.5, 5.4, 4.7, 3.8, 2.1, 1.2, -0.2, -6.3, -6.7, -8.8, -10.4, -10.5, -14.9, -14.9, -15.0, -18.5, -27.4

    Table 3. The effects of priming (thousandths of a second).

You see that the numbers range from 43.2 to -27.4. The first value indicates that one subject was 43.2 milliseconds faster pronouncing aggressive words when they were preceded by weapon words than when preceded by neutral words. The value -27.4 indicates that another subject was 27.4 milliseconds slower pronouncing aggressive words when they were preceded by weapon words.

    The data are displayed with stems and leaves in Figure 10. Since stem and leaf displays can only portray two whole digits (one for the stem and one for the leaf) the numbers are first rounded. Thus, the value 43.2 is rounded to 43 and represented with a stem of 4 and a leaf of 3. Similarly, 42.9 is rounded to 43. To represent negative numbers, we simply use negative stems. For example, the bottom row of the figure represents the number –27. The second-to-last row represents the numbers -10, -10, -15, etc. Once again, we have rounded the original values from Table 3.

4|33
3|6
2|00456
1|00134
0|1245589
-0|0679
-1|005559
-2|7

Figure 10. Stem and leaf display with negative numbers and rounding.

    Observe that the figure contains a row headed by “0” and another headed by “-0.” The stem of 0 is for numbers between 0 and 9, whereas the stem of -0 is for numbers between 0 and -9. For example, the fifth row of the table holds the numbers 1, 2, 4, 5, 5, 8, 9 and the sixth row holds 0, -6, -7, and -9. Values that are exactly 0 before rounding should be split as evenly as possible between the “0” and “-0” rows. In Table 3, none of the values are 0 before rounding. The “0” that appears in the “-0” row comes from the original value of -0.2 in the table.


    Although stem and leaf displays are unwieldy for large data sets, they are often useful for data sets with up to 200 observations. Figure 11 portrays the distribution of populations of 185 US cities in 1998. To be included, a city had to have between 100,000 and 500,000 residents.

Figure 11. Stem and leaf display of populations of 185 US cities with populations between 100,000 and 500,000 in 1998.

Since a stem and leaf plot shows only two-place accuracy, we had to round the numbers to the nearest 10,000. For example, the largest number (493,559) was rounded to 490,000 and then plotted with a stem of 4 and a leaf of 9. The fourth highest number (463,201) was rounded to 460,000 and plotted with a stem of 4 and a leaf of 6. Thus, the stems represent units of 100,000 and the leaves represent units of 10,000. Notice that each stem value is split into five parts: 0-1, 2-3, 4-5, 6-7, and 8-9.

Whether your data can be suitably represented by a stem and leaf display depends on whether they can be rounded without loss of important information. Also, their extreme values must fit into two successive digits, as the data in Figure 11 fit into the 10,000 and 100,000 places (for leaves and stems, respectively). Deciding what kind of graph is best suited to displaying your data thus requires good judgment. Statistics is not just recipes!

    Histograms A histogram is a graphical method for displaying the shape of a distribution. It is particularly useful when there are a large number of observations. We begin with an example consisting of the scores of 642 students on a psychology test. The test consists of 197 items each graded as “correct” or “incorrect.” The students' scores ranged from 46 to 167.

    The first step is to create a frequency table. Unfortunately, a simple frequency table would be too big, containing over 100 rows. To simplify the table, we group scores together as shown in Table 4.

Interval's Lower Limit    Interval's Upper Limit    Class Frequency
39.5                      49.5                      3
49.5                      59.5                      10
59.5                      69.5                      53
69.5                      79.5                      107
79.5                      89.5                      147
89.5                      99.5                      130
99.5                      109.5                     78
109.5                     119.5                     59
119.5                     129.5                     36
129.5                     139.5                     11
139.5                     149.5                     6
149.5                     159.5                     1
159.5                     169.5                     1

Table 4. Grouped Frequency Distribution of Psychology Test Scores


    To create this table, the range of scores was broken into intervals, called class intervals. The first interval is from 39.5 to 49.5, the second from 49.5 to 59.5, etc. Next, the number of scores falling into each interval was counted to obtain the class frequencies. There are three scores in the first interval, 10 in the second, etc.

    Class intervals of width 10 provide enough detail about the distribution to be revealing without making the graph too “choppy.” More information on choosing the widths of class intervals is presented later in this section. Placing the limits of the class intervals midway between two numbers (e.g., 49.5) ensures that every score will fall in an interval rather than on the boundary between intervals.

    In a histogram, the class frequencies are represented by bars. The height of each bar corresponds to its class frequency. A histogram of these data is shown in Figure 12.

    Figure 12. Histogram of scores on a psychology test.

    The histogram makes it plain that most of the scores are in the middle of the distribution, with fewer scores in the extremes. You can also see that the distribution is not symmetric: the scores extend to the right farther than they do to the left. The distribution is therefore said to be skewed. (We'll have more to say about shapes of distributions in Chapter 3.)


    In our example, the observations are whole numbers. Histograms can also be used when the scores are measured on a more continuous scale such as the length of time (in milliseconds) required to perform a task. In this case, there is no need to worry about fence sitters since they are improbable. (It would be quite a coincidence for a task to require exactly 7 seconds, measured to the nearest thousandth of a second.) We are therefore free to choose whole numbers as boundaries for our class intervals, for example, 4000, 5000, etc. The class frequency is then the number of observations that are greater than or equal to the lower bound, and strictly less than the upper bound. For example, one interval might hold times from 4000 to 4999 milliseconds. Using whole numbers as boundaries avoids a cluttered appearance, and is the practice of many computer programs that create histograms. Note also that some computer programs label the middle of each interval rather than the end points.
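Statistical software uses the same half-open convention. For instance, here is a sketch using NumPy's histogram function on hypothetical task times; note that NumPy treats each bin as half-open except the last, which also includes its upper edge:

import numpy as np

times = [4120, 4505, 4988, 5030, 5512, 6021, 6777]  # hypothetical task times (ms)
counts, edges = np.histogram(times, bins=[4000, 5000, 6000, 7000])
print(counts)  # [3 2 2]: three times in [4000, 5000), two in [5000, 6000), two in [6000, 7000]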

    Histograms can be based on relative frequencies instead of actual frequencies. Histograms based on relative frequencies show the proportion of scores in each interval rather than the number of scores. In this case, the Y-axis runs from 0 to 1 (or somewhere in between if there are no extreme proportions). You can change a histogram based on frequencies to one based on relative frequencies by (a) dividing each class frequency by the total number of observations, and then (b) plotting the quotients on the Y-axis (labeled as proportion).

    There is more to be said about the widths of the class intervals, sometimes called bin widths. Your choice of bin width determines the number of class intervals. This decision, along with the choice of starting point for the first interval, affects the shape of the histogram. The best advice is to experiment with different choices of width, and to choose a histogram according to how well it communicates the shape of the distribution.

    Frequency Polygons Frequency polygons are a graphical device for understanding the shapes of distributions. They serve the same purpose as histograms, but are especially helpful for comparing sets of data. Frequency polygons are also a good choice for displaying cumulative frequency distributions.

To create a frequency polygon, start just as for histograms, by choosing a class interval. Then draw an X-axis representing the values of the scores in your data. Mark the middle of each class interval with a tick mark, and label it with the middle value represented by the class. Draw the Y-axis to indicate the frequency of each class. Place a point in the middle of each class interval at the height corresponding to its frequency. Finally, connect the points. You should include one class interval below the lowest value in your data and one above the highest value. The graph will then touch the X-axis on both sides.

A frequency polygon for the 642 psychology test scores, shown in Figure 13, was constructed from the frequency table shown in Table 5.

Lower Limit    Upper Limit    Count    Cumulative Count
29.5           39.5           0        0
39.5           49.5           3        3
49.5           59.5           10       13
59.5           69.5           53       66
69.5           79.5           107      173
79.5           89.5           147      320
89.5           99.5           130      450
99.5           109.5          78       528
109.5          119.5          59       587
119.5          129.5          36       623
129.5          139.5          11       634
139.5          149.5          6        640
149.5          159.5          1        641
159.5          169.5          1        642
169.5          179.5          0        642

Table 5. Frequency Distribution of Psychology Test Scores

The first label on the X-axis is 35. This represents an interval extending from 29.5 to 39.5. Since the lowest test score is 46, this interval has a frequency of 0. The point labeled 45 represents the interval from 39.5 to 49.5. There are three scores in this interval. There are 147 scores in the interval that surrounds 85.

    You can easily discern the shape of the distribution from Figure 13. Most of the scores are between 65 and 115. It is clear that the distribution is not symmetric inasmuch as good scores (to the right) trail off more gradually than poor scores (to the left). In the terminology of Chapter 3 (where we will study shapes of distributions more systematically), the distribution is skewed.

    Figure 13. Frequency polygon for the psychology test scores.

    A cumulative frequency polygon for the same test scores is shown in Figure 14. The graph is the same as before except that the Y value for each point is the number of students in the corresponding class interval plus all numbers in lower intervals. For example, there are no scores in the interval labeled “35,” three in the interval “45,” and 10 in the interval “55.” Therefore, the Y value corresponding to “55” is 13. Since 642 students took the test, the cumulative frequency for the last interval is 642.
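Cumulative counts are just running totals of the class frequencies, as a one-line check in Python using the counts from Table 5 confirms:

from itertools import accumulate

counts = [0, 3, 10, 53, 107, 147, 130, 78, 59, 36, 11, 6, 1, 1, 0]  # Count column of Table 5
print(list(accumulate(counts)))  # [0, 3, 13, 66, ..., 642] — the Cumulative Count column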


    Figure 14. Cumulative frequency polygon for the psychology test scores.

Frequency polygons are useful for comparing distributions. This is achieved by overlaying the frequency polygons drawn for different data sets. Figure 15 provides an example. The data come from a task in which the goal is to move a computer cursor to a target on the screen as fast as possible. On 20 of the trials, the target was a small rectangle; on the other 20, the target was a large rectangle. Time to reach the target was recorded on each trial. The two distributions (one for each target) are plotted together in Figure 15. The figure shows that, although there is some overlap in times, it generally took longer to move the cursor to the small target than to the large one.


    Figure 15. Overlaid frequency polygons.

    It is also possible to plot two cumulative frequency distributions in the same graph. This is illustrated in Figure 16 using the same data from the cursor task. The difference in distributions for the two targets is again evident.

    Figure 16. Overlaid cumulative frequency polygons.


    Box Plots We have already discussed techniques for visually representing data (see histograms and frequency polygons). In this section we present another important graph, called a box plot. Box plots are useful for identifying outliers and for comparing distributions. We will explain box plots with the help of data from an in-class experiment. Students in Introductory Statistics were presented with a page containing 30 colored rectangles. Their task was to name the colors as quickly as possible. Their times (in seconds) were recorded. We'll compare the scores for the 16 men and 31 women who participated in the experiment by making separate box plots for each gender. Such a display is said to involve parallel box plots.

    There are several steps in constructing a box plot. The first relies on the 25th, 50th, and 75th percentiles in the distribution of scores. Figure 17 shows how these three statistics are used. For each gender we draw a box extending from the 25th percentile to the 75th percentile. The 50th percentile is drawn inside the box. Therefore, the bottom of each box is the 25th percentile, the top is the 75th percentile, and the line in the middle is the 50th percentile. The data for the women in our sample are shown in Table 6.

14, 15, 16, 16, 17, 17, 17, 17, 17, 18, 18, 18, 18, 18, 18, 19, 19, 19, 20, 20, 20, 20, 20, 20, 21, 21, 22, 23, 24, 24, 29

Table 6. Women's times.

    For these data, the 25th percentile is 17, the 50th percentile is 19, and the 75th percentile is 20. For the men (whose data are not shown), the 25th percentile is 19, the 50th percentile is 22.5, and the 75th percentile is 25.5.

    Figure 17. The first step in creating box plots.


    Before proceeding, the terminology in Table 7 is helpful.

Name                 Formula                                                        Value
Upper Hinge          75th Percentile                                                20
Lower Hinge          25th Percentile                                                17
H-Spread             Upper Hinge - Lower Hinge                                      3
Step                 1.5 × H-Spread                                                 4.5
Upper Inner Fence    Upper Hinge + 1 Step                                           24.5
Lower Inner Fence    Lower Hinge - 1 Step                                           12.5
Upper Outer Fence    Upper Hinge + 2 Steps                                          29
Lower Outer Fence    Lower Hinge - 2 Steps                                          8
Upper Adjacent       Largest value below Upper Inner Fence                          24
Lower Adjacent       Smallest value above Lower Inner Fence                         14
Outside Value        A value beyond an Inner Fence but not beyond an Outer Fence    29
Far Out Value        A value beyond an Outer Fence                                  None

Table 7. Box plot terms and values for women's times.
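These quantities are easy to compute with software. The sketch below uses Python's statistics module; quantile conventions differ slightly across packages, but for these data the method shown reproduces the hinges in Table 7:

import statistics

times = [14, 15, 16, 16, 17, 17, 17, 17, 17, 18, 18, 18, 18, 18, 18, 19, 19, 19,
         20, 20, 20, 20, 20, 20, 21, 21, 22, 23, 24, 24, 29]

lower_hinge, median, upper_hinge = statistics.quantiles(times, n=4)  # 17.0, 19.0, 20.0
step = 1.5 * (upper_hinge - lower_hinge)                             # 1.5 × H-spread = 4.5
upper_inner = upper_hinge + step                                     # 24.5
lower_inner = lower_hinge - step                                     # 12.5
print([t for t in times if t > upper_inner or t < lower_inner])      # [29], the one outside value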

    Continuing with the box plots, we put “whiskers” above and below each box to give additional information about the spread of data. Whiskers are vertical lines that end in a horizontal stroke. Whiskers are drawn from the upper and lower hinges to the upper and lower adjacent values (24 and 14 for the women's data), as shown in Figure 18.


    Figure 18. The box plots with the whiskers drawn.

Although we don't draw whiskers all the way to outside or far out values, we still wish to represent them in our box plots. This is achieved by adding additional marks beyond the whiskers. Specifically, outside values are indicated by small “o's” and far out values are indicated by asterisks (*). In our data, there are no far out values and just one outside value. This outside value of 29 is for the women and is shown in Figure 19.


    Figure 19. The box plots with the outside value shown.

    There is one more mark to include in box plots (although sometimes it is omitted). We indicate the mean score for a group by inserting a plus sign. Figure 20 shows the result of adding means to our box plots.

    Figure 20. The completed box plots.


    Figure 20 provides a revealing summary of the data. Since half the scores in a distribution are between the hinges (recall that the hinges are the 25th and 75th percentiles), we see that half the women's times are between 17 and 20 seconds whereas half the men's times are between 19 and 25.5 seconds. We also see that women generally named the colors faster than the men did, although one woman was slower than almost all of the men. Figure 21 shows the box plot for the women's data with detailed labels.

    Figure 21. The box plots for the women's data with detailed labels.

Box plots provide basic information about a distribution. For example, a distribution with a positive skew would have a longer whisker in the positive direction than in the negative direction. A larger mean than median would also indicate a positive skew. Box plots are good at portraying extreme values and are especially good at showing differences between distributions. However, many of the details of a distribution are not revealed in a box plot, and to examine these details one should create a histogram and/or a stem and leaf display.

    Bar Charts In the section on qualitative variables, we saw how bar charts could be used to illustrate the frequencies of different categories. For example, the bar chart shown in Figure 22 shows how many purchasers of iMac computers were previous Macintosh users, previous Windows users, and new computer purchasers.


    Figure 22. iMac buyers as a function of previous computer ownership.

In this section we show how bar charts can be used to present other kinds of quantitative information, not just frequency counts. The bar chart in Figure 23 shows the percent increases in the Dow Jones, Standard and Poor 500 (S & P), and Nasdaq stock indexes from May 24th 2000 to May 24th 2001. Notice that both the S & P and the Nasdaq had “negative increases,” which means that they decreased in value. In this bar chart, the Y-axis is not frequency but rather the signed quantity percentage increase.

Figure 23. Percent increase in three stock indexes from May 24th 2000 to May 24th 2001.


    Bar charts are particularly effective for showing change over time. Figure 24, for example, shows the percent increase in the Consumer Price Index (CPI) over four three-month periods. The fluctuation in inflation is apparent in the graph.

Figure 24. Percent change in the CPI over time. Each bar represents percent increase for the three months ending at the date indicated.

Bar charts are often used to compare the means of different experimental conditions. Figure 25 shows the mean time it took one of us (DL) to move the cursor to either a small target or a large target. On average, more time was required for small targets than for large ones.

    Figure 25. Bar chart showing the means for the two conditions.


    Although bar charts can display means, we do not recommend them for this purpose. Box plots should be used instead since they provide more information than bar charts without taking up more space. For example, a box plot of the cursor-movement data is shown in Figure 26. You can see that Figure 26 reveals more about the distribution of movement times than does Figure 25.

Figure 26. Box plots of times to move the cursor to the small and large targets.

    The section on qualitative variables presented earlier in this chapter discussed the use of bar charts for comparing distributions. Some common graphical mistakes were also noted. The earlier discussion applies equally well to the use of bar charts to display quantitative variables.

    Line Graphs

    A line graph is a bar graph with the tops of the bars represented by points joined by lines (the rest of the bar is suppressed). For example, Figure 27 was presented in the section on bar charts and shows changes in the Consumer Price Index (CPI) over time.


    Figure 27. A bar chart of the percent change in the CPI over time. Each bar represents percent increase for the three months ending at the date indicated.

    A line graph of these same data is shown in Figure 28. Although the figures are similar, the line graph emphasizes the change from period to period.

    Figure 28. A line graph of the percent change in the CPI over time. Each point represents percent increase for the three months ending at the date indicated.


    Line graphs are appropriate only when both the X- and Y-axes display ordered (rather than qualitative) variables. Although bar charts can also be used in this situation, line graphs are generally better at comparing changes over time. Figure 29, for example, shows percent increases and decreases in five components of the CPI. The figure makes it easy to see that medical costs had a steadier progression than the other components. Although you could create an analogous bar chart, its interpretation would not be as easy.

    Figure 29. A line graph of the percent change in five components of the CPI over time.
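    As a quick illustration, the hypothetical Python/Matplotlib sketch below (with invented numbers, not the actual CPI values) draws this kind of line graph: each point sits where the top of the corresponding bar would be, and the connecting segments emphasize the change from period to period.

        import matplotlib.pyplot as plt

        # Invented quarterly percent changes, standing in for the CPI data
        periods = ["Jul 2000", "Oct 2000", "Jan 2001", "Apr 2001"]
        pct_change = [0.9, 0.7, 1.1, 1.3]

        # Points mark where the bar tops would be; lines join them
        plt.plot(periods, pct_change, marker="o")
        plt.ylabel("Percent change")
        plt.show()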

    Let us stress that it is misleading to use a line graph when the X-axis contains merely qualitative variables. Figure 30 inappropriately shows a line graph of the card game data from Yahoo, discussed in the section on qualitative variables. The defect in Figure 30 is that it gives the false impression that the games are naturally ordered in a numerical way.


    Figure 30. A line graph, inappropriately used, depicting the number of people playing different card games on Wednesday and Sunday.

    The Shape of a Distribution

    Finally, it is useful to discuss how we describe the shapes of distributions, a topic we will revisit in the next chapter to learn how different shapes affect our numerical descriptors of data and distributions.

    The primary characteristic we are concerned with when assessing the shape of a distribution is whether the distribution is symmetrical or skewed. A symmetrical distribution, as the name suggests, can be cut down the center to form two mirror images. Although in practice we will never get a perfectly symmetrical distribution, we would like our data to be as close to symmetrical as possible for reasons we delve into in Chapter 3. Many types of distributions are symmetrical, but by far the most common and pertinent distribution at this point is the normal distribution, shown in Figure 31. Notice that although the symmetry is not perfect (for instance, the bar just to the right of the center is taller than the one just to the left), the two sides are roughly the same shape. The normal distribution has a single peak, known as the center, and two tails that extend out equally, forming what is known as a bell shape or bell curve.


    Figure 31. A symmetrical distribution.

    Symmetrical distributions can also have multiple peaks. Figure 32 shows a bimodal distribution, named for the two peaks that lie roughly symmetrically on either side of the center point. As we will see in the next chapter, this is not a particularly desirable characteristic of our data, and, worse, this is a relatively difficult characteristic to detect numerically. Thus, it is important to visualize your data before moving ahead with any formal analyses.

    Figure 32. A bimodal distribution.
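    This is easy to demonstrate by simulation. In the hypothetical Python sketch below, a bimodal sample is built by mixing two normal distributions; its mean and median look perfectly ordinary, but a histogram reveals the two peaks immediately, which is exactly why visualizing your data should come first.

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(1)
        # A mixture of two normal distributions: bimodal by construction
        scores = np.concatenate([rng.normal(40, 5, 500), rng.normal(70, 5, 500)])

        # The summary numbers alone give no hint of the two peaks...
        print(f"mean = {scores.mean():.1f}, median = {np.median(scores):.1f}")

        # ...but the histogram shows them at once
        plt.hist(scores, bins=30)
        plt.show()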


    Distributions that are not symmetrical also come in many forms, more than can be described here. The most common asymmetry to be encountered is referred to as skew, in which one of the two tails of the distribution is disproportionately longer than the other. This property can affect the value of the averages we use in our analyses and make them an inaccurate representation of our data, which causes many problems.

    Skew can be either positive or negative (also known as right or left, respectively), based on which tail is longer. It is very easy to get the two confused at first; many students want to describe the skew by where the bulk of the data (the larger portion of the histogram, known as the body) is placed, but the correct determination is based on which tail is longer. You can think of the tail as an arrow: whichever direction the arrow is pointing is the direction of the skew. Figures 33 and 34 show positive (right) and negative (left) skew, respectively.

    Figure 33. A positively skewed distribution.


    Figure 34. A negatively skewed distribution.
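    A simple numeric companion to these pictures follows from a point made earlier in this chapter: a mean larger than the median indicates positive skew, because the long tail pulls the mean toward it. The Python sketch below (with an invented dataset that has a long right tail) shows the comparison.

        import numpy as np

        # Invented positively skewed data: most values are small, a few are large
        data = np.array([2, 3, 3, 4, 4, 4, 5, 5, 6, 9, 15, 28])

        mean, median = data.mean(), np.median(data)
        # The long right tail drags the mean above the median
        print(f"mean = {mean:.2f}, median = {median:.2f}")  # mean > median here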

    Exercises – Ch. 2

    1. Name some ways to graph quantitative variables and some ways to graph qualitative variables.
    2. Given the following data, construct a pie chart and a bar chart. Which do you think is the more appropriate or useful way to display the data?

    Favorite Movie Genre   Freq.
    Comedy                 14
    Horror                 9
    Romance                8
    Action                 12

    3. Pretend you are constructing a histogram for describing the distribution of salaries for individuals who are 40 years or older, but are not yet retired.
       a. What is on the Y-axis? Explain.
       b. What is on the X-axis? Explain.
       c. What would be the probable shape of the salary distribution? Explain why.


    4. A graph appears below showing the number of adults and children who prefer each type of soda. There were 130 adults and kids surveyed. Discuss some ways in which the graph below could be improved.
    5. Which of the box plots on the graph has a large positive skew? Which has a large negative skew?
    6. Create a histogram of the following data representing how many shows children said they watch each day:


    Number of TV Shows   Frequency
    0                    2
    1                    18
    2                    36
    3                    7
    4                    3

    7. Explain the differences between bar charts and histograms. When would each be used?
    8. Draw a histogram of a distribution that is
       a. Negatively skewed
       b. Symmetrical
       c. Positively skewed
    9. Based on the pie chart below, which was made from a sample of 300 students, construct a frequency table of college majors.
    10. Create a histogram of the following data. Label the tails and body and determine if it is skewed (and direction, if so) or symmetrical.


    Hours worked per week   Proportion
    0-10                    4
    10-20                   8
    20-30                   11
    30-40                   51
    40-50                   12
    50-60                   9
    60+                     5

    Answers to Odd-Numbered Exercises – Ch. 2

    1. Qualitative variables are displayed using pie charts and bar charts. Quantitative variables are displayed as box plots, histograms, etc.
    3. [You do not need to draw the histogram, only describe it below]
       a. The Y-axis would have the frequency or proportion because this is always the case in histograms.
       b. The X-axis has income, because this is our quantitative variable of interest.
       c. Because most income data are positively skewed, this histogram would likely be positively skewed too.
    5. Chart b has the positive skew because the outliers (dots and asterisks) are on the upper (higher) end; chart c has the negative skew because the outliers are on the lower end.
    7. In bar charts, the bars do not touch; in histograms, the bars do touch. Bar charts are appropriate for qualitative variables, whereas histograms are better for quantitative variables.
    9. Applying the pie chart percentages to the sample of 300 students gives the following frequency table:

       Major        Freq.
       Psychology   144
       Biology      120
       Chemistry    24
       Physics      12


    Chapter 3: Measures of Central Tendency and Spread

    Now that we have visualized our data to understand its shape, we can begin with numerical analyses. The descriptive statistics presented in this chapter serve to describe the distribution of our data objectively and mathematically – our first step into statistical analysis! The topics here will serve as the basis for everything we do in the rest of the course.

    What is Central Tendency?

    What is “central tendency,” and why do we want to know the central tendency of a group of scores? Let us first try to answer these questions intuitively. Then we will proceed to a more formal discussion.

    Imagine this situation: You are in a class with just four other students, and the five of you took a 5-point pop quiz. Today your instructor is walking around the room, handing back the quizzes. She stops at your desk and hands you your paper. Written in bold black ink on the front is “3/5.” How do you react? Are you happy with your score of 3 or disappointed? How do you decide? You might calculate your percentage correct, realize it is 60%, and be appalled. But it is more likely that when deciding how to react to your performance, you will want additional information. What additional information would you like?

    If you are like most students, you will immediately ask your neighbors, “Whad'ja get?” and then ask the instructor, “How did the class do?” In other words, the additional information you want is how your quiz score compares to other students' scores. You therefore understand the importance of comparing your score to the class distribution of scores. Should your score of 3 turn out to be among the higher scores, then you'll be pleased after all. On the other hand, if 3 is among the lower scores in the class, you won't be quite so happy.

    This idea of comparing individual scores to a distribution of scores is fundamental to statistics. So let's explore it further, using the same example (the pop quiz you took with your four classmates). Three possible outcomes are shown in Table 1. They are labeled “Dataset A,” “Dataset B,” and “Dataset C.” Which of the three datasets would make you happiest? In other words, in comparing your score with your fellow students' scores, in which dataset would your score of 3 be the most impressive?

    In Dataset A, everyone's score is 3. This puts your score at the exact center of the distribution. You can draw satisfaction from the fact that you did as well as everyone else. But of course it cuts both ways: everyone else did just as well as you.

    Student      Dataset A   Dataset B   Dataset C
    You          3           3           3
    John         3           4           2
    Maria        3           4           2
    Shareecia    3           4           2
    Luther       3           5           1

    Table 1. Three possible datasets for the 5-point pop quiz.

    Now consider the possibility that the scores are described as in Dataset B. This is a depressing outcome even though your score is no different than the one in Dataset A. The problem is that the other four students had higher grades, putting yours below the center of the distribution.

    Finally, let's look at Dataset C. This is more like it! All of your classmates score lower than you so your score is above the center of the distribution.

    Now let's change the example in order to develop more insight into the center of a distribution. Figure 1 shows the results of an experiment on memory for chess positions. Subjects were shown a chess position and then asked to reconstruct it on an empty chess board. The number of pieces correctly placed was recorded. This was repeated for two more chess positions. The scores represent the total number of chess pieces correctly placed for the three chess positions. The maximum possible score was 89.


    Figure 1. Back-to-back stem and leaf display. The left side shows the memory scores of the non-players. The right side shows the scores of the tournament players.

    Two groups are compared. On the left are people who don't play chess. On the right are people who play a great deal (tournament players). It is clear that the location of the center of the distribution for the non-players is much lower than the center of the distribution for the tournament players.

    We're sure you get the idea now about the center of a distribution. It is time to move beyond intuition. We need a formal definition of the center of a distribution. In fact, we'll offer you three definitions! This is not just generosity on our part. There turn out to be (at least) three different ways of thinking about the center of a distribution, all of them useful in various contexts. In the remainder of this section we attempt to communicate the idea behind each concept. In the succeeding sections we will give statistical measures for these concepts of central tendency.

    Definitions of Center

    Now we explain the three different ways of defining the center of a distribution. All three are called measures of central tendency.

    Balance Scale

    One definition of central tendency is the point at which the distribution is in balance. Figure 2 shows the distribution of the five numbers 2, 3, 4, 9, 16 placed upon a balance scale. If each number weighs one pound, and is placed at its position along the number line, then it would be possible to balance them by placing a fulcrum at 6.8.


    Figure 2. A balance scale.
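    If you want to verify the balance point, the short Python sketch below (an illustration added here, not part of the original text) confirms that the fulcrum position of 6.8 is simply the arithmetic mean of the five numbers: it is the point at which the signed deviations cancel.

        import numpy as np

        values = np.array([2, 3, 4, 9, 16])

        # With equal weights at each position, the balance point is the mean
        fulcrum = values.mean()
        print(fulcrum)  # 6.8, matching the fulcrum position in the text

        # At the mean, the signed deviations sum to zero (up to floating-point
        # rounding); that is exactly the balance condition
        print(np.isclose((values - fulcrum).sum(), 0))  # True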

    For another example, consider the distribution shown in Figure 3. It is balanced by placing the fulcrum in the geometric middle.

    Figure 3. A distribution balanced on the tip of a triangle.

    Figure 4 illustrates that the same distribution can't be balanced by placing the fulcrum to the left of center.

    Figure 4. The distribution is not balanced.


    Figure 5 shows an asymmetric distribution. To balance it, we cannot put the fulcrum halfway between the lowest and highest values (as we did in Figure 3). Placing the fulcrum at the halfway point would cause it to tip towards the left.

    Figure 5. An asymmetric distribution balanced on the tip of a triangle.

    Smallest Absolute Deviation

    Another way to define the center of a distribution is based on the concept of the sum of the absolute deviations (differences). Consider the distribution made up of the five numbers 2, 3, 4, 9, 16. Let's see how far the distribution is from 10 (picking a number arbitrarily). Table 2 shows the sum of the absolute deviations of these numbers from the number 10.

    Values   Absolute Deviations from 10
    2        8
    3        7
    4        6
    9        1
    16       6
    Sum      28

    Table 2. An example of the sum of absolute deviations.

    The first row of the table shows that the absolute value of the difference between 2 and 10 is 8; the second row shows that the absolute difference between 3 and 10 is 7, and similarly for the other rows. When we add up the five absolute deviations, we get a total of 28.
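    These deviations are easy to compute by hand, but a short Python sketch (an illustration added here, not part of the original text) can also search over candidate centers. Doing so previews where this definition leads: the sum of absolute deviations is smallest not at 10 but at 4, the middle value of the five numbers.

        import numpy as np

        values = np.array([2, 3, 4, 9, 16])

        def sum_abs_dev(center):
            # Sum of absolute deviations of the values from a candidate center
            return np.abs(values - center).sum()

        print(sum_abs_dev(10))  # 28, matching Table 2

        # Check every whole-number candidate from 0 to 16 for the smallest sum
        best = min(range(17), key=sum_abs_dev)
        print(best, sum_abs_dev(best))  # 4 20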


