1: Sampling and Descriptive Statistics
This chapter introduces the basic ideas and vocabulary of probability and statistics. You will soon understand that statistics and probability work together. You will also learn how data are gathered and how "good" data can be distinguished from "bad."
-
- 1.3: Data, Sampling, and Variation in Data and Sampling
- Data are individual items of information that come from a population or sample. Data may be classified as qualitative, quantitative continuous, or quantitative discrete. Because it is not practical to measure the entire population in a study, researchers use samples to represent the population. A random sample is a representative group from the population chosen by using a method that gives each individual in the population an equal chance of being included in the sample.
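The equal-chance requirement of a random sample can be sketched with Python's standard library. The population here is a hypothetical list of 100 numbered individuals, chosen only for illustration:

```python
import random

# A minimal sketch of simple random sampling. The population size (100)
# and sample size (10) are hypothetical values for illustration.
population = list(range(1, 101))          # 100 numbered individuals
random.seed(0)                            # fixed seed so the draw is repeatable
sample = random.sample(population, k=10)  # each individual is equally likely

print(sample)
```

Because `random.sample` draws without replacement, no individual can appear in the sample twice.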
-
- 1.4: Frequency, Frequency Tables, and Levels of Measurement
- Some calculations generate numbers that are artificially precise. It is not necessary to report a value to eight decimal places when the measures that generated that value were only accurate to the nearest tenth. Round off your final answer to one more decimal place than was present in the original data. This means that if you have data measured to the nearest tenth of a unit, report the final statistic to the nearest hundredth.
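The rounding rule above can be sketched as follows; the data values are made up for illustration and are measured to the nearest tenth, so the final statistic is reported to the nearest hundredth:

```python
# Sketch of the rounding convention: report one more decimal place
# than the original data carry. The data values are hypothetical.
data = [2.3, 2.5, 2.8]        # measured to the nearest tenth
mean = sum(data) / len(data)  # 2.5333... (artificially precise)
reported = round(mean, 2)     # nearest hundredth: one extra decimal place

print(reported)               # 2.53
```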
-
- 1.5: Experimental Design and Ethics
- A poorly designed study will not produce reliable data. There are certain key components that must be included in every experiment. To eliminate lurking variables, subjects must be assigned randomly to different treatment groups. One of the groups must act as a control group, demonstrating what happens when the active treatment is not applied. Participants in the control group receive a placebo treatment that looks exactly like the active treatments but cannot influence the response variable.
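Random assignment to treatment groups can be sketched with a shuffle-and-split, assuming a hypothetical roster of 20 subjects. Shuffling gives every subject the same chance of landing in either group, which is what guards against lurking variables:

```python
import random

# Hedged sketch of random assignment; the subject IDs and group sizes
# are hypothetical.
subjects = [f"S{i:02d}" for i in range(1, 21)]  # 20 hypothetical subjects
random.seed(1)                                  # fixed seed for repeatability
random.shuffle(subjects)                        # randomize the order
treatment_group = subjects[:10]                 # receives the active treatment
control_group = subjects[10:]                   # receives the placebo

print(treatment_group)
print(control_group)
```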
-
- 1.6: Prelude to Descriptive Statistics
- In this chapter, you will study numerical and graphical ways to describe and display your data. This area of statistics is called "Descriptive Statistics." You will learn how to calculate, and even more importantly, how to interpret these measurements and graphs. In this chapter, we will briefly look at stem-and-leaf plots, line graphs, and bar graphs, as well as frequency polygons and time series graphs. Our emphasis will be on histograms and box plots.
-
- 1.7: Stem-and-Leaf Graphs (Stemplots), Line Graphs, and Bar Graphs
- A stem-and-leaf plot is a way to plot data and look at the distribution, where all data values within a class are visible. The advantage of a stem-and-leaf plot is that all values are listed, unlike a histogram, which gives classes of data values. A line graph is often used to represent a set of data values in which a quantity varies with time. These graphs are useful for finding trends. A bar graph is a chart that uses either horizontal or vertical bars to show comparisons among categories.
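A stem-and-leaf plot can be sketched in a few lines, assuming two-digit data where the tens digit is the stem and the ones digit is the leaf. The data values below are made up for illustration:

```python
# Minimal stem-and-leaf sketch: stems are tens digits, leaves are ones
# digits. The data set is hypothetical.
data = [32, 35, 41, 41, 47, 52, 58, 58, 59, 63]

stems = {}
for value in sorted(data):
    stem, leaf = divmod(value, 10)        # e.g. 47 -> stem 4, leaf 7
    stems.setdefault(stem, []).append(leaf)

for stem in sorted(stems):
    print(f"{stem} | {' '.join(str(leaf) for leaf in stems[stem])}")
```

Note that every individual value remains visible in the plot, which is the advantage the summary above describes.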
-
- 1.8: Histograms, Frequency Polygons, and Time Series Graphs
- A histogram is a graphic version of a frequency distribution. The graph consists of bars of equal width drawn adjacent to each other. The horizontal scale represents classes of quantitative data values and the vertical scale represents frequencies. The heights of the bars correspond to frequency values. Histograms are typically used for large, continuous, quantitative data sets. A frequency polygon can also be used when graphing large data sets with data points that repeat.
-
- 1.9: Box Plots
- Box plots are a type of graph that can help visually organize data. To graph a box plot, the following five values must be calculated: the minimum value, the first quartile, the median, the third quartile, and the maximum value. Once the box plot is graphed, you can display and compare distributions of data.
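The five values a box plot needs can be computed with the standard-library `statistics` module; the data set below is hypothetical, and the "inclusive" method is one common quartile convention among several:

```python
import statistics

# Sketch of the five-number summary behind a box plot; the data set is
# made up for illustration, and method="inclusive" is one quartile
# convention (textbooks and software differ slightly here).
data = [1, 2, 4, 4, 5, 6, 7, 8, 9, 10, 12]
q1, q2, q3 = statistics.quantiles(data, n=4, method="inclusive")
five_number_summary = (min(data), q1, q2, q3, max(data))

print(five_number_summary)
```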
-
- 1.10: Measures of the Location of the Data
- The values that divide a rank-ordered set of data into 100 equal parts are called percentiles and are used to compare and interpret data. For example, an observation at the 50th percentile would be greater than 50% of the other observations in the set. Quartiles divide data into quarters: the first quartile is the 25th percentile, the second quartile is the 50th percentile, and the third quartile is the 75th percentile. The interquartile range is the range of the middle 50% of the data values.
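The interpretation of a percentile — the share of observations a value exceeds — can be sketched directly. The scores below and the helper `percentile_rank` are hypothetical, introduced only for illustration:

```python
# Sketch of a percentile rank: the percentage of values an observation
# is greater than. The data set and helper name are hypothetical.
scores = sorted([40, 55, 60, 62, 70, 71, 75, 80, 88, 95])

def percentile_rank(x, data):
    """Percentage of data values strictly below x."""
    return 100 * sum(v < x for v in data) / len(data)

print(percentile_rank(71, scores))  # 71 exceeds half of the ten values
```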
-
- 1.11: Measures of the Center of the Data
- The mean and the median can be calculated to help you find the "center" of a data set. The mean is a good measure of center when a data set has no extreme values, but the median is the better measure when a data set contains several outliers or extreme values. The mode will tell you the most frequently occurring datum (or data) in your data set. The mean, median, and mode are extremely helpful when you need to analyze your data.
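The contrast between the mean and the median shows up clearly with a small data set containing one extreme value; the numbers below are made up for illustration:

```python
import statistics

# Sketch of the three center measures; note how the single extreme
# value (100) pulls the mean but not the median. Data are hypothetical.
data = [2, 3, 3, 4, 5, 100]

print(statistics.mean(data))    # 19.5 — dragged upward by the outlier
print(statistics.median(data))  # 3.5  — resistant to the outlier
print(statistics.mode(data))    # 3    — the most frequent value
```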
-
- 1.12: Measures of the Spread of the Data
- An important characteristic of any set of data is the variation in the data. In some data sets, the data values are concentrated closely near the mean; in other data sets, the data values are more widely spread out from the mean. The most common measure of variation, or spread, is the standard deviation. The standard deviation is a number that measures how far data values are from their mean.
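Two hypothetical data sets with the same mean but different spreads make the role of the standard deviation concrete:

```python
import statistics

# Sketch of spread: both data sets have a mean of 10, but the second is
# much more widely spread out. The data values are hypothetical.
tight = [9, 10, 10, 11]   # values concentrated closely near the mean
wide = [1, 5, 15, 19]     # same mean, values far from it

print(statistics.pstdev(tight))  # small standard deviation
print(statistics.pstdev(wide))   # much larger standard deviation
```

Here `pstdev` treats each list as a whole population; `statistics.stdev` would be used instead for a sample.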
-
- 1.13: Skewness and the Mean, Median, and Mode
- Looking at the distribution of data can reveal a lot about the relationship between the mean, the median, and the mode. There are three types of distributions: a right (or positive) skewed distribution, a left (or negative) skewed distribution, and a symmetrical distribution.
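The effect of skew on the relationship between the mean and the median can be sketched with a hypothetical right-skewed data set, whose long right tail pulls the mean above the median:

```python
import statistics

# Sketch of skewness: the long right tail (the value 12) pulls the mean
# above the median. The data set is made up for illustration.
right_skewed = [1, 2, 2, 3, 3, 3, 4, 12]

mean = statistics.mean(right_skewed)
median = statistics.median(right_skewed)
print(mean > median)  # True for this right-skewed data set
```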
Contributors and Attributions
Barbara Illowsky and Susan Dean (De Anza College) with many other contributing authors. Content produced by OpenStax College is licensed under a Creative Commons Attribution 4.0 License. Download for free at http://cnx.org/contents/30189442-699...b91b9de@18.114.