
4.1: Empirical Probability


    One story about how probability theory was developed is that a gambler wanted to know when to bet more and when to bet less. He talked to two friends of his who happened to be mathematicians: Pierre de Fermat and Blaise Pascal. Since then, many other mathematicians have worked to develop probability theory.

    Understanding probability is important in life. Mundane questions that probability can answer include whether you need to carry an umbrella or wear a heavy coat on a given day. More important questions it can help with include the chance that the car you are buying will need more maintenance, your chance of passing a class, your chance of winning the lottery, your chance of being in a car accident, and the chance that the U.S. will be attacked by terrorists. Most people do not have a very good understanding of probability, so they worry about being attacked by a terrorist but not about being in a car accident. The probability of being in a terrorist attack is much smaller than the probability of being in a car accident, so it actually makes more sense to worry about driving. Also, the chance of winning the lottery is very small, yet many people spend money on lottery tickets; if they saved that money instead, they would have more money. In general, events that have a low probability (under 5%) are unlikely to occur, whereas events with a high probability (over 80%) have a good chance of happening. This chapter presents some of the theory you need to help determine whether an event is likely to happen or not.

    First you need some definitions.

    Definition \(\PageIndex{1}\)

    Experiment: an activity that has specific results that can occur, but it is unknown which result will occur.

    Definition \(\PageIndex{2}\)

    Outcome: the result of an experiment.

    Definition \(\PageIndex{3}\)

    Event: a set of certain outcomes of an experiment that you want to have happen.

    Definition \(\PageIndex{4}\)

    Sample Space: the collection of all possible outcomes of the experiment. Usually denoted \(S\).

    Definition \(\PageIndex{5}\)

    Event Space: the set of outcomes that make up an event. The symbol is usually a capital letter.

    Start with an experiment. Suppose that the experiment is rolling a die. The sample space is {1, 2, 3, 4, 5, 6}. The event that you want is to get a 6, and the event space is {6}. To estimate the probability of getting a 6, roll the die 10 times. Suppose that when you do this, you get a 6 two times. Based on this experiment, the probability of getting a 6 is 2 out of 10, or 1/5. To get more accuracy, repeat the experiment more times. It is easiest to put the results in a table, where n represents the number of times the experiment is repeated. When you put the number of 6s found over the number of times you repeat the experiment, this is the relative frequency.

    n Number of 6s Relative Frequency
    10 2 0.2
    50 6 0.12
    100 18 0.18
    500 81 0.162
    1000 163 0.163
    Table \(\PageIndex{1}\): Trials for Die Experiment

    Notice that as n increases, the relative frequency seems to approach a number; it looks like it is approaching 0.163. You can say that the probability of getting a 6 is approximately 0.163. If you want more accuracy, then increase n even more.
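
    You can imitate this experiment on a computer instead of rolling a physical die. The sketch below (in Python; it is an illustration, not part of the original text) simulates n rolls of a fair die for several values of n and prints the relative frequency of 6s. Because the simulated die is fair, the relative frequencies settle near 1/6 ≈ 0.167 as n grows; the physical die in the table above settled near 0.163.

    # Simulate rolling a fair die n times and record the relative frequency of 6s.
    import random

    random.seed(1)  # fix the seed so the run is reproducible

    for n in [10, 50, 100, 500, 1000, 100000]:
        rolls = [random.randint(1, 6) for _ in range(n)]   # each roll is 1 through 6
        sixes = rolls.count(6)
        print(f"n = {n:6d}   sixes = {sixes:6d}   relative frequency = {sixes / n:.3f}")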

    These probabilities are called experimental (or empirical) probabilities, since they are found by actually doing the experiment. They come about from the relative frequencies and give an approximation of the true probability. The approximate probability of an event A, P(A), is

    Definition \(\PageIndex{6}\)

    Experimental Probabilities

    \(P(A)=\dfrac{\text { number of times } A \text { occurs }}{\text { number of times the experiment was repeated }}\)

    For the event of getting a 6, the probability would be \(\dfrac{163}{1000}=0.163\).
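
    The formula translates directly into a short calculation. The helper function below is just an illustration (its name is not from the original text): it divides the number of times A occurred by the number of times the experiment was repeated.

    def experimental_probability(times_a_occurred, times_repeated):
        # P(A) = (number of times A occurs) / (number of times the experiment was repeated)
        return times_a_occurred / times_repeated

    # The event "roll a 6" from Table 1: 163 sixes out of 1000 rolls.
    print(experimental_probability(163, 1000))  # prints 0.163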

    You must use experimental probabilities whenever it is not possible to calculate probabilities by other means. For example, to find the probability that a family has 5 children, you would have to look at many families and count how many of them have 5 children; then you could calculate the probability. Another example is figuring out whether a die is fair: you would have to roll the die many times and count how often each side comes up (see the sketch after the next definition). Make sure you repeat an experiment many times, because otherwise you will not be able to estimate the true probability. This is due to the law of large numbers.

    Definition \(\PageIndex{7}\)

    Law of large numbers: as n increases, the relative frequency tends towards the actual probability value.
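
    The fair-die check described above can be carried out the same way: roll (or simulate) the die many times, count how often each face appears, and see whether all six relative frequencies settle near 1/6, as the law of large numbers predicts for a fair die. The sketch below (Python, illustrative only) simulates a fair die; with a physical die you would use the recorded rolls instead.

    # Count how often each face comes up and compare to the fair-die value of 1/6.
    import random
    from collections import Counter

    random.seed(2)   # fix the seed so the run is reproducible
    n = 60000        # a large number of simulated rolls
    counts = Counter(random.randint(1, 6) for _ in range(n))

    for face in range(1, 7):
        print(f"face {face}: relative frequency = {counts[face] / n:.3f} "
              f"(a fair die expects {1 / 6:.3f})")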

    Note

    Probability, relative frequency, percentage, and proportion are all different words for the same concept. Also, probabilities can be given as percentages, decimals, or fractions.

    Homework

    Exercise \(\PageIndex{1}\)

    1. Table \(\PageIndex{2}\) contains the number of M&M’s of each color that were found in a case (Madison, 2013). Find the probability of choosing each color based on this experiment.
      Blue Brown Green Orange Red Yellow Total
      481 371 483 544 372 369 2620
      Table \(\PageIndex{2}\): M&M Distribution
    2. Eyeglassomatic manufactures eyeglasses for different retailers. They tested to see how many defective lenses they made during the time period of January 1 to March 31. Table \(\PageIndex{3}\) gives the defect type and the number of defects. Find the probability of each defect type based on this data.
      Defect type Number of defects
      Scratch 5865
      Right shaped - small 4613
      Flaked 1992
      Wrong axis 1838
      Chamfer wrong 1596
      Crazing, cracks 1546
      Wrong shape 1485
      Wrong PD 1398
      Spots and bubbles 1371
      Wrong height 1130
      Right shape - big 1105
      Lost in lab 976
      Spots/bubble - intern 976
      Table \(\PageIndex{3}\): Number of Defective Lenses
    3. In Australia in 1995, of the 2907 indigenous people in prison 17 of them died. In that same year, of the 14501 non-indigenous people in prison 42 of them died ("Aboriginal deaths in," 2013). Find the probability that an indigenous person dies in prison and the probability that a non-indigenous person dies in prison. Compare these numbers and discuss what the numbers may mean.
    4. A project conducted by the Australian Federal Office of Road Safety asked people many questions about their cars. One question was the reason that a person chooses a given car, and that data is in Table \(\PageIndex{4}\) ("Car preferences," 2013). Find the probability a person chooses a car for each of the given reasons.
      Safety Reliability Cost Performance Comfort Looks
      84 62 46 34 47 27
      Table \(\PageIndex{4}\): Reason for Choosing a Car
    Answer

    1. P(blue) = 0.184, P(brown) = 0.142, P(green) = 0.184, P(orange) = 0.208, P(red) = 0.142, P(yellow) = 0.141 (checked in the sketch below)

    3. P(indigenous person dies) = 0.0058, P(non-indigenous person dies) = 0.0029, see solutions
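
    The values in answer 1 come straight from the experimental probability formula: each color count in Table \(\PageIndex{2}\) is divided by the total of 2620 candies. A short check (illustrative Python, not part of the original text):

    # Divide each color count by the total number of candies in the case.
    counts = {"blue": 481, "brown": 371, "green": 483, "orange": 544, "red": 372, "yellow": 369}
    total = sum(counts.values())  # 2620 candies in the case

    for color, count in counts.items():
        print(f"P({color}) = {count}/{total} = {count / total:.3f}")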

