Statistics LibreTexts

7.5: Odds



    In some research, probabilities are computed in terms of an equivalent measure called odds, which relates the probability that an outcome occurs to the probability that it does not occur. Odds may be familiar to those who enjoy playing games of chance. For example, the probability of not rolling a double six when rolling two fair six-sided dice is \(35/36=0.9722\). In odds, this same probability is said to be 35 to 1, meaning that the probability of not rolling a double six is 35 times that of rolling a double six, which conveys the idea that a double six will not be rolled very frequently. Indeed, the probability of rolling a double six is \(1/36\), so that

    \[\text{probability of not rolling a double 6} = \frac{35}{36} = 35\times\frac{1}{36} = 35\times\text{probability of rolling a double 6}. \nonumber\]

    When there is a total of \(n\) outcomes that are equally likely to occur, and \(m\) of those outcomes correspond to an outcome we are interested in, then the odds of that outcome are said to be \(m\) to \(n-m\), where \(m\leq n\). In the case of rolling two six-sided dice, there are 36 possible outcomes that all have an equal chance of occurring, so that \(n=36\) (Table 7.2). Of these, \(m=35\) of the outcomes correspond to not rolling a double six. Hence the odds of not rolling a double six are \(m=35\) to \(n-m=36-35=1\).

    Table 7.2. The 36 possible outcomes from rolling two six-sided dice, in this case a blue die and a red die. For each pair \((a,b)\) shown in the table, the value of \(a\) reflects the observed outcome of the blue die while the value of \(b\) reflects the observed outcome of the red die.

       

                  Red Die
                  1      2      3      4      5      6
    Blue Die   1  (1,1)  (1,2)  (1,3)  (1,4)  (1,5)  (1,6)
               2  (2,1)  (2,2)  (2,3)  (2,4)  (2,5)  (2,6)
               3  (3,1)  (3,2)  (3,3)  (3,4)  (3,5)  (3,6)
               4  (4,1)  (4,2)  (4,3)  (4,4)  (4,5)  (4,6)
               5  (5,1)  (5,2)  (5,3)  (5,4)  (5,5)  (5,6)
               6  (6,1)  (6,2)  (6,3)  (6,4)  (6,5)  (6,6)
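The count-based definition of odds can be made concrete in a few lines of Python. This is an illustrative sketch, not part of the original text; the function name `odds_from_counts` is our own:

```python
from fractions import Fraction

def odds_from_counts(m, n):
    """Among n equally likely outcomes, m of which are of interest,
    the odds of the outcome of interest are m to n - m."""
    return (m, n - m)

# Not rolling a double six: m = 35 of the n = 36 outcomes in Table 7.2.
print(odds_from_counts(35, 36))   # (35, 1), read as odds of 35 to 1
print(Fraction(35, 36))           # the corresponding probability, 35/36
```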

    If we know that the probability of an outcome from a random experiment is \(p\), it follows that the probability that the outcome does not occur is \(1-p\). Therefore, if we know the probability of an outcome, we can compute the odds, because we also know the probability that the outcome does not occur. Recall that an outcome with probability \(p\) is like rolling a one using a fair die with \(1/p\) sides. Hence, we can take \(m=1\) and \(n\) to be approximately equal to \(1/p\), so that \(n-m\) is approximately \(1/p-1\).

    Theorem \(\PageIndex{1}\)

    If an outcome from a random experiment has probability \(p\), then the odds of the outcome are 1 to \(1/p-1\).

    Consider flipping a fair coin. We know that the probability of flipping heads is \(p=1/2\), so that \(1/p-1=2-1=1\). Therefore, the odds of flipping heads are 1 to 1. This means that the probability of flipping heads is equal to that of not flipping heads, that is, of flipping tails. Because of this, the odds of flipping tails are also 1 to 1.

    Earlier we considered a case where seven equally qualified individuals apply for a job. Three of the applicants are Hispanic, two are African American, and two are white, and the two white individuals are hired. In that example we calculated that the probability that the two white candidates would be selected is \(p=1/21\), and therefore the odds that the white candidates are selected are 1 to 20. In the second example, based on selecting two from a pool of twenty-five applicants, the probability was \(p=1/300\), and therefore the odds that the two white candidates are selected are 1 to 299.

    We can also convert odds back to a probability. If the odds of an outcome are \(m\) to \(n-m\), then the probability that the outcome occurs is \(m/n\). For the simpler cases where one of the numbers in the odds is 1, the formula for computing the probability simplifies. For example, if the odds of an outcome are 1 to \(n-1\), then the probability of the outcome is \(1/n\), whereas if the odds of an outcome are \(m\) to 1, then the probability of the outcome is \(m/(m+1)\).

    Theorem \(\PageIndex{2}\)

    If the odds of an outcome are \(m\) to \(n-m\), the probability that the outcome occurs is \(m/n\).

    Once again consider flipping a fair coin. We know that the odds of flipping heads are 1 to 1, so that \(m=1\) and \(n-m=n-1=1\); hence \(n=2\), and the probability of flipping heads is \(m/n=1/2\).
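The two theorems are a pair of inverse conversions, which can be sketched in Python using the `fractions` module to keep the arithmetic exact. The function names below are ours, chosen for illustration:

```python
from fractions import Fraction

def odds_from_probability(p):
    """Theorem 1: an outcome with probability p has odds 1 to 1/p - 1."""
    return (1, Fraction(1, 1) / p - 1)

def probability_from_odds(m, n_minus_m):
    """Theorem 2: odds of m to n - m give probability m/n, where n = m + (n - m)."""
    return Fraction(m, m + n_minus_m)

print(odds_from_probability(Fraction(1, 2)))    # odds of 1 to 1 for a fair coin
print(odds_from_probability(Fraction(1, 21)))   # odds of 1 to 20 for the hiring example
print(probability_from_odds(1, 1))              # recovers the coin probability 1/2
```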

    A roulette wheel is a gambling device that consists of a horizontal wheel that has numbered and colored slots around its outer edge (Figure \(\PageIndex{1}\)). The wheel is spun, and a ball is dropped in the middle. Eventually the ball settles in one of the slots. If the wheel is fair, each slot should have an equal chance of being selected on each spin of the wheel. Roulette wheels may differ by the number of slots they contain. Our discussion here will focus on the type of roulette wheel pictured in Figure \(\PageIndex{1}\), which has slots numbered 0–36. The slot numbered 0 is colored green, while the remaining slots are colored black and red; there are 18 red slots and 18 black slots. Players can bet on a single number, groups of numbers, or colors. The odds of winning when betting on 0 are 1 to 36. Therefore, \(m=1\) and \(n-m=36\), which means that \(n=37\). It follows that the probability of winning when betting on 0 is \(m/n=1/37\). Betting on a split refers to betting on any two numbers that are next to one another on the betting table (Figure \(\PageIndex{2}\)). For example, you could bet on 1 and 2, or 7 and 10. The odds of winning when betting on a split are 2 to 35. Therefore, \(m=2\) and \(n-m=35\), which means that \(n=37\). It follows that the probability of winning when betting on a split is \(m/n=2/37\).

    Figure \(\PageIndex{1}\): A roulette wheel (public domain photograph created by Alan M. Polansky).
    Figure \(\PageIndex{2}\): A roulette betting table (public domain photograph created by Alan M. Polansky).

    Odds are easiest to interpret when one of the numbers is a 1. For example, 2 to 1 odds means that the probability that the event occurs is twice that of it not occurring; in terms of probabilities, the probability that the event occurs is \(2/3\) and the probability that the event does not occur is \(1/3\). Similarly, 1 to 2 odds means that the probability that the event does not occur is twice the probability that it does occur; in terms of probabilities, the probability that the event occurs is \(1/3\) and the probability that the event does not occur is \(2/3\). While odds are often reported in this way, there are some instances where neither of the numbers is a 1. The meaning of the odds is still the same, but the interpretation may require some additional thought. For instance, we may have a situation where the probability of an event occurring is \(2/5\), and thus the probability that it does not occur is \(3/5\). This gives odds of 2 to 3, which means that three times the probability that the event occurs equals two times the probability that the event does not occur. That is,

    \[ 3\times\text{probability that event occurs}=3\times\frac{2}{5}=\frac{6}{5}=2\times\frac{3}{5}=2\times\text{probability that event does not occur} \]

    While more complicated to interpret, there are two guidelines that can be helpful with these types of odds. The first is that the smaller number is associated with the smaller probability. In this example the smaller number is the first number, which is associated with the probability that the event occurs. Therefore, we know that the probability that the event occurs is smaller than the probability that it does not occur. The second guideline is that the closer the two numbers are to one another, the closer the two probabilities are to one another. Remember that 1 to 1 odds means that the probability that the event occurs is equal to the probability that it does not occur, which corresponds to a probability of \(1/2\). Hence, for 2 to 3 odds, the probability that the event occurs is much closer to \(1/2\) than if the odds were 2 to 10.

    In medical studies, odds are often used to describe the probability that an individual develops a disease or condition when they have been exposed to certain circumstances. In these studies, it is often important to compare the odds of two sets of circumstances to show how risk changes under different conditions. However, comparing odds directly is difficult unless both odds start with a 1. For example, if one outcome has 1 to 3 odds and a different outcome has 1 to 10 odds, clearly the outcome with 1 to 10 odds happens less often than the outcome with 1 to 3 odds. In fact, we can use what we developed above to see that 1 to 3 odds corresponds to a probability of \(1/4\), whereas 1 to 10 odds corresponds to a probability of \(1/11\). If the odds are a bit different, say 3 to 5 odds compared to 4 to 7 odds, the comparison is much more difficult without converting both odds to probabilities. For this example, 3 to 5 odds corresponds to the probability of \(3/8=0.375\), and 4 to 7 odds corresponds to the probability \(4/11\approx 0.3636\), so that the second outcome has a slightly smaller probability.

    To make comparing odds easier, an odds ratio is used. Suppose that the odds of one outcome are \(m\) to \(n-m\), and the odds of another outcome are \(k\) to \(l-k\). For each set of odds, compute the ratio of the probability that the outcome occurs to the probability that it does not occur. For the odds above, these ratios are \(m/(n-m)\) and \(k/(l-k)\). The odds ratio is then the ratio of the first of these to the second.

    Definition: Odds Ratio

    If the odds of one outcome are \(m\) to \(n-m\) and the odds of another outcome are \(k\) to \(l-k\), then the odds ratio of the first outcome compared to the second outcome is

    \[ \text{odds ratio} = \frac{m/(n-m)}{k/(l-k)} = \frac{m(l-k)}{k(n-m)}. \]

    If the odds ratio is less than 1, we can conclude that the odds of the first outcome are lower than the odds of the second outcome. Similarly, if the odds ratio is greater than 1, we can conclude that the odds of the first outcome are greater than the odds of the second. If the odds ratio is equal to 1, the odds are the same. In the first example above, we compared 1 to 3 odds to 1 to 10 odds, where we have \(m=1\), \(n=4\), \(k=1\), and \(l=11\). The corresponding odds ratio is:

    \[ \text{odds ratio} = \frac{m(l-k)}{k(n-m)} = \frac{1\times 10}{1\times 3} = \frac{10}{3} \approx 3.33, \]

    which indicates that the odds of the first outcome are greater than the odds of the second outcome, as we concluded above. Moreover, the odds ratio tells us that the odds of the first outcome are a little more than three times the odds of the second outcome.
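The definition translates directly into code. The short Python sketch below (the function name is ours) reproduces the \(10/3\) computation exactly:

```python
from fractions import Fraction

def odds_ratio(m, n, k, l):
    """Odds ratio of an outcome with odds m to n-m
    against an outcome with odds k to l-k."""
    return Fraction(m * (l - k), k * (n - m))

# Comparing 1-to-3 odds with 1-to-10 odds: m = 1, n = 4, k = 1, l = 11.
r = odds_ratio(1, 4, 1, 11)
print(r)          # 10/3
print(float(r))   # approximately 3.33
```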

    Let us now consider a real-world application of odds ratios. In the early 2000s, the tobacco industry promoted the use of electronic cigarettes to replace traditional cigarettes as a safer delivery device for nicotine. The idea was that the aerosolized liquid used in such products does not involve the combustion of tobacco and contains smaller amounts of toxic substances than tobacco smoke (Sapru et al. 2020; Collins et al. 2019). The use of tobacco products is most often associated with lung cancer, and it has been of interest to study the effect of introducing electronic cigarettes to someone who still smokes traditional cigarettes. One case-control study compared cigarette smoking and use of electronic cigarettes among 4,975 cases with lung cancer to 27,294 control subjects without cancer (Bittoni et al. 2024). Statistical techniques were employed so that the data from the case-control study could be used to compare the odds of developing lung cancer for those who both smoked and used electronic cigarettes with the odds for those who only smoked. The odds ratios reported from this study were adjusted for gender, age, and race. The researchers reported that the odds ratio for developing lung cancer for chronic smokers who also used vaping products compared to those who only smoked is 13.9. This indicates that the odds of developing lung cancer when combining vaping with chronic smoking are almost 14 times the odds of developing lung cancer when smoking alone. The researchers concluded that these results suggest the addition of vaping to smoking accelerates the risk of developing lung cancer.

    As shown in the example above, odds ratios are often used in medical studies to compare the risks associated with two conditions, usually a treatment and some type of control. The data from these studies are often reported in a \(2\times 2\) table that includes the number of cases where an associated outcome such as a disease is present, and the number of cases where the outcome is not observed, for the treatment group and the control group. An example of this type of table is shown in Table 7.3, where the numbers of cases are represented by \(a\), \(b\), \(c\), and \(d\). In Table 7.3, \(a\) represents the number of cases in the treatment group that developed the disease, \(b\) represents the number of cases in the treatment group that did not develop the disease, \(c\) represents the number of cases in the control group that developed the disease, and \(d\) represents the number of cases in the control group that did not develop the disease. The odds ratio for these data can be computed as:

    \[\text{odds ratio}=\frac{a\times d}{b\times c}.\]

    Table 7.3 Data for comparing the development of a disease for a treatment and control group.

     

                  Disease Developed
                  Yes        No
    Treatment     \(a\)      \(b\)
    Control       \(c\)      \(d\)

    For example, consider the data given in Table 7.4, which comes from a study of taking vitamin C and its effect on the common cold (Anderson et al. 1972). In this study, 818 subjects were divided randomly into a group of 407 subjects who took vitamin C over a period of time, and another group of 411 subjects who took a placebo over the same period. Based on the data given in Table 7.4, the odds ratio is \((302\times 76)/(105\times 335)\approx 0.6525\). This indicates that the odds of developing a cold-related illness while taking vitamin C are less than the odds of developing a cold-related illness when not taking vitamin C, providing evidence that vitamin C may reduce the chance of developing a cold-related illness.

    Table 7.4 Data comparing the occurrence of illness for subjects taking Vitamin C against those who took a placebo (Anderson et al. 1972).

     

                  Illness
                  Yes      No
    Vitamin C     302      105
    Placebo       335      76
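As a quick check of the vitamin C example, the \(2\times 2\) odds ratio formula can be applied directly to the entries of Table 7.4. The sketch below is illustrative; the function name is ours:

```python
def odds_ratio_2x2(a, b, c, d):
    """Odds ratio (a*d)/(b*c) for a 2x2 treatment/control table,
    where rows are treatment/control and columns are disease yes/no."""
    return (a * d) / (b * c)

# Table 7.4: a = 302, b = 105, c = 335, d = 76.
print(round(odds_ratio_2x2(302, 105, 335, 76), 4))   # 0.6525
```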


    This page titled 7.5: Odds is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by .
