10.2.1: Prediction


    Recall the third exam/final exam example. We examined the scatter plot and showed that the correlation coefficient is significant. We found the equation of the best-fit line for the final exam grade as a function of the grade on the third exam. We can now use the least-squares regression line for prediction.

    Suppose you want to estimate, or predict, the mean final exam score of statistics students who received 73 on the third exam. The exam scores (\(x\)-values) range from 65 to 75. Since 73 is between the \(x\)-values 65 and 75, substitute \(x = 73\) into the equation. Then:

    \[\hat{y} = -173.51 + 4.83(73) = 179.08\nonumber \]

    We predict that statistics students who earn a grade of 73 on the third exam will earn a grade of 179.08 on the final exam, on average.
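
    To make this calculation concrete, here is a minimal Python sketch using the intercept and slope from the best-fit line above. The function name predict_final is ours, introduced for illustration, not part of the original example.

        # Minimal sketch of the prediction above, in Python.
        # The coefficients come from the example's best-fit line.
        def predict_final(third_exam_score):
            """Predicted final exam score from the least-squares line."""
            intercept, slope = -173.51, 4.83
            return intercept + slope * third_exam_score

        print(round(predict_final(73), 2))  # 179.08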

    Example \(\PageIndex{1}\)

    Recall the third exam/final exam example.

    1. What would you predict the final exam score to be for a student who scored a 66 on the third exam?
    2. What would you predict the final exam score to be for a student who scored a 90 on the third exam?


    Answer

    a. \(\hat{y} = -173.51 + 4.83(66) = 145.27\)

    b. The \(x\)-values in the data are between 65 and 75. Ninety is outside the domain of the observed \(x\)-values (the independent variable), so you cannot reliably predict the final exam score for this student. (Even though it is possible to substitute 90 into the equation and calculate a corresponding \(\hat{y}\) value, that value will not be reliable.)

    To see just how unreliable a prediction can be outside the range of the observed \(x\)-values, substitute \(x = 90\) into the equation:

    \[\hat{y} = -173.51 + 4.83(90) = 261.19\nonumber \]

    The predicted final exam score is 261.19, but the highest possible score on the final exam is 200.

    The process of predicting within the range of the observed \(x\)-values is called interpolation. The process of predicting outside the range of the observed \(x\)-values is called extrapolation.
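
    Because extrapolation is unreliable, one simple safeguard is to check a new \(x\)-value against the observed range before predicting. The sketch below assumes the observed range of 65 to 75 from this example; safe_predict is a hypothetical helper, not from the text.

        # Sketch: only predict for x-values inside the observed range.
        X_MIN, X_MAX = 65, 75  # range of third exam scores in the data

        def safe_predict(third_exam_score):
            if not (X_MIN <= third_exam_score <= X_MAX):
                raise ValueError(
                    f"x = {third_exam_score} is outside [{X_MIN}, {X_MAX}]; "
                    "extrapolation is unreliable."
                )
            return -173.51 + 4.83 * third_exam_score

        print(round(safe_predict(66), 2))  # interpolation: 145.27
        safe_predict(90)                   # extrapolation: raises ValueError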

    Exercise \(\PageIndex{1}\)

    Data are collected on the relationship between the number of hours per week practicing a musical instrument and scores on a math test. The line of best fit is as follows:

    \[\hat{y} = 72.5 + 2.8x \nonumber \]

    What would you predict the score on a math test would be for a student who practices a musical instrument for five hours a week?
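
    Answer

    \[\hat{y} = 72.5 + 2.8(5) = 86.5 \nonumber \]

    We predict a math test score of 86.5 for a student who practices a musical instrument for five hours a week.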




    Once you have determined that the correlation coefficient is significant and calculated the line of best fit, you can use the least-squares regression line to make predictions about your data.

    If your hypothesis test does not show that the correlation coefficient is significant, then you cannot use the line of best fit for prediction. Instead, the best predicted value for any specific \(x\) is \(\bar{y}\), the mean of the \(y\)-values in the original data set.
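
    The following is a sketch of that decision rule in Python, using scipy.stats.linregress. The data below are illustrative stand-ins rather than a dataset from the text, and the 0.05 significance level is an assumption.

        # Sketch: use the regression line only when the correlation is
        # significant; otherwise fall back to the mean of the y-values.
        import numpy as np
        from scipy import stats

        # Illustrative third exam (x) and final exam (y) scores.
        x = np.array([65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69])
        y = np.array([175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159])

        result = stats.linregress(x, y)  # slope, intercept, rvalue, pvalue, ...

        def best_prediction(x_new):
            if result.pvalue < 0.05:  # correlation is significant
                return result.intercept + result.slope * x_new
            return y.mean()           # otherwise the best prediction is y-bar

        print(round(best_prediction(73), 2))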



    Contributors and Attributions

    • Barbara Illowsky and Susan Dean (De Anza College) with many other contributing authors. Content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at

    This page titled 10.2.1: Prediction is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by OpenStax via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.