15.3: Hypothesis Testing- Slope to ANOVAs

    In regression, we are interested in predicting \(Y\) scores and explaining variance using a line, the slope of which is what allows us to get closer to our observed scores than the mean of \(Y\) can. Thus, our hypotheses can concern the slope of the line, which is estimated in the prediction equation by \(b\).

    Research Hypothesis

    Specifically, we want to test that the slope is not zero.  The research hypothesis will be that there is an explanatory relation between the variables.

    • RH: \(\beta>0\ \)
    • RH: \(\beta<0\ \)
    • RH: \(\beta \neq 0\ \)

    A non-zero slope indicates that we can explain values in \(Y\) based on \(X\) and therefore predict future values of \(Y\) based on \(X\).

    Null Hypothesis

Thus, the null hypothesis is that the slope is zero, meaning that there is no explanatory relation between our variables:

    \[\text{Null Hypothesis}: \beta=0 \nonumber \]

Regression Uses an ANOVA Summary Table

Did you notice that we don't have a test statistic yet (like \(t\), the \(F\) of ANOVA, or Pearson's \(r\))?  To test the null hypothesis, we use the \(F\) statistic of ANOVA from an ANOVA Summary Table, compared to a critical value from the \(F\) distribution table.

    Our ANOVA table in regression follows the exact same format as it did for ANOVA (Table \(\PageIndex{1}\)). Our top row is our observed effect, our middle row is our error, and our bottom row is our total. The columns take on the same interpretations as well: from left to right, we have our sums of squares, our degrees of freedom, our mean squares, and our \(F\) statistic.

    Table \(\PageIndex{1}\): ANOVA Table for Regression
    Source \(SS\) \(df\) \(MS\) \(F\)
    Model \(\sum(\widehat{Y}-\overline{Y})^{2}\) 1 \(SS_M / df_M\) \(MS_M / MS_E\)
    Error \(\sum(Y-\widehat{Y})^{2}\) \(N-2\) \(SS_E/ df_E\) N/A
    Total \(\sum(Y-\overline{Y})^{2}\) \(N-1\) N/A N/A

    As with ANOVA, getting the values for the \(SS\) column is a straightforward but somewhat arduous process. First, you take the raw scores of \(X\) and \(Y\) and calculate the means, variances, and covariance using the sum of products table introduced in our chapter on correlations. Next, you use the variance of \(X\) and the covariance of \(X\) and \(Y\) to calculate the slope of the line, \(b\), the formula for which is given above. After that, you use the means and the slope to find the intercept, \(a\), which is given alongside \(b\). After that, you use the full prediction equation for the line of best fit to get predicted \(Y\) scores (\(\widehat{Y}\)) for each person. Finally, you use the observed \(Y\) scores, predicted \(Y\) scores, and mean of \(Y\) to find the appropriate deviation scores for each person for each sum of squares source in the table and sum them to get the Sum of Squares Model, Sum of Squares Error, and Sum of Squares Total. As with ANOVA, you won’t be required to compute the \(SS\) values by hand, but you will need to know what they represent and how they fit together.
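
    To make those steps concrete, here is a minimal sketch in Python (using NumPy). The \(X\) and \(Y\) scores are hypothetical values invented only for illustration; they do not come from the text.

```python
import numpy as np

# Hypothetical raw scores for the predictor (X) and criterion (Y).
X = np.array([2.0, 4.0, 5.0, 7.0, 8.0, 10.0])
Y = np.array([3.0, 5.0, 4.0, 8.0, 9.0, 11.0])

# Slope from the covariance of X and Y over the variance of X, then the intercept.
b = np.cov(X, Y, ddof=1)[0, 1] / np.var(X, ddof=1)
a = Y.mean() - b * X.mean()

# Predicted Y scores from the line of best fit.
Y_hat = a + b * X

# Deviation scores, squared and summed, for each source in the ANOVA table.
SS_model = np.sum((Y_hat - Y.mean()) ** 2)   # Sum of Squares Model
SS_error = np.sum((Y - Y_hat) ** 2)          # Sum of Squares Error
SS_total = np.sum((Y - Y.mean()) ** 2)       # Sum of Squares Total

print(SS_model, SS_error, SS_total)          # SS_model + SS_error equals SS_total
```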

The other columns in the ANOVA table are all familiar. The degrees of freedom column still has \(N – 1\) for our total, but now we have \(N – 2\) for our error degrees of freedom and 1 for our model degrees of freedom; this is because simple linear regression only has one predictor, so our degrees of freedom for the model is always 1 and does not change. The total degrees of freedom must still be the sum of the other two, so our degrees of freedom error will always be \(N – 2\) for simple linear regression. The mean square columns are still the \(SS\) column divided by the \(df\) column, and the test statistic \(F\) is still the ratio of the mean squares. Based on this, it is now explicitly clear that not only do regression and ANOVA have the same goal, but they are, in fact, the same analysis entirely. The only difference is the type of data we have for the IV (predictor): a quantitative variable for regression and groups (a qualitative variable) for ANOVA.  The DV is quantitative for both ANOVAs and regressions/correlations.
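
    Continuing the sketch above (and still assuming the same hypothetical scores), the remaining columns fall out directly from the sums of squares and the sample size:

```python
# Degrees of freedom: always 1 for the model (one predictor), N - 2 for error, N - 1 total.
N = len(Y)
df_model, df_error, df_total = 1, N - 2, N - 1

# Mean squares are each SS divided by its df; the F statistic is the ratio of the mean squares.
MS_model = SS_model / df_model
MS_error = SS_error / df_error
F = MS_model / MS_error
print(F)
```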

With a completed ANOVA Table, we follow the same process of null hypothesis significance testing: we compare our calculated F-score to a critical F-score to determine whether we retain or reject the null hypothesis.  In ANOVA, the null hypothesis was that all of the means were similar, but with correlations (on which regression is based), the null hypothesis says that there is no linear relationship.  However, what we are really testing is how much variability in the criterion variable (\(Y\)) can be explained by variation in the predictor variable (\(X\)). So, for regression using ANOVA, the null hypothesis is saying that the predictor variable does not explain variation in the criterion variable.
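
    As a sketch of that decision step (the text itself uses a printed critical value table, so this is only an alternative), the critical value can be obtained from scipy.stats; the alpha of .05 is an assumed significance level:

```python
from scipy import stats

alpha = 0.05                                          # assumed significance level
F_crit = stats.f.ppf(1 - alpha, df_model, df_error)   # critical value of the F distribution

# Reject the null hypothesis when the calculated F exceeds the critical F.
if F > F_crit:
    print("Reject the null: the predictor explains variation in the criterion.")
else:
    print("Retain the null: no evidence the predictor explains variation in the criterion.")
```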

    This is a little confusing, so let's take a look at an example of regression in action.

    Contributors and Attributions


    This page titled 15.3: Hypothesis Testing- Slope to ANOVAs is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by Michelle Oja.