
14.7: College Sports


    We will follow the same four-step procedure we have used since Chapter 7.

    Step 1: State the Hypotheses

    Our null hypothesis of no difference will state that there is no relation between our variables, and our alternative will state that our variables are related:

    \[\mathrm{H}_{0}: \text { College choice criteria is independent of college sports viewership as a child } \nonumber \]

    \[\mathrm{H}_{\mathrm{A}}: \text { College choice criteria is related to college sports viewership as a child } \nonumber \]

    Step 2: Find the Critical Value

    Our critical value will come from the same table that we used for the goodness-of-fit test, but our degrees of freedom will change. Because we now have rows and columns (instead of just columns) our new degrees of freedom use information on both:

    \[df=(R-1)(C-1)\]

    In our example:

    \[df=(2-1)(3-1)=1 \times 2=2 \nonumber \]

    Based on our 2 degrees of freedom, our critical value from our table is 5.991.
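    The critical value above can be verified with a few lines of code. As a sketch: for exactly 2 degrees of freedom, the \(\chi^2\) distribution happens to reduce to an exponential distribution with mean 2, so the critical value has a simple closed form (in general, a statistical library such as `scipy.stats.chi2.ppf` would be used instead).

    ```python
    import math

    # For df = 2 only, the chi-square distribution is an exponential
    # distribution with mean 2, so its CDF is F(x) = 1 - exp(-x / 2).
    # Inverting that CDF at the 95th percentile gives the critical value.
    alpha = 0.05
    critical_value = -2 * math.log(alpha)

    print(round(critical_value, 3))  # 5.991
    ```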

    Step 3: Calculate the Test Statistic

    The same formula for \(\chi^{2}\) is used once again:

    \[\chi^{2}=\sum \dfrac{(O-\mathrm{E})^{2}}{\mathrm{E}} \]

    \[\begin{aligned} \chi^{2} &=\dfrac{(47-35.21)^{2}}{35.21}+\dfrac{(26-25.38)^{2}}{25.38}+\dfrac{(14-26.41)^{2}}{26.41}+ \dfrac{(21-32.79)^{2}}{32.79}+\dfrac{(23-23.62)^{2}}{23.62}+\dfrac{(37-24.59)^{2}}{24.59} \\ &=20.31 \end{aligned} \nonumber \]
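    The sum above can be checked with a short script. This is a sketch that assumes the six observed counts come from a 2×3 contingency table laid out as below; the row/column arrangement is reconstructed from the expected frequencies in the worked equation and is not stated explicitly on this page.

    ```python
    # Observed 2x3 contingency table (assumed layout: rows = watched college
    # sports as a child yes/no, columns = college choice criteria).  The six
    # counts come from the worked chi-square equation above.
    observed = [[47, 26, 14],
                [21, 23, 37]]

    row_totals = [sum(row) for row in observed]        # marginal row totals
    col_totals = [sum(col) for col in zip(*observed)]  # marginal column totals
    n = sum(row_totals)                                # grand total, 168

    # Expected frequency for each cell: (row total * column total) / N,
    # then accumulate (O - E)^2 / E over all six cells.
    chi_sq = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi_sq += (obs - expected) ** 2 / expected

    print(round(chi_sq, 2))  # 20.31
    ```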

    Step 4: Make the Decision

    The final decision for our test of independence is still based on our observed value (20.31) and our critical value (5.991). Because our observed value is greater than our critical value, we can reject the null hypothesis.

    Reject \(H_0\). Based on our data from 168 people, we can say that there is a statistically significant relation between whether or not someone watched college sports while growing up and how much a college’s sports teams factor into that person’s decision about which college to attend, \(\chi^{2}(2)=20.31, p<0.05\).
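    The reported \(p<0.05\) can also be checked directly. As with the critical value, this sketch relies on the df = 2 special case, where the chi-square survival function (the probability of a value at least this large) reduces to \(e^{-x/2}\):

    ```python
    import math

    # For df = 2 only, P(chi-square >= x) = exp(-x / 2), so the exact
    # p-value for our obtained statistic is one line of arithmetic.
    chi_sq = 20.31
    p_value = math.exp(-chi_sq / 2)

    print(p_value < 0.05)  # True; p is on the order of 4e-5
    ```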

    Effect Size for \(\chi^{2}\)

    Like all other significance tests, \(\chi^{2}\) tests – both goodness-of-fit and tests for independence – have effect sizes that can and should be calculated for statistically significant results. There are many options for which effect size to use, and the ultimate decision is based on the type of data, the structure of your frequency or contingency table, and the types of conclusions you would like to draw. For the purpose of our introductory course, we will focus only on a single effect size that is simple and flexible: Cramer’s \(V\).

    Cramer’s \(V\) is a type of correlation coefficient that can be computed on categorical data. Like any other correlation coefficient (e.g., Pearson’s \(r\)), the cutoffs for small, medium, and large effect sizes of Cramer’s \(V\) are 0.10, 0.30, and 0.50, respectively. The calculation of Cramer’s \(V\) is very simple:

    \[V=\sqrt{\dfrac{\chi^{2}}{N(k-1)}} \]

    For this calculation, \(k\) is the smaller value of either \(R\) (the number of rows) or \(C\) (the number of columns). The numerator is simply the test statistic we calculate during step 3 of the hypothesis testing procedure. For our example, we had 2 rows and 3 columns, so \(k = 2\):

    \[V=\sqrt{\dfrac{\chi^{2}}{N(k-1)}}=\sqrt{\dfrac{20.31}{168(2-1)}}=\sqrt{0.12}=0.35 \nonumber \]
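    The Cramer’s \(V\) calculation translates directly into code. A minimal sketch using the values from this example:

    ```python
    import math

    # Cramer's V: V = sqrt(chi^2 / (N * (k - 1))), where k is the smaller
    # of the number of rows (R) and columns (C) in the contingency table.
    chi_sq = 20.31
    n = 168
    k = 2  # min(R, C) for our 2-row, 3-column table

    v = math.sqrt(chi_sq / (n * (k - 1)))
    print(round(v, 2))  # 0.35, a medium-to-large effect
    ```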

    So the statistically significant relation between our variables was moderately strong.


    This page titled 14.7: College Sports is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Foster et al. (University of Missouri’s Affordable and Open Access Educational Resources Initiative) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
