
13.3: Testing the Significance of the Correlation Coefficient


    The correlation coefficient, \(r\), tells us about the strength and direction of the linear relationship between \(X_1\) and \(X_2\).

The sample data are used to compute \(r\), the correlation coefficient for the sample. If we had data for the entire population, we could find the population correlation coefficient. But because we have only sample data, we cannot calculate the population correlation coefficient. The sample correlation coefficient, \(r\), is our estimate of the unknown population correlation coefficient.
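As a minimal sketch of computing the sample correlation coefficient, the Python example below uses NumPy's corrcoef; the data values are hypothetical and stand in for any paired sample.

```python
import numpy as np

# Hypothetical paired sample data (X1, X2); any two numeric arrays of equal length work.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x2 = np.array([2.1, 2.9, 3.2, 4.8, 5.1, 6.3])

# Sample correlation coefficient r: our estimate of the unknown population rho.
r = np.corrcoef(x1, x2)[0, 1]
n = len(x1)
print(f"r = {r:.4f} from n = {n} pairs")
```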

The hypothesis test lets us decide whether the value of the population correlation coefficient \(\rho\) is "close to zero" or "significantly different from zero". We decide this based on the sample correlation coefficient \(r\) and the sample size \(n\).

      If the test concludes that the correlation coefficient is significantly different from zero, we say that the correlation coefficient is "significant."

What the Hypotheses Mean in Words

Null hypothesis \(H_{0}: \rho=0\). The population correlation coefficient is not significantly different from zero; there is not a significant linear relationship between \(X_1\) and \(X_2\) in the population.

Alternate hypothesis \(H_{a}: \rho \neq 0\). The population correlation coefficient is significantly different from zero; there is a significant linear relationship between \(X_1\) and \(X_2\) in the population.

Drawing a Conclusion

There are two methods of making the decision concerning the hypothesis: the formal \(t\)-test and a quick shorthand rule, both presented below. The test statistic to test this hypothesis is:

          \[t_{c}=\frac{r}{\sqrt{\left(1-r^{2}\right) /(n-2)}}\nonumber\]

          \[t_{c}=\frac{r \sqrt{n-2}}{\sqrt{1-r^{2}}}\nonumber\]

Where the second formula is an equivalent form of the test statistic, \(n\) is the sample size, and the degrees of freedom are \(n-2\). This is a \(t\)-statistic and operates the same way as other \(t\)-tests. Calculate the \(t\)-value and compare it with the critical value from the \(t\)-table at the appropriate degrees of freedom and the level of significance you have chosen. If the calculated value is in the tail, reject the null hypothesis that there is no linear relationship between the two variables. If the calculated \(t\)-value is NOT in the tail, then we cannot reject the null hypothesis that there is no linear relationship between the two variables.
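A minimal Python sketch of this decision rule is shown below. It assumes SciPy is available; the values \(r = 0.45\) and \(n = 20\) are hypothetical, chosen only to illustrate the computation.

```python
import math
from scipy import stats

def correlation_t_test(r, n, alpha=0.05):
    """Test H0: rho = 0 against Ha: rho != 0 using
    t_c = r * sqrt(n - 2) / sqrt(1 - r**2) with n - 2 degrees of freedom."""
    df = n - 2
    t_c = r * math.sqrt(df) / math.sqrt(1 - r**2)
    t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-tailed critical value
    p_value = 2 * stats.t.sf(abs(t_c), df)    # two-tailed p-value
    return t_c, t_crit, p_value

# Hypothetical example: r = 0.45 from a sample of n = 20 pairs
t_c, t_crit, p = correlation_t_test(0.45, 20)
print(f"t_c = {t_c:.3f}, critical value = {t_crit:.3f}, p = {p:.4f}")
# Reject H0 when |t_c| > t_crit (equivalently, when p < alpha).
```

Here \(t_c \approx 2.138\) exceeds the critical value of about 2.101 at 18 degrees of freedom, so the correlation would be judged significant at the 0.05 level.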

A quick shorthand way to test correlations exploits the relationship between the sample size and the correlation. If:

          \[|r| \geq \frac{2}{\sqrt{n}}\nonumber\]

then the correlation between the two variables is statistically significant at approximately the 0.05 level, indicating that a linear relationship exists. The rule works because substituting \(r = 2/\sqrt{n}\) into the test statistic gives \(t_c \approx 2\), which is close to the two-tailed 0.05 critical value for moderate to large degrees of freedom. As the formula indicates, there is an inverse relationship between the sample size and the required correlation for significance of a linear relationship. With only 10 observations, the required correlation for significance is 0.6325; for 30 observations the required correlation decreases to 0.3651; and at 100 observations the required level is only 0.2000.
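A short sketch reproducing those thresholds, assuming nothing beyond the Python standard library:

```python
import math

def required_r(n):
    """Approximate |r| needed for significance at the 0.05 level: 2 / sqrt(n)."""
    return 2 / math.sqrt(n)

for n in (10, 30, 100):
    print(f"n = {n:>3}: |r| must be at least {required_r(n):.4f}")
# n =  10: |r| must be at least 0.6325
# n =  30: |r| must be at least 0.3651
# n = 100: |r| must be at least 0.2000
```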

Correlations may be helpful in visualizing the data, but they are not appropriately used to "explain" a relationship between two variables. Perhaps no single statistic is more misused than the correlation coefficient. Citing correlations between health conditions and everything from place of residence to eye color has the effect of implying a cause-and-effect relationship. This simply cannot be accomplished with a correlation coefficient. The correlation coefficient is, of course, innocent of this misinterpretation. It is the duty of the analyst to use a statistic designed to test for cause-and-effect relationships and to report only those results if they intend to make such a claim. The problem is that passing this more rigorous test is difficult, so lazy and/or unscrupulous "researchers" fall back on correlations when they cannot make their case legitimately.


    This page titled 13.3: Testing the Significance of the Correlation Coefficient is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by OpenStax via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.