
7.4: The Central Limit Theorem for Proportions


    The Central Limit Theorem tells us that the point estimate for the population mean, \(\overline x\), comes from a normal distribution of \(\overline x\)'s. This theoretical distribution is called the sampling distribution of \(\overline x\)'s. We now investigate the sampling distribution for another important parameter we wish to estimate: \(p\), from the binomial probability density function.

    If the random variable is discrete, such as for categorical data, then the parameter we wish to estimate is the population proportion. This is, of course, the probability of drawing a success in any one random draw. Unlike the case just discussed for a continuous random variable, where we did not know the population distribution of \(X\)'s, here we actually know the underlying probability density function for these data; it is the binomial. The random variable is \(X =\) the number of successes, and the parameter we wish to know is \(p\), the probability of drawing a success, which is of course the proportion of successes in the population. The question at issue is: from what distribution was the sample proportion, \(p^{\prime}=\frac{x}{n}\), drawn? The sample size is \(n\) and \(X\) is the number of successes found in that sample. This parallels the question just answered by the Central Limit Theorem: from what distribution was the sample mean, \(\overline x\), drawn? We saw that once we knew that the distribution was the normal distribution, we were able to create confidence intervals for the population parameter, \(\mu\). We will also use this same information to test hypotheses about the population mean later. We now wish to be able to develop confidence intervals for the population parameter "\(p\)" from the binomial probability density function.

    In order to find the distribution from which sample proportions come, we need to develop the sampling distribution of sample proportions just as we did for sample means. So again imagine that we randomly sample, say, 50 people and ask them if they support the new school bond issue. From this we find a sample proportion, \(p^{\prime}\), and graph it on the axis of \(p^{\prime}\)'s. We do this again and again until we have the theoretical distribution of \(p^{\prime}\)'s. Some sample proportions will show high favorability toward the bond issue and others will show low favorability, because random sampling will reflect the variation of views within the population. What we have done can be seen in Figure \(\PageIndex{9}\). The top panel is the population distribution of probabilities for each possible value of the random variable \(X\). While we do not know what the specific distribution looks like because we do not know \(p\), the population parameter, we do know that it must look something like this. In reality, we do not know either the mean or the standard deviation of this population distribution, the same difficulty we faced when analyzing the \(X\)'s previously.

    Figure \(\PageIndex{9}\): The population distribution of \(X\) (top panel) and the sampling distribution of \(p^{\prime}\)'s (bottom panel).

    Figure \(\PageIndex{9}\) places the mean on the distribution of population probabilities as \(\mu=np\), but of course we do not actually know the population mean because we do not know the population probability of success, \(p\). Below the distribution of the population values is the sampling distribution of \(p^{\prime}\)'s. Again, the Central Limit Theorem tells us that this distribution is normally distributed, just like the case of the sampling distribution for \(\overline x\)'s. This sampling distribution also has a mean, the mean of the \(p^{\prime}\)'s, and a standard deviation, \(\sigma_{p^{\prime}}\).
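    To make the repeated-polling idea concrete, here is a minimal simulation sketch in Python (not part of the original text). It assumes a hypothetical true support rate of \(p = 0.55\) and polls of 50 respondents, repeats the poll many times, and tallies the resulting \(p^{\prime}\) values into a rough histogram of the sampling distribution.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

p_true = 0.55    # hypothetical population proportion of bond-issue supporters
n = 50           # people asked in each poll, as in the example above
reps = 10_000    # how many times we repeat the poll

# Each repetition: draw n yes/no answers and record the sample proportion p'
p_prime = rng.binomial(n, p_true, size=reps) / n

# A rough text histogram of the resulting distribution of p' values
counts, edges = np.histogram(p_prime, bins=10, range=(0.30, 0.80))
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:.2f}-{hi:.2f} | {'*' * int(c // 100)}")
```

    The printed histogram piles up around 0.55, the assumed population proportion, which is exactly the behavior described above.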

    Importantly, for the distribution of sample means, the Central Limit Theorem told us both the expected value of the mean of the sampling distribution and the standard deviation of the sampling distribution. The Central Limit Theorem provides the same information for the sampling distribution for proportions. The answers are:

    1. The expected value of the mean of the sampling distribution of sample proportions, \(\mu_{p^{\prime}}\), is the population proportion, \(p\).
    2. The standard deviation of the sampling distribution of sample proportions, \(\sigma_{p^{\prime}}\), is the population standard deviation (here, that of a single success/failure draw, \(\sqrt{p(1-p)}\)) divided by the square root of the sample size, \(n\).

    Both of these conclusions are the same as we found for the sampling distribution for sample means. However, in this case, because the mean and standard deviation of the binomial distribution both rely upon \(p\), the formula for the standard deviation of the sampling distribution requires algebraic manipulation to be useful. We will take that up in the next chapter. The proof of these important conclusions from the Central Limit Theorem is provided below.

    \[E\left(p^{\prime}\right)=E\left(\frac{x}{n}\right)=\left(\frac{1}{n}\right) E(x)=\left(\frac{1}{n}\right) n p=p\nonumber\]

    (The expected value of \(X\), \(E(x)\), is simply the mean of the binomial distribution, which we know to be \(np\).)

    \[\sigma_{p^{\prime}}^{2}=\operatorname{Var}\left(p^{\prime}\right)=\operatorname{Var}\left(\frac{x}{n}\right)=\frac{1}{n^{2}}(\operatorname{Var}(x))=\frac{1}{n^{2}}(n p(1-p))=\frac{p(1-p)}{n}\nonumber\]

    The standard deviation of the sampling distribution for proportions is thus:

    \[\sigma_{p^{\prime}}=\sqrt{\frac{p(1-p)}{n}}\nonumber\]
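    The derivation can be spot-checked numerically. The sketch below (not from the original text) uses hypothetical values \(p = 0.3\) and \(n = 40\) and compares the empirical mean and standard deviation of simulated \(p^{\prime}\) values with \(p\) and \(\sqrt{p(1-p)/n}\).

```python
import numpy as np

rng = np.random.default_rng(seed=2)

p, n, reps = 0.3, 40, 200_000   # illustrative values, not taken from the text

# Simulate 'reps' samples of size n and compute each sample proportion p'
p_prime = rng.binomial(n, p, size=reps) / n

print(f"empirical mean of p': {p_prime.mean():.4f}   (theory: p = {p})")
print(f"empirical sd of p':   {p_prime.std(ddof=1):.4f}   "
      f"(theory: sqrt(p(1-p)/n) = {np.sqrt(p * (1 - p) / n):.4f})")
```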

    Parameter          | Population distribution | Sample                     | Sampling distribution of \(p^{\prime}\)'s
    Mean               | \(\mu = np\)            | \(p^{\prime}=\frac{x}{n}\) | \(p^{\prime}\) and \(E(p^{\prime})=p\)
    Standard deviation | \(\sigma=\sqrt{n p q}\) |                            | \(\sigma_{p^{\prime}}=\sqrt{\frac{p(1-p)}{n}}\)

    Table \(\PageIndex{2}\)

    Table \(\PageIndex{2}\) summarizes these results and shows the relationship between the population, sample, and sampling distribution. Notice the parallel between this table and Table \(\PageIndex{1}\) for the case where the random variable is continuous and we were developing the sampling distribution for means.

    Reviewing the formula for the standard deviation of the sampling distribution for proportions, we see that as \(n\) increases the standard deviation decreases. This is the same observation we made for the standard deviation of the sampling distribution for means. Again, as the sample size increases, the point estimate for either \(\mu\) or \(p\) comes from a distribution with a narrower and narrower spread. We concluded that with a given level of probability, the range from which the point estimate comes is smaller as the sample size, \(n\), increases. Figure \(\PageIndex{8}\) shows this result for the case of sample means. Simply substitute \(p^{\prime}\) for \(\overline x\) and we can see the impact of the sample size on the estimate of the sample proportion.
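    As a quick illustration (not in the original text), the sketch below evaluates \(\sigma_{p^{\prime}}=\sqrt{p(1-p)/n}\) for an assumed \(p = 0.5\) at several sample sizes; each quadrupling of \(n\) halves the standard deviation of the sampling distribution.

```python
import numpy as np

p = 0.5  # assumed population proportion; p = 0.5 gives the largest possible spread

for n in (25, 100, 400, 1600):
    sigma_p_prime = np.sqrt(p * (1 - p) / n)  # sd of the sampling distribution of p'
    print(f"n = {n:5d}  ->  sigma_p' = {sigma_p_prime:.4f}")
```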


    This page titled 7.4: The Central Limit Theorem for Proportions is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by OpenStax.
