
Mostly Harmless Statistics Formula Packet


    Chapter 3 Formulas

    Sample Mean: \(\bar{x} = \frac{\sum x}{n}\) Population Mean: \(\mu = \frac{\sum x}{N}\)
    Weighted Mean: \(\bar{x} = \frac{\sum (xw)}{\sum w}\) Range = \(\text{Max} - \text{Min}\)
    Sample Standard Deviation: \(s = \sqrt{\frac{\sum \left(x - \bar{x}\right)^{2}}{n-1}}\) Population Standard Deviation = \(\sigma\)
    Sample Variance: \(s^{2} = \frac{\sum \left(x - \bar{x}\right)^{2}}{n-1}\) Population Variance = \(\sigma^{2}\)
    Coefficient of Variation: \(\text{CVar} = \left(\frac{s}{\bar{x}} \cdot 100\right) \%\) \(Z\)-Score: \(z = \frac{x - \bar{x}}{s}\)
    Percentile Index: \(i = \frac{(n+1) \cdot p}{100}\) Interquartile Range: \(\text{IQR} = Q_{3} - Q_{1}\)
    Empirical Rule: \(z = 1, 2, 3 \Rightarrow 68\%, 95\%, 99.7\%\) Outlier Lower Limit: \(Q_{1} - (1.5 \cdot \text{IQR})\)
    Chebyshev’s Inequality: \(\left(1 - \frac{1}{z^{2}}\right) \cdot 100\%\) Outlier Upper Limit: \(Q_{3} + (1.5 \cdot \text{IQR})\)

    TI-84: Enter the data in a list and then press [STAT]. Use cursor keys to highlight CALC. Press 1 or [ENTER] to select 1:1-Var Stats. Press [2nd], then press the number key corresponding to your data list. Press [Enter] to calculate the statistics. Note: the calculator always defaults to L1 if you do not specify a data list.

    Screenshots of a TI-84 calculator to enter data in a list, select 1-Var Stats, select the list, and calculate statistics for the list.

    \(s_{x}\) is the sample standard deviation. You can arrow down and find more statistics. Use the min and max to calculate the range by hand. To find the variance simply square the standard deviation.
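The same statistics can be cross-checked without a calculator. Here is a minimal Python sketch (the data list is invented for illustration) that mirrors the 1-Var Stats output and the Chapter 3 formulas, including the percentile index \(i = (n+1)p/100\):

```python
# Sketch, not from the packet: Chapter 3 descriptive statistics in Python's
# standard library. The data list is made up for illustration.
from statistics import mean, stdev, variance

data = [2, 4, 4, 4, 5, 5, 7, 9]
n = len(data)

x_bar = mean(data)                 # sample mean: sum(x)/n
s = stdev(data)                    # sample standard deviation (n-1 denominator)
s2 = variance(data)                # sample variance = s**2
data_range = max(data) - min(data)

# z-score of a single value
z = (9 - x_bar) / s

def percentile(sorted_xs, p):
    """Percentile via the index i = (n+1)p/100, interpolating between ranks."""
    i = (len(sorted_xs) + 1) * p / 100     # 1-based index, may be fractional
    lo = int(i)
    frac = i - lo
    lo = min(max(lo, 1), len(sorted_xs))
    hi = min(lo + 1, len(sorted_xs))
    return sorted_xs[lo - 1] + frac * (sorted_xs[hi - 1] - sorted_xs[lo - 1])

xs = sorted(data)
q1, q3 = percentile(xs, 25), percentile(xs, 75)
iqr = q3 - q1
lower_fence = q1 - 1.5 * iqr       # outlier lower limit
upper_fence = q3 + 1.5 * iqr       # outlier upper limit
```

Note that software differs slightly in how it interpolates percentiles, so quartiles from this sketch may not match the TI-84 exactly for every data set.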

    Chapter 4 Formulas

    Complement Rules: \(\begin{array}{l} \text{P}(A) + \text{P}(A^{C}) = 1 \\ \text{P}(A) = 1 - \text{P}(A^{C}) \\ \text{P}(A^{C}) = 1 - \text{P}(A) \end{array}\) Mutually Exclusive Events: \(\text{P}(A \cap B) = 0\)
    Union Rule: \(\text{P} (A \cup B) = \text{P}(A) + \text{P}(B) - \text{P}(A \cap B)\) Independent Events: \(\text{P} (A \cap B) = \text{P}(A) \cdot \text{P}(B)\)
    Intersection Rule: \(\text{P} (A \cap B) = \text{P}(A) \cdot \text{P} (B|A)\) Conditional Probability Rule: \(\text{P} (A|B) = \frac{\text{P} (A \cap B)}{\text{P} (B)}\)
    Fundamental Counting Rule: \(m_{1} \cdot m_{2} \cdots m_{n}\) Factorial Rule: \(n! = n \cdot (n-1) \cdot (n-2) \cdots 3 \cdot 2 \cdot 1\)
    Combination Rule: \({}_{n} C_{r} = \frac{n!}{(r! (n-r)!)}\) Permutation Rule: \({}_{n} P_{r} = \frac{n!}{(n-r)!}\)
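The counting rules map directly onto Python's math module. A quick sketch, with example counts that are mine rather than the packet's:

```python
# Sketch: the Chapter 4 counting rules via the standard library.
import math

# Fundamental Counting Rule: e.g. 3 shirts * 4 pants * 2 shoes (made-up example)
outfits = 3 * 4 * 2

n_fact = math.factorial(5)    # 5! = 5*4*3*2*1
n_P_r = math.perm(10, 3)      # permutations: 10!/(10-3)!
n_C_r = math.comb(10, 3)      # combinations: 10!/(3!*(10-3)!)

# A combination ignores order, so nCr = nPr / r!
assert n_C_r == n_P_r // math.factorial(3)
```

`math.perm` and `math.comb` require Python 3.8 or later.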

    All 52 playing card values from a standard pack.

    Table of sums of the rolls of two 6-sided dice. Logic tree for determining whether to apply the Fundamental Counting Rule, factorials, permutations, or combinations to a situation.

    Chapter 5 Formulas

    Discrete Distribution Table:
    \(0 \leq \text{P} (x_{i}) \leq 1 \quad\quad\quad \sum \text{P} (x_{i}) = 1\)
    Discrete Distribution Mean: \(\mu = \sum \left(x_{i} \cdot \text{P} \left(x_{i}\right) \right)\)
    Discrete Distribution Variance:
    \(\sigma^{2} = \sum \left(x_{i}^{2} \cdot \text{P} \left(x_{i}\right)\right) - \mu^{2}\)
    Discrete Distribution Standard Deviation: \(\sigma = \sqrt{\sigma^{2}}\)
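A short Python sketch of the discrete distribution formulas above; the table values are illustrative, not from the packet:

```python
# Sketch: mean, variance, and standard deviation of a discrete
# probability distribution table (values invented for illustration).
import math

xs = [0, 1, 2, 3]
ps = [0.1, 0.2, 0.3, 0.4]

# the table must satisfy 0 <= P(x) <= 1 and sum(P(x)) = 1
assert all(0 <= p <= 1 for p in ps) and math.isclose(sum(ps), 1.0)

mu = sum(x * p for x, p in zip(xs, ps))                  # mean: sum of x*P(x)
sigma2 = sum(x**2 * p for x, p in zip(xs, ps)) - mu**2   # variance
sigma = math.sqrt(sigma2)                                # standard deviation
```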
    Geometric Distribution:
    \(\text{P} (X=x) = p \cdot q^{x-1}, x = 1,2,3, \ldots\)

    Geometric Distribution Mean: \(\mu = \frac{1}{p}\)

    Variance: \(\sigma^{2} = \frac{1-p}{p^{2}}\)

    Standard Deviation: \(\sigma = \sqrt{\frac{1 - p}{p^{2}}}\)

    Binomial Distribution:
    \(\text{P} (X=x) = {}_{n} C_{x} p^{x} \cdot q^{(n-x)}, x=0, 1, 2, \ldots, n\)

    Binomial Distribution Mean: \(\mu = n \cdot p\)

    Variance: \(\sigma^{2} = n \cdot p \cdot q\)

    Standard Deviation: \(\sigma = \sqrt{n \cdot p \cdot q}\)
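The binomial formulas can be sketched with only the standard library, mirroring the TI-84's binompdf/binomcdf behavior (n and p here are made-up):

```python
# Sketch: binomial pmf, cdf, mean, and standard deviation.
import math

def binom_pdf(n, p, x):
    """P(X = x) = nCx * p^x * q^(n-x), like binompdf(n,p,x)."""
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

def binom_cdf(n, p, x):
    """P(X <= x), like binomcdf(n,p,x)."""
    return sum(binom_pdf(n, p, k) for k in range(x + 1))

n, p = 10, 0.5                        # illustrative values
mu = n * p                            # mean
sigma = math.sqrt(n * p * (1 - p))    # standard deviation

p_exactly_3 = binom_pdf(n, p, 3)      # P(X = 3)
p_at_most_3 = binom_cdf(n, p, 3)      # P(X <= 3)
p_at_least_4 = 1 - binom_cdf(n, p, 3) # P(X >= 4) = 1 - P(X <= 3)
```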

    Hypergeometric Distribution:
    \(\text{P} (X=x) = \frac{\,_{a} C_{x} \cdot \,_{b} C_{n-x}}{\,_{N} C_{n}}\)
    \(p = \text{P(success)} \quad\quad\quad q = \text{P(failure)} = 1 - p\)
    \(n = \text{sample size} \quad\quad\quad N = \text{population size}\)
    Unit Change for Poisson Distribution:
    \(\text{New } \mu = \text{old } \mu \left(\frac{\text{new units}}{\text{old units}}\right)\)
    Poisson Distribution:
    \(\text{P} (X=x) = \frac{e^{- \mu} \mu^{x}}{x!}\)
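The Poisson pmf and the unit-change rule are easy to sketch in Python; the rates below are invented for illustration:

```python
# Sketch: Poisson pmf and the unit-change rule for the mean.
import math

def poisson_pdf(mu, x):
    """P(X = x) = e^(-mu) * mu^x / x!, like poissonpdf(mu,x)."""
    return math.exp(-mu) * mu**x / math.factorial(x)

# Unit change: e.g. 3 calls per hour, asked about a 20-minute window
old_mu = 3.0
new_mu = old_mu * (20 / 60)       # new mu = old mu * (new units / old units)

p_zero = poisson_pdf(new_mu, 0)   # P(X = 0) for the 20-minute window
```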
    Key phrases, with the Excel and TI-84 commands for each probability:

    \(\text{P} (X=x)\): Is the same as; Is equal to; Is exactly the same as; Has not changed from
    Excel: \(=\text{binom.dist}(x,n,p,0)\), \(=\text{HYPGEOM.DIST}(x,n,a,N,0)\), \(=\text{POISSON.DIST}(x,\mu,0)\)
    TI Calculator: \(\text{geometpdf}(p,x)\), \(\text{binompdf}(n,p,x)\), \(\text{poissonpdf}(\mu,x)\)

    \(\text{P} (X \leq x)\): Is less than or equal to; Is at most; Is not greater than; Within
    Excel: \(=\text{binom.dist}(x,n,p,1)\), \(=\text{HYPGEOM.DIST}(x,n,a,N,1)\), \(=\text{POISSON.DIST}(x,\mu,1)\)
    TI Calculator: \(\text{binomcdf}(n,p,x)\), \(\text{poissoncdf}(\mu,x)\)

    \(\text{P} (X \geq x)\): Is greater than or equal to; Is at least; Is not less than; Is more than or equal to
    Excel: \(=1-\text{binom.dist}(x-1,n,p,1)\), \(=1-\text{HYPGEOM.DIST}(x-1,n,a,N,1)\), \(=1-\text{POISSON.DIST}(x-1,\mu,1)\)
    TI Calculator: \(1-\text{binomcdf}(n,p,x-1)\), \(1-\text{poissoncdf}(\mu,x-1)\)
    How do you tell them apart?

    • Geometric – A percent or proportion is given. There is no set sample size; trials continue until a success is achieved.
    • Binomial – A percent or proportion is given. A sample size is given.
    • Hypergeometric – Usually frequencies of successes are given instead of percentages. A sample size is given.
    • Poisson – An average or mean is given. There is no set sample size.

    \(\text{P} (X>x)\): More than; Greater than; Above; Higher than; Longer than; Bigger than; Increased
    Excel: \(=1-\text{binom.dist}(x,n,p,1)\), \(=1-\text{HYPGEOM.DIST}(x,n,a,N,1)\), \(=1-\text{POISSON.DIST}(x,\mu,1)\)
    TI Calculator: \(1-\text{binomcdf}(n,p,x)\), \(1-\text{poissoncdf}(\mu,x)\)

    \(\text{P} (X<x)\): Less than; Below; Lower than; Shorter than; Smaller than; Decreased; Reduced
    Excel: \(=\text{binom.dist}(x-1,n,p,1)\), \(=\text{HYPGEOM.DIST}(x-1,n,a,N,1)\), \(=\text{POISSON.DIST}(x-1,\mu,1)\)
    TI Calculator: \(\text{binomcdf}(n,p,x-1)\), \(\text{poissoncdf}(\mu,x-1)\)

    Chapter 6 Formulas

    Uniform Distribution
    \(f(x) = \frac{1}{b-a}, \text{ for } a \leq x \leq b\)
    \(\text{P}(X \geq x) = \text{P} (X>x) = \left(\frac{1}{b-a}\right) \cdot (b-x)\)
    \(\text{P}(X \leq x) = \text{P} (X<x) = \left(\frac{1}{b-a}\right) \cdot (x-a)\)
    \(\text{P}\left(x_{1} \leq X \leq x_{2}\right) = \text{P} \left(x_{1} < X < x_{2}\right) = \left(\frac{1}{b-a}\right) \cdot \left(x_{2}-x_{1}\right)\)
    Exponential Distribution
    \(f(x) = \frac{1}{\mu} e^{(-x / \mu)}, \text{ for } x \geq 0\)
    \(\text{P}(X \geq x) = \text{P} (X > x) = e^{-x / \mu}\)
    \(\text{P}(X \leq x) = \text{P} (X < x) = 1 - e^{-x / \mu}\)
    \(\text{P}\left(x_{1} \leq X \leq x_{2}\right) = \text{P} \left(x_{1} < X < x_{2}\right) = e^{(-x_{1} / \mu)} - e^{(-x_{2} / \mu)}\)
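Both continuous distributions above reduce to one-line calculations; a Python sketch with illustrative parameters:

```python
# Sketch: uniform and exponential probability formulas (parameters invented).
import math

# Uniform on [a, b]: density is constant, 1/(b-a)
a, b = 0.0, 10.0
p_between = (1 / (b - a)) * (7 - 2)    # P(2 < X < 7)
p_more = (1 / (b - a)) * (b - 7)       # P(X > 7)

# Exponential with mean mu
mu = 4.0
p_greater = math.exp(-6 / mu)                      # P(X > 6)
p_less = 1 - math.exp(-6 / mu)                     # P(X < 6)
p_range = math.exp(-2 / mu) - math.exp(-6 / mu)    # P(2 < X < 6)
```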
    Standard Normal Distribution
    \(\mu = 0, \sigma = 1\)
    \(z\)-score: \(z = \frac{x - \mu}{\sigma}\)
    \(x = z \sigma + \mu\)
    Central Limit Theorem
    Z-score: \(z = \frac{\bar{x} - \mu}{\left( \frac{\sigma}{\sqrt{n}} \right)}\)
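Since Python 3.8, `statistics.NormalDist` can stand in for the normal tables. A sketch of a Central Limit Theorem calculation, with made-up values for \(\mu\), \(\sigma\), and \(n\):

```python
# Sketch: z-score for a sample mean under the Central Limit Theorem.
from statistics import NormalDist
import math

std_normal = NormalDist(0, 1)       # standard normal: mu = 0, sigma = 1

mu, sigma, n = 100.0, 15.0, 36      # illustrative population values
x_bar = 103.0                       # illustrative sample mean

# CLT: the sample mean has standard error sigma/sqrt(n)
z = (x_bar - mu) / (sigma / math.sqrt(n))
p_less = std_normal.cdf(z)          # P(sample mean < 103)
p_more = 1 - p_less                 # P(sample mean > 103)
```

`NormalDist.cdf` plays the role of `normalcdf` on the TI-84, and `NormalDist.inv_cdf` the role of `invNorm`.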

    In the table below, note that when \(\mu = 0\) and \(\sigma = 1\), use the \(\text{NORM.S.DIST}\) or \(\text{NORM.S.INV}\) function in Excel for the standard normal distribution.

    \(\text{P} (X \leq x)\) or \(\text{P} (X < x)\): Is less than or equal to; Is at most; Is not greater than; Within; Less than; Below; Lower than; Shorter than; Smaller than; Decreased; Reduced
    \(\text{P} \left(x_{1} < X < x_{2}\right)\) or \(\text{P} \left(x_{1} \leq X \leq x_{2}\right)\): Between
    \(\text{P} (X \geq x)\) or \(\text{P} (X > x)\): Is greater than or equal to; Is at least; Is not less than; More than; Greater than; Above; Higher than; Longer than; Bigger than; Increased; Larger
    Probability distributions with the shaded region under the curve to the left of, between, or to the right of the desired value(s).
    Excel
    Finding a Probability:
    \(=\text{NORM.DIST}(x, \mu, \sigma, \text{true})\)
    Finding a Percentile:
    \(=\text{NORM.INV}(\text{area}, \mu, \sigma)\)
    Excel
    Finding a Probability:

    \(=\text{NORM.DIST}(x_{2},\mu,\sigma,\text{true}) - \text{NORM.DIST}(x_{1},\mu,\sigma,\text{true})\)
    Finding a Percentile:
    \(x_{1} = \text{NORM.INV}((1-\text{area})/2,\mu, \sigma)\)
    \(x_{2} = \text{NORM.INV}(1-((1-\text{area})/2),\mu,\sigma)\)
    Excel
    Finding a Probability:
    \(= 1-\text{NORM.DIST}(x, \mu, \sigma, \text{true})\)
    Finding a Percentile:
    \(= \text{NORM.INV}(1-\text{area}, \mu, \sigma)\)
    TI Calculator
    Finding a Probability:

    \(=\text{normalcdf}(-1\text{E}99,x,\mu,\sigma)\)
    Finding a Percentile:
    \(=\text{invNorm}(\text{area},\mu,\sigma)\)
    TI Calculator
    Finding a Probability:
    \(=\text{normalcdf}(x_{1}, x_{2}, \mu, \sigma)\)
    Finding a Percentile:
    \(x_{1} = \text{invNorm}((1-\text{area})/2, \mu, \sigma)\)
    \(x_{2} = \text{invNorm}(1-((1-\text{area})/2), \mu, \sigma)\)
    TI Calculator
    Finding a Probability:
    \(=\text{normalcdf}(x, 1\text{E}99, \mu, \sigma)\)
    Finding a Percentile:
    \(=\text{invNorm}(1-\text{area}, \mu, \sigma)\)

    Chapter 7 Formulas

    Confidence Interval for One Proportion
    \(\hat{p} \pm z_{\alpha/2} \sqrt{\left(\frac{\hat{p} \hat{q}}{n}\right)}\)
    \(\hat{p} = \frac{x}{n}\)
    \(\hat{q} = 1 - \hat{p}\)
    TI-84: \(1\text{-PropZInt}\)
    Sample Size for Proportion
    \(n = p^{*} \cdot q^{*} \left(\frac{z_{\alpha/2}}{E}\right)^{2}\)
    Always round up to whole number.
    If \(p\) is not given use \(p^{*} = 0.5\).
    \(E\) = Margin of Error
    Confidence Interval for One Mean
    Use z-interval when \(\sigma\) is given.
    Use t-interval when \(s\) is given.
    If \(n < 30\), population needs to be normal.
    Z-Confidence Interval
    \(\bar{x} \pm z_{\alpha/2} \left(\frac{\sigma}{\sqrt{n}}\right)\)
    TI-84: \(\text{ZInterval}\)
    Z-Critical Values
    Excel: \(z_{\alpha/2} = \text{NORM.INV}(1-\text{area}/2, 0, 1)\)
    TI-84: \(z_{\alpha/2} = \text{invNorm}(1-\text{area}/2, 0, 1)\)
    t-Critical Values
    Excel: \(t_{\alpha/2} = \text{T.INV}(1-\text{area}/2, df)\)
    TI-84: \(t_{\alpha/2} = \text{invT}(1-\text{area}/2, df)\)
    t-Confidence Interval
    \(\bar{x} \pm t_{\alpha/2} \left(\frac{s}{\sqrt{n}}\right)\)
    \(df = n-1\)
    TI-84: \(\text{TInterval}\)
    Sample Size for Mean
    \(n = \left(\frac{z_{\alpha/2} \cdot \sigma}{E}\right)^{2}\)
    Always round up to whole number.
    \(E\) = Margin of Error
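The z-interval and sample-size formulas can be sketched with `statistics.NormalDist.inv_cdf` supplying the critical value (the data values are invented):

```python
# Sketch: z-confidence interval for one mean and sample size for a mean.
from statistics import NormalDist
import math

conf = 0.95
alpha = 1 - conf
z_crit = NormalDist().inv_cdf(1 - alpha / 2)     # z_(alpha/2), about 1.96

x_bar, sigma, n = 50.0, 8.0, 64                  # illustrative values
moe = z_crit * sigma / math.sqrt(n)              # margin of error
ci = (x_bar - moe, x_bar + moe)                  # x_bar +/- z*sigma/sqrt(n)

# Sample size for a mean with desired margin of error E
E = 1.5
n_needed = math.ceil((z_crit * sigma / E) ** 2)  # always round UP
```

The t-interval works the same way once \(t_{\alpha/2}\) is looked up (Excel's T.INV or the TI-84's invT), since the standard library has no t inverse.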

    Chapter 8 Formulas

    Hypothesis Test for One Mean
    Use z-test when \(\sigma\) is given.
    Use t-test when \(s\) is given.
    If \(n < 30\), population needs to be normal.
    Type I Error -
    Reject \(H_{0}\) when \(H_{0}\) is true.
    Type II Error -
    Fail to reject \(H_{0}\) when \(H_{0}\) is false.
    Z-Test:
    \(H_{0}: \mu = \mu_{0}\)
    \(H_{1}: \mu \neq \mu_{0}\)
    \(z = \frac{\bar{x} - \mu_{0}}{\left(\frac{\sigma}{\sqrt{n}}\right)}\) TI-84: \(\text{Z-Test}\)
    t-Test:
    \(H_{0}: \mu = \mu_{0}\)
    \(H_{1}: \mu \neq \mu_{0}\)
    \(t = \frac{\bar{x} - \mu_{0}}{\left(\frac{s}{\sqrt{n}}\right)}\) TI-84: \(\text{T-Test}\)
    z-Critical Values
    Excel:
    Two-tail: \(z_{\alpha/2} = \text{NORM.INV}(1-\alpha/2, 0, 1)\)
    Right-tail: \(z_{1 - \alpha} = \text{NORM.INV}(1-\alpha, 0, 1)\)
    Left-tail: \(z_{\alpha} = \text{NORM.INV}(\alpha, 0, 1)\)

    TI-84:
    Two-tail: \(z_{\alpha/2} = \text{invNorm}(1-\alpha/2, 0, 1)\)
    Right-tail: \(z_{1-\alpha} = \text{invNorm}(1-\alpha, 0, 1)\)
    Left-tail: \(z_{\alpha} = \text{invNorm}(\alpha, 0, 1)\)
    t-Critical Values
    Excel:
    Two-tail: \(t_{\alpha/2} = \text{T.INV}(1-\alpha/2, df)\)
    Right-tail: \(t_{1-\alpha} = \text{T.INV}(1-\alpha, df)\)
    Left-tail: \(t_{\alpha} = \text{T.INV}(\alpha, df)\)

    TI-84:
    Two-tail: \(t_{\alpha/2} = \text{invT}(1-\alpha/2, df)\)
    Right-tail: \(t_{1-\alpha} = \text{invT}(1-\alpha, df)\)
    Left-tail: \(t_{\alpha} = \text{invT}(\alpha, df)\)
    Hypothesis Test for One Proportion
    \(H_{0}: p = p_{0}\)
    \(H_{1}: p \neq p_{0}\)
    \(z = \frac{\hat{p} - p_{0}}{\sqrt{\left(\frac{p_{0} q_{0}}{n}\right)}}\)
    TI-84: \(1\text{-PropZTest}\)
    Rejection Rules:
    P-value method: reject \(H_{0}\) when the p-value \(\leq \alpha\).
    Critical value method: reject \(H_{0}\) when the test statistic is in the critical region (shaded tails).
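The p-value method for the one-proportion z-test can be sketched in a few lines (the counts and \(p_{0}\) are invented):

```python
# Sketch: one-proportion z-test with a two-tailed p-value.
from statistics import NormalDist
import math

x, n, p0 = 57, 100, 0.5            # illustrative data and H0 value
p_hat = x / n

# test statistic uses p0 and q0 = 1 - p0 in the standard error
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

# two-tailed p-value: area in both tails beyond |z|
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

alpha = 0.05
reject = p_value <= alpha          # p-value method rejection rule
```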
    Two-tailed Test:
    \(H_{0}: \mu = \mu_{0}\) or \(H_{0}: p = p_{0}\)
    \(H_{1}: \mu \neq \mu_{0}\) or \(H_{1}: p \neq p_{0}\)
    Right-tailed Test:
    \(H_{0}: \mu = \mu_{0}\) or \(H_{0}: p = p_{0}\)
    \(H_{1}: \mu > \mu_{0}\) or \(H_{1}: p > p_{0}\)
    Left-tailed Test:
    \(H_{0}: \mu = \mu_{0}\) or \(H_{0}: p = p_{0}\)
    \(H_{1}: \mu < \mu_{0}\) or \(H_{1}: p < p_{0}\)
    A probability distribution with the area under both tails shaded. A probability distribution with the area under the right tail shaded. A probability distribution with the area under the left tail shaded.
    Claim is in the Null Hypothesis
    \(=\): Is equal to; Is exactly the same as; Has not changed from; Is the same as
    \(\leq\): Is less than or equal to; Is at most; Is not more than; Within
    \(\geq\): Is greater than or equal to; Is at least; Is not less than; Is more than or equal to
    Claim is in the Alternative Hypothesis
    \(\neq\): Is not; Is not equal to; Is different from; Has changed from; Is not the same as
    \(>\): More than; Greater than; Above; Higher than; Longer than; Bigger than; Increased
    \(<\): Less than; Below; Lower than; Shorter than; Smaller than; Decreased; Reduced

    Chapter 9 Formulas

    Hypothesis Test for Two Dependent Means
    \(H_{0}: \mu_{D} = 0\)
    \(H_{1}: \mu_{D} \neq 0\)
    \(t = \frac{\bar{D} - \mu_{D}}{\left(\frac{s_{D}}{\sqrt{n}}\right)}\)
    TI-84: \(\text{T-Test}\)
    Confidence Interval for Two Dependent Means
    \(\bar{D} \pm t_{\alpha/2} \left(\frac{s_{D}}{\sqrt{n}}\right)\)
    TI-84: \(\text{TInterval}\)
    Hypothesis Test for Two Independent Means
    Z-Test: \(H_{0}: \mu_{1} = \mu_{2}\)
    \(H_{1}: \mu_{1} \neq \mu_{2}\)
    \(z = \frac{\left(\bar{x}_{1} - \bar{x}_{2}\right) - \left(\mu_{1} - \mu_{2}\right)_{0}}{\sqrt{\left( \frac{\sigma_{1}^{2}}{n_{1}} + \frac{\sigma_{2}^{2}}{n_{2}} \right)}}\)
    TI-84: \(2\text{-SampZTest}\)
    Confidence Interval for Two Independent Means Z-Interval
    \(\left(\bar{x}_{1} - \bar{x}_{2}\right) \pm z_{\alpha/2} \sqrt{\left( \frac{\sigma_{1}^{2}}{n_{1}} + \frac{\sigma_{2}^{2}}{n_{2}}\right)}\)
    TI-84: \(2\text{-SampZInt}\)
    Hypothesis Test for Two Independent Means
    \(H_{0}: \mu_{1} = \mu_{2}\)
    \(H_{1}: \mu_{1} \neq \mu_{2}\)

    T-Test: Assume variances are unequal
    \(t = \dfrac{\left(\bar{x}_{1} - \bar{x}_{2}\right) - \left(\mu_{1} - \mu_{2}\right)_{0}}{\sqrt{\left( \frac{s_{1}^{2}}{n_{1}} + \frac{s_{2}^{2}}{n_{2}} \right)}}\)
    TI-84: \(2\text{-SampTTest}\)
    \(df = \dfrac{\left( \frac{s_{1}^{2}}{n_{1}} + \frac{s_{2}^{2}}{n_{2}} \right)^2}{\left( \left(\frac{s_{1}^{2}}{n_{1}}\right)^{2} \left(\frac{1}{n_{1}-1}\right) + \left(\frac{s_{2}^{2}}{n_{2}}\right)^{2} \left(\frac{1}{n_{2}-1}\right) \right)}\)

    T-Test: Assume variances are equal
    \(t = \dfrac{\left(\bar{x}_{1} - \bar{x}_{2}\right) - \left(\mu_{1} - \mu_{2}\right)}{\sqrt{ \left(\frac{\left(n_{1} - 1\right) s_{1}^{2} + \left(n_{2} - 1\right) s_{2}^{2}}{\left(n_{1} + n_{2} - 2\right)} \right) \left(\frac{1}{n_{1}} + \frac{1}{n_{2}}\right) }}\)
    \(df = n_{1} + n_{2} - 2\)
    Confidence Interval for Two Independent Means
    T-Interval: Assume variances are unequal

    \(\left(\bar{x}_{1} - \bar{x}_{2}\right) \pm t_{\alpha/2} \sqrt{\left(\frac{s_{1}^{2}}{n_{1}} + \frac{s_{2}^{2}}{n_{2}}\right)}\)
    TI-84: \(2\text{-SampTInt}\)
    \(df = \dfrac{\left( \frac{s_{1}^{2}}{n_{1}} + \frac{s_{2}^{2}}{n_{2}} \right)^2}{\left( \left(\frac{s_{1}^{2}}{n_{1}}\right)^{2} \left(\frac{1}{n_{1}-1}\right) + \left(\frac{s_{2}^{2}}{n_{2}}\right)^{2} \left(\frac{1}{n_{2}-1}\right) \right)}\)

    T-Interval: Assume variances are equal
    \(\left(\bar{x}_{1} - \bar{x}_{2}\right) \pm t_{\alpha/2} \sqrt{\left( \left(\frac{\left(n_{1} - 1\right) s_{1}^{2} + \left(n_{2} - 1\right) s_{2}^{2}}{\left(n_{1} + n_{2} - 2\right)}\right) \left(\frac{1}{n_{1}} + \frac{1}{n_{2}}\right) \right)}\)
    \(df = n_{1} + n_{2} - 2\)
    Hypothesis Test for Two Proportions
    \(H_{0}: p_{1} = p_{2}\)
    \(H_{1}: p_{1} \neq p_{2}\)
    \(z = \dfrac{\left(\hat{p}_{1} - \hat{p}_{2}\right) - \left(p_{1} - p_{2}\right)}{\sqrt{ \left( \hat{p} \cdot \hat{q} \left(\frac{1}{n_{1}} + \frac{1}{n_{2}}\right) \right) }}\)

    \(\hat{p} = \frac{\left(x_{1} + x_{2}\right)}{\left(n_{1} + n_{2}\right)} = \frac{\left(\hat{p}_{1} \cdot n_{1} + \hat{p}_{2} \cdot n_{2}\right)}{\left(n_{1} + n_{2}\right)}\)
    \(\hat{q} = 1 - \hat{p}\)
    \(\hat{p}_{1} = \frac{x_{1}}{n_{1}}, \quad\quad \hat{p}_{2} = \frac{x_{2}}{n_{2}}\)
    TI-84: \(2\text{-PropZTest}\)
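The pooled-proportion formulas above can be sketched directly (all counts below are invented):

```python
# Sketch: two-proportion z-test with the pooled proportion.
from statistics import NormalDist
import math

x1, n1 = 45, 100                   # illustrative sample 1
x2, n2 = 30, 100                   # illustrative sample 2
p1_hat, p2_hat = x1 / n1, x2 / n2

p_pool = (x1 + x2) / (n1 + n2)     # pooled p-hat
q_pool = 1 - p_pool
se = math.sqrt(p_pool * q_pool * (1/n1 + 1/n2))
z = (p1_hat - p2_hat) / se         # (p1 - p2) = 0 under H0

p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-tailed
```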
    Confidence Interval for Two Proportions
    \(\left(\hat{p}_{1} - \hat{p}_{2}\right) \pm z_{\alpha/2} \sqrt{\left( \frac{\hat{p}_{1} \hat{q}_{1}}{n_{1}} + \frac{\hat{p}_{2} \hat{q}_{2}}{n_{2}} \right)}\)
    \(\hat{p}_{1} = \frac{x_{1}}{n_{1}} \quad\quad\quad \hat{p}_{2} = \frac{x_{2}}{n_{2}}\)
    \(\hat{q}_{1} = 1 - \hat{p}_{1} \quad\quad \hat{q}_{2} = 1 - \hat{p}_{2}\)
    TI-84: \(2\text{-PropZInt}\)
    Hypothesis Test for Two Variances
    \(H_{0}: \sigma_{1}^{2} = \sigma_{2}^{2}\)
    \(H_{1}: \sigma_{1}^{2} \neq \sigma_{2}^{2}\)
    \(F = \frac{s_{1}^{2}}{s_{2}^{2}}\)
    \(df_{\text{N}} = n_{1} - 1, \quad\quad df_{\text{D}} = n_{2} - 1\)
    TI-84: \(2\text{-SampFTest}\)
    Hypothesis Test for Two Standard Deviations
    \(H_{0}: \sigma_{1} = \sigma_{2}\)
    \(H_{1}: \sigma_{1} \neq \sigma_{2}\)
    \(F = \frac{s_{1}^{2}}{s_{2}^{2}}\)
    \(df_{\text{N}} = n_{1} - 1, \quad\quad df_{\text{D}} = n_{2} - 1\)
    TI-84: \(2\text{-SampFTest}\)
    F-Critical Values
    Excel:
    Two-tail: \(F_{\alpha/2} = \text{F.INV}(1 - \alpha/2, df_{\text{N}}, df_{\text{D}})\)
    Right-tail: \(F_{1-\alpha} = \text{F.INV}(1 - \alpha, df_{\text{N}}, df_{\text{D}})\)
    Left-tail: \(F_{\alpha} = \text{F.INV}(\alpha, df_{\text{N}}, df_{\text{D}})\)
    For z and t-Critical Values refer back to Chapter 8

    TI-84: invF program can be downloaded at http://www.MostlyHarmlessStatistics.com.

    Flowchart for deciding which type of test to use, based on what information is given: proportions or means, the number of samples, etc.

    Chapter 10 Formulas

    Goodness of Fit Test
    \(H_{0}: p_{1} = p_{0}, p_{2} = p_{0}, \ldots, p_{k} = p_{0}\)
    \(H_{1}:\) At least one proportion is different.
    \(\chi^{2} = \sum \frac{(O-E)^{2}}{E}\)
    \(df = k-1, p_{0} = 1/k \text{ or given %}\)
    TI-84: \(\chi^{2} \text{ GOF-Test}\)
    Test for Independence
    \(H_{0}:\) Variable 1 and Variable 2 are independent.
    \(H_{1}:\) Variable 1 and Variable 2 are dependent.
    \(\chi^{2} = \sum \frac{(O-E)^{2}}{E}\)
    \(df = (R-1)(C-1)\)
    TI-84: \(\chi^{2} \text{-Test}\)
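The chi-square statistic \(\sum (O-E)^{2}/E\) is simple to compute by hand or in code; a sketch for a goodness-of-fit test with equal expected proportions \(p_{0} = 1/k\) (observed counts are invented):

```python
# Sketch: chi-square goodness-of-fit statistic with p0 = 1/k.
observed = [20, 30, 25, 25]        # illustrative observed counts
n = sum(observed)
k = len(observed)
expected = [n / k] * k             # E = n * p0 with p0 = 1/k

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = k - 1

# compare chi2 to the chi-square critical value with df = k - 1,
# or get the p-value from CHISQ.DIST.RT in Excel / chi^2 GOF-Test on the TI-84
```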

    Chapter 11 Formulas

    One-Way ANOVA:
    \(H_{0}: \mu_{1} = \mu_{2} = \mu_{3} = \ldots = \mu_{k} \quad\quad k = \text{number of groups}\)
    \(H_{1}:\) At least one mean is different.
    One-way ANOVA table showing the formulas for sum of squares, degrees of freedom, mean squares, and F-values for factor and error.

    \(\bar{x}_{i}\) = sample mean from the \(i^{th}\) group
    \(n_{i}\) = sample size of the \(i^{th}\) group
    \(s_{i}^{2}\) = sample variance from the \(i^{th}\) group
    \(N = n_{1} + n_{2} + \cdots + n_{k}\)
    \(\bar{x}_{GM} = \frac{\sum x_{i}}{N}\)
    Bonferroni test statistic: \(t = \dfrac{\bar{x}_{i} - \bar{x}_{j}}{\sqrt{\left( MSW \left(\frac{1}{n_{i}} + \frac{1}{n_{j}}\right) \right)}}\)
    \(H_{0}: \mu_{i} = \mu_{j}\)
    \(H_{1}: \mu_{i} \neq \mu_{j}\)
    Multiply p-value by \(m = {}_{k} C_{2}\), divide area for critical value by \(m = {}_{k} C_{2}\)
    Two-Way ANOVA:
    Row Effect (Factor A): \(H_{0}:\) The row variable has no effect on the average ______________.
    \(H_{1}:\) The row variable has an effect on the average ______________.

    Column Effect (Factor B): \(H_{0}:\) The column variable has no effect on the average ______________.
    \(H_{1}:\) The column variable has an effect on the average ______________.

    Interaction Effect (A \(\times\) B):
    \(H_{0}:\) There is no interaction effect between row variable and column variable on the average ______________.
    \(H_{1}:\) There is an interaction effect between row variable and column variable on the average ______________.

    Two-way ANOVA table showing equations for SS, df, MS, and F-values for the row factor, column factor, interaction, and error.

    Chapter 12 Formulas

    \(SS_{xx} = (n-1) s_{x}^{2}\)
    \(SS_{yy} = (n-1) s_{y}^{2}\)
    \(SS_{xy} = \sum (xy) - n \cdot \bar{x} \cdot \bar{y}\)
    Correlation Coefficient
    \(r = \frac{SS_{xy}}{\sqrt{\left( SS_{xx} \cdot SS_{yy} \right)}}\)
    Slope = \(b_{1} = \frac{SS_{xy}}{SS_{xx}}\)

    y-intercept = \(b_{0} = \bar{y} - b_{1} \bar{x}\)

    Regression Equation (Line of Best Fit): \(\hat{y} = b_{0} + b_{1} x\)
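The sums-of-squares formulas above build the whole regression; a Python sketch with a small invented data set:

```python
# Sketch: correlation and least-squares line from SSxx, SSyy, SSxy.
from statistics import mean, variance
import math

xs = [1, 2, 3, 4, 5]               # illustrative paired data
ys = [2, 4, 5, 4, 5]
n = len(xs)
x_bar, y_bar = mean(xs), mean(ys)

SSxx = (n - 1) * variance(xs)
SSyy = (n - 1) * variance(ys)
SSxy = sum(x * y for x, y in zip(xs, ys)) - n * x_bar * y_bar

r = SSxy / math.sqrt(SSxx * SSyy)  # correlation coefficient
b1 = SSxy / SSxx                   # slope
b0 = y_bar - b1 * x_bar            # y-intercept

def y_hat(x):
    """Line of best fit: y-hat = b0 + b1*x."""
    return b0 + b1 * x
```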
    Correlation t-test
    \(H_{0}: \rho = 0; \ H_{1}: \rho \neq 0 \quad\quad\quad t = r \sqrt{\left(\frac{n-2}{1-r^{2}}\right)} \quad df = n-2\)

    Slope t-test
    \(H_{0}: \beta_{1} = 0; \ H_{1}: \beta_{1} \neq 0 \quad\quad\quad t = \frac{b_{1}}{\sqrt{\left( \frac{MSE}{SS_{xx}} \right)}} \quad df = n - p - 1 = n-2\)
    Residual
    \(e_{i} = y_{i} - \hat{y}_{i}\) (Residual plots should have no patterns.)

    Standard Error of Estimate
    \(s_{est} = \sqrt{\frac{\sum \left(y_{i} - \hat{y}_{i}\right)^{2}}{n - 2}} = \sqrt{MSE}\)

    Prediction Interval
    \(\hat{y} \pm t_{\alpha/2} \cdot s_{est} \sqrt{\left(1 + \frac{1}{n} + \frac{\left(x - \bar{x}\right)^{2}}{SS_{xx}}\right)}\)
    Slope/Model F-test
    \(H_{0}: \beta_{1} = 0; \ H_{1}: \beta_{1} \neq 0\)
    Table showing equations to calculate SS, df, MS, and F-value for regression and error.
    Multiple Linear Regression Equation
    \(\hat{y} = b_{0} + b_{1} x_{1} + b_{2} x_{2} + \cdots + b_{p} x_{p}\)
    Coefficient of Determination
    \(R^{2} = (r)^{2} = \frac{SSR}{SST}\)
    Model F-Test for Multiple Regression
    \(H_{0}: \beta_{1} = \beta_{2} = \cdots = \beta_{p} = 0\)
    \(H_{1}:\) At least one slope is not zero.
    Adjusted Coefficient of Determination
    \(R_{adj}^{2} = 1 - \left(\frac{\left(1 - R^{2}\right) (n-1)}{(n - p - 1)}\right)\)

    Chapter 13 Formulas

    Ranking Data

    • Order the data from smallest to largest.
    • The smallest value gets a rank of 1.
    • The next smallest gets a rank of 2, etc.
    • If there are any values that tie, then each of the tied values gets the average of the corresponding ranks.
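The steps above can be sketched as a small ranking function; the example data is invented:

```python
# Sketch: assign ranks 1..n to sorted data, giving tied values the
# average of the ranks they occupy.
def rank(data):
    order = sorted(range(len(data)), key=lambda i: data[i])
    ranks = [0.0] * len(data)
    i = 0
    while i < len(order):
        j = i
        # walk over a run of tied values
        while j + 1 < len(order) and data[order[j + 1]] == data[order[i]]:
            j += 1
        avg = (i + 1 + j + 1) / 2          # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

# e.g. rank([10, 20, 20, 30]): the two 20s occupy ranks 2 and 3,
# so each gets the average rank 2.5
```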

    Sign Test

    \(H_{0}:\) Median \(= MD_{0}\)
    \(H_{1}:\) Median \(\neq MD_{0}\)
    p-value uses binomial distribution with \(p = 0.5\) and \(n\) is the sample size not including ties with the median or differences of 0.

    • For a two-tailed test, the test statistic \(x\) is the smaller of the number of plus signs or the number of minus signs, and the p-value is \(2 \cdot \text{P}(X \leq x)\).
    • For a right-tailed test, the test statistic \(x\) is the number of plus signs; for a left-tailed test, it is the number of minus signs. The p-value for a one-tailed test is \(\text{P}(X \geq x)\) or \(\text{P}(X \leq x)\), respectively.
    Wilcoxon Signed-Rank Test

    \(n\) is the sample size, not including differences of 0. When \(n < 30\), use the test statistic \(w_{s}\), the smaller of the absolute values of the sums of the positive and negative ranks. The critical value uses the table below.

    If critical value is not in table then use an online calculator: http://www.socscistatistics.com/tests/signedranks

    When \(n \geq 30\), use z-test statistic: \(z = \frac{\left(w_{s} - \left(\frac{n (n+1)}{4}\right) \right)}{\sqrt{\left( \frac{n(n+1)(2n+1)}{24} \right)}}\)
    Mann-Whitney U Test

    When \(n_{1} \leq 20\) and \(n_{2} \leq 20\)
    \(U_{1} = R_{1} - \frac{n_{1} \left(n_{1}+1\right)}{2}, \ U_{2} = R_{2} - \frac{n_{2} \left(n_{2}+1\right)}{2}\).
    \(U = \text{Min} \left(U_{1}, U_{2}\right)\)

    CV uses tables below. If critical value is not in tables then use an online calculator: https://www.socscistatistics.com/tests/mannwhitney/default.aspx

    When \(n_{1} > 20\) and \(n_{2} > 20\), use z-test statistic: \(z = \frac{\left( U - \left(\frac{n_{1} \cdot n_{2}}{2}\right) \right)}{\sqrt{\left( \frac{n_{1} \cdot n_{2} \left(n_{1} + n_{2} + 1\right)}{12} \right)}}\)

    Wilcoxon Signed-Rank Critical Values

    Table of Wilcoxon signed-rank critical values for both 1-tailed and 2-tailed tests, with alpha values of 0.01, 0.05, and 0.10.

    Mann-Whitney U Critical Values

    Table of critical values for 2-tailed Mann-Whitney U Test for alpha = 0.05.

    Table of critical values for 2-tailed Mann-Whitney U Test for alpha = 0.01.
