Statistics LibreTexts

Mostly Harmless Statistics Formula Packet

Chapter 3 Formulas

Sample Mean: \bar{x} = \frac{\sum x}{n} \quad\quad Population Mean: \mu = \frac{\sum x}{N}
Weighted Mean: \bar{x} = \frac{\sum (x \cdot w)}{\sum w} \quad\quad Range = Max - Min
Sample Standard Deviation: s = \sqrt{\frac{\sum (x - \bar{x})^{2}}{n-1}} \quad\quad Population Standard Deviation = \sigma
Sample Variance: s^{2} = \frac{\sum (x - \bar{x})^{2}}{n-1} \quad\quad Population Variance = \sigma^{2}
Coefficient of Variation: CVar = \left(\frac{s}{\bar{x}} \cdot 100\%\right) \quad\quad Z-Score: z = \frac{x - \bar{x}}{s}
Percentile Index: i = \frac{(n+1) p}{100} \quad\quad Interquartile Range: IQR = Q_{3} - Q_{1}
Empirical Rule: z = 1, 2, 3 \rightarrow 68\%, 95\%, 99.7\% \quad\quad Outlier Lower Limit: Q_{1} - (1.5 \cdot IQR)
Chebyshev's Inequality: \left(1 - \frac{1}{z^{2}}\right) \cdot 100\% \quad\quad Outlier Upper Limit: Q_{3} + (1.5 \cdot IQR)

TI-84: Enter the data in a list and then press [STAT]. Use cursor keys to highlight CALC. Press 1 or [ENTER] to select 1:1-Var Stats. Press [2nd], then press the number key corresponding to your data list. Press [Enter] to calculate the statistics. Note: the calculator always defaults to L1 if you do not specify a data list.

Screenshots of a TI-84 calculator to enter data in a list, select 1-Var Stats, select the list, and calculate statistics for the list.

s_x is the sample standard deviation. Arrow down to see more statistics. Use the min and max to calculate the range by hand. To find the variance, square the standard deviation.
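The Chapter 3 descriptive statistics can be checked with Python's standard library. This is a minimal sketch with a made-up data set; `statistics.stdev` and `statistics.variance` use the same n - 1 divisor as the sample formulas above.

```python
# Chapter 3 formulas with the Python standard library (illustrative data).
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

n = len(data)
x_bar = statistics.fmean(data)       # sample mean: sum(x)/n
s = statistics.stdev(data)           # sample standard deviation (n - 1 divisor)
s2 = statistics.variance(data)       # sample variance = s**2
data_range = max(data) - min(data)   # Range = Max - Min
cvar = s / x_bar * 100               # coefficient of variation, in percent

# z-score of a single observation, here x = 9
z = (9 - x_bar) / s
```

To find the variance on the TI-84, square the reported standard deviation, exactly as `s2 == s**2` here.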

Chapter 4 Formulas

Complement Rules: P(A) + P(A^{C}) = 1, \quad P(A) = 1 - P(A^{C}), \quad P(A^{C}) = 1 - P(A) \quad\quad Mutually Exclusive Events: P(A \cap B) = 0
Union Rule: P(A \cup B) = P(A) + P(B) - P(A \cap B) \quad\quad Independent Events: P(A \cap B) = P(A) \cdot P(B)
Intersection Rule: P(A \cap B) = P(B) \cdot P(A|B) \quad\quad Conditional Probability Rule: P(A|B) = \frac{P(A \cap B)}{P(B)}
Fundamental Counting Rule: m_{1} \cdot m_{2} \cdots m_{n} \quad\quad Factorial Rule: n! = n(n-1)(n-2) \cdots 3 \cdot 2 \cdot 1
Combination Rule: {}_{n}C_{r} = \frac{n!}{r!(n-r)!} \quad\quad Permutation Rule: {}_{n}P_{r} = \frac{n!}{(n-r)!}
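The counting rules map directly onto Python's `math` module; this sketch uses made-up counts to illustrate each rule.

```python
# Counting rules with Python's math module (illustrative numbers).
import math

# Factorial rule: n! ways to arrange n distinct items
arrangements = math.factorial(5)     # 5! = 120

# Permutation rule: nPr = n!/(n-r)!  (order matters)
podiums = math.perm(10, 3)           # 10*9*8 = 720

# Combination rule: nCr = n!/(r!(n-r)!)  (order does not matter)
committees = math.comb(10, 3)        # 120

# Fundamental counting rule: m1 * m2 * ... * mn
outfits = 4 * 3 * 2                  # e.g. 4 shirts, 3 pants, 2 shoes
```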

All 52 playing card values from a standard pack.

Table of sums of the rolls of two 6-sided dice. Logic tree for determining whether to apply the Fundamental Counting Rule, factorials, permutations, or combinations to a situation.

Chapter 5 Formulas

Discrete Distribution Table:
0 \leq P(x_{i}) \leq 1, \quad \sum P(x_{i}) = 1
Discrete Distribution Mean: \mu = \sum \left(x_{i} \cdot P(x_{i})\right)
Discrete Distribution Variance:
\sigma^{2} = \sum \left(x_{i}^{2} \cdot P(x_{i})\right) - \mu^{2}
Discrete Distribution Standard Deviation: \sigma = \sqrt{\sigma^{2}}
Geometric Distribution:
P(X = x) = p \cdot q^{x-1}, \quad x = 1, 2, 3, \ldots

Geometric Distribution Mean: \mu = \frac{1}{p}

Variance: \sigma^{2} = \frac{1-p}{p^{2}}

Standard Deviation: \sigma = \sqrt{\frac{1-p}{p^{2}}}

Binomial Distribution:
P(X = x) = {}_{n}C_{x} \cdot p^{x} \cdot q^{(n-x)}, \quad x = 0, 1, 2, \ldots, n

Binomial Distribution Mean: \mu = n p

Variance: \sigma^{2} = n p q

Standard Deviation: \sigma = \sqrt{n p q}

Hypergeometric Distribution:
P(X = x) = \frac{{}_{a}C_{x} \cdot {}_{b}C_{n-x}}{{}_{N}C_{n}}
p = P(\text{success}), \quad q = P(\text{failure}) = 1 - p
n = \text{sample size}, \quad N = \text{population size}
Unit Change for Poisson Distribution:
New \mu = old \mu \cdot \left(\frac{\text{new units}}{\text{old units}}\right)
Poisson Distribution:
P(X = x) = \frac{e^{-\mu} \mu^{x}}{x!}
P(X = x)
Phrases: is the same as; is equal to; is exactly the same as; has not changed from
Excel:
=BINOM.DIST(x,n,p,0)
=HYPGEOM.DIST(x,n,a,N,0)
=POISSON.DIST(x,μ,0)
TI Calculator:
geometpdf(p,x)
binompdf(n,p,x)
poissonpdf(μ,x)

P(X ≤ x)
Phrases: is less than or equal to; is at most; is not greater than; within
Excel:
=BINOM.DIST(x,n,p,1)
=HYPGEOM.DIST(x,n,a,N,1)
=POISSON.DIST(x,μ,1)
TI Calculator:
binomcdf(n,p,x)
poissoncdf(μ,x)

P(X ≥ x)
Phrases: is greater than or equal to; is at least; is not less than; is more than or equal to
Excel:
=1-BINOM.DIST(x-1,n,p,1)
=1-HYPGEOM.DIST(x-1,n,a,N,1)
=1-POISSON.DIST(x-1,μ,1)
TI Calculator:
1-binomcdf(n,p,x-1)
1-poissoncdf(μ,x-1)
How do you tell them apart?

  • Geometric – A percent or proportion is given. There is no set sample size; trials continue until a success is achieved.
  • Binomial – A percent or proportion is given. A sample size is given.
  • Hypergeometric – Usually frequencies of successes are given instead of percentages. A sample size is given.
  • Poisson – An average or mean is given. There is no set sample size.

P(X > x)
Phrases: more than; greater than; above; higher than; longer than; bigger than; increased
Excel:
=1-BINOM.DIST(x,n,p,1)
=1-HYPGEOM.DIST(x,n,a,N,1)
=1-POISSON.DIST(x,μ,1)
TI Calculator:
1-binomcdf(n,p,x)
1-poissoncdf(μ,x)

P(X < x)
Phrases: less than; below; lower than; shorter than; smaller than; decreased; reduced
Excel:
=BINOM.DIST(x-1,n,p,1)
=HYPGEOM.DIST(x-1,n,a,N,1)
=POISSON.DIST(x-1,μ,1)
TI Calculator:
binomcdf(x-1,n,p,1)
poissoncdf(μ,x-1)
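The Chapter 5 pmf formulas can be written out directly in Python; this sketch (with made-up parameters) mirrors the Excel/TI recipes above, including the P(X ≥ x) = 1 - P(X ≤ x - 1) trick for "at least" questions.

```python
# Chapter 5 pmf/cdf formulas with only the standard library.
import math

def binom_pmf(x, n, p):
    # P(X = x) = nCx * p^x * q^(n-x)
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

def binom_cdf(x, n, p):
    # P(X <= x): sum the pmf from 0 to x
    return sum(binom_pmf(k, n, p) for k in range(x + 1))

def poisson_pmf(x, mu):
    # P(X = x) = e^(-mu) * mu^x / x!
    return math.exp(-mu) * mu**x / math.factorial(x)

def geom_pmf(x, p):
    # P(X = x) = p * q^(x-1), x = 1, 2, 3, ...
    return p * (1 - p)**(x - 1)

# P(X >= 3) for X ~ Binomial(n=10, p=0.4),
# like =1-BINOM.DIST(2,10,0.4,1) in Excel or 1-binomcdf(10,0.4,2) on the TI
p_at_least_3 = 1 - binom_cdf(2, 10, 0.4)
```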

Chapter 6 Formulas

Uniform Distribution
f(x) = \frac{1}{b-a}, \text{ for } a \leq x \leq b
P(X \geq x) = P(X > x) = \left(\frac{1}{b-a}\right) (b - x)
P(X \leq x) = P(X < x) = \left(\frac{1}{b-a}\right) (x - a)
P(x_{1} \leq X \leq x_{2}) = P(x_{1} < X < x_{2}) = \left(\frac{1}{b-a}\right) (x_{2} - x_{1})
Exponential Distribution
f(x) = \frac{1}{\mu} e^{-x/\mu}, \text{ for } x \geq 0
P(X \geq x) = P(X > x) = e^{-x/\mu}
P(X \leq x) = P(X < x) = 1 - e^{-x/\mu}
P(x_{1} \leq X \leq x_{2}) = P(x_{1} < X < x_{2}) = e^{-x_{1}/\mu} - e^{-x_{2}/\mu}
Standard Normal Distribution
\mu = 0, \quad \sigma = 1
z-score: z = \frac{x - \mu}{\sigma}
x = z \sigma + \mu
Central Limit Theorem
Z-score: z = \frac{\bar{x} - \mu}{\left(\sigma / \sqrt{n}\right)}
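The uniform and exponential formulas above are simple enough to compute by hand; this sketch (with illustrative parameter values) just restates them as Python functions.

```python
# Uniform, exponential, and CLT formulas from this chapter (illustrative values).
import math

def uniform_between(x1, x2, a, b):
    # P(x1 < X < x2) = (x2 - x1)/(b - a)
    return (x2 - x1) / (b - a)

def expon_at_least(x, mu):
    # P(X >= x) = e^(-x/mu)
    return math.exp(-x / mu)

def clt_z(x_bar, mu, sigma, n):
    # z = (x_bar - mu) / (sigma / sqrt(n))
    return (x_bar - mu) / (sigma / math.sqrt(n))

p = uniform_between(2, 5, 0, 10)   # X ~ Uniform(0, 10)
z = clt_z(52, 50, 8, 16)           # sample of n = 16 from mu = 50, sigma = 8
```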

In the table below, note that when \mu = 0 and \sigma = 1, you can use the NORM.S.DIST or NORM.S.INV function in Excel for the standard normal distribution.

P(X ≤ x) or P(X < x)
Phrases: is less than or equal to; is at most; is not greater than; within; less than; below; lower than; shorter than; smaller than; decreased; reduced
A probability distribution with the shaded region under the curve to the left of the desired value.
Excel
Finding a Probability: =NORM.DIST(x,μ,σ,TRUE)
Finding a Percentile: =NORM.INV(area,μ,σ)
TI Calculator
Finding a Probability: normalcdf(-1E99,x,μ,σ)
Finding a Percentile: invNorm(area,μ,σ)

P(x₁ < X < x₂) or P(x₁ ≤ X ≤ x₂)
Phrases: between
A probability distribution with the shaded region under the curve between the two desired values.
Excel
Finding a Probability: =NORM.DIST(x₂,μ,σ,TRUE)-NORM.DIST(x₁,μ,σ,TRUE)
Finding a Percentile (middle area):
x₁ = NORM.INV((1-area)/2,μ,σ)
x₂ = NORM.INV(1-(1-area)/2,μ,σ)
TI Calculator
Finding a Probability: normalcdf(x₁,x₂,μ,σ)
Finding a Percentile (middle area):
x₁ = invNorm((1-area)/2,μ,σ)
x₂ = invNorm(1-(1-area)/2,μ,σ)

P(X ≥ x) or P(X > x)
Phrases: is greater than or equal to; is at least; is not less than; more than; greater than; above; higher than; longer than; bigger than; increased; larger
A probability distribution with the shaded region under the curve to the right of the desired value.
Excel
Finding a Probability: =1-NORM.DIST(x,μ,σ,TRUE)
Finding a Percentile: =NORM.INV(1-area,μ,σ)
TI Calculator
Finding a Probability: normalcdf(x,1E99,μ,σ)
Finding a Percentile: invNorm(1-area,μ,σ)
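The Excel and TI recipes above can be mirrored with `statistics.NormalDist` from the Python standard library; the parameter values here (μ = 100, σ = 15, x = 120) are made up for illustration.

```python
# Normal probabilities and percentiles, like NORM.DIST/NORM.INV in Excel.
from statistics import NormalDist

dist = NormalDist(mu=100, sigma=15)

p_below = dist.cdf(120)                  # like =NORM.DIST(120,100,15,TRUE)
p_above = 1 - dist.cdf(120)              # like =1-NORM.DIST(120,100,15,TRUE)
p_between = dist.cdf(120) - dist.cdf(90) # like normalcdf(90,120,100,15)

x_90th = dist.inv_cdf(0.90)              # 90th percentile, like =NORM.INV(0.9,100,15)
```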

Chapter 7 Formulas

Confidence Interval for One Proportion
\hat{p} \pm z_{\alpha/2} \sqrt{\frac{\hat{p} \hat{q}}{n}}
\hat{p} = \frac{x}{n}
\hat{q} = 1 - \hat{p}
TI-84: 1-PropZInt
Sample Size for Proportion
n = p q \left(\frac{z_{\alpha/2}}{E}\right)^{2}
Always round up to a whole number.
If p is not given, use p = 0.5.
E = Margin of Error
Confidence Interval for One Mean
Use z-interval when \sigma is given.
Use t-interval when s is given.
If n < 30, population needs to be normal.
Z-Confidence Interval
\bar{x} \pm z_{\alpha/2} \left(\frac{\sigma}{\sqrt{n}}\right)
TI-84: ZInterval
Z-Critical Values
Excel: z_{\alpha/2} = \text{NORM.INV}(1 - \text{area}/2, 0, 1)
TI-84: z_{\alpha/2} = \text{invNorm}(1 - \text{area}/2, 0, 1)
t-Critical Values
Excel: t_{\alpha/2} = \text{T.INV}(1 - \text{area}/2, df)
TI-84: t_{\alpha/2} = \text{invT}(1 - \text{area}/2, df)
t-Confidence Interval
\bar{x} \pm t_{\alpha/2} \left(\frac{s}{\sqrt{n}}\right)
df = n - 1
TI-84: TInterval
Sample Size for Mean
n = \left(\frac{z_{\alpha/2} \cdot \sigma}{E}\right)^{2}
Always round up to a whole number.
E = Margin of Error
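The one-proportion z-interval and the sample-size formula can be sketched with the standard library; the survey numbers (210 successes out of 350, E = 0.03) are made up for illustration.

```python
# One-proportion z-interval and sample size for a proportion (illustrative data).
import math
from statistics import NormalDist

x, n = 210, 350
p_hat = x / n
q_hat = 1 - p_hat

alpha = 0.05
z = NormalDist().inv_cdf(1 - alpha / 2)      # z_(alpha/2), about 1.96

margin = z * math.sqrt(p_hat * q_hat / n)    # E = z * sqrt(p-hat q-hat / n)
interval = (p_hat - margin, p_hat + margin)

# Sample size for a proportion: n = p*q*(z/E)^2, rounded UP; use p = 0.5 if unknown
E = 0.03
n_needed = math.ceil(0.5 * 0.5 * (z / E) ** 2)
```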

Chapter 8 Formulas

Hypothesis Test for One Mean
Use z-test when σ is given.
Use t-test when s is given.
If n<30, population needs to be normal.
Type I Error: Reject H_{0} when H_{0} is true.
Type II Error: Fail to reject H_{0} when H_{0} is false.
Z-Test:
H_{0}: \mu = \mu_{0}
H_{1}: \mu \neq \mu_{0}
z = \frac{\bar{x} - \mu_{0}}{\left(\sigma / \sqrt{n}\right)} \quad\quad TI-84: Z-Test
t-Test:
H_{0}: \mu = \mu_{0}
H_{1}: \mu \neq \mu_{0}
t = \frac{\bar{x} - \mu_{0}}{\left(s / \sqrt{n}\right)} \quad\quad TI-84: T-Test
z-Critical Values
Excel:
Two-tail: z_{\alpha/2} = \text{NORM.INV}(1 - \alpha/2, 0, 1)
Right-tail: z_{1-\alpha} = \text{NORM.INV}(1 - \alpha, 0, 1)
Left-tail: z_{\alpha} = \text{NORM.INV}(\alpha, 0, 1)

TI-84:
Two-tail: z_{\alpha/2} = \text{invNorm}(1 - \alpha/2, 0, 1)
Right-tail: z_{1-\alpha} = \text{invNorm}(1 - \alpha, 0, 1)
Left-tail: z_{\alpha} = \text{invNorm}(\alpha, 0, 1)
t-Critical Values
Excel:
Two-tail: t_{\alpha/2} = \text{T.INV}(1 - \alpha/2, df)
Right-tail: t_{1-\alpha} = \text{T.INV}(1 - \alpha, df)
Left-tail: t_{\alpha} = \text{T.INV}(\alpha, df)

TI-84:
Two-tail: t_{\alpha/2} = \text{invT}(1 - \alpha/2, df)
Right-tail: t_{1-\alpha} = \text{invT}(1 - \alpha, df)
Left-tail: t_{\alpha} = \text{invT}(\alpha, df)
Hypothesis Test for One Proportion
H_{0}: p = p_{0}
H_{1}: p \neq p_{0}
z = \frac{\hat{p} - p_{0}}{\sqrt{\frac{p_{0} q_{0}}{n}}}
TI-84: 1-PropZTest
Rejection Rules:
P-value method: reject H_{0} when the p-value \leq \alpha.
Critical value method: reject H_{0} when the test statistic is in the critical region (shaded tails).
Two-tailed Test
H_{0}: \mu = \mu_{0} or H_{0}: p = p_{0}
H_{1}: \mu \neq \mu_{0} or H_{1}: p \neq p_{0}
A probability distribution with the area under both tails shaded.

Right-tailed Test
H_{0}: \mu = \mu_{0} or H_{0}: p = p_{0}
H_{1}: \mu > \mu_{0} or H_{1}: p > p_{0}
A probability distribution with the area under the right tail shaded.

Left-tailed Test
H_{0}: \mu = \mu_{0} or H_{0}: p = p_{0}
H_{1}: \mu < \mu_{0} or H_{1}: p < p_{0}
A probability distribution with the area under the left tail shaded.

Claim is in the Null Hypothesis
=: is equal to; is exactly the same as; has not changed from; is the same as
≤: is less than or equal to; is at most; is not more than; within
≥: is greater than or equal to; is at least; is not less than; is more than or equal to

Claim is in the Alternative Hypothesis
≠: is not; is not equal to; is different from; has changed from; is not the same as
>: more than; greater than; above; higher than; longer than; bigger than; increased
<: less than; below; lower than; shorter than; smaller than; decreased; reduced
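The one-proportion z-test above can be sketched end to end with the standard library; the numbers (claim p₀ = 0.5, 290 successes in 500 trials) are made up for illustration.

```python
# One-proportion z-test with a two-tailed p-value (illustrative data).
import math
from statistics import NormalDist

x, n, p0 = 290, 500, 0.5
p_hat = x / n
q0 = 1 - p0

# z = (p-hat - p0) / sqrt(p0*q0/n)
z = (p_hat - p0) / math.sqrt(p0 * q0 / n)

# two-tailed p-value: 2 * P(Z >= |z|)
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

# P-value method: reject H0 when p-value <= alpha
reject = p_value <= 0.05
```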

Chapter 9 Formulas

Hypothesis Test for Two Dependent Means
H_{0}: \mu_{D} = 0
H_{1}: \mu_{D} \neq 0
t = \frac{\bar{D} - \mu_{D}}{\left(s_{D} / \sqrt{n}\right)}
TI-84: T-Test
Confidence Interval for Two Dependent Means
\bar{D} \pm t_{\alpha/2} \left(\frac{s_{D}}{\sqrt{n}}\right)
TI-84: TInterval
Hypothesis Test for Two Independent Means
Z-Test: H_{0}: \mu_{1} = \mu_{2}
H_{1}: \mu_{1} \neq \mu_{2}
z = \frac{\left(\bar{x}_{1} - \bar{x}_{2}\right) - \left(\mu_{1} - \mu_{2}\right)_{0}}{\sqrt{\left(\frac{\sigma_{1}^{2}}{n_{1}} + \frac{\sigma_{2}^{2}}{n_{2}}\right)}}
TI-84: 2-SampZTest
Confidence Interval for Two Independent Means Z-Interval
\left(\bar{x}_{1} - \bar{x}_{2}\right) \pm z_{\alpha/2} \sqrt{\left(\frac{\sigma_{1}^{2}}{n_{1}} + \frac{\sigma_{2}^{2}}{n_{2}}\right)}
TI-84: 2-SampZInt
Hypothesis Test for Two Independent Means
H_{0}: \mu_{1} = \mu_{2}
H_{1}: \mu_{1} \neq \mu_{2}

T-Test: Assume variances are unequal
t = \frac{\left(\bar{x}_{1} - \bar{x}_{2}\right) - \left(\mu_{1} - \mu_{2}\right)_{0}}{\sqrt{\left(\frac{s_{1}^{2}}{n_{1}} + \frac{s_{2}^{2}}{n_{2}}\right)}}
TI-84: 2-SampTTest
df = \dfrac{\left( \frac{s_{1}^{2}}{n_{1}} + \frac{s_{2}^{2}}{n_{2}} \right)^2}{\left( \left(\frac{s_{1}^{2}}{n_{1}}\right)^{2} \left(\frac{1}{n_{1}-1}\right) + \left(\frac{s_{2}^{2}}{n_{2}}\right)^{2} \left(\frac{1}{n_{2}-1}\right) \right)}

T-Test: Assume variances are equal
t = \frac{\left(\bar{x}_{1} - \bar{x}_{2}\right) - \left(\mu_{1} - \mu_{2}\right)}{\sqrt{\left(\frac{\left(n_{1}-1\right) s_{1}^{2} + \left(n_{2}-1\right) s_{2}^{2}}{\left(n_{1}+n_{2}-2\right)}\right) \left(\frac{1}{n_{1}} + \frac{1}{n_{2}}\right)}}
df = n_{1} + n_{2} - 2
Confidence Interval for Two Independent Means
T-Interval: Assume variances are unequal

\left(\bar{x}_{1} - \bar{x}_{2}\right) \pm t_{\alpha/2} \sqrt{\left(\frac{s_{1}^{2}}{n_{1}} + \frac{s_{2}^{2}}{n_{2}}\right)}
TI-84: 2\text{-SampTInt}
df = \dfrac{\left( \frac{s_{1}^{2}}{n_{1}} + \frac{s_{2}^{2}}{n_{2}} \right)^2}{\left( \left(\frac{s_{1}^{2}}{n_{1}}\right)^{2} \left(\frac{1}{n_{1}-1}\right) + \left(\frac{s_{2}^{2}}{n_{2}}\right)^{2} \left(\frac{1}{n_{2}-1}\right) \right)}

T-Interval: Assume variances are equal
\left(\bar{x}_{1} - \bar{x}_{2}\right) \pm t_{\alpha/2} \sqrt{\left( \left(\frac{\left(n_{1} - 1\right) s_{1}^{2} + \left(n_{2} - 1\right) s_{2}^{2}}{\left(n_{1} + n_{2} - 2\right)}\right) \left(\frac{1}{n_{1}} + \frac{1}{n_{2}}\right) \right)}
df = n_{1} + n_{2} - 2
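The unequal-variance (Welch) t statistic and its degrees of freedom can be sketched from summary statistics alone; the sample means, standard deviations, and sizes here are made up for illustration.

```python
# Welch two-sample t statistic and degrees of freedom (illustrative summaries).
import math

x1, s1, n1 = 20.5, 3.2, 25   # group 1: mean, sample sd, size
x2, s2, n2 = 18.2, 4.1, 30   # group 2: mean, sample sd, size

v1, v2 = s1**2 / n1, s2**2 / n2

# t = ((x1-bar - x2-bar) - 0) / sqrt(s1^2/n1 + s2^2/n2), under H0: mu1 = mu2
t = (x1 - x2) / math.sqrt(v1 + v2)

# Welch-Satterthwaite df, matching the formula above
df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
```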
Hypothesis Test for Two Proportions
H_{0}: p_{1} = p_{2}
H_{1}: p_{1} \neq p_{2}
z = \dfrac{\left(\hat{p}_{1} - \hat{p}_{2}\right) - \left(p_{1} - p_{2}\right)}{\sqrt{ \left( \hat{p} \cdot \hat{q} \left(\frac{1}{n_{1}} + \frac{1}{n_{2}}\right) \right) }}

\hat{p} = \frac{\left(x_{1} + x_{2}\right)}{\left(n_{1} + n_{2}\right)} = \frac{\left(\hat{p}_{1} \cdot n_{1} + \hat{p}_{2} \cdot n_{2}\right)}{\left(n_{1} + n_{2}\right)}
\hat{q} = 1 - \hat{p}
\hat{p}_{1} = \frac{x_{1}}{n_{1}}, \quad\quad \hat{p}_{2} = \frac{x_{2}}{n_{2}}
TI-84: 2\text{-PropZTest}
Confidence Interval for Two Proportions
\left(\hat{p}_{1} - \hat{p}_{2}\right) \pm z_{\alpha/2} \sqrt{\left( \frac{\hat{p}_{1} \hat{q}_{1}}{n_{1}} + \frac{\hat{p}_{2} \hat{q}_{2}}{n_{2}} \right)}
\hat{p}_{1} = \frac{x_{1}}{n_{1}} \quad\quad\quad \hat{p}_{2} = \frac{x_{2}}{n_{2}}
\hat{q}_{1} = 1 - \hat{p}_{1} \quad\quad \hat{q}_{2} = 1 - \hat{p}_{2}
TI-84: 2\text{-PropZInt}
Hypothesis Test for Two Variances
H_{0}: \sigma_{1}^{2} = \sigma_{2}^{2}
H_{1}: \sigma_{1}^{2} \neq \sigma_{2}^{2}
F = \frac{s_{1}^{2}}{s_{2}^{2}}
df_{\text{N}} = n_{1} - 1, \quad\quad df_{\text{D}} = n_{2} - 1
TI-84: 2\text{-SampFTest}
Hypothesis Test for Two Standard Deviations
H_{0}: \sigma_{1} = \sigma_{2}
H_{1}: \sigma_{1} \neq \sigma_{2}
F = \frac{s_{1}^{2}}{s_{2}^{2}}
df_{\text{N}} = n_{1} - 1, \quad\quad df_{\text{D}} = n_{2} - 1
TI-84: 2\text{-SampFTest}
F-Critical Values
Excel:
Two-tail: F_{\alpha/2} = \text{F.INV}(1 - \alpha/2, df_{\text{N}}, df_{\text{D}})
Right-tail: F_{1-\alpha} = \text{F.INV}(1 - \alpha, df_{\text{N}}, df_{\text{D}})
Left-tail: F_{\alpha} = \text{F.INV}(\alpha, df_{\text{N}}, df_{\text{D}})
For z and t-Critical Values refer back to Chapter 8

TI-84: invF program can be downloaded at http://www.MostlyHarmlessStatistics.com.

Flowchart for deciding which type of test to use, based on what information is given: proportions or means, the number of samples, etc.

Chapter 10 Formulas

Goodness of Fit Test
H_{0}: p_{1} = p_{0}, p_{2} = p_{0}, \ldots, p_{k} = p_{0}
H_{1}: At least one proportion is different.
\chi^{2} = \sum \frac{(O-E)^{2}}{E}
df = k-1, p_{0} = 1/k \text{ or given %}
TI-84: \chi^{2} \text{ GOF-Test}
Test for Independence
H_{0}: Variable 1 and Variable 2 are independent.
H_{1}: Variable 1 and Variable 2 are dependent.
\chi^{2} = \sum \frac{(O-E)^{2}}{E}
df = (R-1)(C-1)
TI-84: \chi^{2} \text{-Test}
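The chi-square statistic for both tests above is the same sum over observed and expected counts; this sketch uses made-up die-roll counts against a fair-die null (p₀ = 1/6 for each face).

```python
# Goodness-of-fit statistic: chi2 = sum((O - E)^2 / E), with illustrative counts.
observed = [8, 12, 11, 9, 10, 10]   # counts of each face in 60 rolls
n = sum(observed)
expected = [n / 6] * 6              # 10 per face under H0: fair die

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1              # k - 1
```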

Chapter 11 Formulas

One-Way ANOVA:
H_{0}: \mu_{1} = \mu_{2} = \mu_{3} = \ldots = \mu_{k} \quad\quad k = \text{number of groups}
H_{1}: At least one mean is different.
One-way ANOVA table showing the formulas for sum of squares, degrees of freedom, mean squares, and F-values for factor and error.

\bar{x}_{i} = sample mean from the i^{th} group
n_{i} = sample size of the i^{th} group
s_{i}^{2} = sample variance from the i^{th} group
N = n_{1} + n_{2} + \cdots + n_{k}
\bar{x}_{GM} = \frac{\sum x_{i}}{N}
Bonferroni test statistic: t = \dfrac{\bar{x}_{i} - \bar{x}_{j}}{\sqrt{\left( MSW \left(\frac{1}{n_{i}} + \frac{1}{n_{j}}\right) \right)}}
H_{0}: \mu_{i} = \mu_{j}
H_{1}: \mu_{i} \neq \mu_{j}
Bonferroni correction: multiply each p-value by m = {}_{k} C_{2}; for the critical value, divide the tail area by m = {}_{k} C_{2}.
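The one-way ANOVA quantities defined above can be sketched directly from raw data; the three groups here are made up for illustration.

```python
# One-way ANOVA from the definitions above (illustrative groups).
groups = [[4, 5, 6], [6, 7, 8], [9, 10, 11]]

k = len(groups)                                 # number of groups
N = sum(len(g) for g in groups)                 # N = n1 + n2 + ... + nk
grand_mean = sum(sum(g) for g in groups) / N    # x-bar_GM = sum(x_i)/N

# Between-groups (factor) and within-groups (error) sums of squares
ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

msb = ssb / (k - 1)        # mean square between, df = k - 1
msw = ssw / (N - k)        # mean square within,  df = N - k
F = msb / msw
```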
Two-Way ANOVA:
Row Effect (Factor A): H_{0}: The row variable has no effect on the average ______________.
H_{1}: The row variable has an effect on the average ______________.

Column Effect (Factor B): H_{0}: The column variable has no effect on the average ______________.
H_{1}: The column variable has an effect on the average ______________.

Interaction Effect (A \times B):
H_{0}: There is no interaction effect between row variable and column variable on the average ______________.
H_{1}: There is an interaction effect between row variable and column variable on the average ______________.

Two-way ANOVA table showing equations for SS, df, MS, and F-values for the row factor, column factor, interaction, and error.

Chapter 12 Formulas

SS_{xx} = (n-1) s_{x}^{2}
SS_{yy} = (n-1) s_{y}^{2}
SS_{xy} = \sum (xy) - n \cdot \bar{x} \cdot \bar{y}
Correlation Coefficient
r = \frac{SS_{xy}}{\sqrt{\left( SS_{xx} \cdot SS_{yy} \right)}}
Slope = b_{1} = \frac{SS_{xy}}{SS_{xx}}

y-intercept = b_{0} = \bar{y} - b_{1} \bar{x}

Regression Equation (Line of Best Fit): \hat{y} = b_{0} + b_{1} x
Correlation t-test
H_{0}: \rho = 0; \ H_{1}: \rho \neq 0 \quad\quad\quad t = r \sqrt{\left(\frac{n-2}{1-r^{2}}\right)} \quad df = n-2

Slope t-test
H_{0}: \beta_{1} = 0; \ H_{1}: \beta_{1} \neq 0 \quad\quad\quad t = \frac{b_{1}}{\sqrt{\left( \frac{MSE}{SS_{xx}} \right)}} \quad df = n - p - 1 = n-2
Residual
e_{i} = y_{i} - \hat{y}_{i} (Residual plots should have no patterns.)

Standard Error of Estimate
s_{est} = \sqrt{\frac{\sum \left(y_{i} - \hat{y}_{i}\right)^{2}}{n - 2}} = \sqrt{MSE}

Prediction Interval
\hat{y} \pm t_{\alpha/2} \cdot s_{est} \sqrt{\left(1 + \frac{1}{n} + \frac{\left(x - \bar{x}\right)^{2}}{SS_{xx}}\right)}
Slope/Model F-test
H_{0}: \beta_{1} = 0; \ H_{1}: \beta_{1} \neq 0
Table showing equations to calculate SS, df, MS, and F-value for regression and error.
Multiple Linear Regression Equation
\hat{y} = b_{0} + b_{1} x_{1} + b_{2} x_{2} + \cdots + b_{p} x_{p}
Coefficient of Determination
R^{2} = (r)^{2} = \frac{SSR}{SST}
Model F-Test for Multiple Regression
H_{0}: \beta_{1} = \beta_{2} = \cdots = \beta_{p} = 0
H_{1}: At least one slope is not zero.
Adjusted Coefficient of Determination
R_{adj}^{2} = 1 - \left(\frac{\left(1 - R^{2}\right) (n-1)}{(n - p - 1)}\right)
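The SS formulas at the top of this chapter give the correlation, slope, and intercept with a few lines of code; the (x, y) pairs below are made up for illustration.

```python
# Least-squares regression from the SS formulas above (illustrative data).
import math

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

ss_xx = sum((x - x_bar) ** 2 for x in xs)              # = (n-1) s_x^2
ss_yy = sum((y - y_bar) ** 2 for y in ys)              # = (n-1) s_y^2
ss_xy = sum(x * y for x, y in zip(xs, ys)) - n * x_bar * y_bar

r = ss_xy / math.sqrt(ss_xx * ss_yy)   # correlation coefficient
b1 = ss_xy / ss_xx                     # slope
b0 = y_bar - b1 * x_bar                # y-intercept

def predict(x):
    # y-hat = b0 + b1 * x
    return b0 + b1 * x
```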

Chapter 13 Formulas

Ranking Data

  • Order the data from smallest to largest.
  • The smallest value gets a rank of 1.
  • The next smallest gets a rank of 2, etc.
  • If there are any values that tie, then each of the tied values gets the average of the corresponding ranks.

Sign Test

H_{0}: Median = MD_{0}
H_{1}: Median \neq MD_{0}
p-value uses binomial distribution with p = 0.5 and n is the sample size not including ties with the median or differences of 0.

  • For a 2-tailed test, the test statistic, x, is the smaller of the plus or minus signs. If x is the test statistic, the p-value for a two-tailed test is 2 \cdot \text{P}(X \leq x).
  • For a right-tailed test, the test statistic, x, is the number of plus signs. For a left-tailed test, the test statistic, x, is the number of minus signs. The p-value for a one-tailed test is \text{P}(X \geq x) or \text{P}(X \leq x).
Wilcoxon Signed-Rank Test

n is the sample size, not including differences of 0. When n < 30, use the test statistic w_{s}, the smaller of the absolute values of the sums of the positive and negative ranks. The critical value uses the table below.

If critical value is not in table then use an online calculator: http://www.socscistatistics.com/tests/signedranks

When n \geq 30, use z-test statistic: z = \frac{\left(w_{s} - \left(\frac{n (n+1)}{4}\right) \right)}{\sqrt{\left( \frac{n(n+1)(2n+1)}{24} \right)}}
Mann-Whitney U Test

When n_{1} \leq 20 and n_{2} \leq 20
U_{1} = R_{1} - \frac{n_{1} \left(n_{1}+1\right)}{2}, \ U_{2} = R_{2} - \frac{n_{2} \left(n_{2}+1\right)}{2}.
U = \text{Min} \left(U_{1}, U_{2}\right)

CV uses tables below. If critical value is not in tables then use an online calculator: https://www.socscistatistics.com/tests/mannwhitney/default.aspx

When n_{1} > 20 and n_{2} > 20, use z-test statistic: z = \frac{\left( U - \left(\frac{n_{1} \cdot n_{2}}{2}\right) \right)}{\sqrt{\left( \frac{n_{1} \cdot n_{2} \left(n_{1} + n_{2} + 1\right)}{12} \right)}}
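The ranking rules and the Mann-Whitney U formulas above can be sketched as follows; the two small samples are made up for illustration, and tied values receive the average of their ranks as described at the start of the chapter.

```python
# Mann-Whitney U from rank sums, with average ranks for ties (illustrative data).
def ranks(values):
    # assign each value the average of the ranks it would occupy
    sorted_vals = sorted(values)
    rank_of = {}
    i = 0
    while i < len(sorted_vals):
        j = i
        while j < len(sorted_vals) and sorted_vals[j] == sorted_vals[i]:
            j += 1
        rank_of[sorted_vals[i]] = (i + 1 + j) / 2   # average of ranks i+1 .. j
        i = j
    return [rank_of[v] for v in values]

sample1 = [12, 15, 11, 18]
sample2 = [14, 10, 13, 16, 17]

combined = ranks(sample1 + sample2)
R1 = sum(combined[: len(sample1)])      # rank sum of sample 1
R2 = sum(combined[len(sample1):])       # rank sum of sample 2

n1, n2 = len(sample1), len(sample2)
U1 = R1 - n1 * (n1 + 1) / 2
U2 = R2 - n2 * (n2 + 1) / 2
U = min(U1, U2)                         # compare U to the critical-value tables
```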

Wilcoxon Signed-Rank Critical Values

Table of Wilcoxon signed-rank critical values for both 1-tailed and 2-tailed tests, with alpha values of 0.01, 0.05, and 0.10.

Mann-Whitney U Critical Values

Table of critical values for 2-tailed Mann-Whitney U Test for alpha = 0.05.

Table of critical values for 2-tailed Mann-Whitney U Test for alpha = 0.01.
