|
Binomial Distribution
|
a discrete random variable (RV) that arises from Bernoulli trials. There are a fixed number, \(n\), of independent trials. “Independent” means that the result of any trial (for example, trial 1) does not affect the results of the following trials, and all trials are conducted under the same conditions. Under these circumstances the binomial RV \(X\) is defined as the number of successes in \(n\) trials. The notation is: \(X \sim B(n, p)\). The mean is \(\mu = np\) and the standard deviation is \(\sigma=\sqrt{n p q}\), where \(q = 1 - p\). The probability of exactly \(x\) successes in \(n\) trials is \(P(X=x)=\left(\begin{array}{l}{n} \\ {x}\end{array}\right) p^{x} q^{n-x}\).
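As a quick illustration (the values of \(n\), \(p\), and \(x\) below are made up), the pmf, mean, and standard deviation above can be computed directly with Python's standard library:

```python
from math import comb, sqrt

def binomial_pmf(n: int, p: float, x: int) -> float:
    """P(X = x) for X ~ B(n, p), with q = 1 - p."""
    q = 1.0 - p
    return comb(n, x) * p**x * q**(n - x)

n, p = 10, 0.5                   # hypothetical trial count and success probability
mu = n * p                       # mean: np
sigma = sqrt(n * p * (1 - p))    # standard deviation: sqrt(npq)
print(binomial_pmf(n, p, 5))     # probability of exactly 5 successes
```

Summing `binomial_pmf(n, p, x)` over all \(x\) from 0 to \(n\) returns 1, as a pmf must.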
|
|
Central Limit Theorem
|
Given a random variable (RV) with known mean \(\mu\) and known standard deviation \(\sigma\), we are sampling with size \(n\) and are interested in a new RV, the sample mean \(\overline{X}\). If the size \(n\) of the sample is sufficiently large, then \(\overline{X} \sim N\left(\mu, \frac{\sigma}{\sqrt{n}}\right)\): the distribution of the sample means will approximate a normal distribution regardless of the shape of the population. The expected value of the sample means will equal the population mean. The standard deviation of the distribution of the sample means, \(\frac{\sigma}{\sqrt{n}}\), is called the standard error of the mean.
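A minimal simulation sketch of this idea (the population, sample size, and repetition count are arbitrary choices): draw many samples from a skewed exponential population and check that the sample means cluster around \(\mu\) with spread close to \(\sigma/\sqrt{n}\).

```python
import random
from math import sqrt
from statistics import mean, stdev

random.seed(0)                        # fixed seed so the run is reproducible
mu, sigma, n = 1.0, 1.0, 50           # exponential(1): mean 1, sd 1, heavily skewed

# 2000 sample means, each from a fresh sample of size n
sample_means = [mean(random.expovariate(1.0) for _ in range(n))
                for _ in range(2000)]

standard_error = sigma / sqrt(n)      # predicted sd of the sample means
print(mean(sample_means), stdev(sample_means), standard_error)
```

Despite the skewed population, the observed mean and spread of the sample means land close to \(\mu\) and \(\sigma/\sqrt{n}\).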
|
|
Confidence Interval (CI)
|
an interval estimate for an unknown population parameter. This depends on:
-
The desired confidence level.
-
Information that is known about the distribution (for example, known standard deviation).
-
The sample and its size.
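A minimal sketch of one common case (a z-interval for a population mean with known \(\sigma\); the sample mean, \(\sigma\), and \(n\) below are hypothetical):

```python
from math import sqrt
from statistics import NormalDist

def z_confidence_interval(xbar, sigma, n, confidence=0.95):
    """CI for a population mean when sigma is known: xbar +/- z* . sigma/sqrt(n)."""
    z_star = NormalDist().inv_cdf(0.5 + confidence / 2)  # e.g. about 1.96 for 95%
    margin = z_star * sigma / sqrt(n)
    return xbar - margin, xbar + margin

low, high = z_confidence_interval(xbar=68.0, sigma=3.0, n=36)
print(low, high)
```

Raising the confidence level widens the interval; raising \(n\) narrows it, matching the three dependencies listed above.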
|
|
Critical Value
|
The \(t\) or \(Z\) value set by the researcher that measures the probability of a Type I error, \(\alpha\).
|
|
Hypothesis
|
a statement about the value of a population parameter. In the case of two hypotheses, the statement assumed to be true is called the null hypothesis (notation \(H_0\)) and the contradictory statement is called the alternative hypothesis (notation \(H_a\)).
|
|
Hypothesis Testing
|
Based on sample evidence, a procedure for determining whether the hypothesis stated is a reasonable statement and should not be rejected, or is unreasonable and should be rejected.
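This procedure can be sketched for a two-tailed one-sample z-test (all numbers below are hypothetical, and \(\sigma\) is assumed known):

```python
from math import sqrt
from statistics import NormalDist

def one_sample_z_test(xbar, mu0, sigma, n, alpha=0.05):
    """Two-tailed z-test of H0: mu = mu0, with sigma known."""
    z = (xbar - mu0) / (sigma / sqrt(n))          # standardized sample evidence
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed p-value
    return z, p_value, p_value < alpha            # True -> reject H0

z, p, reject = one_sample_z_test(xbar=16.5, mu0=16.0, sigma=1.2, n=36)
print(z, p, reject)
```

The decision rule is the comparison in the last line: reject \(H_0\) when the p-value falls below \(\alpha\), otherwise do not reject.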
|
|
Normal Distribution
|
a continuous random variable (RV) with pdf \(f(x)=\frac{1}{\sigma \sqrt{2 \pi}} e^{\frac{-(x-\mu)^{2}}{2 \sigma^{2}}}\), where \(\mu\) is the mean of the distribution, and \(\sigma\) is the standard deviation, notation: \(X \sim N(\mu, \sigma)\). If \(\mu = 0\) and \(\sigma = 1\), the RV is called the standard normal distribution.
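The pdf above translates directly into code; evaluating it at \(x = 0\) for the standard normal gives the peak height \(1/\sqrt{2\pi} \approx 0.3989\):

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu=0.0, sigma=1.0):
    """f(x) = 1/(sigma*sqrt(2*pi)) * exp(-(x - mu)^2 / (2*sigma^2))."""
    return (1 / (sigma * sqrt(2 * pi))) * exp(-(x - mu) ** 2 / (2 * sigma ** 2))

print(normal_pdf(0.0))   # peak of the standard normal pdf
```

The symmetry of the pdf about \(\mu\) follows from the squared term: `normal_pdf(1.0)` and `normal_pdf(-1.0)` are equal.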
|
|
Standard Deviation
|
a number that is equal to the square root of the variance and measures how far data values are from their mean; notation: s for sample standard deviation and σ for population standard deviation.
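The sample/population distinction above corresponds to dividing by \(n-1\) versus \(n\); the standard library exposes both (the data values below are made up):

```python
from math import isclose, sqrt
from statistics import pstdev, pvariance, stdev

data = [2, 4, 4, 4, 5, 5, 7, 9]   # hypothetical data values

s = stdev(data)        # sample standard deviation (divides by n - 1)
sigma = pstdev(data)   # population standard deviation (divides by n)
print(s, sigma)

# the standard deviation is the square root of the variance
assert isclose(sigma, sqrt(pvariance(data)))
```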
|
|
Student's t-Distribution
|
investigated and reported by William S. Gossett in 1908 and published under the pseudonym Student. The major characteristics of the random variable (RV) are:
-
It is continuous and can assume any real value.
-
The pdf is symmetrical about its mean of zero. However, it is more spread out and flatter at the apex than the normal distribution.
-
It approaches the standard normal distribution as n gets larger.
-
There is a "family" of t distributions: every representative of the family is completely defined by the number of degrees of freedom which is one less than the number of data items.
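The flatter apex and the approach to the standard normal can both be seen by coding the t pdf directly from its closed form (a sketch using only the standard library's gamma function):

```python
from math import gamma, pi, sqrt

def t_pdf(t, df):
    """pdf of Student's t-distribution with df degrees of freedom."""
    c = gamma((df + 1) / 2) / (sqrt(df * pi) * gamma(df / 2))
    return c * (1 + t * t / df) ** (-(df + 1) / 2)

normal_peak = 1 / sqrt(2 * pi)     # standard normal pdf at 0, ~0.3989
for df in (1, 5, 30, 100):
    print(df, t_pdf(0.0, df))      # apex is lower for small df, rises toward 0.3989
```

Each member of the family is pinned down by `df` alone, and the peak climbs toward the standard normal's as `df` grows.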
|
|
Test Statistic
|
The formula that counts the number of standard deviations on the relevant distribution that the estimated parameter is away from the hypothesized value.
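For the mean with known \(\sigma\), this count of standard errors is the familiar z-statistic (the inputs below are hypothetical):

```python
from math import sqrt

def z_statistic(xbar, mu0, sigma, n):
    """How many standard errors the sample mean lies from the hypothesized mean."""
    return (xbar - mu0) / (sigma / sqrt(n))

print(z_statistic(xbar=102.0, mu0=100.0, sigma=10.0, n=25))
```

When \(\sigma\) is unknown and estimated by \(s\), the same formula with \(s\) in place of \(\sigma\) is instead compared against the t-distribution.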
|
|
Type I Error
|
The decision is to reject the null hypothesis when, in fact, the null hypothesis is true.
|
|
Type II Error
|
The decision is not to reject the null hypothesis when, in fact, the null hypothesis is false.
|