17.3: Appendix C- Data on some common distributions

    Discrete distributions

    Indicator function \(X = I_E\) \(P(X = 1) = P(E) = p\) \(P(X = 0) = q = 1 - p\)

    \(E[X] = p\) \(\text{Var} [X] = pq\) \(M_X (s) = q + pe^s\) \(g_X (s) = q + ps\)

    Simple random variable \(X = \sum_{i = 1}^{n} t_i I_{A_i}\) (a primitive form) \(P(A_i) = p_i\)

    \(E[X] = \sum_{i = 1}^{n} t_ip_i\) \(\text{Var} [X] = \sum_{i = 1}^{n} t_i^2 p_i q_i - 2 \sum_{i < j} t_i t_j p_i p_j\) \(M_X(s) = \sum_{i = 1}^{n} p_i e^{st_i}\)
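
    The variance formula follows from \(\text{Var}[X] = E[X^2] - E[X]^2\), since on a primitive form the \(A_i\) are disjoint, so \(X^2 = \sum_{i=1}^{n} t_i^2 I_{A_i}\). A base-MATLAB sketch (the values \(t_i\) and probabilities \(p_i\) below are illustrative, not from the text) confirms the two expressions agree:

        % Hypothetical values and class probabilities for a primitive form
        t = [1 3 5 7];                 % values on the partition classes A_i
        p = [0.2 0.3 0.4 0.1];         % P(A_i); must sum to 1
        q = 1 - p;
        EX  = sum(t .* p);             % E[X]
        EX2 = sum(t.^2 .* p);          % E[X^2], since X^2 = sum t_i^2 I_{A_i}
        V1  = EX2 - EX^2;              % Var[X] from the definition
        cross = 0;                     % sum_{i<j} t_i t_j p_i p_j
        for i = 1:numel(t)-1
            for j = i+1:numel(t)
                cross = cross + t(i)*t(j)*p(i)*p(j);
            end
        end
        V2 = sum(t.^2 .* p .* q) - 2*cross;   % formula quoted above
        disp([V1 V2])                  % both give 3.36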

    Binomial\((n, p)\) \(X = \sum_{i = 1}^{n} I_{E_i}\) with \(\{I_{E_i} : 1 \le i \le n\}\) iid, \(P(E_i) = p\)

    \(P(X = k) = C(n, k) p^k q^{n - k}\)

    \(E[X] = np\) \(\text{Var} [X] = npq\) \(M_X (s) = (q + pe^s)^n\) \(g_X (s) = (q + ps)^n\)

    MATLAB: \(P(X = k) = \text{ibinom} (n, p, k)\) \(P(X \ge k) = \text{cbinom} (n, p, k)\)
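
    As a quick numerical check of the binomial formulas, here is a base-MATLAB sketch; the parameters \(n = 10\), \(p = 0.3\) are assumed for illustration, and only built-ins such as nchoosek are used rather than the toolbox functions ibinom and cbinom:

        n = 10; p = 0.3; q = 1 - p;    % illustrative parameters (assumed)
        k = 0:n;
        pk = arrayfun(@(j) nchoosek(n, j), k) .* p.^k .* q.^(n - k);  % P(X = k)
        EX = sum(k .* pk)              % n*p = 3
        VX = sum(k.^2 .* pk) - EX^2    % n*p*q = 2.1
        PXge4 = sum(pk(k >= 4))        % P(X >= 4), cf. cbinom(n, p, 4)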

    Geometric\((p)\) \(P(X = k) = pq^k\) \(\forall k \ge 0\)

    \(E[X] = q/p\) \(\text{Var} [X] = q/p^2\) \(M_X (s) = \dfrac{p}{1 - qe^s}\) \(g_X (s) = \dfrac{p}{1 - qs}\)

    If \(Y - 1\) ~ geometric \((p)\), so that \(P(Y = k) = pq^{k - 1}\) \(\forall k \ge 1\), then

    \(E[Y] = 1/p\) \(\text{Var} [Y] = q/p^2\) \(M_Y (s) = \dfrac{pe^s}{1 - qe^s}\) \(g_Y (s) = \dfrac{ps}{1 - qs}\)
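
    A base-MATLAB sketch of the mean and variance of the geometric\((p)\) form (the parameter \(p = 0.25\) is assumed for illustration; the infinite series is truncated where the terms are negligible):

        p = 0.25; q = 1 - p;           % illustrative parameter (assumed)
        k = 0:200;                     % truncation of the infinite range
        pk = p * q.^k;                 % P(X = k), k >= 0
        EX = sum(k .* pk)              % q/p = 3
        VX = sum(k.^2 .* pk) - EX^2    % q/p^2 = 12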

    Negative binomial\((m, p)\), \(X\) is the number of failures before the \(m\)th success.

    \(P(X = k) = C(m + k - 1, m - 1) p^m q^k\) \(\forall k \ge 0\)

    \(E[X] = mq/p\) \(\text{Var} [X] = mq/p^2\) \(M_X (s) = (\dfrac{p}{1 - qe^s})^m\) \(g_X (s) = (\dfrac{p}{1 - qs})^m\)

    For \(Y_m = X_m + m\), the number of the trial on which the \(m\)th success occurs, \(P(Y = k) = C(k - 1, m - 1) p^m q^{k - m}\) \(\forall k \ge m\).

    \(E[Y] = m/p\) \(\text{Var} [Y] = mq/p^2\) \(M_Y(s) = (\dfrac{pe^s}{1 - qe^s})^m\) \(g_Y (s) = (\dfrac{ps}{1 - qs})^m\)

    MATLAB: \(P(Y = k) = \text{nbinom} (m, p, k)\)
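
    A base-MATLAB check of the failure-count form \(X\) (the parameters \(m = 3\), \(p = 0.4\) are assumed for illustration; the toolbox function nbinom is not used):

        m = 3; p = 0.4; q = 1 - p;     % illustrative parameters (assumed)
        k = 0:300;                     % failures before the m-th success (truncated)
        pk = arrayfun(@(j) nchoosek(m + j - 1, m - 1), k) .* p^m .* q.^k;
        EX = sum(k .* pk)              % m*q/p = 4.5
        VX = sum(k.^2 .* pk) - EX^2    % m*q/p^2 = 11.25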

    Poisson\((\mu)\). \(P(X = k) = e^{-\mu} \dfrac{\mu^k}{k!}\) \(\forall k \ge 0\)

    \(E[X] = \mu\) \(\text{Var}[X] = \mu\) \(M_X (s) = e^{\mu (e^s - 1)}\) \(g_X (s) = e^{\mu (s - 1)}\)

    MATLAB: \(P(X = k) = \text{ipoisson} (\mu, k)\) \(P(X \ge k) = \text{cpoisson} (\mu, k)\)
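
    A base-MATLAB check with an assumed parameter \(\mu = 4\) (cf. the toolbox functions ipoisson and cpoisson, which are not used here):

        mu = 4;                        % illustrative parameter (assumed)
        k = 0:100;
        pk = exp(-mu) * mu.^k ./ factorial(k);   % P(X = k)
        EX = sum(k .* pk)              % mu = 4
        VX = sum(k.^2 .* pk) - EX^2    % mu = 4
        PXge3 = sum(pk(k >= 3))        % P(X >= 3), cf. cpoisson(mu, 3)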

    Absolutely continuous distributions

    Uniform\((a, b)\) \(f_X (t) = \dfrac{1}{b - a}\) \(a < t < b\) (zero elsewhere)

    \(E[X] = \dfrac{b + a}{2}\) \(\text{Var} [X] = \dfrac{(b - a)^2}{12}\) \(M_X (s) = \dfrac{e^{sb} - e^{sa}}{s(b - a)}\)
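
    The mean and variance follow by direct integration of the density; a base-MATLAB sketch with assumed endpoints \(a = 2\), \(b = 6\):

        a = 2; b = 6;                                  % illustrative endpoints (assumed)
        f = @(t) 1/(b - a) + 0*t;                      % density on (a, b)
        EX = integral(@(t) t .* f(t), a, b)            % (a + b)/2 = 4
        VX = integral(@(t) (t - EX).^2 .* f(t), a, b)  % (b - a)^2/12 = 4/3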

    Symmetric triangular \((-a, a)\) \(f_X (t) = \begin{cases} (a + t)/a^2 & -a \le t < 0 \\ (a - t)/a^2 & 0 \le t \le a \end{cases}\)

    \(E[X] = 0\) \(\text{Var} [X] = \dfrac{a^2}{6}\) \(M_X (s) = \dfrac{e^{as} + e^{-as} - 2}{a^2 s^2} = \dfrac{e^{as} - 1}{as} \cdot \dfrac{1 - e^{-as}}{as}\)
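
    A similar numerical check for the symmetric triangular density, with an assumed half-width \(a = 2\):

        a = 2;                                          % illustrative half-width (assumed)
        f = @(t) ((a + t).*(t < 0) + (a - t).*(t >= 0))/a^2;   % triangular density on (-a, a)
        EX = integral(@(t) t .* f(t), -a, a)            % 0 by symmetry
        VX = integral(@(t) t.^2 .* f(t), -a, a)         % a^2/6 = 2/3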

    Exponential\((\lambda)\) \(f_X(t) = \lambda e^{-\lambda t}\) \(t \ge 0\)

    \(E[X] = \dfrac{1}{\lambda}\) \(\text{Var} [X] = \dfrac{1}{\lambda^2}\) \(M_X (s) = \dfrac{\lambda}{\lambda - s}\)
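
    A base-MATLAB check with an assumed rate \(\lambda = 2\):

        lambda = 2;                                        % illustrative rate (assumed)
        f = @(t) lambda * exp(-lambda * t);                % density for t >= 0
        EX = integral(@(t) t .* f(t), 0, Inf)              % 1/lambda = 0.5
        VX = integral(@(t) (t - EX).^2 .* f(t), 0, Inf)    % 1/lambda^2 = 0.25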

    Gamma\((\alpha, \lambda)\) \(f_X(t) = \dfrac{\lambda^{\alpha} t^{\alpha - 1} e^{-\lambda t}}{\Gamma (\alpha)}\) \(t \ge 0\)

    \(E[X] = \dfrac{\alpha}{\lambda}\) \(\text{Var} [X] = \dfrac{\alpha}{\lambda^2}\) \(M_X (s) = (\dfrac{\lambda}{\lambda - s})^{\alpha}\)

    MATLAB: \(P(X \le t) = \text{gammadbn} (\alpha, \lambda, t)\)
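
    The distribution function can also be checked against MATLAB's built-in regularized incomplete gamma function gammainc (gammadbn is the text's toolbox function; the parameter values below are assumed for illustration):

        alpha = 3; lambda = 2; t = 1.5;                    % illustrative values (assumed)
        f = @(s) lambda^alpha * s.^(alpha - 1) .* exp(-lambda * s) / gamma(alpha);
        Pnum = integral(f, 0, t)                           % P(X <= t) by numerical integration
        Pinc = gammainc(lambda * t, alpha)                 % same value, cf. gammadbn(alpha, lambda, t)
        EX = integral(@(s) s .* f(s), 0, Inf)              % alpha/lambda = 1.5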

    Normal \(N(\mu, \sigma^2)\) \(f_X (t) = \dfrac{1}{\sigma \sqrt{2\pi}} \text{exp} (-\dfrac{1}{2} (\dfrac{t - \mu}{\sigma})^2)\)

    \(E[X] = \mu\) \(\text{Var} [X] = \sigma^2\) \(M_X (s) = \text{exp} (\dfrac{\sigma^2 s^2}{2} + \mu s)\)

    MATLAB: \(P(X \le t) = \text{gaussian} (\mu, \sigma^2, t)\)
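
    In base MATLAB the normal distribution function can be written with the built-in erf (the toolbox function gaussian is not used; the values below are assumed for illustration):

        mu = 1; sigma = 2; t = 2.5;                        % illustrative values (assumed)
        Phi = @(z) 0.5 * (1 + erf(z / sqrt(2)));           % standard normal CDF via erf
        P = Phi((t - mu) / sigma)                          % P(X <= t), cf. gaussian(mu, sigma^2, t)
        f = @(s) exp(-0.5*((s - mu)/sigma).^2) / (sigma*sqrt(2*pi));
        Pnum = integral(f, -Inf, t)                        % numerical check of the same probability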

    Beta\((r, s)\)

    \(f_X (t) = \dfrac{\Gamma (r + s)}{\Gamma (r) \Gamma (s)} t^{r -1} (1 - t)^{s - 1}\) \(0 < t < 1\), \(r > 0\), \(s > 0\)

    \(E[X] = \dfrac{r}{r + s}\) \(\text{Var} [X] = \dfrac{rs}{(r + s)^2 (r + s + 1)}\)

    MATLAB: \(f_X (t) = \text{beta} (r, s, t)\) \(P(X \le t) = \text{betadbn} (r, s, t)\)
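
    A base-MATLAB check of the Beta\((r, s)\) formulas with assumed parameters \(r = 2\), \(s = 3\) (the toolbox functions beta and betadbn are not used; the density is written out via the built-in gamma function):

        r = 2; s = 3; t = 0.4;                             % illustrative values (assumed)
        f = @(u) gamma(r + s)/(gamma(r)*gamma(s)) * u.^(r - 1) .* (1 - u).^(s - 1);
        P  = integral(f, 0, t)                             % P(X <= t), cf. betadbn(r, s, t)
        EX = integral(@(u) u .* f(u), 0, 1)                % r/(r + s) = 0.4
        VX = integral(@(u) (u - EX).^2 .* f(u), 0, 1)      % r*s/((r+s)^2 (r+s+1)) = 0.04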

    Weibull(\(\alpha, \lambda, \nu\))

    \(F_X (t) = 1 - e^{-\lambda (t - \nu)^{\alpha}}\), \(\alpha > 0, \lambda >0, \nu \ge 0, t \ge \nu\)

    \(E[X] = \dfrac{1}{\lambda^{1/\alpha}} \Gamma (1 + 1/\alpha) + \nu\) \(\text{Var} [X] = \dfrac{1}{\lambda^{2/\alpha}} [\Gamma (1 + 2/\alpha) - \Gamma^2 (1 + 1/\alpha)]\)

    MATLAB: (\(\nu = 0\) only)

    \(f_X (t) = \text{weibull} (a, l, t)\) \(P(X \le t) = \text{weibull} (a, l, t)\)
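
    A base-MATLAB sketch of the Weibull moments (the parameters \(\alpha = 2\), \(\lambda = 0.5\), \(\nu = 0\) are assumed for illustration):

        alpha = 2; lambda = 0.5; nu = 0;                   % illustrative values (assumed)
        F = @(t) 1 - exp(-lambda * (t - nu).^alpha);       % distribution function, t >= nu
        EX = nu + gamma(1 + 1/alpha) / lambda^(1/alpha)    % mean
        VX = (gamma(1 + 2/alpha) - gamma(1 + 1/alpha)^2) / lambda^(2/alpha)   % variance
        F(2)                                               % P(X <= 2) = 1 - exp(-2)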

    Relationship between gamma and Poisson distributions

    • If \(X\) ~ gamma \((n, \lambda)\), then \(P(X \le t) = P(Y \ge n)\) where \(Y\) ~ Poisson \((\lambda t)\).
    • If \(Y\) ~ Poisson \((\lambda t)\), then \(P(Y \ge n) = P(X \le t)\) where \(X\) ~ gamma \((n, \lambda)\).
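
    A base-MATLAB numerical check of this relationship, using the built-in gammainc for the gamma distribution function and a finite Poisson sum (the values \(n = 4\), \(\lambda = 2\), \(t = 1.25\) are assumed for illustration):

        n = 4; lambda = 2; t = 1.25; mu = lambda * t;      % illustrative values (assumed)
        Pgamma = gammainc(lambda * t, n)                   % P(X <= t), X ~ gamma(n, lambda)
        k = 0:n-1;
        Ppois = 1 - sum(exp(-mu) * mu.^k ./ factorial(k))  % P(Y >= n), Y ~ Poisson(lambda*t)
        % both evaluate to about 0.2424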

    This page titled 17.3: Appendix C- Data on some common distributions is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Paul Pfeiffer via source content that was edited to the style and standards of the LibreTexts platform.