
5.24: The Triangle Distribution


    \( \newcommand{\R}{\mathbb{R}} \) \( \newcommand{\N}{\mathbb{N}} \) \( \newcommand{\Z}{\mathbb{Z}} \) \( \newcommand{ \E}{\mathbb{E}} \) \( \newcommand{\P}{\mathbb{P}} \) \( \newcommand{\var}{\text{var}} \) \( \newcommand{\sd}{\text{sd}} \) \( \newcommand{\bs}{\boldsymbol} \) \( \newcommand{\sgn}{\text{sgn}} \) \( \newcommand{\skw}{\text{skew}} \) \( \newcommand{\kur}{\text{kurt}} \)

    Like the semicircle distribution, the triangle distribution is based on a simple geometric shape. The distribution arises naturally when uniformly distributed random variables are transformed in various ways.

    The Standard Triangle Distribution

    Distribution Functions

    The standard triangle distribution with vertex at \(p \in [0, 1]\) (equivalently, shape parameter \(p\)) is a continuous distribution on \( [0, 1] \) with probability density function \(g\) described as follows:

    1. If \(p = 0\) then \(g(x) = 2 (1 - x)\) for \( x \in [0, 1] \)
    2. If \(p = 1\) then \(g(x) = 2 x\) for \( x \in [0, 1] \).
    3. If \(p \in (0, 1)\) then \[ g(x) = \begin{cases} \frac{2x}{p}, & x \in [0, p] \\ \frac{2 (1 - x)}{1 - p}, & x \in [p, 1] \end{cases} \]

    The shape of the probability density function justifies the name triangle distribution.

    The graph of \( g \), together with the domain \([0, 1]\), forms a triangle with vertices \((0, 0)\), \((1, 0)\), and \((p, 2)\). The mode of the distribution is \( x = p \).

    1. If \( p = 0 \), \( g \) is decreasing.
    2. If \( p = 1 \), \( g \) is increasing.
    3. If \( p \in (0, 1) \), \( g \) increases and then decreases.
    Proof

    Using \([0, 1]\) as the base, the triangle has base \(1\) and height \(2\), so its area is \(\frac{1}{2} \cdot 1 \cdot 2 = 1\). Hence \( g \) is a valid probability density function. The monotonicity properties are clear from the formulas.
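    The three cases of the density can be collected into a single function. A minimal Python sketch (the name `triangle_pdf` is illustrative, not from the text):

```python
def triangle_pdf(x, p):
    """Standard triangle PDF g(x) with vertex (mode) at p, for x in [0, 1]."""
    if not 0.0 <= x <= 1.0:
        return 0.0
    if p == 0:
        return 2.0 * (1.0 - x)   # case 1: decreasing on [0, 1]
    if p == 1:
        return 2.0 * x           # case 2: increasing on [0, 1]
    # case 3: rises linearly to height 2 at x = p, then falls linearly
    return 2.0 * x / p if x <= p else 2.0 * (1.0 - x) / (1.0 - p)
```

    Note that the peak height is \(2\) regardless of \(p\), which is exactly what makes the area of the triangle equal to 1.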

    Open the special distribution simulator and select the triangle distribution. Vary \(p\) (but keep the default values for the other parameters) and note the shape of the probability density function. For selected values of \(p\), run the simulation 1000 times and compare the empirical density function to the probability density function.

    The distribution function \( G \) is given as follows:

    1. If \(p = 0\), \(G(x) = 1 - (1 - x)^2\) for \( x \in [0, 1] \).
    2. If \(p = 1\), \(G(x) = x^2\) for \( x \in [0, 1] \).
    3. If \( p \in (0, 1) \), \[ G(x) = \begin{cases} \frac{x^2}{p}, & x \in [0, p] \\ 1 - \frac{(1 - x)^2}{1 - p}, & x \in [p, 1] \end{cases} \]
    Proof

    This result follows from standard calculus since \(G(x) = \int_0^x g(t) \, dt\).
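    The piecewise distribution function translates directly into code; a sketch (the name `triangle_cdf` is illustrative). Once the endpoints are handled, the extreme cases \(p = 0\) and \(p = 1\) fall out of the general two-branch formula:

```python
def triangle_cdf(x, p):
    """Standard triangle CDF G(x) with vertex at p, for x in [0, 1]."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    # For p = 0 only the second branch is reached (division by 1 - p = 1);
    # for p = 1 only the first branch is reached (division by p = 1).
    return x * x / p if x <= p else 1.0 - (1.0 - x) ** 2 / (1.0 - p)
```

    A curious by-product of the formula: \(G(p) = p^2 / p = p\), so the vertex \(p\) is always the quantile of order \(p\).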

    The quantile function \( G^{-1} \) is given by \[ G^{-1}(u) = \begin{cases} \sqrt{u p}, & u \in [0, p] \\ 1 - \sqrt{(1 - u)(1 - p)}, & u \in [p, 1] \end{cases} \]

    1. The first quartile is \( \sqrt{\frac{1}{4}p} \) if \( p \in \left[\frac{1}{4}, 1\right] \) and is \( 1 - \sqrt{\frac{3}{4} (1 - p)} \) if \( p \in \left[0, \frac{1}{4}\right]\).
    2. The median is \( \sqrt{\frac{1}{2} p} \) if \( p \in \left[\frac{1}{2}, 1\right] \) and is \( 1 - \sqrt{\frac{1}{2}(1 - p)} \) if \(p \in \left[0, \frac{1}{2}\right]\).
    3. The third quartile is \( \sqrt{\frac{3}{4} p} \) if \(p \in \left[\frac{3}{4}, 1\right] \) and is \( 1 - \sqrt{\frac{1}{4}(1 - p)} \) if \(p \in \left[0, \frac{3}{4}\right]\).
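    The quantile function is a direct transcription of the two-branch formula above; a minimal sketch (the function name is illustrative):

```python
def triangle_quantile(u, p):
    """Standard triangle quantile G^{-1}(u) with vertex at p, for u in [0, 1]."""
    if u <= p:
        return (u * p) ** 0.5
    return 1.0 - ((1.0 - u) * (1.0 - p)) ** 0.5
```

    As a spot check, the quartile formulas give \(G^{-1}(1/4) = 1/2\) when \(p = 1\) and \(G^{-1}(3/4) = 1/2\) when \(p = 0\).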

    Open the special distribution calculator and select the triangle distribution. Vary \(p\) (but keep the default values for the other parameters) and note the shape of the distribution function. For selected values of \(p\), compute the first and third quartiles.

    Moments

    Suppose that \( X \) has the standard triangle distribution with vertex \( p \in [0, 1] \). The moments are easy to compute.

    Suppose that \( n \in \N \).

    1. If \(p = 1\), \(\E(X^n) = 2 \big/ (n + 2)\).
    2. If \(p \in [0, 1)\), \[ \E(X^n) = \frac{2}{n + 2} p^{n+1} + \frac{2}{n + 1} \frac{1 - p^{n+1}}{1 - p} - \frac{2}{n + 2}\frac{1 - p^{n+2}}{1 - p}\]
    Proof

    This follows from standard calculus, since \(\E(X^n) = \int_0^1 x^n g(x) \, dx\).
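    The closed-form moments can be checked against the defining integral. A sketch in Python (the function names are illustrative; the midpoint-rule integrator assumes \(p \in (0, 1)\)):

```python
def triangle_moment(n, p):
    """E(X^n) for the standard triangle distribution with vertex at p."""
    if p == 1:
        return 2.0 / (n + 2)
    return (2.0 / (n + 2) * p ** (n + 1)
            + 2.0 / (n + 1) * (1.0 - p ** (n + 1)) / (1.0 - p)
            - 2.0 / (n + 2) * (1.0 - p ** (n + 2)) / (1.0 - p))

def moment_by_integration(n, p, steps=200_000):
    """Midpoint approximation of the integral of x^n g(x) over [0, 1].
    Assumes 0 < p < 1."""
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) / steps
        g = 2.0 * x / p if x <= p else 2.0 * (1.0 - x) / (1.0 - p)
        total += x ** n * g
    return total / steps
```

    Setting \(n = 0\) recovers \(\E(X^0) = 1\), a quick sanity check on the signs in the formula.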

    From the general moment formula, we can compute the mean, variance, skewness, and kurtosis.

    The mean and variance of \(X\) are

    1. \(\E(X) = \frac{1}{3}(1 + p)\)
    2. \(\var(X) = \frac{1}{18}[1 - p(1 - p)]\)
    Proof

    This follows from the general moment result. Recall that \(\var(X) = \E\left(X^2\right) - [\E(X)]^2\).

    Note that \(\E(X)\) increases from \(\frac{1}{3}\) to \(\frac{2}{3}\) as \(p\) increases from 0 to 1. The graph of \(\var(X)\) as a function of \(p\) is a parabola opening upward; the largest value is \(\frac{1}{18}\) when \(p = 0\) or \(p = 1\) and the smallest value is \(\frac{1}{24}\) when \(p = \frac{1}{2}\).

    Open the special distribution simulator and select the triangle distribution. Vary \(p\) (but keep the default values for the other parameters) and note the size and location of the mean \(\pm\) standard deviation bar. For selected values of \(p\), run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.

    The skewness of \( X \) is \[ \skw(X) = \frac{\sqrt{2} (1 - 2 p)(1 + p)(2 - p)}{5[1 - p(1 - p)]^{3/2}} \] The kurtosis of \( X \) is \( \kur(X) = \frac{12}{5} \).

    Proof

    These results follow from the general moment result and the computational formulas for skewness and kurtosis.

    Note that \(X\) is positively skewed for \(p \lt \frac{1}{2}\), negatively skewed for \(p \gt \frac{1}{2}\), and symmetric for \(p = \frac{1}{2}\). More specifically, if we indicate the dependence on the parameter \( p \) then \( \skw_{1-p}(X) = -\skw_p(X) \). Note also that the kurtosis is independent of \(p\), and the excess kurtosis is \( \kur(X) - 3 = -\frac{3}{5} \).
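    The antisymmetry \( \skw_{1-p}(X) = -\skw_p(X) \) is easy to confirm numerically; a minimal sketch of the skewness formula (the function name is illustrative):

```python
from math import sqrt

def triangle_skew(p):
    """Skewness of the standard triangle distribution with vertex at p."""
    return (sqrt(2.0) * (1.0 - 2.0 * p) * (1.0 + p) * (2.0 - p)
            / (5.0 * (1.0 - p * (1.0 - p)) ** 1.5))
```

    Replacing \(p\) by \(1 - p\) flips the sign of the factor \(1 - 2p\) and swaps the factors \(1 + p\) and \(2 - p\), while leaving the denominator unchanged.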

    Open the special distribution simulator and select the triangle distribution. Vary \(p\) (but keep the default values for the other parameters) and note the degree of symmetry and the degree to which the distribution is peaked. For selected values of \(p\), run the simulation 1000 times and compare the empirical density function to the probability density function.

    Related Distributions

    If \(X\) has the standard triangle distribution with parameter \(p\), then \(1 - X\) has the standard triangle distribution with parameter \(1 - p\).

    Proof

    For \(x \in [0, 1]\), \(\P(1 - X \le x) = \P(X \ge 1 - x) = 1 - G(1 - x)\), where \(G\) is the CDF of \(X\). The result now follows from the formula for the CDF.

    The standard triangle distribution has a number of connections with the standard uniform distribution. Recall that a simulation of a random variable with a standard uniform distribution is a random number in computer science.

    Suppose that \(U_1\) and \(U_2\) are independent random variables, each with the standard uniform distribution. Then

    1. \(X = \min\{U_1, U_2\}\) has the standard triangle distribution with \(p = 0\).
    2. \(Y = \max\{U_1, U_2\}\) has the standard triangle distribution with \(p = 1\).
    Proof

    \(U_1\) and \(U_2\) have CDF \(u \mapsto u\) for \(u \in [0, 1]\). By independence, \(\P(X \gt x) = \P(U_1 \gt x) \P(U_2 \gt x)\) and \(\P(Y \le y) = \P(U_1 \le y) \P(U_2 \le y)\). Hence

    1. \(X\) has CDF \(x \mapsto 1 - (1 - x)^2\) for \(x \in [0, 1]\)
    2. \(Y\) has CDF \(y \mapsto y^2\) for \(y \in [0, 1]\).
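    This result lends itself to a quick Monte Carlo check; a sketch that compares the empirical proportions with the two CDFs evaluated at \(x = \frac{1}{2}\) (the seed and sample size are arbitrary choices):

```python
import random

random.seed(12345)            # fixed seed so the run is reproducible
n = 100_000
min_count = max_count = 0
for _ in range(n):
    u1, u2 = random.random(), random.random()
    min_count += min(u1, u2) <= 0.5
    max_count += max(u1, u2) <= 0.5
min_frac = min_count / n      # should be near 1 - (1 - 1/2)^2 = 3/4 (p = 0)
max_frac = max_count / n      # should be near (1/2)^2 = 1/4 (p = 1)
```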

    Suppose again that \(U_1\) and \(U_2\) are independent random variables, each with the standard uniform distribution. Then

    1. \(X = \left|U_2 - U_1\right|\) has the standard triangle distribution with \(p = 0\).
    2. \(Y = \left(U_1 + U_2\right) \big/ 2\) has the standard triangle distribution with \(p = \frac{1}{2}\).
    Proof
    1. Let \(x \in [0, 1]\). Note that the event \(\{X \gt x\} = \left\{\left|U_2 - U_1\right| \gt x\right\}\) is simply the union of two disjoint triangular regions, each with base and height of length \(1 - x\). Hence \(\P(X \le x) = 1 - (1 - x)^2\).
    2. Let \(y \in \left[0, \frac{1}{2}\right]\). The event \(\{Y \le y\} = \left\{U_1 + U_2 \le 2 y\right\}\) is a triangular region with height and base of length \(2 y\), so \(\P(Y \le y) = 2 y^2\). For \(y \in \left[\frac{1}{2}, 1\right]\), the event \(\{Y \gt y\}\) is a triangular region with height and base of length \(2 - 2 y\), so \(\P(Y \le y) = 1 - 2 (1 - y)^2\).
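    Again, a Monte Carlo sketch confirms both transformations (seed, sample size, and the evaluation points are arbitrary choices):

```python
import random

random.seed(2024)             # fixed seed for reproducibility
n = 100_000
diff_count = mean_count = 0
for _ in range(n):
    u1, u2 = random.random(), random.random()
    diff_count += abs(u2 - u1) <= 0.5        # p = 0: G(1/2) = 1 - (1/2)^2 = 3/4
    mean_count += (u1 + u2) / 2 <= 0.25      # p = 1/2: G(1/4) = 2 (1/4)^2 = 1/8
diff_frac = diff_count / n
mean_frac = mean_count / n
```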

    In the previous result, note that \(Y\) is the sample mean from a random sample of size 2 from the standard uniform distribution. Since the quantile function has a simple closed-form expression, the standard triangle distribution can be simulated using the random quantile method.

    Suppose that \(U\) has the standard uniform distribution and \(p \in [0, 1]\). Then the random variable below has the standard triangle distribution with parameter \(p\): \[ X = \begin{cases} \sqrt{p U}, & U \le p \\ 1 - \sqrt{(1 -p)(1 - U)}, & p \lt U \le 1 \end{cases} \]
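    In practice the random quantile method is a few lines of code; a sketch (the name `triangle_rvs` is illustrative):

```python
import random

def triangle_rvs(p, rng=random):
    """Simulate a standard triangle variable with vertex at p
    by the random quantile (inverse CDF) method."""
    u = rng.random()
    if u <= p:
        return (p * u) ** 0.5
    return 1.0 - ((1.0 - p) * (1.0 - u)) ** 0.5
```

    The sample mean of many such draws should settle near \(\E(X) = \frac{1}{3}(1 + p)\).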

    Open the random quantile experiment and select the triangle distribution. Vary \(p\) (but keep the default values for the other parameters) and note the shape of the distribution function/quantile function. For selected values of \(p\), run the experiment 1000 times and watch the random quantiles. Compare the empirical density function, mean, and standard deviation to their distributional counterparts.

    The standard triangle distribution can also be simulated using the rejection method, which also works well since the region \(R\) under the probability density function \(g\) is bounded. Recall that this method is based on the following fact: if \((X, Y)\) is uniformly distributed on the rectangular region \(S = \{(x, y): 0 \le x \le 1, 0 \le y \le 2\}\) which contains \(R\), then the conditional distribution of \((X, Y)\) given \((X, Y) \in R\) is uniformly distributed on \(R\), and hence \(X\) has probability density function \(g\).
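    A sketch of the rejection method as described: throw uniform points at the bounding rectangle \(S\) and keep the \(x\)-coordinate of the first point that lands under \(g\). Since \(S\) has area 2 and \(R\) has area 1, the loop runs twice on average per sample. (The function name is illustrative.)

```python
import random

def triangle_rvs_reject(p, rng=random):
    """Simulate a standard triangle variable with vertex at p by rejection:
    sample (x, y) uniformly on [0, 1] x [0, 2] until y <= g(x)."""
    while True:
        x, y = rng.random(), 2.0 * rng.random()
        if p == 0:
            g = 2.0 * (1.0 - x)
        elif p == 1:
            g = 2.0 * x
        else:
            g = 2.0 * x / p if x <= p else 2.0 * (1.0 - x) / (1.0 - p)
        if y <= g:
            return x
```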

    Open the rejection method experiment and select the triangle distribution. Vary \(p\) (but keep the default values for the other parameters) and note the shape of the probability density function. For selected values of \(p\), run the experiment 1000 times and watch the scatterplot. Compare the empirical density function, mean, and standard deviation to their distributional counterparts.

    For the extreme values of the shape parameter, the standard triangle distributions are also beta distributions.

    Connections to the beta distribution:

    1. The standard triangle distribution with shape parameter \( p = 0 \) is the beta distribution with left parameter \( a = 1 \) and right parameter \( b = 2 \).
    2. The standard triangle distribution with shape parameter \( p = 1 \) is the beta distribution with left parameter \( a = 2 \) and right parameter \( b = 1 \).
    Proof

    These results follow directly from the form of the standard triangle PDF.

    Open the special distribution simulator and select the beta distribution. For parameter values given below, run the simulation 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.

    1. \( a = 1 \), \( b = 2 \)
    2. \( a = 2 \), \( b = 1 \)

    The General Triangle Distribution

    Like so many standard distributions, the standard triangle distribution is usually generalized by adding location and scale parameters.

    Definition

    Suppose that \(Z\) has the standard triangle distribution with vertex at \(p \in [0, 1]\). For \(a \in \R\) and \(w \in (0, \infty)\), the random variable \(X = a + w Z\) has the triangle distribution with location parameter \(a\), scale parameter \(w\), and shape parameter \(p\).

    Distribution Functions

    Suppose that \(X\) has the general triangle distribution given in the definition above.

    \(X\) has probability density function \(f\) given as follows:

    1. If \( p = 0 \), \( f(x) = \frac{2}{w^2}(a + w - x) \) for \(x \in [a, a + w]\).
    2. If \( p = 1 \), \( f(x) = \frac{2}{w^2}(x - a) \) for \(x \in [a, a + w]\).
    3. If \( p \in (0, 1) \), \[ f(x) = \begin{cases} \frac{2}{p w^2}(x - a), & x \in [a, a + p w] \\ \frac{2}{w^2 (1 - p)}(a + w - x), & x \in [a + p w, a + w] \end{cases}\]
    Proof

    This follows from a standard result for location-scale families. Recall that \[ f(x) = \frac{1}{w} g\left(\frac{x - a}{w}\right), \quad \frac{x - a}{w} \in [0, 1] \] where \( g \) is the standard triangle PDF with parameter \(p\).

    Once again, the shape of the probability density function justifies the name triangle distribution.

    The graph of \( f \), together with the domain \([a, a + w]\), forms a triangle with vertices \((a, 0)\), \((a + w, 0)\), and \((a + p w, 2/w)\). The mode of the distribution is \( x = a + p w \).

    1. If \( p = 0 \), \( f \) is decreasing.
    2. If \( p = 1 \), \( f \) is increasing.
    3. If \( p \in (0, 1) \), \( f \) increases and then decreases.

    Clearly the general triangle distribution could be parameterized by the left endpoint \(a\), the right endpoint \(b = a + w\) and the location of the vertex \(c = a + p w\), but the location-scale-shape parameterization is better.

    Open the special distribution simulator and select the triangle distribution. Vary the parameters \( a \), \( w \), and \( p \), and note the shape and location of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.

    The distribution function \( F \) of \( X \) is given as follows:

    1. If \( p = 0 \), \( F(x) = 1 - \frac{1}{w^2}(a + w - x)^2 \) for \( x \in [a, a + w] \)
    2. If \( p = 1 \), \( F(x) = \frac{1}{w^2}(x - a)^2 \) for \(x \in [a, a + w]\)
    3. If \( p \in (0, 1) \), \[ F(x) = \begin{cases} \frac{1}{p w^2}(x - a)^2, & x \in [a, a + p w] \\ 1 - \frac{1}{w^2 (1 - p)}(a + w - x)^2, & x \in [a + p w, a + w] \end{cases}\]
    Proof

    This follows from a standard result for location-scale families: \[ F(x) = G\left(\frac{x - a}{w}\right), \quad x \in [a, a + w]\] where \( G \) is the standard triangle CDF with parameter \(p\).

    \( X \) has quantile function \(F^{-1}\) given by \[ F^{-1}(u) = a + \begin{cases} w \sqrt{u p}, & 0 \le u \le p \\ w\left[1 - \sqrt{(1 - u)(1 - p)}\right], & p \le u \le 1 \end{cases} \]

    1. The first quartile is \(a + w \sqrt{\frac{1}{4} p} \) if \( p \in \left[\frac{1}{4}, 1\right]\) and is \(a + w \left( 1 - \sqrt{\frac{3}{4} (1 - p)} \right) \) if \( p \in \left[0, \frac{1}{4}\right] \).
    2. The median is \(a + w \sqrt{\frac{1}{2} p} \) if \( p \in \left[\frac{1}{2}, 1\right] \) and is \( a + w \left(1 - \sqrt{\frac{1}{2} (1 - p)}\right) \) if \( p \in \left[0, \frac{1}{2}\right] \).
    3. The third quartile is \(a + w \sqrt{\frac{3}{4} p} \) if \(p \in \left[\frac{3}{4}, 1\right]\) and is \(a + w\left(1 - \sqrt{\frac{1}{4}(1 - p)}\right) \) if \( p \in \left[0, \frac{3}{4}\right] \).
    Proof

    This follows from a standard result for location-scale families: \( F^{-1}(u) = a + w G^{-1}(u) \) for \( u \in [0, 1] \), where \( G^{-1} \) is the standard triangle quantile function with parameter \(p\).

    Open the special distribution simulator and select the triangle distribution. Vary the parameters \( a \), \( w \), and \( p \), and note the shape and location of the distribution function. For selected values of the parameters, compute the median and the first and third quartiles.

    Moments

    Suppose again that \(X\) has the triangle distribution with location parameter \(a \in \R\), scale parameter \(w \in (0, \infty)\) and shape parameter \(p \in [0, 1]\). Then we can take \(X = a + w Z\) where \(Z\) has the standard triangle distribution with parameter \(p\). Hence the moments of \(X\) can be computed from the moments of \(Z\). Using the binomial theorem and the linearity of expected value we have \[ \E(X^n) = \sum_{k=0}^n \binom{n}{k} w^k a^{n-k} \E(Z^k), \quad n \in \N \]

    The general results are rather messy.

    The mean and variance of \( X \) are

    1. \( \E(X) = a + \frac{w}{3}(1 + p) \)
    2. \( \var(X) = \frac{w^2}{18}[1 - p(1 - p)] \)
    Proof

    This follows from the results for the mean and variance of the standard triangle distribution, and simple properties of expected value and variance.

    Open the special distribution simulator and select the triangle distribution. Vary the parameters \( a \), \( w \), and \( p \), and note the size and location of the mean \(\pm \) standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.

    The skewness of \( X \) is \[ \skw(X) = \frac{\sqrt{2} (1 - 2 p)(1 + p)(2 - p)}{5[1 - p(1 - p)]^{3/2}} \] The kurtosis of \( X \) is \( \kur(X) = \frac{12}{5} \).

    Proof

    These results follow immediately from the skewness and kurtosis of the standard triangle distribution. Recall that skewness and kurtosis are defined in terms of the standard score, which is independent of the location and scale parameters.

    As before, the excess kurtosis is \( \kur(X) - 3 = -\frac{3}{5} \).

    Related Distributions

    Since the triangle distribution is a location-scale family, it's invariant under location-scale transformations. More generally, the family is closed under linear transformations with nonzero slope.

    Suppose that \( X \) has the triangle distribution with location parameter \(a \in \R\), scale parameter \( w \in (0, \infty) \), and shape parameter \( p \in [0, 1] \). If \(b \in \R\) and \( c \in (0, \infty) \) then

    1. \(b + c X \) has the triangle distribution with location parameter \(b + c a\), scale parameter \( c w \), and shape parameter \( p \).
    2. \(b - c X\) has the triangle distribution with location parameter \(b - c (a + w)\), scale parameter \(c w\), and shape parameter \(1 - p\).
    Proof

    From the definition we can take \(X = a + w Z\) where \(Z\) has the standard triangle distribution with parameter \( p \).

    1. Note that \(b + c X = (b + c a) + c w Z\).
    2. Note that \(b - c X = b - c(a + w) + c w (1 - Z)\), and recall from the result above that \(1 - Z\) has the basic triangle distribution with parameter \(1 - p\).

    As with the standard distribution, there are several connections between the triangle distribution and the continuous uniform distribution.

    Suppose that \(V_1\) and \(V_2\) are independent and are uniformly distributed on the interval \([a, a + w]\), where \(a \in \R\) and \(w \in (0, \infty)\). Then

    1. \(\min\{V_1, V_2\}\) has the triangle distribution with location parameter \(a\), scale parameter \(w\), and shape parameter \(p = 0\).
    2. \(\max\{V_1, V_2\}\) has the triangle distribution with location parameter \(a\), scale parameter \(w\), and shape parameter \(p = 1\).
    Proof

    The uniform distribution is itself a location-scale family, so we can write \(V_1 = a + w U_1\) and \(V_2 = a + w U_2\), where \(U_1\) and \(U_2\) are independent and each has the standard uniform distribution. Then \(\min\{V_1, V_2\} = a + w \min\{U_1, U_2\}\) and \(\max\{V_1, V_2\} = a + w \max\{U_1, U_2\}\) so the result follows from the corresponding result for the standard triangle distribution.

    Suppose again that \(V_1\) and \(V_2\) are independent and are uniformly distributed on the interval \([a, a + w]\), where \(a \in \R\) and \(w \in (0, \infty)\). Then

    1. \(\left|V_2 - V_1\right|\) has the triangle distribution with location parameter 0, scale parameter \(w\), and shape parameter \(p = 0\).
    2. \(V_1 + V_2\) has the triangle distribution with location parameter \(2 a\), scale parameter \(2 w\), and shape parameter \(p = \frac{1}{2}\).
    3. \(V_2 - V_1\) has the triangle distribution with location parameter \(-w\), scale parameter \(2 w\), and shape parameter \(p = \frac{1}{2}\).
    Proof

    As before, we can write \(V_1 = a + w U_1\) and \(V_2 = a + w U_2\), where \(U_1\) and \(U_2\) are independent and each has the standard uniform distribution.

    1. \(\left|V_2 - V_1\right| = w \left|U_2 - U_1\right|\) and by the result above, \(\left|U_2 - U_1\right|\) has the standard triangle distribution with parameter \(p = 0\).
    2. \(V_1 + V_2 = 2 a + 2 w \left[\frac{1}{2}(U_1 + U_2)\right]\) and by the result above, \(\frac{1}{2}(U_1 + U_2)\) has the standard triangle distribution with parameter \(p = \frac{1}{2}\).
    3. Let \(Z = \frac{1}{2} + \frac{1}{2}(U_2 - U_1) = \frac{1}{2}U_2 + \frac{1}{2}(1 - U_1)\). Since \(1 - U_1\) also has the standard uniform distribution and is independent of \(U_2\), it follows from the result above that \(Z\) has the standard triangle distribution with parameter \(p = \frac{1}{2}\). But \(V_2 - V_1 = w (U_2 - U_1) = w (2 Z - 1) = 2 w Z - w\) and hence the result follows.

    A special case of (b) leads to a connection between the triangle distribution and the Irwin-Hall distribution.

    Suppose that \(U_1\) and \(U_2\) are independent random variables, each with the standard uniform distribution. Then \(U_1 + U_2\) has the triangle distribution with location parameter \(0\), scale parameter \(2\), and shape parameter \(\frac{1}{2}\). But this is also the Irwin-Hall distribution of order \(n = 2\).
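    A quick Monte Carlo sketch of this connection (the seed and evaluation point are arbitrary choices): for the triangle distribution with location 0, scale 2, and shape \(\frac{1}{2}\), the CDF on \([0, 1]\) is \(F(x) = x^2 / 2\).

```python
import random

random.seed(99)               # fixed seed for reproducibility
n = 100_000
count = sum(random.random() + random.random() <= 0.8 for _ in range(n))
frac = count / n              # should be near F(0.8) = 0.8^2 / 2 = 0.32
```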

    Open the special distribution simulator and select the Irwin-Hall distribution. Set \( n = 2 \) and note the shape and location of the probability density function. Run the simulation 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.

    Since we can simulate a variable \(Z\) with the standard triangle distribution with parameter \(p \in [0, 1]\) by the random quantile method above, we can simulate a variable with the triangle distribution that has location parameter \(a \in \R\), scale parameter \(w \in (0, \infty)\), and shape parameter \(p\) by our very definition: \(X = a + w Z\). Equivalently, we could compute a random quantile using the quantile function of \(X\).
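    A sketch of this simulation (the function name is illustrative): generate \(Z\) by the random quantile method and return \(a + w Z\).

```python
import random

def general_triangle_rvs(a, w, p, rng=random):
    """Simulate the triangle distribution with location a, scale w, shape p,
    as X = a + w Z where Z is standard triangle (random quantile method)."""
    u = rng.random()
    z = (p * u) ** 0.5 if u <= p else 1.0 - ((1.0 - p) * (1.0 - u)) ** 0.5
    return a + w * z
```

    The sample mean of many such draws should settle near \(\E(X) = a + \frac{w}{3}(1 + p)\).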

    Open the random quantile experiment and select the triangle distribution. Vary the location parameter \(a\), the scale parameter \(w\), and the shape parameter \(p\), and note the shape of the distribution function. For selected values of the parameters, run the experiment 1000 times and watch the random quantiles. Compare the empirical density function, mean and standard deviation to their distributional counterparts.

    As with the standard distribution, the general triangle distribution has a bounded probability density function on a bounded interval, and hence can be simulated easily via the rejection method.

    Open the rejection method experiment and select the triangle distribution. Vary the parameters and note the shape of the probability density function. For selected values of the parameters, run the experiment 1000 times and watch the scatterplot. Compare the empirical density function, mean, and standard deviation to their distributional counterparts.


    This page titled 5.24: The Triangle Distribution is shared under a CC BY 2.0 license and was authored, remixed, and/or curated by Kyle Siegrist (Random Services) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.