# 3.9: General Distribution Functions

$$\renewcommand{\P}{\mathbb{P}}$$ $$\newcommand{\R}{\mathbb{R}}$$ $$\newcommand{\N}{\mathbb{N}}$$ $$\newcommand{\lt}{<}$$ $$\newcommand{\gt}{>}$$

Our goal in this section is to define and study functions that play the same role for positive measures on $$\R$$ that (cumulative) distribution functions do for probability measures on $$\R$$. Of course probability measures on $$\R$$ are usually associated with real-valued random variables. These general distribution functions are useful for constructing measures on $$\R$$ and will appear in our study of integrals with respect to a measure in the next section, as well as non-homogeneous Poisson processes and general renewal processes.

## Basic Theory

Throughout this section, our basic measurable space is $$(\R, \mathscr{R})$$, where $$\mathscr{R}$$ is the $$\sigma$$-algebra of Borel measurable subsets of $$\R$$, and as usual, we will let $$\lambda$$ denote Lebesgue measure on $$(\R, \mathscr{R})$$. As with cumulative distribution functions, it's convenient to have compact notation for the limits of a function $$F: \R \to \R$$ from the left and right at $$x \in \R$$, and at $$\infty$$ and $$-\infty$$ (assuming of course that these limits exist): $F(x^+) = \lim_{t \downarrow x}F(t), \; F(x^-) = \lim_{t \uparrow x} F(t), \; F(\infty) = \lim_{t \to \infty} F(t), \; F(-\infty) = \lim_{t \to -\infty} F(t)$

### Distribution Functions and Their Measures

A function $$F: \R \to \R$$ that satisfies the following properties is a distribution function on $$\R$$:

1. $$F$$ is increasing: if $$x \le y$$ then $$F(x) \le F(y)$$.
2. $$F$$ is continuous from the right: $$F(x^+) = F(x)$$ for all $$x \in \R$$.

Since $$F$$ is increasing, $$F(x^-)$$ exists in $$\R$$. Similarly $$F(\infty)$$ exists, as a real number or $$\infty$$, and $$F(-\infty)$$ exists, as a real number or $$-\infty$$.

If $$F$$ is a distribution function on $$\R$$, then there exists a unique positive measure $$\mu$$ on $$\mathscr{R}$$ that satisfies $\mu(a, b] = F(b) - F(a), \quad a, \, b \in \R, \; a \le b$

Proof

Let $$\mathscr{I}$$ denote the collection of subsets of $$\R$$ consisting of intervals of the form $$(a, b]$$ where $$a, \, b \in \R$$ with $$a \le b$$, and intervals of the form $$(-\infty, a]$$ and $$(a, \infty)$$ where $$a \in \R$$. Then $$\mathscr{I}$$ is a semi-algebra. That is, if $$A, \, B \in \mathscr{I}$$ then $$A \cap B \in \mathscr{I}$$, and if $$A \in \mathscr{I}$$ then $$A^c$$ is the union of a finite number (actually one or two) of sets in $$\mathscr{I}$$. We define $$\mu$$ on $$\mathscr{I}$$ by $$\mu(a, b] = F(b) - F(a)$$, $$\mu(-\infty, a] = F(a) - F(-\infty)$$ and $$\mu(a, \infty) = F(\infty) - F(a)$$. Note that $$\mathscr{I}$$ contains the empty set via intervals of the form $$(a, a]$$ where $$a \in \R$$, but the definition gives $$\mu(\emptyset) = 0$$. Next, $$\mu$$ is finitely additive on $$\mathscr{I}$$. That is, if $$\{A_i: i \in I\}$$ is a finite, disjoint collection of sets in $$\mathscr{I}$$ and $$\bigcup_{i \in I} A_i \in \mathscr{I}$$, then $\mu\left(\bigcup_{i \in I} A_i\right) = \sum_{i \in I} \mu(A_i)$ Next, $$\mu$$ is countably subadditive on $$\mathscr{I}$$. That is, if $$A \in \mathscr{I}$$ and $$A \subseteq \bigcup_{i \in I} A_i$$ where $$\{A_i: i \in I\}$$ is a countable collection of sets in $$\mathscr{I}$$ then $\mu(A) \le \sum_{i \in I} \mu(A_i)$ Finally, $$\mu$$ is clearly $$\sigma$$-finite on $$\mathscr{I}$$ since $$\mu(a, b] \lt \infty$$ for $$a, \, b \in \R$$ with $$a \lt b$$, and $$\R$$ is a countable, disjoint union of intervals of this form. Hence it follows from the basic extension and uniqueness theorems that $$\mu$$ can be extended uniquely to a measure on $$\mathscr{R} = \sigma(\mathscr{I})$$.

For the final uniqueness part, suppose that $$\mu$$ is a measure on $$\mathscr{R}$$ satisfying $$\mu(a, b] = F(b) - F(a)$$ for $$a, \, b \in \R$$ with $$a \lt b$$. Then by the continuity theorem for increasing sets, $$\mu(-\infty, a] = F(a) - F(-\infty)$$ and $$\mu(a, \infty) = F(\infty) - F(a)$$ for $$a \in \R$$. Hence $$\mu$$ is the unique measure constructed above.

The measure $$\mu$$ is called the Lebesgue-Stieltjes measure associated with $$F$$, named for Henri Lebesgue and Thomas Joannes Stieltjes. A very rich variety of measures on $$\R$$ can be constructed in this way. In particular, when the function $$F$$ takes values in $$[0, 1]$$, the associated measure $$\P$$ is a probability measure. Another special case of interest is the distribution function defined by $$F(x) = x$$ for $$x \in \R$$, in which case $$\mu(a, b]$$ is the length of the interval $$(a, b]$$ and therefore $$\mu = \lambda$$, Lebesgue measure on $$\mathscr{R}$$. But although the measure associated with a distribution function is unique, the distribution function itself is not. Note that if $$c \in \R$$ then the distribution function defined by $$F(x) = x + c$$ for $$x \in \R$$ also generates Lebesgue measure. This example captures the general situation.
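The defining relation is easy to experiment with numerically. The sketch below (the helper name `ls_measure` and the logistic example are illustrative assumptions, not from the text) evaluates $$\mu(a, b] = F(b) - F(a)$$ for two distribution functions: $$F(x) = x$$, which generates Lebesgue measure, and a bounded $$F$$ with values in $$[0, 1]$$, whose associated measure is a probability measure.

```python
import math

def ls_measure(F, a, b):
    """Lebesgue-Stieltjes measure of the interval (a, b] induced by
    a distribution function F, via mu(a, b] = F(b) - F(a)."""
    assert a <= b
    return F(b) - F(a)

# F(x) = x generates Lebesgue measure: mu(a, b] is the length of (a, b].
assert ls_measure(lambda x: x, 1.0, 4.0) == 3.0

# A bounded, increasing, continuous F with values in [0, 1]:
# the logistic function.  Its measure is a probability measure.
F = lambda x: 1.0 / (1.0 + math.exp(-x))

# Total mass F(infinity) - F(-infinity) is (approximately) 1.
assert abs(ls_measure(F, -50.0, 50.0) - 1.0) < 1e-10
```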

Suppose that $$F$$ and $$G$$ are distribution functions that generate the same measure $$\mu$$ on $$\R$$. Then there exists $$c \in \R$$ such that $$G = F + c$$.

Proof

For $$x \in \R$$, note that $$F(x) - F(0) = G(x) - G(0)$$. The common value is $$\mu(0, x]$$ if $$x \ge 0$$ and $$-\mu(x, 0]$$ if $$x \lt 0$$. Thus $$G(x) = F(x) - F(0) + G(0)$$ for $$x \in \R$$.

Returning to the case of a probability measure $$\P$$ on $$\R$$, the cumulative distribution function $$F$$ that we studied in this chapter is the unique distribution function satisfying $$F(-\infty) = 0$$. More generally, having constructed a measure from a distribution function, let's now consider the complementary problem of finding a distribution function for a given measure. The proof of the last theorem points the way.

Suppose that $$\mu$$ is a positive measure on $$(\R, \mathscr{R})$$ with the property that $$\mu(A) \lt \infty$$ if $$A$$ is bounded. Then there exists a distribution function that generates $$\mu$$.

Proof

Define $$F$$ on $$\R$$ by $F(x) = \begin{cases} \mu(0, x], & x \ge 0 \\ -\mu(x, 0], & x \lt 0 \end{cases}$ Then $$F: \R \to \R$$ by the assumption on $$\mu$$. Also $$F$$ is increasing: if $$0 \le x \le y$$ then $$\mu(0, x] \le \mu(0, y]$$ by the increasing property of a positive measure. Similarly, if $$x \le y \le 0$$, then $$\mu(x, 0] \ge \mu(y, 0]$$, so $$-\mu(x, 0] \le -\mu(y, 0]$$. Finally, if $$x \le 0 \le y$$, then $$-\mu(x, 0] \le 0$$ and $$\mu(0, y] \ge 0$$. Next, $$F$$ is continuous from the right: Suppose that $$x_n \in \R$$ for $$n \in \N_+$$ and $$x_n \downarrow x$$ as $$n \to \infty$$. If $$x \ge 0$$ then $$\mu(0, x_n] \downarrow \mu(0, x]$$ by the continuity theorem for decreasing sets, which applies since the measures are finite. If $$x \lt 0$$ then $$\mu(x_n, 0] \uparrow \mu(x, 0]$$ by the continuity theorem for increasing sets. So in both cases, $$F(x_n) \downarrow F(x)$$ as $$n \to \infty$$. Hence $$F$$ is a distribution function, and it remains to show that it generates $$\mu$$. Let $$a, \, b \in \R$$ with $$a \le b$$. If $$a \ge 0$$ then $$\mu(a, b] = \mu(0, b] - \mu(0, a] = F(b) - F(a)$$ by the difference property of a positive measure. Similarly, if $$b \le 0$$ then $$\mu(a, b] = \mu(a, 0] - \mu(b, 0] = -F(a) + F(b)$$. Finally, if $$a \le 0$$ and $$b \ge 0$$, then $$\mu(a, b] = \mu(a, 0] + \mu(0, b] = -F(a) + F(b)$$.

In the proof of the last theorem, the use of 0 as a reference point is arbitrary, of course. Any other point in $$\R$$ would do as well, and would produce a distribution function that differs from the one in the proof by a constant. If $$\mu$$ has the property that $$\mu(-\infty, x] \lt \infty$$ for $$x \in \R$$, then it's easy to see that $$F$$ defined by $$F(x) = \mu(-\infty, x]$$ for $$x \in \R$$ is a distribution function that generates $$\mu$$, and is the unique distribution function with $$F(-\infty) = 0$$. Of course, in the case of a probability measure, this is the cumulative distribution function, as noted above.
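As a concrete illustration (an example chosen here, not taken from the text), consider the measure $$\mu$$ with density $$t \mapsto e^{-|t|}$$ with respect to Lebesgue measure. Then $$\mu(-\infty, x] \lt \infty$$ for every $$x \in \R$$, and the unique distribution function with $$F(-\infty) = 0$$ is $F(x) = \mu(-\infty, x] = \int_{-\infty}^x e^{-|t|} \, dt = \begin{cases} e^x, & x \le 0 \\ 2 - e^{-x}, & x \gt 0 \end{cases}$ Note that $$F(\infty) = 2 = \mu(\R)$$, so $$\mu$$ is a finite measure but not a probability measure.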

### Properties

General distribution functions enjoy many of the same properties as the cumulative distribution function (but not all because of the lack of uniqueness). In particular, we can easily compute the measure of any interval from the distribution function.

Suppose that $$F$$ is a distribution function and $$\mu$$ is the positive measure on $$(\R, \mathscr{R})$$ associated with $$F$$. For $$a, \, b \in \R$$ with $$a \lt b$$,

1. $$\mu[a, b] = F(b) - F(a^-)$$
2. $$\mu\{a\} = F(a) - F(a^-)$$
3. $$\mu(a, b) = F(b^-) - F(a)$$
4. $$\mu[a, b) = F(b^-) - F(a^-)$$
Proof

All of these results follow from the continuity theorems for a positive measure. Suppose that $$(x_1, x_2, \ldots)$$ is a sequence of distinct points in $$\R$$.

1. If $$x_n \uparrow a$$ as $$n \to \infty$$ then $$(x_n, b] \uparrow [a, b]$$ so $$\mu(x_n, b] \uparrow \mu[a, b]$$ as $$n \to \infty$$. But also $$\mu(x_n, b] = F(b) - F(x_n) \to F(b) - F(a^-)$$ as $$n \to \infty$$.
2. This follows from (a) by taking $$a = b$$.
3. If $$x_n \uparrow b$$ as $$n \to \infty$$ then $$(a, x_n] \uparrow (a, b)$$ so $$\mu(a, x_n] \uparrow \mu(a, b)$$ as $$n \to \infty$$. But also $$\mu(a, x_n] = F(x_n) - F(a) \to F(b^-) - F(a)$$ as $$n \to \infty$$.
4. From (a) and (b) and the difference rule, $\mu[a, b) = \mu[a, b] - \mu\{b\} = F(b) - F(a^-) - \left[F(b) - F(b^-)\right] = F(b^-) - F(a^-)$

Note that $$F$$ is continuous at $$x \in \R$$ if and only if $$\mu\{x\} = 0$$. In particular, $$\mu$$ is a continuous measure (recall that this means that $$\mu\{x\} = 0$$ for all $$x \in \R$$) if and only if $$F$$ is continuous on $$\R$$. On the other hand, $$F$$ is discontinuous at $$x \in \R$$ if and only if $$\mu\{x\} \gt 0$$, so that $$\mu$$ has an atom at $$x$$. So $$\mu$$ is a discrete measure (recall that this means that $$\mu$$ has countable support) if and only if $$F$$ increases only by jumps; when the support has no accumulation points, this means that $$F$$ is a step function.
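The jump formulas are easy to check numerically for a step function. In the sketch below (the helper `left_limit` and the choice $$F(x) = \lfloor x \rfloor$$ are illustrative assumptions, not from the text), $$F$$ is increasing and right-continuous, and its associated Lebesgue-Stieltjes measure is counting measure on the integers.

```python
import math

def left_limit(F, x, eps=1e-9):
    """Numerical stand-in for the left limit F(x^-); adequate for a
    step function whose jumps are farther apart than eps."""
    return F(x - eps)

F = math.floor  # increasing, right-continuous; mu is counting measure on Z

def mu_point(a):
    """mu{a} = F(a) - F(a^-)."""
    return F(a) - left_limit(F, a)

def mu_closed(a, b):
    """mu[a, b] = F(b) - F(a^-)."""
    return F(b) - left_limit(F, a)

assert mu_point(3) == 1       # an atom of mass 1 at each integer
assert mu_point(0.5) == 0     # F is continuous at non-integers
assert mu_closed(0, 2) == 3   # [0, 2] contains the integers 0, 1, 2
```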

Suppose again that $$F$$ is a distribution function and $$\mu$$ is the positive measure on $$(\R, \mathscr{R})$$ associated with $$F$$. If $$a \in \R$$ then

1. $$\mu(a, \infty) = F(\infty) - F(a)$$
2. $$\mu[a, \infty) = F(\infty) - F(a^-)$$
3. $$\mu(-\infty, a] = F(a) - F(-\infty)$$
4. $$\mu(-\infty, a) = F(a^-) - F(-\infty)$$
5. $$\mu(\R) = F(\infty) - F(-\infty)$$
Proof

The proofs, as before, just use the continuity theorems. Suppose that $$(x_1, x_2, \ldots)$$ is a sequence of distinct points in $$\R$$.

1. If $$x_n \uparrow \infty$$ as $$n \to \infty$$ then $$(a, x_n] \uparrow (a, \infty)$$ so $$\mu(a, x_n] \uparrow \mu(a, \infty)$$ as $$n \to \infty$$. But also $$\mu(a, x_n] = F(x_n) - F(a) \to F(\infty) - F(a)$$ as $$n \to \infty$$.
2. Similarly, if $$x_n \uparrow \infty$$ as $$n \to \infty$$ then $$[a, x_n] \uparrow [a, \infty)$$ so $$\mu[a, x_n] \uparrow \mu[a, \infty)$$ as $$n \to \infty$$. But also $$\mu[a, x_n] = F(x_n) - F(a^-) \to F(\infty) - F(a^-)$$ as $$n \to \infty$$.
3. If $$x_n \downarrow -\infty$$ as $$n \to \infty$$ then $$(x_n, a] \uparrow (-\infty, a]$$ so $$\mu(x_n, a] \uparrow \mu(-\infty, a]$$ as $$n \to \infty$$. But also $$\mu(x_n, a] = F(a) - F(x_n) \to F(a) - F(-\infty)$$ as $$n \to \infty$$.
4. Similarly, if $$x_n \downarrow -\infty$$ as $$n \to \infty$$ then $$(x_n, a) \uparrow (-\infty, a)$$ so $$\mu(x_n, a) \uparrow \mu(-\infty, a)$$ as $$n \to \infty$$. But also $$\mu(x_n, a) = F(a^-) - F(x_n) \to F(a^-) - F(-\infty)$$ as $$n \to \infty$$.
5. $$\mu(\R) = \mu(-\infty, 0] + \mu(0, \infty) = \left[F(0) - F(-\infty)\right] + \left[F(\infty) - F(0)\right] = F(\infty) - F(-\infty)$$.

### Distribution Functions on $$[0, \infty)$$

Positive measures and distribution functions on $$[0, \infty)$$ are particularly important in renewal theory and Poisson processes, because they model random times. In the results below, $$G$$ is a distribution function on $$[0, \infty)$$; we use the same symbol for the associated Lebesgue-Stieltjes measure, and $$\int_0^t u(s) \, dG(s)$$ denotes the integral of $$u$$ over $$[0, t]$$ with respect to this measure.

The discrete case. Suppose that $$G$$ is discrete, so that there exists a countable set $$C \subset [0, \infty)$$ with $$G\left(C^c\right) = 0$$. Let $$g(t) = G\{t\}$$ for $$t \in C$$ so that $$g$$ is the density function of $$G$$ with respect to counting measure on $$C$$. If $$u: [0, \infty) \to \R$$ is locally bounded then $\int_0^t u(s) \, dG(s) = \sum_{s \in C \cap [0, t]} u(s) g(s)$
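Here is a minimal numerical sketch of the discrete formula, with a hypothetical four-point discrete $$G$$ (the points and masses below are invented for illustration):

```python
# Hypothetical discrete G: mass g(s) at each point s of C = {1, 2, 3, 4}.
g = {1.0: 0.5, 2.0: 0.25, 3.0: 0.125, 4.0: 0.125}

def integrate_discrete(u, t):
    """int_0^t u(s) dG(s) = sum of u(s) g(s) over s in C with s <= t."""
    return sum(u(s) * m for s, m in g.items() if s <= t)

# With u = 1 the integral is just the G-measure of [0, t].
assert integrate_discrete(lambda s: 1.0, 2.5) == 0.75

# With u(s) = s it is a "partial mean" of G.
assert integrate_discrete(lambda s: s, 10.0) == 1.875
```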

In the discrete case, the distribution is often arithmetic. Recall that this means that the countable set $$C$$ is of the form $$\{n d: n \in \N\}$$ for some $$d \in (0, \infty)$$.

The continuous case. Suppose that $$G$$ is absolutely continuous with respect to Lebesgue measure on $$[0, \infty)$$ with density function $$g: [0, \infty) \to [0, \infty)$$. If $$u: [0, \infty) \to \R$$ is locally bounded then $\int_0^t u(s) \, dG(s) = \int_0^t u(s) g(s) \, ds$

The mixed case. Suppose that there exists a countable set $$C \subset [0, \infty)$$ with $$G(C) \gt 0$$ and $$G\left(C^c\right) \gt 0$$, and that $$G$$ restricted to subsets of $$C^c$$ is absolutely continuous with respect to Lebesgue measure. Let $$g(t) = G\{t\}$$ for $$t \in C$$ and let $$h$$ be a density with respect to Lebesgue measure of $$G$$ restricted to subsets of $$C^c$$. If $$u: [0, \infty) \to \R$$ is locally bounded then $\int_0^t u(s) \, dG(s) = \sum_{s \in C \cap [0, t]} u(s) g(s) + \int_0^t u(s) h(s) \, ds$
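The mixed formula combines the two previous computations. In the sketch below (the atom at $$s = 1$$ and the density $$h(s) = \tfrac{1}{2} e^{-s}$$ are invented for illustration), the continuous part of the integral is approximated with a midpoint rule:

```python
import math

# Hypothetical mixed G on [0, infinity): an atom of mass 1/2 at s = 1,
# plus an absolutely continuous part with density h(s) = (1/2) e^{-s}.
atoms = {1.0: 0.5}
h = lambda s: 0.5 * math.exp(-s)

def integrate_mixed(u, t, n=20000):
    """int_0^t u(s) dG(s): atom sum plus a midpoint-rule integral of u*h."""
    jump = sum(u(s) * m for s, m in atoms.items() if s <= t)
    ds = t / n
    cont = ds * sum(u((k + 0.5) * ds) * h((k + 0.5) * ds) for k in range(n))
    return jump + cont

# With u = 1 and t >= 1: int_0^t dG = G[0, t] = 1/2 + (1/2)(1 - e^{-t}).
exact = 0.5 + 0.5 * (1.0 - math.exp(-2.0))
assert abs(integrate_mixed(lambda s: 1.0, 2.0) - exact) < 1e-6
```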

The three special cases do not exhaust the possibilities, but are by far the most common cases in applied problems.

This page titled 3.9: General Distribution Functions is shared under a CC BY 2.0 license and was authored, remixed, and/or curated by Kyle Siegrist (Random Services) via source content that was edited to the style and standards of the LibreTexts platform.