# 14.1: Introduction to the Poisson Process


## The Poisson Model

We will consider a process in which points occur randomly in time. The phrase points in time is generic and could represent, for example:

- The times when a sample of radioactive material emits particles
- The times when customers arrive at a service station
- The times when file requests arrive at a server computer
- The times when accidents occur at a particular intersection
- The times when a device fails and is replaced by a new device

It turns out that under some basic assumptions that deal with independence and uniformity in time, a *single*, one-parameter probability model governs all such random processes. This is an amazing result, and because of it, the Poisson process (named after Siméon Poisson) is one of the most important processes in probability theory.

Run the Poisson experiment with the default settings in single step mode. Note the random points in time.

### Random Variables

There are three collections of random variables that can be used to describe the process. First, let \(X_1\) denote the time of the first arrival, and \(X_i\) the time between the \((i - 1)\)st and \(i\)th arrival for \(i \in \{2, 3, \ldots\}\). Thus, \(\bs{X} = (X_1, X_2, \ldots)\) is the sequence of inter-arrival times. Next, let \(T_n\) denote the time of the \(n\)th arrival for \(n \in \N_+\). It will be convenient to define \(T_0 = 0\), although we do not consider this as an arrival. Thus \(\bs{T} = (T_0, T_1, \ldots)\) is the sequence of arrival times. Clearly \(\bs{T}\) is the partial sum process associated with \(\bs{X}\), and so in particular each sequence determines the other: \begin{align} T_n & = \sum_{i=1}^n X_i, \quad n \in \N \\ X_n & = T_n - T_{n-1}, \quad n \in \N_+ \end{align} Next, let \(N_t\) denote the number of arrivals in \((0, t]\) for \(t \in [0, \infty)\). The random process \(\bs{N} = (N_t: t \ge 0)\) is the counting process. The arrival time process \(\bs{T}\) and the counting process \(\bs{N}\) are inverses of one another in a sense, and in particular each process determines the other: \begin{align} T_n & = \min\{t \ge 0: N_t = n\}, \quad n \in \N \\ N_t & = \max\{n \in \N: T_n \le t\}, \quad t \in [0, \infty) \end{align} Note also that \( N_t \ge n \) if and only if \( T_n \le t \) for \( n \in \N \) and \( t \in [0, \infty) \) since each of these events means that there are at least \(n\) arrivals in the interval \((0, t]\).
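These relationships can be sketched in a short simulation. The exponential inter-arrival distribution and the rate 2 used here are illustrative choices (the exponential distribution anticipates results developed later in the chapter); the relations between \(\bs{X}\), \(\bs{T}\), and \(\bs{N}\) hold regardless of the distribution.

```python
import bisect
import random

random.seed(3)

# Hypothetical inter-arrival times X_1, X_2, ...; the exponential distribution
# with rate 2 is an arbitrary choice for illustration
X = [random.expovariate(2.0) for _ in range(200)]

# Arrival times as the partial sum process, with T_0 = 0:
#   T_n = X_1 + X_2 + ... + X_n
T = [0.0]
for x in X:
    T.append(T[-1] + x)

# Counting process: N_t = max{n : T_n <= t}, the number of arrivals in (0, t]
def N(t):
    return bisect.bisect_right(T, t) - 1  # subtract 1 since T_0 = 0 is not an arrival

# Check the inverse relations at a sample time: T_n <= t if and only if N_t >= n
t, n = 3.0, N(3.0)
assert T[n] <= t < T[n + 1]
assert all((T[k] <= t) == (N(t) >= k) for k in range(1, len(T)))
```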

Sometimes it will be helpful to extend the notation of the counting process. For \(A \subseteq [0, \infty)\) (measurable of course), let \(N(A)\) denote the number of arrivals in \(A\): \[ N(A) = \#\{n \in \N_+: T_n \in A\} = \sum_{n=1}^\infty \bs{1}(T_n \in A) \] Thus, \(A \mapsto N(A)\) is the counting measure associated with the random points \((T_1, T_2, \ldots)\), so in particular it is a *random* measure. For our original counting process, note that \(N_t = N(0, t]\) for \(t \ge 0\). Thus, \( t \mapsto N_t \) is a (random) distribution function, and \( A \mapsto N(A) \) is the (random) measure associated with this distribution function.
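A sketch of the extended notation, representing a set \(A\) by a membership test (the rate 1.5 and the particular sets are arbitrary choices for illustration):

```python
import random

random.seed(1)

# Arrival times built from hypothetical exponential inter-arrival times;
# the rate 1.5 is an arbitrary choice for illustration
T = []
t = 0.0
for _ in range(100):
    t += random.expovariate(1.5)
    T.append(t)

# Random counting measure: N(A) = #{n : T_n in A}, with A given as a membership test
def N(A):
    return sum(1 for s in T if A(s))

# The original counting process is recovered as N_t = N((0, t])
def N_t(t):
    return N(lambda s: 0 < s <= t)

# As a measure, N is additive over disjoint sets:
assert N(lambda s: 0 < s <= 2) + N(lambda s: 2 < s <= 5) == N(lambda s: 0 < s <= 5)
```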

### The Basic Assumption

The assumption that we will make can be described intuitively (but imprecisely) as follows: If we fix a time \(t\), whether constant or one of the arrival times, then the process *after* time \(t\) is independent of the process *before* time \(t\) and behaves probabilistically just like the original process. Thus, the random process has a strong renewal property. Making the strong renewal assumption precise will enable us to completely specify the probabilistic behavior of the process, up to a single, positive parameter.

Think about the strong renewal assumption for each of the specific applications given above.

Run the Poisson experiment with the default settings in single step mode. See if you can detect the strong renewal assumption.

As a first step, note that part of the renewal assumption, namely that the process restarts at each arrival time, independently of the past, implies the following result:

The sequence of inter-arrival times \(\bs{X}\) is an independent, identically distributed sequence.

## Proof

Note that \(X_2\) is the first arrival time after \(T_1 = X_1\), so \(X_2\) must be independent of \(X_1\) and have the same distribution. Similarly \(X_3\) is the first arrival time after \(T_2 = X_1 + X_2\), so \(X_3\) must be independent of \(X_1\) and \(X_2\) and have the same distribution as \(X_1\). Continuing this argument, \(\bs{X}\) must be an independent, identically distributed sequence.

A model of random points in time in which the inter-arrival times are independent and identically distributed (so that the process restarts at each arrival time) is known as a renewal process. A separate chapter explores Renewal Processes in detail. Thus, the Poisson process is a renewal process, but a very special one, because we also require that the renewal assumption hold at *fixed times*.
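To illustrate the distinction, here is a sketch of a renewal process that is *not* a Poisson process: the inter-arrival times are i.i.d. but uniform on \((0, 2)\) rather than exponential (the uniform distribution and the sample size are arbitrary choices for illustration). The renewal property holds at arrival times, but not at fixed times.

```python
import random

random.seed(5)

# A renewal process needs only independent, identically distributed
# inter-arrival times; uniform on (0, 2) gives a renewal process that
# is not a Poisson process
X = [random.uniform(0.0, 2.0) for _ in range(1000)]

# Arrival times as partial sums
T = [0.0]
for x in X:
    T.append(T[-1] + x)

# By the law of large numbers, the long-run arrival rate is 1 / E[X_1] = 1
rate = len(X) / T[-1]
print(round(rate, 3))  # should be close to 1
```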

## Analogy with Bernoulli Trials

In some sense, the Poisson process is a continuous time version of the Bernoulli trials process. To see this, suppose that we have a Bernoulli trials process with success parameter \( p \in (0, 1) \), and that we think of each success as a random point in *discrete* time. Then this process, like the Poisson process (and in fact any renewal process), is completely determined by the sequence of inter-arrival times \( \bs{X} = (X_1, X_2, \ldots) \) (in this case, the number of trials between successive successes), the sequence of arrival times \( \bs{T} = (T_0, T_1, \ldots) \) (in this case, the trial numbers of the successes), and the counting process \( (N_t: t \in \N) \) (in this case, the number of successes in the first \( t \) trials). Also like the Poisson process, the Bernoulli trials process has the strong renewal property: at each fixed time and at each arrival time, the process starts over independently of the past. But of course, time is discrete in the Bernoulli trials model and continuous in the Poisson model. The Bernoulli trials process can be characterized in terms of each of the three sets of random variables.

Each of the following statements characterizes the Bernoulli trials process with success parameter \( p \in (0, 1) \):

- The inter-arrival time sequence \( \bs{X} \) is a sequence of independent variables, and each has the geometric distribution on \( \N_+ \) with success parameter \( p \).
- The arrival time sequence \( \bs{T} \) has stationary, independent increments, and for \( n \in \N_+ \), \( T_n \) has the negative binomial distribution with stopping parameter \( n \) and success parameter \( p \).
- The counting process \( \bs{N} \) has stationary, independent increments, and for \( t \in \N \), \( N_t \) has the binomial distribution with trial parameter \( t \) and success parameter \( p \).
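A sketch of all three descriptions from a single simulated run of Bernoulli trials (the success parameter \( p = 0.25 \) and the run length are arbitrary choices for illustration):

```python
import random

random.seed(7)
p = 0.25  # success parameter, chosen arbitrarily for illustration

# One long run of Bernoulli trials; each success is a random point in discrete time
trials = [1 if random.random() < p else 0 for _ in range(10_000)]

# Arrival times: trial numbers of the successes, with T_0 = 0
T = [0] + [i + 1 for i, b in enumerate(trials) if b == 1]

# Inter-arrival times: trials between successive successes (geometric on N_+)
X = [T[n] - T[n - 1] for n in range(1, len(T))]

# Counting process: N_t = number of successes in the first t trials (binomial)
def N(t):
    return sum(trials[:t])

# The sample mean of the inter-arrival times should be near 1 / p = 4,
# the mean of the geometric distribution on N_+ with success parameter p
print(round(sum(X) / len(X), 2))
```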

Run the binomial experiment with \(n = 50\) and \(p = 0.1\). Note the random points in discrete time.

Run the Poisson experiment with \(t = 5\) and \(r = 1\). Note the random points in continuous time and compare with the behavior in the previous exercise.

As we develop the theory of the Poisson process we will frequently refer back to the analogy with Bernoulli trials. In particular, we will show that if we run the Bernoulli trials at a faster and faster rate but with a smaller and smaller success probability, in just the right way, the Bernoulli trials process converges to the Poisson process.
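This convergence can be previewed numerically. The sketch below runs Bernoulli trials at \( n \) trials per unit time with success probability \( p = r / n \), and compares the empirical distribution of the number of successes in \( (0, t] \) to the Poisson distribution with mean \( r t \). The parameters \( r = 1 \), \( t = 5 \) match the Poisson experiment above; the trial rate and Monte Carlo sample size are arbitrary choices.

```python
import math
import random

random.seed(0)
r, t = 1.0, 5.0  # rate and time, matching the Poisson experiment above

# Bernoulli trials at n trials per unit time with success probability r / n;
# the number of successes in (0, t] should be approximately Poisson with mean r * t
def bernoulli_count(n):
    return sum(1 for _ in range(round(n * t)) if random.random() < r / n)

# Poisson probability P(N_t = k), with mean r * t
def poisson_pmf(k):
    m = r * t
    return math.exp(-m) * m ** k / math.factorial(k)

# Compare the empirical distribution of the count to the Poisson limit
samples = [bernoulli_count(200) for _ in range(5_000)]
for k in range(8):
    empirical = samples.count(k) / len(samples)
    print(k, round(empirical, 3), round(poisson_pmf(k), 3))
```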