We will consider a process in which points occur randomly in continuous time, modeled by the set \([0, \infty)\). The phrase "points in time" is generic.
Examples:
It turns out that under some basic assumptions that deal with independence and uniformity in time, a single, one-parameter probability model governs all such random processes. This is an amazing result, and because of it, the Poisson process (named after Siméon Poisson) is one of the most important random processes in probability theory.
Run the Poisson experiment with the default settings in single step mode. Note the random points in time.
There are three collections of random variables that can be used to describe the process.
The definition \(T_0 = 0\) is a helpful convention, although we do not consider this an arrival time. The three random processes are equivalent:
The sequence \(\bs{T}\) is the partial sum process associated with \(\bs{X}\), and so in particular each sequence determines the other: \begin{align} T_n & = \sum_{i=1}^n X_i, \quad n \in \N \\ X_n & = T_n - T_{n-1}, \quad n \in \N_+ \end{align}
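To make the correspondence concrete, here is a minimal sketch in Python; the numerical inter-arrival times are hypothetical values chosen only for illustration.

```python
import numpy as np

# Hypothetical inter-arrival times (any positive values would do)
x = np.array([0.7, 1.2, 0.4, 2.1, 0.9])

# Arrival times are the partial sums: T_n = X_1 + ... + X_n
t = np.cumsum(x)

# Recover the inter-arrival times: X_n = T_n - T_{n-1}, with T_0 = 0
x_back = np.diff(t, prepend=0.0)

assert np.allclose(x, x_back)
print(t)  # arrival times: [0.7, 1.9, 2.3, 4.4, 5.3]
```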
The arrival time process \(\bs{T}\) and the counting process \(\bs{N}\) are inverses of one another in the sense that \( N_t \ge n \) if and only if \( T_n \le t \) for \( n \in \N \) and \( t \in [0, \infty) \). In particular each process determines the other: \begin{align} T_n & = \min\{t \ge 0: N_t = n\}, \quad n \in \N \\ N_t & = \max\{n \in \N: T_n \le t\}, \quad t \in [0, \infty) \end{align}
For \( n \in \N \) and \( t \in [0, \infty) \), note that the event \( N_t \ge n \) and the event \( T_n \le t \) mean that there are at least \(n\) arrivals in the interval \((0, t]\).
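A short sketch of this inverse relationship, again with hypothetical arrival times:

```python
import numpy as np

t_arrivals = np.array([0.7, 1.9, 2.3, 4.4, 5.3])  # hypothetical arrival times T_1, T_2, ...

def counting(t):
    """N_t = max{n: T_n <= t}, the number of arrivals in (0, t]."""
    return int(np.searchsorted(t_arrivals, t, side="right"))

def arrival(n):
    """T_n = min{t >= 0: N_t = n}, the time of the nth arrival (with T_0 = 0)."""
    return 0.0 if n == 0 else float(t_arrivals[n - 1])

# The inverse relation: N_t >= n if and only if T_n <= t
print(counting(2.3))                           # 3, since T_3 = 2.3 <= 2.3
print(arrival(3))                              # 2.3
print(counting(2.3) >= 3, arrival(3) <= 2.3)   # True True
```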
Sometimes it will be helpful to extend the notation of the counting process. The set \([0, \infty)\) that models continuous time is given the standard \(\sigma\)-algebra of measurable sets, as usual.
For measurable \(A \subseteq [0, \infty)\), let \(N(A)\) denote the number of arrivals in \(A\): \[ N(A) = \#\{n \in \N_+: T_n \in A\} = \sum_{n=1}^\infty \bs{1}(T_n \in A) \]
Thus, \(A \mapsto N(A)\) is the counting measure associated with the random points \((T_1, T_2, \ldots)\), so in particular it is a random measure. For our original counting process, note that \(N_t = N(0, t]\) for \(t \ge 0\). Thus, \( t \mapsto N_t \) is a (random) distribution function, and \( A \mapsto N(A) \) is the (random) measure associated with this distribution function.
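The extended counting notation is easy to illustrate. In the sketch below the arrival times are hypothetical, and \(N\) is evaluated only on intervals for simplicity:

```python
import numpy as np

t_arrivals = np.array([0.7, 1.9, 2.3, 4.4, 5.3])  # hypothetical arrival times

def N(a, b):
    """Number of arrivals in the interval (a, b]."""
    return int(np.sum((t_arrivals > a) & (t_arrivals <= b)))

# N_t = N((0, t]) recovers the original counting process
print(N(0.0, 4.0))                  # 3 arrivals in (0, 4]
# Additivity over disjoint intervals, as a measure should satisfy
print(N(0.0, 2.0) + N(2.0, 4.0))    # also 3
```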
The assumption that we will make can be described intuitively (but imprecisely) as follows:
If we fix a time \(t\), whether constant or one of the arrival times, then the process after time \(t\) is independent of the process before time \(t\) and behaves probabilistically just like the original process. Thus, the random process has a strong renewal property.
Making the strong renewal assumption precise will enable us to completely specify the probabilistic behavior of the process, up to a single, positive parameter.
Run the Poisson experiment with the default settings in single step mode. See if you can detect the strong renewal assumption.
As a first step, note that part of the renewal assumption, namely that the process restarts
at each arrival time, independently of the past, implies the following assumption:
The sequence of inter-arrival times \(\bs{X}\) is an independent, identically distributed sequence.
Note that \(X_2\) is the first arrival time after \(T_1 = X_1\), so \(X_2\) must be independent of \(X_1\) and have the same distribution. Similarly \(X_3\) is the first arrival time after \(T_2 = X_1 + X_2\), so \(X_3\) must be independent of \(X_1\) and \(X_2\) and have the same distribution as \(X_1\). Continuing this argument, \(\bs{X}\) must be an independent, identically distributed sequence.
A model of random points in time in which the inter-arrival times are independent and identically distributed (so that the process restarts
at each arrival time) is known as a renewal process. A separate chapter explores renewal processes in detail. Thus, the Poisson process is a renewal process, but a very special one, because we also require that the renewal assumption hold at fixed times.
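For concreteness, here is a minimal sketch of simulating a general renewal process. The function name and the choice of exponential inter-arrival times are assumptions made for illustration; the special role of the exponential distribution for the Poisson process is developed later.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def renewal_arrivals(sample_interarrival, horizon):
    """Simulate the arrival times of a renewal process on (0, horizon],
    given a function that draws one inter-arrival time."""
    arrivals, t = [], 0.0
    while True:
        t += sample_interarrival()
        if t > horizon:
            return np.array(arrivals)
        arrivals.append(t)

# Hypothetical choice: exponential inter-arrival times with rate r = 1,
# anticipating the distribution that turns out to characterize the Poisson process.
arrivals = renewal_arrivals(lambda: rng.exponential(scale=1.0), horizon=10.0)
print(len(arrivals), arrivals[:5])
```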
In some sense, the Poisson process is a continuous time version of the Bernoulli trials process, if we think of each success as a random point in discrete time. Once again, we have three different collections of random variables:
Consider a Bernoulli trials process with success parameter \( p \in (0, 1) \).
As before, the three collections of random variables are equivalent. The relations between \(\bs{X}\) and \(\bs{T}\), and between \(\bs{T}\) and \(\bs{N}\), given above hold for the Bernoulli trials process as well, with \(t\) restricted to \(\N\) rather than \([0, \infty)\).
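The discrete-time analogues are easy to simulate. The following sketch, with an assumed success probability and number of trials, generates the three collections for a Bernoulli trials process:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
p = 0.1   # hypothetical success probability

# The trial outcomes (indicator variables), one per discrete time step
trials = rng.random(50) < p

# Arrival times: the trial numbers of the successes
T = np.flatnonzero(trials) + 1

# Inter-arrival times: independent, each geometric on {1, 2, ...}
X = np.diff(T, prepend=0)

# Counting process: N_t = number of successes in the first t trials
N = np.cumsum(trials)

print(T[:5], X[:5], N[-1])
```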
Like the Poisson process, the Bernoulli trials process has the strong renewal property: at each fixed time and at each arrival time, the process starts over
independently of the past. But of course, time is discrete in the Bernoulli trials model and continuous in the Poisson model. The Bernoulli trials process can be characterized in terms of each of the three sets of random variables.
Each of the following statements characterizes the Bernoulli trials process with success parameter \( p \in (0, 1) \):
Run the binomial experiment with \(n = 50\) and \(p = 0.1\). Note the random points in discrete time.
Run the Poisson experiment with \(t = 5\) and \(r = 1\). Note the random points in continuous time and compare with the behavior of the binomial experiment in the previous exercise.
As we develop the theory of the Poisson process we will frequently refer back to the analogy with Bernoulli trials. In particular, we will show that if we run the Bernoulli trials at a faster and faster rate but with a smaller and smaller success probability, in just the right way, the Bernoulli trials process converges to the Poisson process.
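The following sketch gives a rough empirical sense of this convergence by comparing the mean and variance of the Bernoulli success count with those of the limiting Poisson distribution, both of which equal \(r t\). The rate, time horizon, and number of replications are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
r, t = 1.0, 5.0     # hypothetical rate and time horizon

def bernoulli_count(n):
    """Number of successes in n*t trials, each with success probability r/n,
    i.e. Bernoulli trials run at rate n per unit time."""
    return int((rng.random(int(n * t)) < (r / n)).sum())

for n in (10, 100, 1000):
    counts = [bernoulli_count(n) for _ in range(5_000)]
    # The mean is exactly r*t; the variance r*t*(1 - r/n) approaches r*t = 5
    print(n, np.mean(counts), np.var(counts))
```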