\(\newcommand{\P}{\mathbb{P}}\) \(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\R}{\mathbb{R}}\) \(\newcommand{\N}{\mathbb{N}}\) \(\newcommand{\C}{\mathbb{C}}\) \(\newcommand{\ms}{\mathscr}\) \(\newcommand{\lfrta}{\leftrightarrow}\) \(\newcommand{\bs}{\boldsymbol}\)

2. The Infinite Path

Preliminaries

In this section, we consider the discrete graph \((\N, \lfrta)\) where \(\lfrta\) is the relation on \(\N\) defined by \(x \lfrta x + 1\) for \(x \in \N\) and \(x \lfrta x - 1\) for \(x \in \N_+\). So \((\N, \lfrta)\) is the undirected simple path \((0, 1, 2, \ldots)\), clearly a basic and interesting combinatorial structure. Since the graph is discrete, we have our usual reference space \((\N, \ms P(\N), \#)\) in the background.

The \(\sigma\)-algebra associated with \((\N, \lfrta)\) is \(\ms P(\N)\).

Details:

By definition, the \(\sigma\)-algebra associated with the graph is \(\ms A = \sigma\{A_x: x \in \N\}\) where \(A_x = \{y \in \N: x \lfrta y\}\) is the neighbor set of \(x \in \N\). So \(A_x = \{x - 1, x + 1\}\) for \(x \in \N_+\) and \(A_0 = \{1\}\). Hence \(A_x \cap A_{x + 2} = \{x + 1\} \in \ms A\) for \(x \in \N\), so \(\{y\} \in \ms A\) for every \(y \in \N_+\). It then follows that \(\N_+ = \bigcup_{y = 1}^\infty \{y\} \in \ms A\) and hence \(\{0\} = \N \setminus \N_+ \in \ms A\). So \(\ms A\) contains every singleton and therefore \(\ms A = \ms P(\N)\).

The walk functions for \((\N, \lfrta)\) are given recursively as follows.

  1. \[ u_{n}(0) = \binom{n}{\lfloor n/2 \rfloor}, \quad n \in \N \]
  2. \[ u_n(1) = u_{n + 1}(0) = \binom{n + 1}{\lfloor (n + 1) /2 \rfloor}, \quad n \in \N \]
  3. \[ u_n(x + 1) = u_{n + 1}(x) - u_n(x - 1), \quad x \in \N_+, \, n \in \N \]
Details:
  1. Consider walks ending in 0. A walk of even length \(2 n\) must start in a state in \(\{0, 2, \ldots, 2n\}\), and a walk of odd length \(2 n + 1\) must start in a state in \(\{1, 3, \ldots, 2 n + 1\}\). For \(n \in \N_+\), let \(a_n(k)\) be the number of walks of length \(2 n\) starting in \(k \in \{0, 2, \ldots, 2 n\}\) and ending in 0, and let \(b_n(k)\) be the number of walks of length \(2 n + 1\) starting in \(k \in \{1, 3, \ldots, 2 n + 1\}\) and ending in 0. So \[u_{2 n}(0) = \sum_{j = 0}^n a_n(2 j), \; u_{2 n +1}(0) = \sum_{j = 0}^n b_n(2 j + 1); \quad n \in \N_+\] Trivially \(a_n(2 n) = b_n(2 n + 1) = 1\) for \(n \in \N_+\). Also, \(b_n(k) = a_n(k - 1) + a_n(k + 1)\) for \(k \in \{1, 3, \ldots, 2n - 1\}\). Similarly, \(a_{n + 1}(0) = b_n(1)\) and \(a_{n + 1}(k) = b_n(k - 1) + b_n(k + 1)\) for \(k \in \{2, 4, \ldots, 2 n\}\). It then follows that \(u_{2 n}(0) = 2 u_{2 n - 1}(0)\) and \(u_{2 n + 1}(0) = 2 u_{2 n}(0) - a_n(0)\), where \(a_n(0)\), the number of walks of length \(2 n\) in \(\N\) from 0 back to 0, is the Catalan number \(\frac{1}{n + 1} \binom{2 n}{n}\). Checking that the central binomial coefficients satisfy these recursions leads to (a).
  2. By definition, \(u_{n + 1}(x) = (u_n L)(x)\) for \(n \in \N\), where as usual, \(L\) is the adjacency kernel of \((\N, \lfrta)\). With \(x = 0\) we have \(u_{n + 1}(0) = (u_n L)(0) = u_n(1)\), which gives (b).
  3. Finally, for \(x \in \N_+\) we have \(u_{n + 1}(x) = (u_n L)(x) = u_n(x - 1) + u_n(x + 1)\). Solving for \(u_n(x + 1)\) gives (c).
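
The recursion is easy to check numerically. Below is a minimal Python sketch (the function name walk_functions and the truncation scheme are ours): it computes the walk functions by iterating the adjacency kernel on a truncated state space, then verifies (a), (b), and (c) against the closed forms.

```python
from math import comb

def walk_functions(N, X):
    # Walk functions u_n(x) of the path graph on {0, 1, 2, ...}, computed by
    # iterating the adjacency kernel: u_0 = 1 and u_{n+1}(x) is the sum of
    # u_n over the neighbors of x. States up to X + N are kept, so the
    # truncation does not affect u_n(x) when x + n <= X + N.
    size = X + N + 1
    u = [[1] * size]  # u_0(x) = 1, counting the empty walk
    for _ in range(N):
        prev = u[-1]
        row = [0] * size
        row[0] = prev[1]  # the only neighbor of 0 is 1
        for x in range(1, size - 1):
            row[x] = prev[x - 1] + prev[x + 1]
        u.append(row)
    return u

N, X = 10, 5
u = walk_functions(N + 1, X + 1)  # one extra order and state for the checks
for n in range(N + 1):
    assert u[n][0] == comb(n, n // 2)                     # (a)
    assert u[n][1] == comb(n + 1, (n + 1) // 2)           # (b)
    for x in range(1, X + 1):
        assert u[n][x + 1] == u[n + 1][x] - u[n][x - 1]   # (c)
print("walk function identities verified")
```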

The generating function \(U(x, t)\) for \((\N, \lfrta)\) satisfies \begin{align*} U(0, t) &= 1 + t U(1, t), \quad |t| \lt \frac 1 2 \\ U(x, t) &= 1 + t U(x + 1, t) + t U(x - 1, t), \quad x \in \N_+, \, |t| \lt \frac 1 2 \end{align*}

Details:

First, it's easy to see that for \(x \in \N\), the generating function converges for \(|t| \lt \frac 1 2\), since \(u_n(x) \le 2^n\). From part (b) of the proposition above we have \(t^{n + 1} u_n(1) = t^{n + 1} u_{n + 1}(0)\) for \(|t| \lt \frac 1 2\) and \(n \in \N\). Summing over \(n \in \N\) gives \[t U(1, t) = U(0, t) - u_0(0) = U(0, t) - 1\] From part (c) of the proposition we have \[t^{n + 1} u_n(x + 1) = t^{n + 1} u_{n + 1}(x) - t^{n + 1} u_n(x - 1), \quad n \in \N, \, x \in \N_+, \, |t| \lt \frac 1 2\] Summing over \(n \in \N\) gives \[t U(x + 1, t) = U(x, t) - u_0(x) - t U(x - 1, t) = U(x, t) - 1 - t U(x - 1, t), \quad x \in \N_+, \, |t| \lt \frac 1 2 \]
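
The identities can also be checked numerically by truncating the series that defines \(U(x, t)\). Here is a self-contained sketch along those lines (parameter values are arbitrary); since \(u_n(x) \le 2^n\), the truncation error for \(t = 0.4\) is far below the tolerance used.

```python
t, N, X = 0.4, 60, 5

# Walk functions on a truncated state space, as in the previous sketch;
# u[n][x] is exact as long as x + n < size.
size = N + X + 4
u = [[1] * size]
for _ in range(N):
    prev = u[-1]
    row = [0] * size
    row[0] = prev[1]
    for x in range(1, size - 1):
        row[x] = prev[x - 1] + prev[x + 1]
    u.append(row)

# Partial sum of the generating function U(x, t) through order N
def U(x):
    return sum(u[n][x] * t ** n for n in range(N + 1))

assert abs(U(0) - (1 + t * U(1))) < 1e-4
for x in range(1, X + 1):
    assert abs(U(x) - (1 + t * U(x + 1) + t * U(x - 1))) < 1e-4
```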

For the following result, \(L\) denotes the adjacency kernel of \((\N, \lfrta)\), as usual, and \(P_n\) denotes the polynomial of degree \(n \in \N\) defined in Section 1.

For \(\beta \in \R\), the solution of the equation \(L g = \beta g\) is given by \(g(x) = g(0) P_x(\beta)\) for \(x \in \N\).

Details:

For \(\beta \in \R\), the equation \(L g = \beta g\) becomes \begin{align*} g(1) & = \beta g(0) \\ g(x + 1) & = \beta g(x) - g(x - 1), \quad x \in \N_+ \end{align*} Hence \(g(x) = g(0) P_x (\beta)\) for \(x \in \N\) by definition of the polynomials.
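
As a quick numerical illustration (a sketch; the helper P simply implements the three-term recursion \(P_0 = 1\), \(P_1(\beta) = \beta\), \(P_{x + 1}(\beta) = \beta P_x(\beta) - P_{x - 1}(\beta)\) of the polynomials from Section 1, and the value of \(\beta\) is arbitrary), we can check that \(g(x) = P_x(\beta)\) satisfies \(L g = \beta g\) on an initial segment:

```python
def P(x, beta):
    # Three-term recursion: P_0 = 1, P_1 = beta, P_{x+1} = beta P_x - P_{x-1}
    a, b = 1.0, beta
    for _ in range(x):
        a, b = b, beta * b - a
    return a

beta = 1.7
g = [P(x, beta) for x in range(50)]
assert abs(g[1] - beta * g[0]) < 1e-9                       # (L g)(0) = g(1)
for x in range(1, 49):
    assert abs(g[x - 1] + g[x + 1] - beta * g[x]) < 1e-9    # (L g)(x)
```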

So on the vector space of all functions from \(\N\) to \(\R\), every \(\beta \in \R\) is an eigenvalue of \((\N, \lfrta)\), and a corresponding eigenfunction is given by \(x \mapsto P_x(\beta)\). But usually we are interested in eigenvalues and eigenfunctions of \((\N, \lfrta)\) for the space \(\ms L_1\), where probability density functions live.

Probability

Suppose that \(X\) is a random variable in \(\N\) with density function \(f\). The reliability function \(F\) of \(X\) for \((\N, \lfrta)\) is given by \[F(0) = f(1), \quad F(x) = f(x - 1) + f(x + 1), \; x \in \N_+\]

As usual, we have our basic support assumption that \(F(x) \gt 0\) for all \(x \in \N\). So in particular, \(f(1) \gt 0\), and for every \(x \in \N_+\), either \(f(x - 1) \gt 0\) or \(f(x + 1) \gt 0\). The reliability function does in fact determine the distribution, and we can characterize reliability functions:

The graph \((\N, \lfrta)\) is stochastic. Suppose that \(f\) is a probability density function on \(\N\) and that \(F\) is the corresponding reliability function for \((\N, \lfrta)\). Then

  1. \( f(0) = 2 - \sum_{x = 0}^\infty F(x) \)
  2. \( f(2 x + 1) = (-1)^x \sum_{k = 0}^x (-1)^k F(2 k), \quad x \in \N \)
  3. \( f(2 x) = (-1)^x f(0) + (-1)^x \sum_{k = 1}^{x} (-1)^k F(2 k - 1), \quad x \in \N_+ \)

Conversely, if \(F\) is a positive function on \(\N\) such that \(1 \lt \sum_{x = 0}^\infty F(x) \lt 2\) and such that the right side of (b) is positive for \(x \in \N\) and the right side of (c) is positive for \(x \in \N_+\), then \(F\) is the reliability function for \((\N, \lfrta)\) of a distribution on \(\N\) with probability density function given by (a), (b), and (c).

Details:

Suppose that \(f\) is a density function on \(\N\) and that \(F\) is the reliability function of \(f\) for \((\N, \lfrta)\). Note that \[\sum_{x = 0}^\infty F(x) = f(1) + \sum_{x = 1}^\infty [f(x - 1) + f(x + 1)] = f(0) + 2 \sum_{x = 1}^\infty f(x) = f(0) + 2[1 - f(0)] = 2 - f(0)\] and hence (a) holds. For the odd-order terms, note first that \(f(1) = F(0)\). Next, \(F(2) = f(1) + f(3)\) so \(f(3) = F(2) - F(0)\). Next, \(F(4) = f(3) + f(5)\) so \[f(5) = F(4) - f(3) = F(4) - F(2) + F(0)\] Continuing in this way gives (b). Similarly, \(F(1) = f(0) + f(2)\), so \(f(2) = F(1) - f(0)\), where \(f(0)\) is given in (a). Next, \(F(3) = f(2) + f(4)\) so \[f(4) = F(3) - f(2) = F(3) - F(1) + f(0)\] Continuing in this way gives (c).

Conversely, suppose that \(F\) is a positive function on \(\N\) such that \(1 \lt \sum_{x=0}^\infty F(x) \lt 2\) and with the right sides of (b) and (c) positive. Define \(f\) by (a), (b), and (c). Then \(f(x) \ge 0\) for \(x \in \N\) and it's easy to see from the definitions that \(F(0) = f(1)\) and \(F(x) = f(x - 1) + f(x + 1)\) for \(x \in \N_+\). So we have \[\sum_{x = 0}^\infty F(x) = f(0) + 2 \sum_{x = 1}^\infty f(x)\] Therefore \[f(0) + \sum_{x = 0}^\infty F(x) = 2 \sum_{x = 0}^\infty f(x)\] But by definition of \(f(0)\), the left side is \(2\). It follows that \(f\) is a density function on \(\N\) and \(F\) is the reliability function of \(f\) for \((\N, \lfrta)\).
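
The inversion formulas can be tested numerically. Below is a minimal sketch that uses the geometric density as a test case (any density with full support would do); the cutoff M, which is ours, stands in for the infinite sum in (a), with negligible tail error.

```python
def reliability(f, size):
    # F(0) = f(1); F(x) = f(x - 1) + f(x + 1) for x >= 1
    return [f(1)] + [f(x - 1) + f(x + 1) for x in range(1, size)]

p = 0.3
f = lambda x: p * (1 - p) ** x   # geometric test density

M = 200                          # cutoff; the tail beyond M is negligible
F = reliability(f, M)

f0 = 2 - sum(F)                                                        # (a)
assert abs(f0 - f(0)) < 1e-10
for x in range(20):
    odd = (-1) ** x * sum((-1) ** k * F[2 * k] for k in range(x + 1))  # (b)
    assert abs(odd - f(2 * x + 1)) < 1e-10
for x in range(1, 20):
    even = (-1) ** x * (f0 + sum((-1) ** k * F[2 * k - 1]
                                 for k in range(1, x + 1)))            # (c)
    assert abs(even - f(2 * x)) < 1e-10
print("density recovered from its reliability function")
```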

Here is our main result:

The graph \((\N, \lfrta)\) does not have a constant rate distribution. However, the geometric distribution on \(\N\) has constant rate on \(\N_+\) and is the only such distribution.

Details:

Suppose that \(X\) is a random variable taking values in \(\N\) and having constant rate \(\alpha\). From our support assumptions, \(\P(X = 0) \gt 0\) and \(\P(X \in \N_+) \gt 0\). It follows that \(\E[u(X)] \in (1, 2)\), where as usual, \(u\) is the walk function of order 1, since \(u(0) = 1\) and \(u(x) = 2\) for \(x \in \N_+\), so that \(\E[u(X)] = 2 - \P(X = 0)\). But \(\E[u(X)] = 1 / \alpha\) from the basic moment result in Section 1.5, so \(\alpha \in \left(\frac 1 2, 1\right)\). Suppose next that \(X\) has probability density function \(f\). By the constant rate property, \(L f = \frac 1 \alpha f\), so that

  1. \( f(1) = \frac 1 \alpha f(0) \)
  2. \( f(x + 1) = \frac 1 \alpha f(x) - f(x - 1), \quad x \in \N_+ \)
The characteristic equation of the difference equation (b) is \(r^2 - \frac{1}{\alpha} r + 1 = 0\), which has roots \[ r_1 = \frac{1 - \sqrt{1 - 4 \alpha^2}}{2 \alpha}, \quad r_2 = \frac{1 + \sqrt{1 - 4 \alpha^2}}{2 \alpha} \] Since \(\alpha \in \left(\frac 1 2, 1\right)\), the roots are complex conjugates, so either \(f(x) = 0\) for all \(x \in \N\) or \(f(x) \lt 0\) for infinitely many \(x \in \N\), in either case a contradiction. In fact, we can give the solution explicitly. Since \(1 \lt 1 / \alpha \lt 2\), we can write \(1 / \alpha = 2 \cos \theta\) for some \(\theta \in (0, \pi / 3)\). From Section 1, \[f(x) = f(0) P_x\left(\frac 1 \alpha\right) = f(0) \frac{\sin[(x + 1) \theta]}{\sin \theta}, \quad x \in \N\]

Next, suppose that \(X\) has the geometric distribution on \(\N\) with success parameter \(p \in (0, 1)\), so that the probability density function \(f\) is given by \(f(x) = p (1 - p)^x\) for \(x \in \N\). Of course, \(X\) has an exponential distribution for the positive semigroup \((\N, +)\). Then \(F(0) = f(1) = p (1 - p)\) and \[F(x) = f(x - 1) + f(x + 1) = p (1 - p)^{x-1} + p (1 - p)^{x + 1} = p (1 - p)^{x-1}[1 + (1 - p)^2], \quad x \in \N_+\] So the rate function \(r\) of \(X\) for \((\N, \lfrta)\) satisfies \(r(0) = f(0) / F(0) = 1/(1 - p)\) and \[r(x) = \frac{f(x)}{F(x)} = \frac{1 - p}{1 + (1 - p)^2}, \quad x \in \N_+\] Hence \(X\) has constant rate \((1 - p) / [1 + (1 - p)^2]\) on \(\N_+\).

Conversely, suppose that \(X\) has constant rate \(\alpha\) on \(\N_+\), so that the probability density function \(f\) satisfies (b) above (but not necessarily (a)). Since \(f\) is a density function, \(f(x) \to 0\) as \(x \to \infty\), and this is possible only when \(\alpha \lt \frac{1}{2}\), so that the characteristic equation has a root \(r \in (0, 1)\) (when \(\alpha = \frac 1 2\) the only root is \(r = 1\), which does not give a summable solution). The solution then has the form \(f(x) = c r^x\) for \(x \in \N\), and the requirement that \(f\) be a density function implies that \(c = 1 - r\). So \(X\) has the geometric distribution with parameter \(p = 1 - r\).
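
Here is a short numerical check of the geometric computation (a sketch; the parameter value is arbitrary). Note also that \((1 - p) / [1 + (1 - p)^2] \lt \frac 1 2\) for \(p \in (0, 1)\), since \(1 + (1 - p)^2 - 2 (1 - p) = p^2 \gt 0\), consistent with the requirement \(\alpha \lt \frac 1 2\) in the converse part of the argument.

```python
p = 0.3
f = lambda x: p * (1 - p) ** x                    # geometric density
F = lambda x: f(1) if x == 0 else f(x - 1) + f(x + 1)

alpha = (1 - p) / (1 + (1 - p) ** 2)              # claimed constant rate on N_+
assert abs(f(0) / F(0) - 1 / (1 - p)) < 1e-12     # the rate at 0 is different
for x in range(1, 200):
    assert abs(f(x) / F(x) - alpha) < 1e-12
print("constant rate on N_+:", alpha)             # always less than 1/2
```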

Returning to the general case, we will consider random walk on \((\N, \lfrta)\) next.

Suppose that \(X\) is a random variable with probability density function \(f\) on \(\N\) and with reliability function \(F\) for \((\N, \lfrta)\). Let \(\bs{Y} = (Y_1, Y_2, \ldots)\) denote the random walk on \((\N, \lfrta)\) associated with \(X\), so that \(Y_1\) has density function \(f\) and \(\bs Y\) has transition kernel \(P\) given by \(P(0, 1) = 1\) and \begin{align*} P(x, x - 1) & = \frac{f(x - 1)}{f(x -1) + f(x + 1)}, \quad x \in \N_+ \\ P(x, x + 1) & = \frac{f(x + 1)}{f(x - 1) + f(x + 1)}, \quad x \in \N_+ \end{align*}

Hence \(\bs Y\) is a birth-death chain on \(\N\) with reflecting boundary point 0.

The random walk \(\bs{Y}\) is positive recurrent and reversible, with invariant probability density function \(g\) given by \(g(0) = f(0) f(1) / c\) and \(g(x) = f(x)[f(x - 1) + f(x + 1)] / c\) for \(x \in \N_+\) where \[c = 2 \sum_{x = 0}^\infty f(x) f(x + 1)\]

Details:

This follows from results in Section 1.4. Recall that an invariant function is \( \varphi = f F\). The normalizing constant is \[c = \sum_{x = 0}^\infty \varphi(x) = 2 \sum_{x = 0}^\infty f(x) f(x + 1)\]

Of course, the general theory of birth-death chains would lead to the same conclusions.
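
As a sanity check, the following sketch verifies numerically that the probability mass flowing into each state \(y\) under \(P\) equals \(g(y)\), using the geometric density as a test case; the cutoff M is ours, and the truncated tails are negligible.

```python
p, M = 0.3, 300                  # test density and truncation cutoff
f = [p * (1 - p) ** x for x in range(M)]
F = [f[1]] + [f[x - 1] + f[x + 1] for x in range(1, M - 1)]
c = 2 * sum(f[x] * f[x + 1] for x in range(M - 1))
g = [f[x] * F[x] / c for x in range(M - 1)]

assert abs(g[1] * f[0] / F[1] - g[0]) < 1e-12            # inflow to 0 from 1
for y in range(1, 100):
    from_left = 1.0 if y == 1 else f[y] / F[y - 1]       # P(y - 1, y)
    from_right = f[y] / F[y + 1]                         # P(y + 1, y)
    inflow = g[y - 1] * from_left + g[y + 1] * from_right
    assert abs(inflow - g[y]) < 1e-12
print("g P = g verified on an initial segment")
```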

Suppose that \(X\) has the geometric distribution on \(\N\) with success parameter \(p\) and that \(\bs Y\) is the random walk on \((\N, \lfrta)\) associated with \(X\).

  1. \(\bs Y\) has transition kernel \(P\) given by \(P(0, 1) = 1\) and \[P(x, x - 1) = \frac{1}{1 + (1 - p)^2}, \; P(x, x + 1) = \frac{(1 - p)^2}{1 + (1 - p)^2}, \quad x \in \N_+\]
  2. The invariant distribution of \(\bs Y\) is a modified geometric distribution with probability density function \(g\) given by \begin{align*} g(0) & = p (1 - p / 2) \\ g(x) &= [1 - p (1 - p / 2)][1 - (1 - p)^2] (1 - p)^{2 x - 2}, \quad x \in \N_+ \end{align*}

The app below simulates the random walk on \((\N, \lfrta)\) corresponding to the geometric distribution with success parameter \(p\), as described above.
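
For readers without access to the app, here is a rough stand-in (a sketch; the parameter value, run length, and seed are arbitrary) that simulates the chain and compares long-run empirical frequencies with the invariant density \(g\) from the previous result.

```python
import random

p = 0.5
q2 = (1 - p) ** 2
up = q2 / (1 + q2)                  # P(x, x + 1) for x >= 1

random.seed(17)
x, steps, counts = 0, 10 ** 6, {}
for _ in range(steps):
    if x == 0:
        x = 1                       # reflecting boundary: P(0, 1) = 1
    else:
        x = x + 1 if random.random() < up else x - 1
    counts[x] = counts.get(x, 0) + 1

g0 = p * (1 - p / 2)                # invariant density from the result above
for y in range(5):
    g = g0 if y == 0 else (1 - g0) * (1 - q2) * q2 ** (y - 1)
    print(y, round(counts.get(y, 0) / steps, 4), round(g, 4))
```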

So in the sense of constant rate, the geometric distribution is almost the most random way to put points in the infinite path \((\N, \lfrta)\): it has constant rate on \(\N_+\), although not at 0.