\(\newcommand{\P}{\mathbb{P}}\) \(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\R}{\mathbb{R}}\) \(\newcommand{\N}{\mathbb{N}}\) \(\newcommand{\ms}{\mathscr}\) \(\newcommand{\bs}{\boldsymbol}\) \(\newcommand{\rta}{\rightarrow}\) \(\newcommand{\upa}{\uparrow}\)

3. Downward Run Graphs

General Theory

In this section we study a class of graphs that generalize the graph in the classical remaining life Markov chain. We start with a discrete rooted tree \((S, \upa)\) with root \(e\) and at least two vertices. The graph \((S, \upa)\) is the covering graph of a partial order graph \((S, \preceq)\), as discussed in Section 1, but now we are interested in a different graph that can be constructed from the tree.

Graph Basics

To review the notation, let \(S_+ = S - \{e\}\). For \(x, \, y \in S\) with \(x \preceq y\), let \(d(x, y)\) denote the distance from \(x\) to \(y\) in \((S, \preceq)\), that is, the length of the unique path in \((S, \upa)\) from \(x\) to \(y\). Finally, for \(x \in S\), let \(d(x) = d(e, x)\) and let \((S_x, \upa)\) denote the subtree of \((S, \upa)\) rooted at \(x\).

For \(n \in \N\) and \(x \in S\), let \(\beta_n(x) = \#\{y \in S: d(x, y) = n\}\).

So \(\beta_n(x)\) is the number of vertices at distance \(n\) from \(x\) in the subtree \((S_x, \upa)\), and in particular, \(\beta_0(x) = 1\).

Define the power series \(\varphi\) as follows \[\varphi(t) = \sum_{n = 0}^\infty \beta_n(e) t^{n + 1}, \quad t \in [0, 1]\]

Note that \(\varphi(1) = \sum_{n = 0}^\infty \beta_n(e) = \#(S)\). So if the tree is finite then \[\varphi(t) = \sum_{n = 0}^{m} \beta_n(e) t^{n + 1}, \quad t \in [0, 1]\] where \(m \in \N\) is the height of the tree. If the tree is infinite, then \(\varphi\) has radius of convergence \(\rho \in [0, 1]\).

The downward run graph \((S, \rta)\) associated with the tree \((S, \upa)\) is defined by \(x \rta x^-\) for \(x \in S_+\) and \(e \rta x\) for every \(x \in S\).

We will refer to the downward run graph \((\N, \rta)\) associated with \((\N, \upa)\), the covering graph of \((\N, \le)\), as the standard downward run graph. That is, for this graph, \(x \rta x - 1\) for \(x \in \N_+\) and \(0 \rta x\) for every \(x \in \N\). This is the graph of the classical remaining life Markov chain, which in turn is the time-reversal of the success-runs chain. Generalizations of success runs graphs will be studied in Section 4. For the remainder of this section, we assume that \((S, \rta)\) is the downward run graph associated with a rooted tree \((S, \upa)\) with at least two elements. A walk in \((S, \rta)\) moves successively down the tree from the starting point, and then if the walk reaches \(e\), the next step of the walk can be any element of \(S\), including \(e\). The walk functions are complicated except in some special cases.

For \(n \in \N\), let \(u_n\) denote the left walk function of length \(n\) for the graph \((S, \rta)\). Then for \(x \in S\),

  1. \(u_0(x) = 1\)
  2. \(u_1(x) = 1 + \beta_1(x)\)
  3. \(u_2(x) = 1 + \beta_1(e) + \beta_1(x) + \beta_2(x)\)
  4. \(u_3(x) = 1 + 2 \beta_1(e) + \beta_2(e) + \beta_1(e) \beta_1(x) + \beta_1(x) + \beta_2(x) + \beta_3(x)\)
Details:

Part (a) holds by definition. The other parts follow from the general recursive formula \(u_{n + 1}(x) = \sum_{y \rta x} u_n(y)\) for \(n \in \N\) and \(x \in S\).
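These expressions can be checked numerically. The sketch below (the small example tree is an assumption for illustration) runs the recursion \(u_{n + 1}(x) = \sum_{y \rta x} u_n(y)\) on the downward run graph and compares the results with parts (a)-(d):

```python
# Small example tree (an illustrative assumption): root 'e' with children
# 'a' and 'b', and 'a' has a single child 'c'.
parent = {'a': 'e', 'b': 'e', 'c': 'a'}        # the rooted tree (S, up)
S = ['e'] + sorted(parent)

def children(x):
    return [y for y in parent if parent[y] == x]

def beta(n, x):
    # beta_n(x): number of vertices at distance n from x in the subtree at x
    level = [x]
    for _ in range(n):
        level = [z for y in level for z in children(y)]
    return len(level)

def in_neighbors(x):
    # y -> x in the downward run graph iff y = e, or y != e and parent(y) = x
    return [y for y in S if y == 'e' or (y != 'e' and parent[y] == x)]

# left walk functions: u_0 = 1 and u_{n+1}(x) = sum of u_n(y) over y -> x
u = [{x: 1 for x in S}]
for _ in range(3):
    u.append({x: sum(u[-1][y] for y in in_neighbors(x)) for x in S})

for x in S:
    assert u[1][x] == 1 + beta(1, x)
    assert u[2][x] == 1 + beta(1, 'e') + beta(1, x) + beta(2, x)
    assert u[3][x] == (1 + 2 * beta(1, 'e') + beta(2, 'e')
                       + beta(1, 'e') * beta(1, x)
                       + beta(1, x) + beta(2, x) + beta(3, x))
```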

Let \(L\) denote the set of leaves (vertices without children) in the rooted tree \((S, \upa)\). The (right) \(\sigma\)-algebra associated with the downward run graph \((S, \rta)\) is \(\ms P(L^c) \cup \{L\}\).

Details:

The (right) neighbor set of \(x \in S_+\) for the graph \((S, \rta)\) is \(\{x^-\}\), the singleton set with the parent of \(x\). If \(x \in S_+\) then \(x^- \in L^c\). The neighbor set of the root \(e\) is \(S\) (which belongs to every \(\sigma\)-algebra of subsets of \(S\)). So the \(\sigma\)-algebra \(\ms A\) associated with \((S, \rta)\) is \(\sigma\{\{x^-\}: x \in S_+\} = \sigma\{\{x\}: x \in L^c\} = \ms P(L^c) \cup \{L\}\).

In particular, if \((S, \upa)\) has no leaves or a single leaf, then the \(\sigma\)-algebra associated with \((S, \rta)\) is the reference \(\sigma\)-algebra \(\ms P(S)\). But if \((S, \upa)\) has two or more leaves then the associated \(\sigma\)-algebra is a proper subset of \(\ms P(S)\).

Probability

Once again, we have a downward run graph \((S, \rta)\) associated with a rooted tree \((S, \upa)\) that has a set of leaves \(L\) (possibly empty).

The graph \((S, \rta)\) is (right) stochastic. That is, a probability measure \(P\) on the associated \(\sigma\)-algebra \(\ms P(L^c) \cup \{L\}\) is uniquely determined by the reliability function \(F\) of \(P\) for \((S, \rta)\).

Details:

It suffices to show that \(P(\{x\})\) can be computed from \(F\) for \(x \in L^c\) and that \(P(L)\) can be computed from \(F\). If \(x \in L^c\) then \(x\) has a child, so select a particular child \(x^+\). Then by definition, \(F(x^+) = P(\{x\})\). But then also, \(P(L) = 1 - \sum_{x \in L^c} F(x^+)\).

But if the tree has two or more leaves, then the reliability function \(F\) of a probability measure \(P\) on the reference \(\sigma\)-algebra \(\ms P(S)\) does not uniquely determine \(P\). Specifically, we cannot determine \(P(\{x\})\) individually for \(x \in L\), but only \(P(L)\) in total. Let us restate the basic facts in terms of a random variable \(X\) relative to the standard reference space \((S, \ms P(S), \#)\) and defined on an underlying probability space \((\Omega, \ms F, \P)\). Let \(f\) denote the density function so that \(f(x) = \P(X = x)\) for \(x \in S\). We impose our usual support assumption so that \(f(x) \gt 0\) for \(x \in S\).

For \((S, \rta)\),

  1. \(X\) has reliability function \(F\) given by \(F(x) = f(x^-)\) for \(x \in S_+\) and \(F(e) = 1\).
  2. \(X\) has rate function \(r\) given by \(r(x) = f(x) / f(x^-)\) for \(x \in S_+\) and \(r(e) = f(e)\).
  3. If the tree \((S, \upa)\) has no leaves or a single leaf then \(F\) determines \(f\).
  4. \(r\) determines \(f\).
Details:
  1. This follows from the definition of the downward run graph.
  2. This follows from (a) and the definition of the rate function.
  3. This follows from the previous result, since in this case the \(\sigma\)-algebra associated with \((S, \rta)\) is the reference \(\sigma\)-algebra \(\ms P(S)\).
  4. From (b), \(f(e) = r(e)\). If \(x \in S_+\) with \(d(x) = n \in \N_+\) then there exists a unique path \(e \upa x_1 \upa x_2 \upa \cdots \upa x_{n - 1} \upa x\). Using (b) recursively, \[f(x) = r(x) r(x_{n - 1}) \cdots r(x_1) r(e)\]
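As an illustration of part (d), the sketch below reconstructs \(f\) from the rate function \(r\) by multiplying the rates along the unique path up to the root. The tree and the density values are assumptions chosen only for illustration:

```python
# Illustrative tree (assumption): root 'e' with children 'a', 'b'; 'a' has child 'c'.
parent = {'a': 'e', 'b': 'e', 'c': 'a'}
f = {'e': 0.4, 'a': 0.2, 'b': 0.25, 'c': 0.15}   # an arbitrary density with f > 0

# rate function: r(e) = f(e), and r(x) = f(x) / f(x^-) for x in S_+
r = {'e': f['e']}
for x in parent:
    r[x] = f[x] / f[parent[x]]

def f_from_r(x):
    # f(x) = r(x) r(x_{n-1}) ... r(x_1) r(e) along the unique path from e to x
    prod = 1.0
    while x != 'e':
        prod *= r[x]
        x = parent[x]
    return prod * r['e']

for x in f:
    assert abs(f_from_r(x) - f[x]) < 1e-12
```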

The next proposition is the main result on constant rate distributions for downward run graphs. The power series \(\varphi\) defined above will play an important role.

If \(\varphi\) has positive radius of convergence then \((S, \rta)\) has a unique constant rate distribution; the rate \(\alpha \in (0, 1)\) satisfies \(\varphi(\alpha) = 1\). If \(X\) has this distribution then

  1. \(X\) has density function \(f\) given by \(f(x) = \alpha^{d(x) + 1}\) for \(x \in S\)
  2. \(d(X)\) has density function \(g\) given by \(g(n) = \beta_n(e) \alpha^{n + 1}\) for \(n \in \N\).
  3. For \(n \in \N\), the conditional distribution of \(X\) given \(d(X) = n\) is uniform on \(\{x \in S: d(x) = n\}\).

If \(\varphi\) has radius of convergence 0 then \((S, \rta)\) has no constant rate distribution.

Details:

To have constant rate \(\alpha\), \(f\) must satisfy \(f(e) = \alpha\) and \(f(x) = \alpha f(x^-)\) for \(x \in S_+\). So \(\alpha \in (0, 1)\) and \(f(x) = \alpha^{d(x) + 1}\) for \(x \in S\). But we must have \[\sum_{x \in S} f(x) = \sum_{n = 0}^\infty \sum_{d(x) = n} \alpha^{n + 1} = \sum_{n = 0}^\infty \beta_n(e) \alpha^{n + 1} = \varphi(\alpha) = 1\] If the tree is finite, then \(\varphi\) is continuous and increasing on the interval \([0, 1]\) and satisfies \(\varphi(0) = 0\) and \(\varphi(1) = \#(S) \ge 2\). If the tree is infinite but \(\varphi\) has radius of convergence \(\rho \in (0, 1]\), then \(\varphi\) is continuous and increasing on the interval \([0, \rho)\) and satisfies \(\varphi(0) = 0\) and \(\lim_{t \to \rho} \varphi(t) = \infty\). In either case, there exists a unique \(\alpha \in (0, 1)\) with \(\varphi(\alpha) = 1\). If the tree is infinite and \(\varphi\) has radius of convergence \(0\) then \(\varphi(t) = \infty\) for \(t \in (0, 1]\). In this case, there is no constant rate distribution.

So if \(\beta_n(e) = \infty\) for some \(n \in \N_+\) (that is, some vertex has infinitely many children) or if \(\beta_n(e)\) grows too fast with \(n\) then \((S, \rta)\) has no constant rate distribution. For example, a downward run graph with \(\beta_n(e) = n!\) for \(n \in \N\) has no constant rate distribution. On the other hand, the downward run graph corresponding to a finite rooted tree always has a unique constant rate distribution. But of course, we already knew that since every finite, strongly connected graph has a unique constant rate distribution. Other examples are given below.
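Since \(\varphi\) is continuous and increasing where it is finite, the rate constant can be computed by bisection. A minimal sketch, assuming the level counts \(\beta_0(e), \beta_1(e), \beta_2(e) = 1, 3, 2\) of a small finite tree:

```python
# Assumed level counts beta_n(e) for a finite tree of height 2 (illustrative)
beta = [1, 3, 2]

def phi(t):
    return sum(b * t ** (n + 1) for n, b in enumerate(beta))

# phi(0) = 0 and phi(1) = #(S) >= 2, so bisect for the root of phi(t) = 1
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if phi(mid) < 1 else (lo, mid)
alpha = (lo + hi) / 2

assert 0 < alpha < 1
assert abs(phi(alpha) - 1) < 1e-12
# the constant rate density f(x) = alpha^(d(x)+1) then sums to 1 over the tree
assert abs(sum(b * alpha ** (n + 1) for n, b in enumerate(beta)) - 1) < 1e-12
```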

Random Walks

Once again, we have a rooted tree \((S, \upa)\) and the corresponding downward run graph \((S, \rta)\). Recall also that \((S, \upa)\) is the covering graph for the partial order graph \((S, \preceq)\).

Suppose again that \(X\) has density function \(f\). The random walk \(\bs{Y} = (Y_1, Y_2, \ldots)\) on \((S, \rta)\) associated with \(X\) has transition probability function \(P\) given by \begin{align*} P(x, x^-) &= 1, \quad x \in S_+\\ P(e, x) &= f(x), \quad x \in S \end{align*}

Details:

This follows from the definition of the random walk on \((S, \rta)\) associated with \(X\): The transition probability function \(P\) is given by \(P(x, y) = f(y) / F(x)\) for \(x \rta y\).

In the case of the standard downward run graph, \(\bs Y\) is a remaining life Markov chain in the usual elementary formulation, with lifetime distribution \(f\).

Consider again the random walk \(\bs{Y}\) on \((S, \rta)\) associated with \(X\).

  1. The chain \(\bs{Y}\) is recurrent.
  2. The reliability function \(G\) of \(X\) for \((S, \preceq)\) is invariant for \(\bs Y\).
  3. \(\bs{Y}\) is positive recurrent if and only if \(m := \E[d(X)] \lt \infty\), and in this case the invariant density function is \(g = G / (m + 1)\).
Details:
  1. By construction, starting at the root \(e\), the chain \(\bs Y\) returns to \(e\) with probability 1.
  2. Recall that \(G(x) = \P(x \preceq X)\) for \(x \in S\). Hence \begin{align*} (G P)(x) &= \sum_{w \in S} G(w) P(w, x) = G(e) P(e, x) + \sum_{w \in S_+} G(w) P(w, x) \\ &= f(x) + \sum_{x \upa w} G(w) = f(x) + \sum_{x \upa w} \P(w \preceq X) = f(x) + \P(x \prec X) \\ &= \P(x \preceq X) = G(x), \quad x \in S \end{align*}
  3. The chain \(\bs Y\) is aperiodic and irreducible, so from the standard theory of discrete-time Markov chains, \(\bs Y\) is positive recurrent if and only if \(\sum_{x \in S} G(x) \lt \infty\). But from our standard moment result in Section 5.2, \(\sum_{x \in S} G(x) = \E[d(X)] + 1\). Recall that \(\mu(x) = 1 / g(x)\) is the mean return time to \(x \in S\) for \(\bs Y\) starting at \(x\).

Note that \(g\) and \(1 + m\) are the density function and mean, respectively, of \(d(X) + 1\), which is the return time to \(e\), starting at \(e\) for the chain \(\bs Y\). It's interesting that the reliability function for the partial order graph \((S, \preceq)\) plays a fundamental role in the analysis of the random walk on the downward run graph \((S, \rta)\). In particular, the special case where \(X\) has constant rate for \((S, \preceq)\) is interesting. Recall that the constant rate distributions for \((S, \preceq)\) were characterized in Section 1.
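The invariance claims can be checked numerically on the standard downward run graph, where \(d(x) = x\) and \(G(x) = \P(X \ge x)\). The lifetime density below is an arbitrary assumption with finite support, chosen only for the computation:

```python
# Arbitrary lifetime density on {0,...,4} (an illustrative assumption)
f = [0.1, 0.3, 0.2, 0.25, 0.15]
n = len(f)

G = [sum(f[x:]) for x in range(n)]        # G(x) = P(X >= x), reliability for (N, <=)
m = sum(x * p for x, p in enumerate(f))   # m = E[d(X)] = E[X]
g = [Gx / (m + 1) for Gx in G]            # claimed invariant density

assert abs(sum(g) - 1) < 1e-12            # sum of G is m + 1, so g is a density
# invariance: (g P)(x) = g(x + 1) * 1 + g(0) * f(x) must equal g(x)
for x in range(n):
    gP = (g[x + 1] if x + 1 < n else 0.0) + g[0] * f[x]
    assert abs(gP - g[x]) < 1e-12
```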

Suppose that \(X\) has constant rate for \((S, \preceq)\). The random walk \(\bs Y\) on \((S, \rta)\) associated with \(X\) is positive recurrent and the density function \(f\) of \(X\) is also the density function of the invariant distribution of \(\bs Y\).

Details:

Suppose that \(X\) has constant rate \(\alpha \in (0, 1)\) for \((S, \preceq)\). From the previous result, the reliability function \(G\) of \(X\) for \((S, \preceq)\) is invariant for \(\bs Y\). But \(f = \alpha G\) so \(f\) is also invariant for \(\bs Y\). Recall also from Section 1 that \(d(X)\) has the geometric distribution with parameter \(\alpha\), and hence \(\E[d(X)] = (1 - \alpha) / \alpha \lt \infty\).

Examples

Infinite Regular Rooted Trees

In this subsection, we assume that \((S, \upa)\) is the infinite, regular rooted tree of degree \(k \in \N_+\), as described in Section 1, and then \((S, \rta)\) is the corresponding downward run graph.

The left walk function \(u_n\) of length \(n \in \N\) for \((S, \rta)\) is given by \(u_n(x) = (k + 1)^n\) for \(x \in S\).

Details:

This follows from a general result in Section 1.1 since \((S, \rta)\) is left regular of degree \(k + 1\).

In particular, for the standard downward run graph \((\N, \rta)\), we have \(u_n(x) = 2^n\) for \(n, \, x \in \N\).

The downward run graph \((S, \rta)\) has the unique constant rate distribution with rate \(\alpha = 1 / (k + 1)\). If \(X\) has constant rate then

  1. \(X\) has density function \(f\) given by \(f(x) = 1 / (k + 1)^{d(x) + 1}\) for \(x \in S\).
  2. \(d(X)\) has the geometric distribution on \(\N\) with success parameter \(1 / (k + 1)\).
Details:

The results follow from the main constant rate result above. Note that \(\beta_n(e) = k^n\) for \(n \in \N\), so \(\varphi(t) = \sum_{n = 0}^\infty k^n t^{n + 1} = t / (1 - k t)\) for \(|t| \lt 1 / k\). Hence the rate constant \(\alpha\) satisfies \(\alpha / (1 - k \alpha) = 1\) so \(\alpha = 1 / (k + 1)\).

  1. \(X\) has density function \(f\) given by \(f(x) = \alpha^{d(x) + 1} = 1 / (k + 1)^{d(x) + 1}\) for \(x \in S\).
  2. \(d(X)\) has density function \(g\) given by \[g(n) = \beta_n(e) \alpha^{n + 1} = k^n \frac{1}{(k + 1)^{n + 1}} = \left(\frac{k}{k + 1}\right)^n \frac{1}{k + 1}, \quad n \in \N\]
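Both parts are easy to verify numerically; \(k = 3\) below is an assumed example:

```python
k = 3
alpha = 1 / (k + 1)

# the geometric series: sum of k^n t^(n+1) equals t / (1 - k t) for |t| < 1/k
t = 0.2
partial = sum(k ** n * t ** (n + 1) for n in range(200))
assert abs(partial - t / (1 - k * t)) < 1e-12

# phi(alpha) = alpha / (1 - k * alpha) = 1 at alpha = 1 / (k + 1)
assert abs(alpha / (1 - k * alpha) - 1) < 1e-12

# d(X) is geometric on N with success parameter 1/(k+1): g(n) = k^n alpha^(n+1)
g = [k ** n * alpha ** (n + 1) for n in range(500)]
assert abs(sum(g) - 1) < 1e-10
```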

Finite Regular Rooted Trees

In this subsection, we consider the downward run graph \((S, \rta)\) associated with the finite regular rooted tree \((S, \upa)\) of degree \(k \in \N_+\) and height \(m \in \N_+\). By the main result above, \((S, \rta)\) has a unique constant rate distribution, with rate \(\alpha \in (0, 1)\). If \(X\) has this distribution then we know that \(X\) has density function \(f\) given by \(f(x) = \alpha^{d(x) + 1}\) for \(x \in S\), so we concentrate on properties of \(\alpha\) and \(d(X)\). Of course, \(\alpha\) and \(\varphi\) depend on \(k\) and \(m\). A critical relationship between those parameters will play an important role. Recall that \(\N_m = \{0, 1, \ldots, m\}\) for \(m \in \N\).

Suppose that \(k = m + 1\) and that \(X\) has the distribution with constant rate for \((S, \rta)\). Then

  1. \(X\) has rate \(\alpha = 1 / k = 1 / (m + 1)\).
  2. \(d(X)\) is uniformly distributed on \(\N_m\).
Details:
  1. Note that \(\beta_n(e) = k^n = (m + 1)^n\) for \(n \in \N_m\). Hence \(\varphi(t) = \sum_{n = 0}^m (m + 1)^n t^{n + 1}\) for \(t \in (0, \infty)\). In particular \[\varphi\left(\frac{1}{m + 1}\right) = \sum_{n = 0}^m \frac{1}{m + 1} = 1\]
  2. \(d(X)\) has density function \(g\) given by \(g(n) = \beta_n(e) \alpha^{n + 1} = 1 / (m + 1)\) for \(n \in \N_m\).

Suppose that \(k \lt m + 1\) and that \(X\) has the distribution with constant rate for \((S, \rta)\). Then

  1. \(X\) has rate \(\alpha \in (0, 1 / k)\) satisfying \(k^{m + 1} \alpha^{m + 2} - (k + 1) \alpha + 1 = 0\).
  2. \(\alpha \to 1 / (k + 1)\) as \(m \to \infty\) for fixed \(k\).
  3. The distribution of \(d(X)\) is the same as the conditional distribution of \(N\) given \(N \in \N_m\) where \(N\) has the geometric distribution on \(\N\) with success parameter \(1 - k \alpha\).
Details:
  1. As before \(\beta_n(e) = k^n\) for \(n \in \N_m\). Hence \(\varphi(1 / k) = \sum_{n = 0}^m 1 / k = (m + 1) / k \gt 1\) so \(\alpha \in (0, 1 / k)\). On the other hand, for \(t \in [0, 1]\) with \(t \ne 1 / k\) we have \[\varphi(t) = \sum_{n = 0}^m k^n t^{n + 1} = t \frac{(k t)^{m + 1} - 1}{k t - 1}\] Solving \(\varphi(\alpha) = 1\) gives the equation in part (a).
  2. Note that \(\varphi(t)\) increases as \(m\) increases for each \(t \in [0, 1]\). Hence \(\alpha\) decreases as \(m\) increases, and so in particular stays below \(1 / k\) and has a limit in \([0, 1 / k]\) as \(m \to \infty\). Since \(k \alpha\) is bounded above by a constant less than 1, the term \(k^{m + 1} \alpha^{m + 2} = \alpha (k \alpha)^{m + 1} \to 0\) as \(m \to \infty\), and letting \(m \to \infty\) in the equation in (a) then shows that \(\alpha \to 1 / (k + 1)\).
  3. Note that \(1 - k \alpha \in (0, 1)\). From the equation in (a) defining \(\alpha\), the density function \(g\) of \(d(X)\) is given by \[g(n) = k^n \alpha^{n + 1} = \frac{(k \alpha)^n (1 - k \alpha)}{(1 - k \alpha) / \alpha} = \frac{(k \alpha)^n (1 - k \alpha)}{1 - (k \alpha)^{m + 1}} \quad n \in \N_m\] The last expression is \(\P(N = n \mid N \in \N_m)\) for \(n \in \N_m\) where \(N\) has the geometric distribution on \(\N\) with success parameter \(1 - k \alpha\).
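A numerical check of this case, with \(k = 2\) and \(m = 5\) as assumed example parameters; \(\alpha\) is found by bisection on \(\varphi(\alpha) = 1\):

```python
# assumed example parameters with k < m + 1
k, m = 2, 5

def phi(t):
    return sum(k ** n * t ** (n + 1) for n in range(m + 1))

# phi(0) = 0 < 1 < phi(1), so bisect for phi(alpha) = 1 on (0, 1)
lo, hi = 0.0, 1.0
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if phi(mid) < 1 else (lo, mid)
alpha = (lo + hi) / 2

assert 0 < alpha < 1 / k                       # location of the rate, part (a)
assert abs(k ** (m + 1) * alpha ** (m + 2) - (k + 1) * alpha + 1) < 1e-10
# part (c): g(n) = k^n alpha^(n+1) is the geometric(1 - k alpha) density
# conditioned on {0, 1, ..., m}
p = 1 - k * alpha
for n in range(m + 1):
    g = k ** n * alpha ** (n + 1)
    cond = (1 - p) ** n * p / (1 - (1 - p) ** (m + 1))
    assert abs(g - cond) < 1e-12
```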

Suppose that \(k \gt m + 1\) and that \(X\) has the distribution with constant rate for \((S, \rta)\). Then

  1. \(X\) has rate \(\alpha \in (1 / k, 1)\) satisfying \(k^{m + 1} \alpha^{m + 2} - (k + 1) \alpha + 1 = 0\).
  2. The distribution of \(m - d(X)\) is the same as the conditional distribution of \(N\) given \(N \in \N_m\) where \(N\) has the geometric distribution on \(\N\) with success parameter \(1 - 1 / \alpha k\).
Details:
  1. Once again, \(\beta_n(e) = k^n\) for \(n \in \N_m\). Hence \(\varphi(1 / k) = \sum_{n = 0}^m 1 / k = (m + 1) / k \lt 1\) so \(\alpha \in (1 / k, 1)\). On the other hand, for \(t \in (0, \infty)\) with \(t \ne 1 / k\) we have \[\varphi(t) = \sum_{n = 0}^m k^n t^{n + 1} = t \frac{(k t)^{m + 1} - 1}{k t - 1}\] Solving \(\varphi(\alpha) = 1\) gives the equation in part (a).
  2. Note that \(1 - 1 / \alpha k \in (0, 1)\). From the equation in (a) defining \(\alpha\) \[\P[m - d(X) = n] = k^{m - n} \alpha^{m - n + 1} = k^m \alpha^{m + 1} \left(\frac{1}{\alpha k}\right)^n = \frac{(1 / \alpha k)^n (1 - 1 / \alpha k)}{1 - (1 / \alpha k)^{m + 1}} \quad n \in \N_m\] The last expression is \(\P(N = n \mid N \in \N_m)\) for \(n \in \N_m\) where \(N\) has the geometric distribution on \(\N\) with success parameter \(1 - 1 / \alpha k\).

The asymptotic behavior of the rate constant as \(k \to \infty\) in the context of the previous result is somewhat complicated. To state the result we need to enhance the notation a bit.

Let \(\alpha_k\) denote the rate constant for \((S, \rta)\) for fixed \(m \in \N_+\) and with \(k \in \N_+\) satisfying \(k \gt m + 1\), as described in the previous result. Then \(k^m \alpha_k^{m + 1} \to 1\) as \(k \to \infty\). As corollaries,

  1. \(\alpha_k \to 0\) as \(k \to \infty\)
  2. \(k \alpha_k \to \infty\) as \(k \to \infty\) and hence \(k^n \alpha_k^n \to \infty\) as \(k \to \infty\) for \(n \in \N_+\).
  3. \(k^n \alpha_k^{n + 1} \to 0\) as \(k \to \infty\) for \(n \in \{0, 1, \ldots, m - 1\}\).
  4. \(k^n \alpha_k^{n + 1} \to \infty\) as \(k \to \infty\) for \(n \in \{m + 1, m + 2, \ldots \}\).
Details:

We prove parts (a) and (b) first, since they are needed as lemmas for the main result.

  1. Recall that the power series is \(\varphi_k(t) = \sum_{n = 0}^m k^n t^{n + 1}\) for \(t \in [0, 1]\) and that \(\alpha_k\) satisfies \(\varphi_k(\alpha_k) = 1\). Note that \(\varphi_k(t)\) increases as \(k\) increases for each \(t \in [0, 1]\) and hence \(\alpha_k\) decreases as \(k\) increases. Moreover, \(\alpha_k \in (1 / k, 1)\) and in particular is bounded below by 0, so \(\lim_{k \to \infty} \alpha_k\) exists in \([0, 1]\). Next, \[1 = \liminf_{k \to \infty} \left(\sum_{n = 0}^m k^n \alpha_k^{n + 1}\right) \ge \sum_{n = 0}^m \liminf_{k \to \infty} k^n \alpha_k^{n + 1}\] Since \(m \in \N_+\) this means that \(\liminf_{k \to \infty} k \alpha_k^2 \le 1\) which in turn implies that \(\alpha_k \to 0\) as \(k \to \infty\) (since we know the limit exists).
  2. Next we have \(k^{n + 1} \alpha_k^{n + 2} = (k \alpha_k) (k^n \alpha_k^{n + 1}) \ge k^n \alpha_k^{n + 1}\) for \(n \in \N\) since \(\alpha_k \gt 1 / k\) and so \(k \alpha_k \gt 1\). That is, \(n \mapsto k^n \alpha_k^{n + 1}\) is increasing. Using the defining equation again, we have \[1 = \sum_{n = 0}^m k^n \alpha_k^{n + 1} \le \sum_{n = 0}^m k^m \alpha_k^{m + 1} = (m + 1) k^m \alpha_k^{m + 1}\] Hence \(k^m \alpha_k^{m + 1} \ge 1 / (m + 1)\) for \(k \gt m + 1\) and so \(\liminf_{k \to \infty} k^m \alpha_k^{m + 1} \ge 1 / (m + 1)\). Next, \[\liminf_{k \to \infty} k^{m + 1} \alpha_k^{m + 1} = \liminf_{k \to \infty} k \cdot k^m \alpha_k^{m + 1} \ge \liminf_{k \to \infty} k \cdot \liminf_{k \to \infty} k^m \alpha_k^{m + 1} = \infty\] since the first factor on the right is infinity and the second is positive. It then follows that \(k^{m + 1} \alpha_k^{m + 1} \to \infty\) as \(k \to \infty\) and so \(k \alpha_k \to \infty\) as \(k \to \infty\).

Now, to prove the main result, note that from the equation for \(\alpha_k\) we have \[k^m \alpha_k^{m + 1} = 1 - \frac{1 - \alpha_k}{k \alpha_k}\] Taking limits and using (a) and (b) gives \(k^m \alpha_k^{m + 1} \to 1\) as \(k \to \infty\). We can now establish the other two corollaries.

  3. Suppose that \(n \in \{0, 1, \ldots, m - 1\}\). Then \(k^m \alpha_k^{m + 1} = (k^{m - n} \alpha_k^{m - n}) (k^n \alpha_k^{n + 1})\). Taking limits and using the main result and (b) gives \(k^n \alpha_k^{n + 1} \to 0\) as \(k \to \infty\).
  4. Suppose that \(n \in \{m + 1, m + 2, \ldots\}\). Then \(k^n \alpha_k^{n + 1} = (k^m \alpha_k^{m + 1}) (k^{n - m} \alpha_k^{n - m})\). Taking limits and using (b) and the main result we have \(k^n \alpha_k^{n + 1} \to \infty\) as \(k \to \infty\).
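The convergence \(k^m \alpha_k^{m + 1} \to 1\) can be illustrated numerically, here with \(m = 2\) as an assumed example:

```python
m = 2

def alpha(k):
    # bisection for phi_k(alpha) = 1, with phi_k(t) = sum of k^n t^(n+1), n <= m
    phi = lambda t: sum(k ** n * t ** (n + 1) for n in range(m + 1))
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if phi(mid) < 1 else (lo, mid)
    return (lo + hi) / 2

vals = [k ** m * alpha(k) ** (m + 1) for k in (10, 100, 1000, 10000)]
assert all(v < 1 for v in vals)                    # k^m a_k^(m+1) = 1 - (1-a)/(k a) < 1
assert all(u < v for u, v in zip(vals, vals[1:]))  # increasing toward 1
assert abs(vals[-1] - 1) < 0.1
```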

So asymptotically, \(\alpha_k \sim (1 / k)^{m / (m + 1)}\) as \(k \to \infty\). The case that \(m = 1\) is a special case of one of the three previous results, depending on \(k \in \N_+\). In this case, the underlying tree is a rooted star and then the downward run graph is the ordinary undirected star with the addition of a loop at the center \(e\). This case is interesting because, as we will see in Section 4, the corresponding downward and upward graphs are the same.

Suppose that \(m = 1\) and \(k \in \N_+\) and that \(X\) has the distribution with constant rate for \((S, \rta)\).

  1. \(X\) has rate \[\alpha = \frac{\sqrt{4 k + 1} - 1}{2 k}\]
  2. \(\alpha \to 0\) as \(k \to \infty\).
  3. \(X\) has density function \(f\) given by \(f(e) = \alpha\), \(f(x) = \alpha^2\) for \(x \in S_+\).
Details:

In this case, \(\varphi(t) = t + k t^2\) for \(t \in \R\), so \(\alpha \in (0, 1)\) satisfies \(k \alpha^2 + \alpha - 1 = 0\).
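A quick check of the closed form over an (arbitrarily chosen) range of \(k\):

```python
import math

def rate(k):
    # positive root of k a^2 + a - 1 = 0
    return (math.sqrt(4 * k + 1) - 1) / (2 * k)

for k in range(1, 100):
    a = rate(k)
    assert 0 < a < 1
    assert abs(k * a ** 2 + a - 1) < 1e-12     # phi(a) = a + k a^2 = 1
    if k > 1:
        assert rate(k) < rate(k - 1)           # the rate decreases toward 0, part (b)
```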

The case that \(k = 1\) is a special case of the result for \(k \lt m + 1\) above. In this case, the underlying tree is a finite directed path. As described in Section 1, we can let \(S = \N_m\) with root 0 and \(d(x) = x\) for \(x \in \N_m\).

Suppose that \(k = 1\) and \(m \in \N_+\) and that \(X\) has the distribution with constant rate for \((S, \rta)\). Then

  1. \(X\) has rate \(\alpha \in (0, 1)\) satisfying \(\alpha^{m + 2} - 2 \alpha + 1 = 0\).
  2. \(\alpha \to \frac 1 2\) as \(m \to \infty\).
  3. The distribution of \(d(X)\) is the same as the conditional distribution of \(N\) given \(N \in \N_m\) where \(N\) has the geometric distribution on \(\N\) with success parameter \(1 - \alpha\).
Details:

This is a corollary of the result for the case \(k \lt m + 1\), with \(k = 1\).

Binomial Trees

For \(m \in \N_+\), let \((S_m, \upa)\) denote the rooted binomial tree of order \(m\), as described in Section 1. Suppose that \(X\) has the unique constant rate distribution for the downward run graph \((S_m, \rta)\).

  1. \(X\) has rate \(\alpha \in (0, 1)\) satisfying \(\alpha (1 + \alpha)^m = 1\).
  2. \(\alpha \to 0\) as \(m \to \infty\).
  3. \(d(X)\) has the binomial distribution with parameters \(m\) and \(\alpha / (1 + \alpha)\).
Details:

The results follow from the main constant rate result for downward run graphs and from basic properties of binomial trees in Section 1.

  1. \(\beta_k(e) = \binom{m}{k}\) for \(k \in \N_m\). Hence \(\varphi(t) = \sum_{k = 0}^m \binom{m}{k} t^{k + 1} = t (1 + t)^m\). So the rate constant \(\alpha\) satisfies \(\alpha (1 + \alpha)^m = 1\).
  2. \(d(X)\) has density function \(g\) given by \[g(k) = \binom{m}{k} \alpha^{k + 1} = \binom{m}{k} \frac{\alpha^{k + 1}}{\alpha (1 + \alpha)^m} = \binom{m}{k} \left(\frac{\alpha}{1 + \alpha}\right)^k \left(\frac{1}{1 + \alpha}\right)^{m - k}, \quad k \in \N_m\]

So this result gives new meaning to the term binomial tree.
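A numerical check of these properties, with the order \(m = 6\) as an assumed example:

```python
from math import comb

m = 6
# solve alpha (1 + alpha)^m = 1 by bisection on (0, 1)
lo, hi = 0.0, 1.0
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if mid * (1 + mid) ** m < 1 else (lo, mid)
alpha = (lo + hi) / 2
p = alpha / (1 + alpha)       # claimed binomial parameter for d(X)

for j in range(m + 1):
    g = comb(m, j) * alpha ** (j + 1)                    # density of d(X)
    binom = comb(m, j) * p ** j * (1 - p) ** (m - j)     # binomial(m, p) density
    assert abs(g - binom) < 1e-12

# the density of d(X) sums to 1, since sum of g(j) is alpha (1 + alpha)^m = 1
assert abs(sum(comb(m, j) * alpha ** (j + 1) for j in range(m + 1)) - 1) < 1e-12
```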

The Standard Graph

The standard downward run graph \((\N, \rta)\), also called the standard remaining life graph, is the downward run graph associated with the covering tree \((\N, \upa)\) of \((\N, \le)\). So 0 is the root and \(d(x) = x\) for \(x \in \N\).

The unique constant rate distribution for \((\N, \rta)\) is the geometric distribution on \(\N\) with parameter \(1 / 2\) and so with density function \(f\) given by \[f(x) = \left(\frac 1 2\right)^{x + 1}, \quad x \in \N\]

Details:

This is a special case of the result for infinite regular rooted trees, with \(k = 1\).

The following result gives a recursive scheme for computing the density functions of the random walk variables in the case of the standard graph.

Consider the random walk \(\bs{Y} = (Y_1, Y_2, \ldots)\) on \((\N, \rta)\) associated with the random variable \(X\). Let \(f\) denote the density function of \(X\) and \(f_n\) the density function of \(Y_n\) for \(n \in \N_+\). Then recursively for \(n \in \N\), \[f_{n + 1}(x) = \sum_{k = 0}^{n - 1} f(k) f_{n - k}(x) + f(x + n), \quad x \in \N\]

Details:

Fix \(n \in \N_+\). Note that a walk \(\bs x = (x_1, x_2, \ldots, x_n)\) of length \(n\) in \((\N, \rta)\) must start with a subsequence decreasing by 1 from term to term, perhaps to 0, and if so, followed by another subsequence of the same form, and so forth. Now let \(k \in \N\) denote the number of times that 0 occurs among the first \(n - 1\) terms of \(\bs x\). Let \(j_0 = 1\) and for \(i \in \{1, 2, \ldots, k\}\), let \(j_i\) denote the index following the \(i\)th 0. Then the probability density function of \((Y_1, Y_2, \ldots, Y_n)\) at \(\bs x\) has the form \[f(x_{j_0}) f(x_{j_1}) \cdots f(x_{j_k})\] This makes sense: the only sources of randomness in the sequence \(\bs Y\) are the initial term and the rebirth terms following the \(0\)s. For the density function \(f_n\) of \(Y_n\), we use the general formula from Section 1.4: \(f_n(x) = R_{n - 1}(x) f(x)\) for \(x \in \N\). This leads to the following, each for \(x \in \N\): \begin{align*} f_1(x) &= f(x)\\ f_2(x) &= f(0) f(x) + f(x + 1)\\ f_3(x) &= f^2(0) f(x) + f(0) f(x + 1) + f(1) f(x) + f(x + 2) \end{align*} The next term is \begin{align*} f_4(x) = & f^3(0) f(x) + f^2(0) f(x + 1) + f(0) f(x + 2) + f(0) f(1) f(x) \\ & + f(1) f(0) f(x) + f(1) f(x + 1) + f(2) f(x) + f(x + 3) \end{align*} In general, \(f_n(x)\) has \(2^{n - 1}\) terms, one for each of the \(2^{n - 1}\) walks of length \(n - 1\) in \((\N, \rta)\) terminating in \(x\). Iterating in this way gives the recursive formula in the proposition.
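The recursion is easy to implement. The sketch below uses an assumed lifetime density with finite support (so the walk stays in a bounded set), and cross-checks \(f_2\) and \(f_3\) against the expanded expressions above:

```python
# an assumed lifetime density with finite support {0, 1, 2}
probs = [0.2, 0.3, 0.5]

def f(x):
    return probs[x] if x < len(probs) else 0.0

def density(n, M=10):
    """f_n on {0,...,M} via f_{j+1}(x) = sum_{k<j} f(k) f_{j-k}(x) + f(x+j)."""
    fs = {1: [f(x) for x in range(M + 1)]}
    for j in range(1, n):
        fs[j + 1] = [sum(f(k) * fs[j - k][x] for k in range(j)) + f(x + j)
                     for x in range(M + 1)]
    return fs[n]

# cross-check against the expansions of f_2 and f_3
f2, f3 = density(2), density(3)
for x in range(8):
    assert abs(f2[x] - (f(0) * f(x) + f(x + 1))) < 1e-12
    assert abs(f3[x] - (f(0) ** 2 * f(x) + f(0) * f(x + 1)
                        + f(1) * f(x) + f(x + 2))) < 1e-12

# each f_n is a probability density (the walk stays in {0, 1, 2} here)
assert abs(sum(density(6)) - 1) < 1e-9
```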

The app below is a simulation of the remaining life chain. The random variable \(X\) has the geometric distribution with parameter \(1 - p\), and this distribution is also invariant for the chain.