\(\newcommand{\P}{\mathbb{P}}\) \(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\R}{\mathbb{R}}\) \(\newcommand{\N}{\mathbb{N}}\) \(\newcommand{\ms}{\mathscr}\) \(\newcommand{\bs}{\boldsymbol}\) \(\newcommand{\rta}{\rightarrow}\) \(\newcommand{\upa}{\uparrow}\) \(\newcommand{\lfrta}{\leftrightarrow}\) \(\newcommand{\Lfrta}{\Leftrightarrow}\) \(\newcommand{\Rta}{\Rightarrow}\)

7. Induced Graphs

A measurable function from a measure space to another measure space with a graph leads to a graph on the domain space in a natural way. This section explores this setting in general and in a number of important special cases.

Introduction

Suppose that \((S, \ms S, \lambda)\) and \((T, \ms T, \mu)\) are \(\sigma\)-finite measure spaces and that \(\varphi\) is a measurable function from \(S\) onto \(T\). The function \(\varphi\) induces a measurable partition \(\ms P = \{S_t: t \in T\}\) of \(S\) where \[S_t = \varphi^{-1}\{t\} = \{x \in S: \varphi(x) = t\}, \quad t \in T\] Let \(\ms S_t\) denote the associated \(\sigma\)-algebra on \(S_t\), so that \[\ms S_t = \{A \in \ms S: A \subseteq S_t\}, \quad t \in T\]

To complete the basic setup, we need an assumption that links the measures \(\lambda\) and \(\mu\) and the mapping \(\varphi\) in a fundamental way.

We assume that there is a finite, positive measure \(\lambda_t\) on \((S_t, \ms S_t)\) for each \(t \in T\) so that \(t \mapsto \lambda_t(S_t)\) is measurable and \[\lambda(A) = \int_T \lambda_t(A \cap S_t) \, d\mu(t), \quad A \in \ms S\]

It's possible to give conditions on the measure spaces and the mapping to ensure that the basic assumption is satisfied, but such technical conditions are not worth the effort. As we will see, the basic assumption is satisfied in the important special cases that are of most interest to us—the discrete case studied below, and the norm graphs studied in Chapter 3. In the latter case, the graphs are on Euclidean spaces where the basic assumption is essentially the co-area formula. However, it's easy to see that the assumption places significant restrictions on the two measure spaces and the mapping. Naturally, there is an integral version of the basic assumption.

If \(f: S \to \R\) is measurable then \[\int_S f(x) \, d \lambda(x) = \int_T \int_{S_t} f(x) \, d\lambda_t(x) \, d \mu(t)\] assuming that the integrals exist.
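
The integral version is easy to check numerically in simple instances. Here is a minimal sketch (in Python, with hypothetical choices) for a special case studied later in this section: \(S = [0, \infty)\) with Lebesgue measure, \(T = \N\) with counting measure, \(\varphi(x) = \lfloor x \rfloor\), and \(f(x) = e^{-x}\).

```python
# A minimal numerical check of the disintegration identity, assuming
# S = [0, inf) with Lebesgue measure, T = N with counting measure,
# phi(x) = floor(x), and lambda_t = Lebesgue measure on S_t = [t, t + 1).
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x)  # a test integrand on S

lhs, _ = quad(f, 0, np.inf)                          # integral over S
rhs = sum(quad(f, t, t + 1)[0] for t in range(60))   # sum of integrals over S_t
print(lhs, rhs)                                      # both approximately 1
```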

From now on, \(\lambda_t\) is the standard reference measure on \((S_t, \ms S_t)\) for \(t \in T\), so in particular, density functions on \(S_t\) are with respect to this measure. Let \(\beta(t) = \lambda_t(S_t)\) for \(t \in T\), so that by assumption, \(\beta\) is a measurable function from \(T\) into \((0, \infty)\). The function \(\beta\) will play an important role, as we will see. As a first indication, recall that the measure \(\nu\) on \((T, \ms T)\) induced by \(\varphi\) is given by \[\nu(A) = \lambda[\varphi^{-1}(A)], \quad A \in \ms T\] The induced measure \(\nu\) is typically not the same as the reference measure \(\mu\), but under our basic assumption, they are closely related.

The induced measure \(\nu\) is absolutely continuous with respect to the reference measure \(\mu\), with density function \(\beta\).

Details:

Let \(A \in \ms T\). Then by the basic assumption, \[\nu(A) = \lambda[\varphi^{-1}(A)] = \int_T \lambda_t[\varphi^{-1}(A) \cap S_t] \, d\mu(t) = \int_T \lambda_t[\varphi^{-1}(A \cap \{t\})] \, d\mu(t)\] But \(\varphi^{-1}(A \cap \{t\}) = \varphi^{-1}\{t\} = S_t\) if \(t \in A\) and is \(\emptyset\) otherwise, so \[\nu(A) = \int_A \lambda_t(S_t) \, d\mu(t) = \int_A \beta(t) \, d\mu(t)\]

Since \(\beta\) is strictly positive, the reference measure \(\mu\) is also absolutely continuous with respect to the induced measure \(\nu\), with density function \(1 / \beta\). Hence, the two measures are equivalent, under the natural equivalence relation associated with absolute continuity. But again, the reference measure is usually more natural—counting measure in the discrete case and Lebesgue measure on Euclidean spaces, for example. Next is our fundamental definition.

Suppose that \((T, \rta)\) is a graph. The graph \((S, \Rta)\) induced by \((T, \rta)\) and the mapping \(\varphi\) is defined by \(x \Rta y\) if and only if \(\varphi(x) \rta \varphi(y)\) for \((x, y) \in S^2\).

Details:

We need to show that \((S, \Rta)\) is a valid graph. The function \((\varphi, \varphi)\) from \(S^2\) into \(T^2\) is measurable, and \[\{(x, y) \in S^2: x \Rta y\} = \{(x, y) \in S^2: \varphi(x) \rta \varphi(y)\}\] is the inverse image under \((\varphi, \varphi)\) of \(\{(u, v) \in T^2: u \rta v\}\).

The main point of this section is to see how results for the graph \((T, \rta)\) can be leveraged to obtain corresponding results for the induced graph \((S, \Rta)\). Going forward, we will use our usual notation for mathematical objects associated with the graphs \((S, \Rta)\) and \((T, \rta)\), but with the latter augmented by a circumflex.

Let \(\gamma_n\) denote the path function of order \(n \in \N\) for the graph \((S, \Rta)\). Then \[\gamma_n(x) = \int_{t_1 \rta t_2 \rta \cdots \rta t_n \rta \varphi(x)} \beta(t_1) \beta(t_2) \cdots \beta(t_n) \, d\mu^n(t_1, t_2, \ldots, t_n), \quad x \in S\]

Details:

As before, let \(\nu\) denote the measure on \((T, \ms T)\) induced by \(\varphi\). By the definitions, and by the change of variables theorem, \begin{align*} \gamma_n(x) &= \int_{S^n} \bs 1(x_1 \Rta x_2 \Rta \cdots \Rta x_n \Rta x) \, d\lambda^n(x_1, x_2, \ldots, x_n) \\ &= \int_{S^n} \bs 1[\varphi(x_1) \rta \varphi(x_2) \rta \cdots \rta \varphi(x_n) \rta \varphi(x)] \, d\lambda^n(x_1, x_2, \ldots, x_n) \\ &= \int_{T^n} \bs 1[t_1 \rta t_2 \rta \cdots \rta t_n \rta \varphi(x)] \, d\nu^n(t_1, t_2, \ldots, t_n) \end{align*} The result now follows from the fact that \(\nu\) has density \(\beta\) with respect to \(\mu\), so that \(d \nu^n(t_1, t_2, \ldots, t_n) = \beta(t_1) \beta(t_2) \cdots \beta(t_n) d \mu^n(t_1, t_2, \ldots, t_n)\).

In the special case that \(\beta\) is constant on \(T\), we have \(\gamma_n(x) = \beta^n \hat \gamma_n[\varphi(x)]\) for \(x \in S\), where \(\hat \gamma_n\) is the path function of order \(n\) for \((T, \rta)\).
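
In the discrete case studied below, the path function formula can be checked by brute force. The following sketch assumes a hypothetical complete three-vertex base graph with a constant block size \(\beta\), and verifies that \(\gamma_n(x) = \beta^n \hat \gamma_n[\varphi(x)]\).

```python
# A brute-force check of the path function formula, assuming a hypothetical
# discrete base graph (T, ->) on T = {0, 1, 2} with constant beta(t) = b,
# so that the integral over T^n reduces to a sum over paths.
import itertools
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])   # adjacency: t -> u iff A[t, u] == 1
b, n = 2.0, 3               # constant block size beta, path order n

def gamma(t):
    # direct evaluation of the sum over (t_1, ..., t_n) in T^n
    total = 0.0
    for path in itertools.product(range(3), repeat=n):
        edges = list(zip(path, path[1:])) + [(path[-1], t)]
        if all(A[u, v] for u, v in edges):
            total += b ** n
    return total

gamma_hat = np.linalg.matrix_power(A, n).sum(axis=0)  # path counts in (T, ->)
for t in range(3):
    print(gamma(t), b ** n * gamma_hat[t])            # agree, as claimed
```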

Probability

Suppose now that \(X\) is a random variable in \(S\), and let \(\hat X = \varphi(X)\), the associated random variable in \(T\). There is a trivial relationship between the reliability functions of \(X\) and \(\hat X\).

Let \(F\) and \(\hat F\) denote the reliability functions of \(X\) and \(\hat X\) for the graphs \((S, \Rta)\) and \((T, \rta)\), respectively. Then \[F(x) = \hat F[\varphi(x)], \quad x \in S\]

Details:

By definition, \[F(x) = \P(x \Rta X) = \P[\varphi(x) \rta \varphi(X)] = \P[\varphi(x) \rta \hat X] = \hat F[\varphi(x)], \quad x \in S\]

Equivalently, \(F(x) = \hat F(t)\) for \(t \in T\) and \(x \in S_t\), so \(F\) is constant on \(S_t\) for each \(t \in T\). We assume that \(\hat X\) is supported by \((T, \rta)\), so that \(\hat F(t) \gt 0\) for \(t \in T\). It then follows that \(X\) is supported by \((S, \Rta)\).

Suppose that \(X\) has density function \(f\). Then \(\hat X\) has density function \(\hat f\) given by \[\hat f(t) = \int_{S_t} f(x) \, d\lambda_t(x), \quad t \in T\]

Details:

Let \(A \in \ms T\). Then \[\P(\hat X \in A) = \P[X \in \varphi^{-1}(A)] = \int_{\varphi^{-1}(A)} f(x) \, d\lambda(x) = \int_S \bs 1[x \in \varphi^{-1}(A)] f(x) \, d\lambda(x)\] So by the integral version of the basic assumption, \[\P(\hat X \in A) = \int_T \int_{S_t} \bs 1[x \in \varphi^{-1}(A)] f(x) \, d\lambda_t(x) \, d\mu(t) = \int_T \int_S \bs 1[x \in \varphi^{-1}(A) \cap S_t] f(x) \, d\lambda_t(x) \, d\mu(t)\] But \(\varphi^{-1}(A) \cap S_t = \varphi^{-1}(A) \cap \varphi^{-1}\{t\} = \varphi^{-1}(A \cap \{t\})\) and this set is \(S_t\) if \(t \in A\) and is \(\emptyset\) otherwise. Hence \[\P(\hat X \in A) = \int_A \int_{S_t} f(x) \, d\lambda_t(x) \, d\mu(t)\]

For \(t \in T\), a conditional density of \(X\) given \(\hat X = t\) is defined by \[f(x \mid t) = \frac{f(x)}{\hat f(t)}, \quad x \in S_t\]

Details:

For \(A \in \ms S\), \begin{align*} & \int_T \hat f(t) \int_{A \cap S_t} f(x \mid t) \, d\lambda_t(x) \, d\mu(t) = \int_T \hat f(t) \int_{A \cap S_t} \frac{f(x)}{\hat f(t)} \, d\lambda_t(x) \, d\mu(t) \\ &= \int_T \int_{A \cap S_t} f(x) \, d\lambda_t(x) \, d\mu(t) = \int_A f(x) \, d\lambda(x) = \P(X \in A) \end{align*} where again we have used the integral version of the basic assumption. So it follows by definition that \[\P(X \in A \mid \hat X = t) = \int_A f(x \mid t) \, d\lambda_t(x), \quad A \in \ms S_t\]

Let \(r\) and \(\hat r\) denote the rate functions of \(X\) and \(\hat X\) for the graphs \((S, \Rta)\) and \((T, \rta)\), respectively. Then \[r(x) = f[x \mid \varphi(x)] \hat r[\varphi(x)], \quad x \in S\]

Details:

As usual, let \(f\) and \(\hat f\) denote density functions of \(X\) and \(\hat X\), and let \(F\) and \(\hat F\) denote the reliability functions of \(X\) and \(\hat X\) for \((S, \Rta)\) and \((T, \rta)\), respectively. Then \[r(x) = \frac{f(x)}{F(x)} = \frac{f(x)}{\hat f(t)} \frac{\hat f(t)}{\hat F(t)} = f(x \mid t) \hat r(t), \quad t \in T, \; x \in S_t \]

Equivalently, \(r(x) = f(x \mid t) \hat r(t)\) for \(t \in T\) and \(x \in S_t\). Since \(f(\cdot \mid t)\) is a probability density function on \(S_t\), it follows that \[\int_{S_t} r(x) \, d\lambda_t(x) = \hat r(t), \quad t \in T\] As usual, we are particularly interested in constant rate distributions for the induced graph \((S, \Rta)\). Here is the main result.

\(X\) has constant rate \(\alpha \in (0, \infty)\) for \((S, \Rta)\) if and only if

  1. \(\hat X\) has rate function \(\hat r\) for \((T, \rta)\) given by \(\hat r(t) = \alpha \beta(t)\) for \(t \in T\).
  2. The conditional distribution of \(X\) given \(\hat X = t\) is uniform on \(S_t\) for \(t \in T\).
Details:

As before, let \(\hat F\) denote the reliability function of \(\hat X\) for \((T, \rta)\), so that the reliability function \(F\) of \(X\) for \((S, \Rta)\) is given by \(F(x) = \hat F[\varphi(x)]\) for \(x \in S\). Suppose first that \(X\) has constant rate \(\alpha \in (0, \infty)\) for \((S, \Rta)\). Then \(f = \alpha F\) is a density of \(X\) and hence by the density result above, a density \(\hat f\) for \(\hat X\) is given by \[\hat f(t) = \int_{S_t} f(x) \, d\lambda_t(x) = \int_{S_t} \alpha \hat F[\varphi(x)] \, d\lambda_t(x) = \alpha \hat F(t) \beta(t), \quad t \in T\] Hence the rate function of \(\hat X\) for \((T, \rta)\) is \(\hat r = \alpha \beta\). Moreover, by the conditional density result above, for \(t \in T\), the conditional density of \(X\) given \(\hat X = t\) is \[f(x \mid t) = \frac{f(x)}{\hat f(t)} = \alpha \frac{F(x)}{\hat f(t)} = \alpha \frac{\hat F(t)}{\hat f(t)} = \frac{1}{\beta(t)}, \quad x \in S_t\] Conversely, suppose that (a) and (b) hold, so in particular a density \(\hat f\) of \(\hat X\) is given by \(\hat f(t) = \alpha \beta(t) \hat F(t)\) for \(t \in T\). Let \(f = \alpha F\) so that \(f(x) = \alpha \hat F[\varphi(x)]\) for \(x \in S\). We need to show that \(f\) is a density of \(X\). For \(A \in \ms S\), \begin{align*} \P(X \in A) &= \E[\P(X \in A \mid \hat X)] = \int_T \hat f(t) \P(X \in A \mid \hat X = t) \, d\mu(t) \\ &= \int_T \hat f(t) \frac{\lambda_t(A \cap S_t)}{\lambda_t(S_t)} \, d\mu(t) = \int_T \alpha \beta(t) \hat F(t) \frac{\lambda_t(A \cap S_t)}{\beta(t)} \, d\mu(t) \\ &= \int_T \alpha \hat F(t) \lambda_t(A \cap S_t) \, d\mu(t) \end{align*} But on the other hand, \(f(x) = \alpha \hat F(t)\) for \(x \in S_t\) and \(t \in T\), so by the integral version of the basic assumption, \[\int_A f(x) \, d\lambda(x) = \int_T \int_{A \cap S_t} f(x) \, d\lambda_t(x) \, d\mu(t) = \int_T \alpha \hat F(t) \lambda_t(A \cap S_t) \, d\mu(t)\]

Let \(K\) be the kernel for \((T, \rta)\) defined by \[K(u, v) = \beta(u) \bs 1(u \rta v), \quad (u, v) \in T^2\] Then condition (a) in the theorem above means that \(\hat f\) is a right eigenfunction of \(K\) corresponding to the eigenvalue \(1 / \alpha\), in the space \(\ms L_1(T)\).

Note also that if \(\beta\) is constant on \(T\), then \(X\) has constant rate \(\alpha\) for the graph \((S, \Rta)\) if and only if \(\hat X\) has constant rate \(\alpha \beta\) for the graph \((T, \rta)\).

Let \(H\) denote the entropy operator, and suppose again that \(X\) is a random variable on \(S\). Then \[H(X) = H(\hat X) + H(X \mid \hat X) \]

Details:

This is a general result in entropy, since the distribution of \(X\) completely determines the distribution of \((X, \hat X)\), but we give a separate proof in this context. \begin{align*} H(X) &= -\E[\ln f(X)] = -\E[\E[\ln f(X) \mid \hat X]] = -\E(\E[\ln(\hat f(\hat X) f(X \mid \hat X)) \mid \hat X]) \\ &= -\E[\ln \hat f(\hat X)] - \E(\E[\ln f(X \mid \hat X) \mid \hat X]) = H(\hat X) + H(X \mid \hat X) \end{align*}
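
The decomposition is easy to verify numerically in the discrete case. A minimal sketch, assuming a hypothetical five-point space \(S\) with counting measure and a two-block partition:

```python
# Numerical check of H(X) = H(X_hat) + H(X | X_hat), assuming counting
# measure on a hypothetical five-point space S with two blocks.
import numpy as np

phi = np.array([0, 0, 1, 1, 1])            # partition: S_0 = {0,1}, S_1 = {2,3,4}
f = np.array([0.1, 0.2, 0.3, 0.25, 0.15])  # a density on S
f_hat = np.array([f[phi == t].sum() for t in range(2)])

H_X = -np.sum(f * np.log(f))
H_Xhat = -np.sum(f_hat * np.log(f_hat))
H_cond = 0.0
for t in range(2):                         # average entropy of f(. | t)
    g = f[phi == t] / f_hat[t]
    H_cond += f_hat[t] * -np.sum(g * np.log(g))

print(H_X, H_Xhat + H_cond)                # equal, as claimed
```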

Random Walks

Let \(X\) be a random variable on \(S\) with probability density function \(f\) and with reliability function \(F\) for \((S, \Rta)\). Let \(\hat X = \varphi(X)\) be the corresponding variable on \(T\) with probability density function \(\hat f\) and with reliability function \(\hat F\) for \((T, \rta)\). We assume also that \(\hat f(t) \gt 0\) for almost all \(t \in T\).

Suppose that \(\bs Y = (Y_1, Y_2, \ldots)\) is the random walk on \((S, \Rta)\) associated with \(X\). Then \(\bs{\hat Y} = (\hat Y_1, \hat Y_2, \ldots)\), where \(\hat Y_n = \varphi(Y_n)\) for \(n \in \N_+\), is the random walk on \((T, \rta)\) associated with \(\hat X\). Let \(P\) and \(\hat P\) denote the transition densities for \(\bs Y\) and \(\hat{\bs Y}\), respectively. Then for \(n \in \N_+\), \[P^n(x, y) = \hat P^n[\varphi(x), \varphi(y)] f[y \mid \varphi(y)], \quad (x, y) \in S^2\]

Details:

By definition, \(Y_1\) has density function \(f\) so \(\hat Y_1\) has density function \(\hat f\). Moreover, \(\bs Y\) is a homogeneous, discrete-time Markov process with transition density \(P\) given by \(P(x, y) = f(y) / F(x)\) if \(x \Rta y\) (and 0 otherwise). That is, \[P(x, y) = \frac{f(y)}{F(x)} = \frac{f(y)}{\hat f[\varphi(y)]} \frac{\hat f[\varphi(y)]}{\hat F[\varphi(x)]} = f[y \mid \varphi(y)] \hat P[\varphi(x), \varphi(y)]\] The important point is that \(P(x, y)\) is constant for \(x \in S_t\) and \(t \in T\), so for \(k \in \N_+\) conditioning on \(Y_k = x\) is the same as conditioning on \(Y_k \in S_t\). Let \(B \in \ms T\), \(n \in \N_+\), and \((t_1, \ldots, t_{n-1}, t) \in T^n\). Then \begin{align*} \P(\hat Y_{n+1} \in B & \mid \hat Y_1 = t_1, \ldots, \hat Y_{n-1} = t_{n-1}, \hat Y_n = t)\\ & = \P[Y_{n+1} \in \varphi^{-1}(B) \mid Y_1 \in S_{t_1}, \ldots, Y_{n-1} \in S_{t_{n-1}}, Y_n \in S_t] \\ &= \P[Y_{n+1} \in \varphi^{-1}(B) \mid Y_n \in S_t] = \int_{\varphi^{-1}(B)} \frac{f(y)}{\hat F(t)} \, d\lambda(y) \\ &= \frac{1}{\hat F(t)} \int_B \int_{S_u} f(x) \, d\lambda_u(x) \, d\mu(u) = \frac{1}{\hat F(t)} \int_B \hat f(u) \, d\mu(u) = \int_B \hat P(t, u) \, d\mu(u) \end{align*} For the higher-order transition densities, we need the relation between the higher-order kernels. For \(n \in \N_+\), \begin{align*} R^{(n)}(x, y) &= \int_{x \Rta x_1 \Rta \cdots \Rta x_n \Rta y} r(x_1) \cdots r(x_n) \, d\lambda^n(x_1, \ldots, x_n) \\ &= \int_{\varphi(x) \rta \varphi(x_1) \rta \cdots \rta \varphi(x_n) \rta \varphi(y)} r(x_1) \cdots r(x_n) \, d\lambda^n(x_1, \ldots, x_n) \\ &= \int_{\varphi(x) \rta t_1 \rta \cdots \rta t_n \rta \varphi(y)} \int_{S_{t_1} \times \cdots \times S_{t_n}} r(x_1) \cdots r(x_n) \, d(\lambda_{t_1} \cdots \lambda_{t_n})(x_1, \ldots, x_n) \, d\mu^n(t_1, \ldots, t_n) \\ &= \int_{\varphi(x) \rta t_1 \rta \cdots \rta t_n \rta \varphi(y)} \hat r(t_1) \cdots \hat r(t_n) \, d\mu^n(t_1, \ldots, t_n) = \hat R^{(n)}[\varphi(x), \varphi(y)], \quad (x, y) \in S^2 \end{align*} Hence \begin{align*} P^n(x, y) &= R^{(n)}(x, y) P(x, y) = \hat R^{(n)}[\varphi(x), \varphi(y)] \hat P[\varphi(x), \varphi(y)] f[y \mid \varphi(y)] \\ &= \hat P^n[\varphi(x), \varphi(y)] f[y \mid \varphi(y)], \quad (x, y) \in S^2 \end{align*}

Note in particular that \[\int_{S_t} P^n(x, y) \, d\lambda_t(y) = \hat P^n[\varphi(x), t], \quad x \in S, \, t \in T\] The next two corollaries give more information about the connections between the random walks \(\bs Y\) and \(\bs{\hat Y}\).
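
As a quick sanity check of the theorem before moving on, the following sketch assumes a hypothetical two-block partition with counting measure and a strongly connected base graph, and verifies the identity for \(n = 4\):

```python
# A finite check of P^n(x, y) = P_hat^n[phi(x), phi(y)] f[y | phi(y)],
# assuming counting measure and a hypothetical two-block partition.
import numpy as np

edges_T = {(0, 1), (1, 0), (1, 1)}         # strongly connected graph on T = {0, 1}
phi = np.array([0, 0, 1, 1, 1])            # S = {0,...,4}: S_0 = {0,1}, S_1 = {2,3,4}
f = np.array([0.1, 0.2, 0.3, 0.25, 0.15])  # a positive density on S

f_hat = np.array([f[phi == t].sum() for t in range(2)])
F_hat = np.array([sum(f_hat[v] for (u, v) in edges_T if u == t) for t in range(2)])

P = np.array([[f[y] / F_hat[phi[x]] if (phi[x], phi[y]) in edges_T else 0.0
               for y in range(5)] for x in range(5)])
P_hat = np.array([[f_hat[v] / F_hat[u] if (u, v) in edges_T else 0.0
                   for v in range(2)] for u in range(2)])

n = 4
Pn = np.linalg.matrix_power(P, n)
Pn_hat = np.linalg.matrix_power(P_hat, n)
cond = f / f_hat[phi]                      # f[y | phi(y)]
check = np.array([[Pn_hat[phi[x], phi[y]] * cond[y] for y in range(5)]
                  for x in range(5)])
print(np.allclose(Pn, check))              # True
```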

Consider again the random walks \(\bs Y\) and \(\bs{\hat Y}\) and fix \(n \in \N_+\).

  1. Let \(g_n\) and \(\hat g_n\) denote the density functions of \((Y_1, Y_2, \ldots, Y_n)\) and \((\hat Y_1, \hat Y_2, \ldots, \hat Y_n)\), respectively. For \((x_1, x_2, \ldots, x_n) \in S^n\), \[g_n(x_1, x_2, \ldots, x_n) = f[x_1 \mid \varphi(x_1)] f[x_2 \mid \varphi(x_2)] \cdots f[x_n \mid \varphi(x_n)] \hat g_n[\varphi(x_1), \varphi(x_2), \ldots, \varphi(x_n)]\]
  2. Let \(f_n\) and \(\hat f_n\) denote the density functions of \(Y_n\) and \(\hat Y_n\), respectively. Then \[f_n(x) = f[x \mid \varphi(x)] \hat f_n[\varphi(x)], \quad x \in S\]
Details:
  1. Recall that \[g_n(x_1, x_2, \ldots, x_n) = r(x_1) r(x_2) \cdots r(x_{n-1}) f(x_n), \quad (x_1, x_2, \ldots, x_n) \in S^n\] The result then follows since \(r(x) = f[x \mid \varphi(x)] \hat r[\varphi(x)]\) and \(f(x) = f[x \mid \varphi(x)] \hat f[\varphi(x)]\) for \(x \in S\).
  2. We need the relation between the cumulative rate functions of order \(n \in \N_+\). The same argument used to show that \(R^{(n)}(x, y) = \hat R^{(n)}[\varphi(x), \varphi(y)]\) for \((x, y) \in S^2\) shows that \(R_n(x) = \hat R_n[\varphi(x)]\) for \(x \in S\). So now \[f_n(x) = R_{n-1}(x) f(x) = \hat R_{n-1}[\varphi(x)] f[x \mid \varphi(x)] \hat f[\varphi(x)] = f[x \mid \varphi(x)] \hat f_n[\varphi(x)], \quad x \in S\]

In particular, \begin{align*} \int_{S_{t_1} \times \cdots \times S_{t_n}} g_n(x_1, \ldots, x_n) \, d(\lambda_{t_1} \cdots \lambda_{t_n})(x_1, \ldots, x_n) &= \hat g_n(t_1, \ldots, t_n), \quad (t_1, \ldots, t_n) \in T^n \\ \int_{S_t} f_n(x) \, d\lambda_t(x) &= \hat f_n(t), \quad t \in T \end{align*}

Consider again the random walks \(\bs Y = (Y_1, Y_2, \ldots)\) and \(\bs{\hat Y} = (\hat Y_1, \hat Y_2, \ldots)\).

  1. \(\bs Y\) is a conditionally independent sequence given \(\bs{\hat Y}\).
  2. For \(n \in \N_+\) and \(t \in T\), the conditional distribution of \(Y_n\) given \(\hat Y_n = t\) has density function \(f(\cdot \mid t)\).
Details:

The results follow directly from the previous corollary.

  1. Let \(n \in \N_+\) and \((t_1, t_2, \ldots, t_n) \in T^n\). Then the conditional density of \((Y_1, Y_2, \ldots, Y_n)\) given \(\{\hat Y_1 = t_1, \hat Y_2 = t_2, \ldots, \hat Y_n = t_n\}\) is \begin{align*} (x_1, x_2, \ldots, x_n) &\mapsto \frac{g_n(x_1, x_2, \ldots, x_n)}{\hat g_n(t_1, t_2, \ldots, t_n)} \\ &= f(x_1 \mid t_1) f(x_2 \mid t_2) \cdots f(x_n \mid t_n), \quad (x_1, x_2, \ldots, x_n) \in S_{t_1} \times S_{t_2} \times \cdots \times S_{t_n} \end{align*} So by the basic factorization theorem, it follows that \(Y_1, Y_2, \ldots, Y_n\) are conditionally independent given \(\{\hat Y_1 = t_1, \hat Y_2 = t_2, \ldots, \hat Y_n = t_n\}\) and that \(Y_k\) has conditional density function \(f(\cdot \mid t_k)\).
  2. This does not follow immediately from (a) since we are conditioning only on \(\hat Y_n\), but the proof is just as easy. For \(t \in T\), the conditional density of \(Y_n\) given \(\hat Y_n = t\) is \[x \mapsto \frac{f_n(x)}{\hat f_n(t)} = f(x \mid t), \quad x \in S_t\]

The following theorem gives the connection for the point processes \(\bs N\) and \(\bs {\hat N}\) associated with the random walks \(\bs Y\) and \(\bs{\hat Y}\).

Define the measure \(\eta\) on \((T, \ms T)\) by \(\eta(B) = \E(\hat N_B)\) for \(B \in \ms T\). Then \[\E(N_A) = \int_T \P(X \in A \cap S_t \mid \hat X = t) \, d\eta(t), \quad A \in \ms S\]

Details:

Note that \begin{align*} \E(N_A) &= \sum_{n=1}^\infty \P(Y_n \in A) = \sum_{n=1}^\infty \int_A f_n(x) \, d\lambda(x) \\ &= \sum_{n=1}^\infty \int_T \int_{A \cap S_t} f_n(x) \, d\lambda_t(x) \, d\mu(t) = \sum_{n=1}^\infty \int_T \int_{A \cap S_t} f(x \mid t) \hat f_n(t) \, d\lambda_t(x) \, d\mu(t) \\ &= \sum_{n=1}^\infty \int_T \hat f_n(t) \int_{A \cap S_t} f(x \mid t) \, d\lambda_t(x) \, d\mu(t) = \sum_{n=1}^\infty \int_T \hat f_n(t) \P(X \in A \cap S_t \mid \hat X = t) \, d\mu(t) \\ &= \int_T \P(X \in A \cap S_t \mid \hat X = t) \, d\eta(t) \end{align*} since \(t \mapsto \sum_{n=1}^\infty \hat f_n(t)\) is the density of \(\eta\) with respect to \(\mu\).

In the case that \(X\) has constant rate \(\alpha \in (0, \infty)\) for the graph \((S, \Rta)\), the random walk results simplify significantly since \(f(x \mid t) = 1 / \beta(t)\) for \(x \in S_t\), the density function of the uniform distribution on \(S_t\) for \(t \in T\). Here is a summary of the results, using the same notation as above.

Suppose that \(X\) has constant rate \(\alpha \in (0, \infty)\), and that \(\bs Y = (Y_1, Y_2, \ldots)\) and \(\bs{\hat Y} = (\hat Y_1, \hat Y_2, \ldots)\) are the random walks on \((S, \Rta)\) and \((T, \rta)\) corresponding to \(X\) and \(\hat X\) respectively.

  1. For \(n \in \N_+\), \[P^n(x, y) = \frac{\hat P^n[\varphi(x), \varphi(y)]} {\beta[\varphi(y)]}, \quad (x, y) \in S^2\]
  2. For \(n \in \N_+\) \[g_n(x_1, x_2, \ldots, x_n) = \frac{\hat g_n[\varphi(x_1), \varphi(x_2), \ldots, \varphi(x_n)]}{\beta[\varphi(x_1)] \beta[\varphi(x_2)] \cdots \beta[\varphi(x_n)]}, \quad (x_1, x_2, \ldots, x_n) \in S^n\]
  3. For \(n \in \N_+\), \[f_n(x) = \frac{\hat f_n[\varphi(x)]} {\beta[\varphi(x)]}, \quad x \in S\]
  4. Given \(\bs{\hat Y}\), the random walk \(\bs Y\) is a sequence of independent variables, and given \(\hat Y_n = t\), \(Y_n\) is uniformly distributed on \(S_t\) for each \(n \in \N_+\) and \(t \in T\).
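
Part (d) suggests a two-stage simulation method: run the base walk on \((T, \rta)\) and then, independently at each step, draw a point uniformly from the current block. A minimal sketch, with a hypothetical base graph, blocks, and density \(\hat f\):

```python
# A two-stage simulation sketch based on part (d): run the base walk on
# (T, ->), then draw each Y_n uniformly from the block S_{Y_hat_n}.
# The graph, blocks, and f_hat below are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
blocks = {0: [0, 1], 1: [2, 3, 4]}     # S_t for t in T = {0, 1}
edges = {(0, 1), (1, 0), (1, 1)}       # base graph (T, ->)
f_hat = np.array([0.4, 0.6])           # density of X_hat on T

F_hat = np.array([sum(f_hat[v] for (u, v) in edges if u == t) for t in range(2)])
P_hat = np.array([[f_hat[v] / F_hat[u] if (u, v) in edges else 0.0
                   for v in range(2)] for u in range(2)])

def lifted_walk(n):
    t = rng.choice(2, p=f_hat)             # Y_hat_1 has density f_hat
    ys = []
    for _ in range(n):
        ys.append(rng.choice(blocks[t]))   # uniform on S_t, given Y_hat_n = t
        t = rng.choice(2, p=P_hat[t])      # base walk transition
    return ys

print(lifted_walk(10))
```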

The Discrete Case

Suppose that \((S, \ms S, \lambda)\) is a general \(\sigma\)-finite measure space as above, but that the second measure space \((I, \ms P(I), \#)\) is discrete, so that \(I\) is countable, and as usual, \(\ms P(I)\) is the collection of all subsets of \(I\), and the reference measure \(\#\) is counting measure. Once again, \(\varphi\) is a measurable function from \(S\) onto \(I\), so that \(\ms P = \{S_i: i \in I\}\) is a countable, measurable partition of \(S\). We assume that \(\beta(i) = \lambda(S_i) \in (0, \infty)\) for \(i \in I\), and this in turn guarantees the basic assumption, with \(\lambda_i\) simply \(\lambda\) restricted to \((S_i, \ms S_i)\) for \(i \in I\). That is, \[\lambda(A) = \sum_{i \in I} \lambda(A \cap S_i) = \sum_{i \in I} \lambda_i(A \cap S_i), \quad A \in \ms S\] To complete the setup, \((I, \rta)\) is a discrete graph and \((S, \Rta)\) the corresponding induced graph. The general results in the subsections above simplify, with integrals over \(T\) replaced by sums over \(I\). In particular, the path function \(\gamma_n\) of order \(n \in \N_+\) is given by \[\gamma_n(x) = \sum_{i_1 \rta i_2 \rta \cdots \rta i_n \rta \varphi(x)} \beta(i_1) \beta(i_2) \cdots \beta(i_n), \quad x \in S\]
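
In matrix terms, if \(M(i, j) = \beta(i) \bs 1(i \rta j)\) for \((i, j) \in I^2\), then the path sum above is a column sum of \(M^n\): \(\gamma_n(x) = \sum_{i_1 \in I} M^n(i_1, i)\) for \(x \in S_i\). A minimal sketch with hypothetical block sizes:

```python
# The discrete path sum as matrix algebra: with M(i, j) = beta(i) 1(i -> j),
# gamma_n on block i is the i-th column sum of M^n. The graph and block
# sizes below are hypothetical.
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])          # adjacency of (I, ->)
beta = np.array([1.5, 2.0, 0.5])   # block sizes lambda(S_i)
M = beta[:, None] * A              # M[i, j] = beta(i) * 1(i -> j)

n = 3
gamma_n = np.linalg.matrix_power(M, n).sum(axis=0)  # gamma_n(x) for x in S_i
print(gamma_n)
```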

Suppose now that \(X\) is a random variable in \(S\) with density function \(f\). The corresponding index variable \(\hat X = \varphi(X)\) takes values in \(I\) and has density function \(\hat f\) given by \[\hat f(i) = \P(\hat X = i) = \P(X \in S_i) = \int_{S_i} f(x) \, d\lambda(x), \quad i \in I\]

Suppose that \((I, \rta)\) is a finite, strongly connected graph. Then there exists a unique constant rate distribution for the induced graph \((S, \Rta)\).

Details:

As with the general result in Section 5, this is just the Perron-Frobenius theorem. The matrix \(K\) defined above has a unique positive eigenvalue of multiplicity 1 (the largest eigenvalue in magnitude) and a corresponding eigenfunction that is strictly positive. The normalized eigenfunction \(\hat f\) is a probability density function on \(I\) that satisfies part (a) of the constant rate characterization above. Then \(f\) defined by \(f(x) = \hat f[\varphi(x)] / \beta[\varphi(x)]\) for \(x \in S\) is the probability density function of a constant rate distribution for \((S, \Rta)\).
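
Here is a numerical sketch of the construction, assuming a hypothetical strongly connected graph and block sizes; the Perron root of \(K\) is \(1 / \alpha\) and the normalized Perron eigenvector is \(\hat f\):

```python
# A sketch of the Perron-Frobenius construction for a finite, strongly
# connected (I, ->); the graph and block sizes are hypothetical.
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])            # adjacency of (I, ->), strongly connected
beta = np.array([1.0, 2.0, 3.0])     # beta(i) = lambda(S_i)
K = beta[:, None] * A                # K(i, j) = beta(i) 1(i -> j)

w, V = np.linalg.eig(K)
k = np.argmax(w.real)                # Perron root rho = 1 / alpha
rho = w[k].real
f_hat = np.abs(V[:, k].real)
f_hat /= f_hat.sum()                 # normalize to a density on I

alpha = 1 / rho                      # the constant rate
f_block = f_hat / beta               # f(x) = f_hat(i) / beta(i) on S_i
print(alpha, f_block)

# sanity check of condition (a): f_hat = alpha * K f_hat
print(np.allclose(f_hat, alpha * K @ f_hat))
```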

For the point processes, suppose that \(A \in \ms S\). Then \[\E(N_A) = \sum_{i \in I} \P(X \in A \cap S_i \mid X \in S_i) \E(\hat N_i)\]

Examples and Exercises

Equivalence Relations

Equivalence relations provide one of the simplest examples of induced graphs.

Suppose that the discrete graph is \((I, =)\), so that the only edges are loops at each vertex. The corresponding relation \(\equiv\) on \(S\) is given by \(x \equiv y\) if and only if \(x, \, y \in S_i\) for some \(i \in I\). So as the notation suggests, \(\equiv\) is the equivalence relation associated with the partition \(\ms P = \{S_i: i \in I\}\) (or equivalently the index function \(\varphi\)).

The sets in the partition \(\ms P\) are the equivalence classes and the relation \(\equiv\) is reflexive, symmetric, and transitive. By symmetry, all left and right objects are the same, so we can drop the adjectives. Except for the special symbols for the relations, we use the definitions, notation, and assumptions in the first three subsections. Many of the general results simplify considerably.

Of course \(S = \bigcup_{i \in I} S_i\) but also the relation \(\equiv\) is the set \(\bigcup_{i \in I} S_i^2\), and more generally the set of paths of length \(n \in \N_+\) is the set \(\bigcup_{i \in I} S_i^{n+1}\). For \(n \in \N\), the path function \(\gamma_n\) for \((S, \equiv)\) is given by \[\gamma_n(x) = \beta^n(i), \quad x \in S_i, \, i \in I \] So \(\gamma_n = \gamma^n\) for \(n \in \N\).

The path generating function \(\Gamma\) of \((S, \equiv)\) is given by \[\Gamma(x, t) = \frac{1}{1 - \beta(i) t}, \quad i \in I, \, x \in S_i, \, |t| \lt 1 / \beta(i)\]

Let \(L\) denote the adjacency kernel of \((S, \equiv)\). If \(n \in \N_+\) and \(u: S \to \R\) is measurable then \[L^n u(x) = \beta^{n-1}(i) \int_{S_i} u(y) \, d\lambda(y), \quad i \in I, \, x \in S_i\] assuming as usual that the integral exists.

Suppose now that \(X\) is a random variable in \(S\), and as usual, let \(\hat X\) denote the corresponding index variable in \(I\).

  1. For the reliability functions of the graphs, note that \(\hat F = \hat f\) and hence \(F(x) = \hat f(i)\) for \(i \in I\) and \(x \in S_i\).
  2. For the rate functions of the graphs, note that \(\hat r(i) = 1\) for \(i \in I\) and \(r(x) = f(x) / \hat f(i) = f(x \mid i)\) for \(i \in I\) and \(x \in S_i\).

So every distribution on \(I\) has constant rate 1 for \((I, =)\). Constant rate distributions for \((S, \equiv)\) exist if and only if the equivalence classes have the same size.

A constant rate distribution for \((S, \equiv)\) exists if and only if \(\beta\) is constant on \(I\). In this case, the rate constant is \(\alpha = 1 / \beta\) and the constant rate distribution is uniform on \(S_i\) for each \(i \in I\).

Details:

This follows directly from the main constant rate result above, since condition (a) becomes \(1 = \alpha \beta(i)\) for \(i \in I\).

So if \(\beta\) is constant on \(I\) then there is an infinite family of distributions with constant rate \(\alpha = 1 / \beta\), parametrized by \(\hat f\). That is, given a density function \(\hat f\) on \(I\), the function \(f\) on \(S\) defined by \(f(x) = \hat f(i) / \beta\) for \(x \in S_i\) and \(i \in I\) is a density function that has constant rate \(1 / \beta\) for \((S, \equiv)\). Random walks on \((S, \equiv)\) are particularly simple. Once again, suppose that \(X\) is a random variable on \(S\) and that \(\hat X\) is the corresponding index variable.

Suppose that \(\bs Y = (Y_1, Y_2, \ldots)\) is the random walk on \((S, \equiv)\) associated with \(X\), and that \(\bs N = \{N_A: A \in \ms S\}\) is the corresponding point process. Then

  1. The transition density \(P\) of \(\bs Y\) is given by \(P(x, y) = f(y \mid i)\) for \(i \in I\) and \(x, \, y \in S_i\) and more generally \(P^n = P\) for \(n \in \N_+\).
  2. For \(n \in \N_+\), \((Y_1, Y_2, \ldots, Y_n)\) has density function \(g_n\) given by \[g_n(x_1, x_2, \ldots, x_n) = f(x_1) f(x_2 \mid i) \cdots f(x_n \mid i), \quad i \in I, \, (x_1, x_2, \ldots, x_n) \in S_i^n\]
  3. For \(n \in \N_+\), \(Y_n\) has density function \(f_n = f\).
  4. Given \(Y_1 \in S_i\) for \(i \in I\), the random walk \(\bs Y\) is a sequence of independent variables with common density function \(f(\cdot \mid i)\).
  5. For \(A \in \ms S\), \(\E(N_A) = \infty\) if \(\lambda(A) \gt 0\) and \(\E(N_A) = 0\) if \(\lambda(A) = 0\).
Details:

The results follow from the results in the subsection on Random Walks above, since the corresponding random walk \(\bs{\hat Y} = (\hat Y_1, \hat Y_2, \ldots)\) on \((I, =)\) is constant: \(\hat Y_n = \hat Y_1\) for \(n \in \N_+\).

Suppose now that \(\bs Y = (Y_1, Y_2, \ldots)\) is the random walk on \((S, \equiv)\) associated with a random variable \(X\) that has constant rate \(\alpha \in (0, \infty)\).

  1. The transition density \(P\) simplifies to \(P(x, y) = \alpha\) for \(x, \, y \in S_i\) and \(i \in I\).
  2. For \(n \in \N_+\), the density function \(g_n\) of \((Y_1, Y_2, \ldots, Y_n)\) simplifies to \(g_n(x_1, x_2, \ldots, x_n) = \alpha^n \hat f(i)\) for \((x_1, x_2, \ldots, x_n) \in S_i^n\) and \(i \in I\).
  3. Equivalently, given \(Y_1 \in S_i\), the random walk \(\bs Y\) is a sequence of independent variables, each uniformly distributed on \(S_i\).
Details:

Note that \(\beta(i) = 1 / \alpha\) for \(i \in I\). The density function \(f\) of \(X\) is constant on the equivalence classes, and \(f(x) = \alpha \hat f(i)\) for \(x \in S_i\) and \(i \in I\). The distribution in (b) is a mixture of uniform distributions on \(S_i^n\) for \(i \in I\), with mixture density \(\hat f\).

The following exercises explore some very simple special cases:

Consider the general measure space \((S, \ms S, \lambda)\) where the only set in the partition is \(S\) itself, so that \(\equiv\) is the set \(S^2\) and is the complete relation. This corresponds to the discrete graph \((I, =)\) where \(I\) is a singleton. Let \(\beta = \lambda(S) \in (0, \infty)\).

  1. Give the path function \(\gamma_n\) of order \(n \in \N_+\) in closed form.
  2. Give the path generating function \(\Gamma\) in closed form.
  3. Identify the constant rate distribution.
  4. Characterize the random walk on \((S, \equiv)\) associated with a distribution on \(S\).
Details:
  1. The path function \(\gamma_n\) of order \(n \in \N\) for \((S, \equiv)\) is constant on \(S\): \(\gamma_n = \beta^n\).
  2. The path generating function \(\Gamma\) for \((S, \equiv)\) is given by \(\Gamma(x, t) = 1 / (1 - \beta t)\) for \(x \in S\) and \(|t| \lt 1 / \beta\).
  3. The reliability function \(F\) of a random variable \(X\) for \((S, \equiv)\) is constant on \(S\): \(F = 1\). If \(X\) has density \(f\) then the rate function of \(X\) for \((S, \equiv)\) is \(r = f\). So the only distribution with constant rate for \((S, \equiv)\) is the uniform distribution on \(S\), with rate \(1 / \beta\).
  4. The random walk \(\bs Y = (Y_1, Y_2, \ldots)\) on \((S, \equiv)\) associated with a density function \(f\) is a sequence of independent variables, each with density function \(f\).

At the opposite extreme, suppose that the underlying measure space \((S, \ms S, \lambda)\) is discrete and that the equivalence relation is simply equality \(=\), so that the graphs \((S, =)\) and \((I, =)\) are essentially the same.

  1. Give the path function \(\gamma_n\) of order \(n \in \N_+\) in closed form.
  2. Give the path generating function \(\Gamma\) in closed form.
  3. Identify the constant rate distribution.
  4. Characterize the random walk on \((S, =)\) associated with a distribution on \(S\).
Details:
  1. The path function of order \(n \in \N_+\) for \((S, =)\) is constant on \(S\): \(\gamma_n = 1\).
  2. The path generating function \(\Gamma\) for \((S, =)\) is given by \(\Gamma(x, t) = 1 / (1 - t)\) for \(x \in S\) and \(t \in (-1, 1)\).
  3. If \(X\) is a random variable in \(S\) with density function \(f\) then the reliability function \(F\) of \(X\) for \((S, =)\) is \(F = f\). So every distribution on \(S\) has constant rate 1 for \((S, =)\).
  4. The random walk on \((S, =)\) associated with the distribution of a random variable \(X\) has the form \((X, X, \ldots)\).

Suppose that \(S = [0, \infty)\) with the usual Borel \(\sigma\)-algebra \(\ms S\) and Lebesgue measure \(\lambda\). Let \(S_k = [k, k + 1)\) for \(k \in \N\).

  1. Give the path function \(\gamma_n\) of order \(n \in \N_+\) in closed form.
  2. Give the path generating function \(\Gamma\) in closed form.
  3. Identify the constant rate distribution.
Details:

The induced equivalence relation \(\equiv\) corresponds to the discrete graph \((\N, =)\) and the index function \(\varphi\) is given by \(\varphi(x) = \lfloor x \rfloor\) for \(x \in [0, \infty)\).

  1. The path function \(\gamma_n\) on \((S, \equiv)\) of order \(n \in \N_+\) is constant on \([0, \infty)\): \(\gamma_n = 1\).
  2. The path generating function \(\Gamma\) on \((S, \equiv)\) is given by \(\Gamma(x, t) = 1 / (1 - t)\) for \(x \in [0, \infty)\) and \(t \in (-1, 1)\).
  3. Suppose that \(\hat f\) is a probability density function on \(\N\). Define \(f(x) = \hat f(k)\) for \(x \in [k, k + 1)\). Then \(f\) is a density on \([0, \infty)\) which has constant rate \(1\) for \(([0, \infty), \equiv)\).
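
As a concrete check of part (c), the following sketch assumes a geometric density \(\hat f(k) = (1 - p) p^k\) on \(\N\) (a hypothetical choice), samples \(X\) by drawing the block from \(\hat f\) and then a uniform position within the block, and compares \(f\) with an empirical estimate of \(F\) on a block:

```python
# Empirical check of constant rate 1 for ([0, inf), ===) with the floor
# partition, assuming a hypothetical geometric f_hat(k) = (1 - p) p^k on N.
import numpy as np

rng = np.random.default_rng(0)
p = 0.5
K = rng.geometric(1 - p, size=100_000) - 1   # block index, geometric on {0, 1, ...}
X = K + rng.random(100_000)                  # uniform position within [K, K + 1)

k = 3
F_emp = np.mean(np.floor(X) == k)            # estimates F(x) for x in [k, k + 1)
f_val = (1 - p) * p ** k                     # f(x) on [k, k + 1)
print(f_val, F_emp)                          # approximately equal, so r = f / F = 1
```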

Complete Multipartite Graphs

Complete multipartite graphs are another simple example of induced graphs and are a bit more interesting.

Suppose that the discrete graph is \((I, \ne)\), the complete irreflexive graph on \(I\). The relation \(\lfrta\) on \(S\) induced by \((I, \ne)\) is given by \(x \lfrta y\) if and only if \(x \in S_i\) and \(y \in S_j\) for distinct \(i, \, j \in I\). The graph \((S, \lfrta)\) is the complete multipartite graph associated with the partition \(\ms P = \{S_i: i \in I\}\). In the special case that \(\#(I) = 2\), \((S, \lfrta)\) is a complete bipartite graph and in the case that \(\#(I) = 3\), \((S, \lfrta)\) is a complete tripartite graph.

Note that the discrete graph \((I, \ne)\) is the complement of the graph \((I, =)\) in the previous subsection on equivalence relations. The relation \(\lfrta\) is symmetric and irreflexive. In particular, all left and right objects are the same and so we can drop the adjectives. Except for the special symbols used for the relations, we will use the same definitions, notation, and assumptions as in the first three subsections.

The path function of order \(n\) is given by \[\gamma_n(x) = \sum_{i_1 \ne i_2 \ne \cdots \ne i_n \ne i} \beta({i_1}) \beta({i_2}) \cdots \beta({i_n}), \quad i \in I, \, x \in S_i\]

Details:

For \(n \in \N_+\), note that a path \((i_1, i_2, \ldots, i_{n+1})\) of length \(n\) in \((I, \ne)\) is just a sequence in \(I^{n+1}\) with \(i_{k+1} \ne i_k\) for \(k \in \{1, 2, \ldots, n\}\). So the result follows from the general path function formula for induced graphs above.

In particular, \[\gamma(x) = \sum_{j \ne i} \beta(j) = \lambda(S) - \beta(i), \quad i \in I, \, x \in S_i\] Since most of our theory depends on local finiteness of the graph, we make the following assumption:

Assume that \[\lambda(S) = \sum_{i \in I} \beta(i) \lt \infty\] so that \((S, \ms S, \lambda)\) is a finite measure space.

Of course, this holds automatically if \(I\) is finite. The path function and path generating function have simple, closed forms only in some special cases.

Suppose now that \(X\) is a random variable in \(S\). As usual, define the index variable \(\hat X = \varphi(X)\) in \(I\) with density function \(\hat f\) given by \[\hat f(i) = \P(\hat X = i) = \P(X \in S_i), \quad i \in I\]

  1. The reliability function \(\hat F\) of \(\hat X\) for \((I, \ne)\) is given by \[\hat F(i) = \sum_{j \ne i} \hat f(j) = 1 - \hat f(i), \quad i \in I\]
  2. The rate function \(\hat r\) of \(\hat X\) for \((I, \ne)\) is given by \[\hat r(i) = \frac{\hat f(i)}{1 - \hat f(i)}, \quad i \in I\]

Note that \(\hat r(i)\) is the odds of the event \(\{X \in S_i\}\) for \(i \in I\). It follows that the reliability function \(F\) of \(X\) for \((S, \lfrta)\) is given by \(F(x) = 1 - \hat f(i)\) for \(i \in I\) and \(x \in S_i\). The basic equation in part (a) of the constant rate characterization above, for a distribution on \((S, \lfrta)\) with constant rate \(\alpha \in (0, \infty)\), is \(\hat f(i) = \alpha \beta(i) [1 - \hat f(i)]\) for \(i \in I\), or equivalently, \[\hat f(i) = \frac{\alpha \beta(i)}{1 + \alpha \beta(i)}, \quad i \in I\] Here is our main result.

There exists a unique constant rate distribution for the complete multipartite graph \((S, \lfrta)\).

  1. The rate constant is the unique solution \(\alpha \in (0, \infty)\) of the equation \[\sum_{i \in I} \frac{\alpha \beta(i)}{1 + \alpha \beta(i)} = 1\]
  2. The density function \(f\) of the constant rate distribution is given by \[f(x) = \frac{\alpha}{1 + \alpha \beta(i)}, \quad i \in I, \, x \in S_i\]
Details:

The equation for the rate constant and the form of \(f\) follow from the discussion above. So we just need to show that the equation has a unique solution. Define \(\psi\) on \([0, \infty)\) by \(\psi(\alpha) = \sum_{i \in I} \alpha \beta(i) / [1 + \alpha \beta(i)]\). First we note that \(\psi(\alpha) \lt \infty\) for all \(\alpha \in (0, \infty)\). This is obvious of course if \(I\) is finite. In the case that \(I\) is countably infinite, we can take \(I = \N_+\). Since \(\sum_{i=1}^\infty \beta(i) \lt \infty\), it follows that \(\beta(i) \to 0\) as \(i \to \infty\). Comparing the series \(\sum_{i=1}^\infty \alpha \beta(i) / [1 + \alpha \beta(i)]\) with the convergent series \(\sum_{i=1}^\infty \beta(i)\) we have \[\frac{\alpha \beta(i) / [1 + \alpha \beta(i)]}{\beta(i)} = \frac{\alpha}{1 + \alpha \beta(i)} \to \alpha \text{ as } i \to \infty\] Hence \(\psi(\alpha) = \sum_{i=1}^\infty \alpha \beta(i) / [1 + \alpha \beta(i)] \lt \infty\) for \(\alpha \in [0, \infty)\). Next, since \(\alpha \mapsto \alpha \beta(i) / [1 + \alpha \beta(i)]\) is strictly increasing on \([0, \infty)\) for each \(i\), the function \(\psi\) is also strictly increasing on \([0, \infty)\). Moreover, \(\psi(0) = 0\) and by the monotone convergence theorem, \[\lim_{\alpha \to \infty} \psi(\alpha) = \sum_{i \in I} \lim_{\alpha \to \infty} \frac{\alpha \beta(i)}{1 + \alpha \beta(i)} = \sum_{i \in I} 1 = \#(I) \gt 1\] Also by the monotone convergence theorem, the function \(\psi\) is continuous. By the intermediate value theorem, there exists a unique \(\alpha \in (0, \infty)\) with \(\psi(\alpha) = 1\).
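
The rate constant rarely has a closed form, but since \(\psi\) is continuous and strictly increasing it is easy to approximate by bisection. A minimal sketch with hypothetical block sizes:

```python
# Solving sum_i alpha*beta(i) / (1 + alpha*beta(i)) = 1 for the rate
# constant by bisection; the block sizes beta are hypothetical.
import numpy as np

beta = np.array([1.0, 2.0, 0.5, 0.25])         # lambda(S_i), finite total
psi = lambda a: np.sum(a * beta / (1 + a * beta))

lo, hi = 0.0, 1.0
while psi(hi) < 1:                             # bracket the root
    hi *= 2
for _ in range(60):                            # bisection: psi is increasing
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if psi(mid) < 1 else (lo, mid)

alpha = (lo + hi) / 2
f_hat = alpha * beta / (1 + alpha * beta)      # block probabilities
print(alpha, f_hat, f_hat.sum())               # f_hat sums to 1
```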

Explicit results are difficult except in a few special cases. The case where the partition sets have the same size is particularly simple.

Suppose that \(\#(I) = k \in \{2, 3, \ldots\}\) and \(\beta_i = \beta \in (0, \infty)\) for \(i \in I\).

  1. Give the path function \(\gamma_n\) of order \(n \in \N_+\) in closed form.
  2. Give the path generating function \(\Gamma\) in closed form.
  3. Identify the constant rate distribution.
Details:
  1. The path function \(\gamma_n\) of order \(n \in \N_+\) for \((S, \lfrta)\) is given by \[\gamma_n(x) = [(k - 1) \beta]^n, \quad x \in S\]
  2. The path generating function \(\Gamma\) for \((S, \lfrta)\) is given by \[\Gamma(x, t) = \frac{1}{1 - (k - 1) \beta t}, \quad x \in S, \, |t| \lt \frac{1}{(k - 1) \beta} \]
  3. The uniform distribution on \(S\) is the unique constant rate distribution for \((S, \lfrta)\), with rate \(\alpha = 1 / [(k - 1) \beta]\).

Complete Bipartite Graphs

Another tractable case is when \(\#(I) = 2\) so that \((I, \ne)\) is simply the (undirected) path on two vertices, and \((S, \lfrta)\) is a complete bipartite graph. For the remainder of this subsection, suppose \(I = \{0, 1\}\), so the partition sets are \(S_0\) and \(S_1 = S_0^c\), with sizes \(\beta_0 = \lambda(S_0)\) and \(\beta_1 = \lambda(S_1)\), respectively.

The path function \(\gamma_n\) of order \(n \in \N\) for \((S, \lfrta)\) is given as follows for \(x \in S_i\) and \(i \in \{0, 1\}\): \begin{align*} \gamma_n(x) & = \beta_i^{n / 2} \beta_{1 - i}^{n / 2}, \quad n \text{ even} \\ \gamma_n(x) & = \beta_i^{(n - 1) / 2} \beta_{1 - i}^{(n + 1) / 2}, \quad n \text{ odd} \end{align*}

Details:

Note that a walk in \((S, \lfrta)\) must alternate between the sets \(S_i\) and \(S_{1 - i}\).

For the complete bipartite graph \((S, \lfrta)\),

  1. The rate constant is \(\alpha = 1 / \sqrt{\beta_0 \beta_1}\).
  2. The density \(f\) of the constant rate distribution is given by \begin{align*} f(x) &= \frac{1}{\beta_0 + \sqrt{\beta_0 \beta_1}}, \quad x \in S_0\\ f(x) &= \frac{1}{\beta_1 + \sqrt{\beta_0 \beta_1}}, \quad x \in S_1 \end{align*}
Details:

The equation for the rate constant in the general result above is \(\alpha \beta_0 / (1 + \alpha \beta_0) + \alpha \beta_1 / (1 + \alpha \beta_1) = 1\), which simplifies to \(\alpha^2 \beta_0 \beta_1 = 1\). Hence \(\alpha = 1 / \sqrt{\beta_0 \beta_1}\). The form of \(f\) follows from the general results.

Let \(\bs Y = (Y_1, Y_2, \ldots)\) denote the random walk on \((S, \lfrta)\) associated with the constant rate distribution.

  1. \(\bs Y\) has transition density \(P\) given by \(P(x, y) = 1 / \beta_1\) if \(x \in S_0, \, y \in S_1\) and \(P(x, y) = 1 / \beta_0\) if \(x \in S_1\), \(y \in S_0\).
  2. \(\bs Y\) has invariant probability density function \(h\) given by \(h(x) = 1 / (2 \beta_i)\) if \(x \in S_i\) and \(i \in \{0, 1\}\).
Details:
  1. This follows easily from the definition \(P(x, y) = f(y) / F(x)\) for \(x \lfrta y\), together with the form of the constant rate distribution above. The result also follows from the general theory in Section 5.
  2. The invariant probability density function is \(f^2\) normalized, so the result follows after a bit of algebra.

So for \(n \in \N_+\), if \(Y_n \in S_0\) then \(Y_{n+1}\) is uniformly distributed on \(S_1\) and if \(Y_n \in S_1\) then \(Y_{n+1}\) is uniformly distributed on \(S_0\).
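
A simulation sketch of this walk, assuming counting measure and hypothetical block sizes, confirms that the empirical occupation frequencies match the invariant density \(h\):

```python
# Simulation of the constant rate walk on a finite complete bipartite
# graph (counting measure, hypothetical block sizes): the walk alternates
# uniform draws between the blocks, and the empirical occupation
# frequencies approach h(x) = 1 / (2 * beta_i) for x in S_i.
import numpy as np

rng = np.random.default_rng(0)
m0, m1 = 2, 5                                 # beta_0 = 2, beta_1 = 5
S = [np.arange(m0), np.arange(m0, m0 + m1)]   # the blocks S_0 and S_1

n = 200_000
counts = np.zeros(m0 + m1)
i = 0                                         # block of the current state
for _ in range(n):
    i = 1 - i                                 # the walk alternates blocks
    y = rng.choice(S[i])                      # uniform on the new block
    counts[y] += 1

print(counts / n)    # approx 1/(2*m0) = 0.25 on S_0 and 1/(2*m1) = 0.10 on S_1
```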

For \(k \in \N_+\), the star of order \(k\) has a central vertex labeled 0 and \(k\) endpoints, labeled from 1 to \(k\). The star of order \(k\) is a complete bipartite graph with \(\beta_0 = 1\) and \(\beta_1 = k\).

For the star of order \(k \in \N_+\),

  1. The rate constant is \(\alpha = 1 / \sqrt{k}\)
  2. The constant rate distribution has density function \(f\) given by \begin{align*} f(0) & = \frac{\sqrt{k}}{k + \sqrt{k}} \\ f(x) & = \frac{1}{k + \sqrt{k}}, \quad x \in \{1, 2, \ldots, k\} \end{align*}
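
A quick Monte Carlo sketch for the star (with the hypothetical choice \(k = 4\)) checks the constant rate property \(f(0) = \alpha F(0)\) at the center:

```python
# The constant rate density on the star of order k, with a Monte Carlo
# check of f(0) = alpha * F(0), where F(0) = P(0 <-> X) = P(X != 0).
import numpy as np

rng = np.random.default_rng(0)
k = 4                                      # hypothetical order of the star
alpha = 1 / np.sqrt(k)
f = np.full(k + 1, 1 / (k + np.sqrt(k)))   # endpoints 1, ..., k
f[0] = np.sqrt(k) / (k + np.sqrt(k))       # center 0

X = rng.choice(k + 1, p=f, size=100_000)
F0_emp = np.mean(X != 0)                   # empirical reliability at the center
print(f[0], alpha * F0_emp)                # approximately equal
```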

Open the simulation of the constant rate distribution on the star of order \(k\). Vary \(k\) with the scrollbar and note the shape and location of the density function and the rate constant \(\alpha\). For various values of \(k\), run the simulation 1000 times and compare the empirical density function to the probability density function. The mean and variance are shown in the distribution graph and table, but these are largely meaningless since the labels on the vertices are arbitrary.

Open the simulation of the random walk on the star of order \(k\) associated with the constant rate distribution. Vary \(k\) with the scrollbar and note the shape and location of the invariant density function and the rate constant \(\alpha\). For various values of \(k\), run the simulation 1000 times and compare the empirical density function to the invariant density function.