\(\newcommand{\P}{\mathbb{P}}\) \(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\R}{\mathbb{R}}\) \(\newcommand{\N}{\mathbb{N}}\) \(\newcommand{\ms}{\mathscr}\) \(\newcommand{\bs}{\boldsymbol}\) \(\newcommand{\rta}{\rightarrow}\) \(\newcommand{\upa}{\uparrow}\) \(\newcommand{\lfrta}{\leftrightarrow}\)

3. Probability on Graphs

The Reliability Function

Definition

Our starting point is a measurable space \((S, \ms S)\). Recall that \(S^n\) is given the product \(\sigma\)-algebra \(\ms S^n\) for \(n \in \N_+\), and in the discrete case, when \(S\) is countable, \(\ms S = \ms P(S)\) is the collection of all subsets of \(S\). Graphs with base set \(S\) are assumed to be measurable. In this section we are interested in basic probability on this space as it relates to the structure of a graph. We assume that we have a probability space \((\Omega, \ms F, \P)\) in the background and that \(X\) is a random variable in \(S\).

The (right) reliability function \(F\) of \(X\) for the graph \((S, \rta)\) is the measurable function defined by \[F(x) = \P(x \rta X), \quad x \in S \]

Details:

By definition, \((x, y) \mapsto \bs{1}\{(x, y) \in S^2: x \rta y\}\) is measurable as a function from \(S^2\) to \(\{0, 1\}\). Hence \((x, \omega) \mapsto \bs{1}\{(x, \omega): x \rta X(\omega)\}\) is measurable as a function from \(S \times \Omega\) to \(\{0, 1\}\). In turn, this implies that \(x \mapsto \P(x \rta X)\) is measurable as a function from \(S\) to \([0, 1]\).

The term reliability function is used informally, since at this level of abstraction we are far removed from classical reliability theory. However, for a partial order graph \((S, \preceq)\) (our most important special case), the reliability function \(F\) has the more familiar form \[F(x) = \P(X \succeq x), \quad x \in S\] and in particular for the standard continuous graph \(([0, \infty), \le)\), the reliability function is the classical one: \[F(t) = \P(X \ge t), \quad t \in [0, \infty)\] In general, if it helps, you can think of the event \(\{x \rta X\}\) as success for random variable \(X\) at \(x \in S\), just as survival up to time \(t\) is success at \(t \in [0, \infty)\) for a random lifetime \(X\) in the classical setting. With this interpretation, \(F(x)\) is the probability of success at \(x \in S\). We usually assume that the graph supports \(X\) so that \(F(x) \gt 0\) for \(x \in S\).

Suppose that \(X\) and \(Y\) are random variables in \(S\) with reliability functions \(F\) and \(G\) for \((S, \rta)\), respectively. Then relative to \((S, \rta)\),

  1. \(X\) (right) dominates \(Y\) if \(F(x) \ge G(x)\) for \(x \in S\).
  2. \(X\) and \(Y\) are (right) equivalent if \(F(x) = G(x)\) for \(x \in S\).

For a partial order graph \((S, \preceq)\), dominance reduces to the usual definition of (first order) stochastic dominance: \(\P(X \succeq x) \ge \P(Y \succeq x)\) for all \(x \in S\). Clearly equivalence relative to \((S, \rta)\) really does define an equivalence relation on the collection of probability distributions on \(S\), and then dominance can be extended to a partial order on the equivalence classes. Our next result generalizes a theorem by Goldie and Resnick.

Suppose that the graph \((S, \rta)\) is anti-symmetric and that \(X\) is a random variable in \(S\) with reliability function \(F\). Let \(S_0 = \{x \in S: F(x) \lt 1\}\). Then \(\P(X \in S_0) = 1\).

Details:

Let \(C = S_0^c\) and let \(P\) denote the probability distribution of \(X\). For \(x \in C\), \(F(x) = \P(x \rta X) = 1\) so \(\P(\{X \in C\} \cap \{x \rta X\}) = P(C)\). Hence \[\int_C \P(\{X \in C\} \cap \{x \rta X\}) \, dP(x) = \int_C P(C) \, dP(x) = [P(C)]^2\] Equivalently \[\int_C \int_C L(x, y) \, dP(y) dP(x) = [P(C)]^2\] where as usual, \(L\) is the adjacency kernel of the graph. By Fubini's theorem and a change of variables, \[\int_C \int_C L(y, x) \, dP(y) dP(x) = [P(C)]^2\] Adding the last two equations we have \[2 [P(C)]^2 = \int _C \int_C [L(x, y) + L(y, x)] \, dP(y) dP(x)\] Since \(\rta\) is anti-symmetric, almost no \((x, y) \in S^2\) satisfies \(x \rta y\) and \(y \rta x\), so \(L(x, y) + L(y, x) \le 1\) for almost all \((x, y) \in S^2\). Hence we have \(2 [P(C)]^2 \le [P(C)]^2\) or \([P(C)]^2 \le 0\). Hence \(P(C) = 0\).

Suppose that \((S, \preceq)\) is a discrete, uniform partial order graph, and let \(\upa\) denote the covering relation. As above, let \(F\) denote the reliability function of \(X\) for \((S, \preceq)\), and for \(n \in \N\), let \(G_n\) denote the reliability function of \(X\) for \((S, \upa^n)\). Then \[F = \sum_{n=0}^\infty G_n\]

Details:

Recall that \(\upa^n\) is the \(n\)-fold composition power of \(\upa\) for \(n \in \N\), and in particular, \(\upa^0\) is simply the equality relation \(=\). Since \((S, \preceq)\) is uniform, \(\preceq\) is the disjoint union of \(\upa^n\) over \(n \in \N\) (as sets of ordered pairs). Hence \[F(x) = \P(x \preceq X) = \sum_{n=0}^\infty \P(x \upa^n X) = \sum_{n=0}^\infty G_n(x), \quad x \in S\]
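For a concrete check of this decomposition on the standard discrete graph \((\N, \le)\), where \(x \upa y\) means \(y = x + 1\) and hence \(x \upa^n y\) means \(y = x + n\), the sketch below (the geometric test distribution is our own choice, not from the text) verifies that \(F(3) = \sum_n G_n(3)\):

```python
# On (N, <=): G_n(x) = P(X = x + n), so F(x) = P(X >= x) = sum_n P(X = x + n).
p = 0.5
pmf = lambda k: (1 - p) * p ** k                 # geometric density on N
F = 1 - sum(pmf(y) for y in range(3))            # F(3) = P(X >= 3), via the complement
G_sum = sum(pmf(3 + n) for n in range(1000))     # truncated sum of G_n(3) = P(X = 3 + n)
assert abs(F - G_sum) < 1e-12
assert abs(F - p ** 3) < 1e-12                   # closed form: F(x) = p^x here
```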

Existence and Uniqueness

Recall that the (right) \(\sigma\)-algebra associated with the graph \((S, \rta)\) is the \(\sigma\)-algebra generated by the collection of (right) neighbor sets: \(\ms {A} = \sigma\{A_x: x \in S\}\) where \(A_x = \{y \in S: x \rta y\}\) for \(x \in S\). So by definition, the reliability function \(F\) of a probability distribution \(P\) on \(\ms A\) is simply the function that assigns probabilities to the right neighbor sets: \(F(x) = P(A_x)\) for \(x \in S\), and this is the reason for our interest in \(\ms A\) in the first place. In the general setting that we have here, \(F\) may or may not uniquely determine \(P\), so here is the appropriate definition:

The graph \((S, \rta)\) is (right) stochastic if the reliability function of a probability measure \(P\) on the associated \(\sigma\)-algebra \(\ms A\) uniquely determines \(P\). That is, if \(P\) and \(Q\) are probability measures on \(\ms A\) with the same reliability function \(F\) then \(P = Q\).

This really is a graph-theoretic concept, since the associated \(\sigma\)-algebra \(\ms A\) and the reliability function \(F\) depend only on the graph. However, usually the reference \(\sigma\)-algebra \(\ms S\) is of primary interest. When \(\ms A\) is a proper subset of \(\ms S\), then in general there is little chance that the reliability function \(F\) of a probability measure \(P\) on \(\ms S\) uniquely determines \(P\). In the general setting, the existence problem is also quite complicated: what conditions on a function \(F: S \to [0, 1]\) imply that \(F\) is the reliability function of a probability measure \(P\) on the associated \(\sigma\)-algebra \(\ms A\)? We will consider the existence and uniqueness problems for some special classes of graphs.

Suppose that \((S, \rta)\) is a graph and that the collection of neighbor sets \(\{A_x: x \in S\}\) is closed under intersection and that there exists a countable \(C \subseteq S\) such that \(\bigcup_{x \in C} A_x = S\). Then \((S, \rta)\) is stochastic.

Details:

This is simply Dynkin's \((\pi, \lambda)\) theorem rephrased in our setting.

Suppose that \((S, \preceq)\) is a discrete partial order graph that is also an upper semi-lattice. Then \((S, \preceq)\) is stochastic.

Details:

For \(x, \, y \in S\), note that \(\{t \in S: x \preceq t\} \cap \{t \in S: y \preceq t\} = \{t \in S: x \vee y \preceq t\}\). Trivially \(\bigcup_{x \in S} \{t \in S: x \preceq t\} = S\) so the result follows from the proposition above.

Density functions

Suppose now that \(\lambda\) is a fixed, \(\sigma\)-finite reference measure for \((S, \ms S)\). Probability density functions of random variables are with respect to \(\lambda\). As usual, in the discrete case, the reference measure is counting measure \(\#\) on \(\ms S = \ms P(S)\). Here is our first very simple result.

Suppose that \(X\) has density function \(f\) and has reliability function \(F\) for \((S, \rta)\). Then \(F = L f\) where \(L\) is the adjacency kernel of \((S, \rta)\).

Details:

Note that \[F(x) = \P(x \rta X) = \int_{x \rta y} f(y) \, d\lambda(y) = L f(x), \quad x \in S\]

So if \(L\), operating on the right on the space \(\ms L_1\), has a left inverse then the reliability function determines the density function. However, that condition is not necessary. Here is a more complete result.

Suppose again that \((S, \rta)\) is a graph with adjacency kernel \(L\). Then the following statements are equivalent:

  1. If \(f\) and \(g\) are probability density functions on \(S\) with the same reliability function \(F\) for \((S, \rta)\) then \(f = g\) almost everywhere, so that the distributions are the same.
  2. If \(h \in \ms L_1\), \(L h = 0\) and \(\int_S h \, d\lambda = 0\) then \(h = 0\) almost everywhere.
Details:

We prove that (b) implies (a) and that the negation of (b) implies the negation of (a). So suppose first that (b) holds and that \(f\) and \(g\) are probability density functions with the same reliability function \(F\), so that \(L f = L g\). Then \(f - g \in \ms L_1\), \(L(f - g) = 0\), \(\int_S (f - g) \, d \lambda = 0\). Hence \(f - g = 0\) almost everywhere and so \(f = g\) almost everywhere. Conversely, suppose that (b) does not hold, so that there exists \(h \in \ms L_1\) with \(L h = 0\), \(\int_S h \, d \lambda = 0\), and with \(h\) nonzero on a set of positive measure. Then \(\|h\|_1 \gt 0\) and hence \(f = |h| / \|h\|_1\) is a probability density function on \(S\). Moreover, \(f \ge h / \|h\|_1\) and strict inequality must hold on a set of positive measure since \(\int_S h \, d\lambda = 0\) (\(h\) must be positive on a set of positive measure and negative on a set of positive measure). Hence \(g := f - h / \|h\|_1\) differs from \(f\) on a set of positive measure. Note that \(g \ge 0\) and \[\int_S g \, d \lambda = \int_S f \, d \lambda - \frac{1}{\|h\|_1} \int_S h \, d\lambda = 1 - 0 = 1\] so \(g\) is a probability density function. Moreover, \[L g = L f - \frac{1}{\|h\|_1} L h = L f - 0 = L f\] Hence (a) does not hold.

For a finite graph, \(L\) is the adjacency matrix and the result can be restated very simply in terms of the spectrum of the graph.

Suppose that \((S, \rta)\) is a finite graph with adjacency matrix \(L\). Then a reliability function on \((S, \rta)\) uniquely determines the density function except when \(0\) is a (right) eigenvalue of \(L\) and there is a corresponding eigenfunction that sums to \(0\).
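For a finite graph, the condition can be checked mechanically from the adjacency matrix: the reliability function fails to determine the density exactly when the stacked linear system \(L h = 0\), \(\sum_x h(x) = 0\) has a nonzero solution. A sketch (the function name and the test graphs are our own, assuming NumPy):

```python
import numpy as np

def reliability_determines_density(L, tol=1e-9):
    """True iff no nonzero h exists with L @ h = 0 and sum(h) = 0."""
    n = L.shape[0]
    A = np.vstack([L, np.ones((1, n))])     # append the row enforcing sum(h) = 0
    return np.linalg.matrix_rank(A, tol=tol) == n

# total order on {0, 1, 2}: L is invertible, so the density is determined
total_order = np.array([[1, 1, 1], [0, 1, 1], [0, 0, 1]], dtype=float)
assert reliability_determines_density(total_order)

# undirected 4-cycle: h = (1, 0, -1, 0) satisfies L h = 0 and sums to 0
cycle = np.array([[0, 1, 0, 1], [1, 0, 1, 0],
                  [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
assert not reliability_determines_density(cycle)
```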

Examples that illustrate this result are given below. In the case of a discrete, partial order graph \((S, \preceq)\), the probability density function can sometimes be recovered from the reliability function via Möbius inversion as studied in Section 2. Recall that the Möbius kernel \(M\) is the inverse of the adjacency kernel \(L\).

Suppose that \((S, \preceq)\) is a discrete, left finite partial order graph and that \(f\) is a probability density function on \(S\) with reliability function \(F\) for \((S, \preceq)\). If \(\sum_{x \in S} F(x) \lt \infty\) then \(f = M F\) where \(M\) is the Möbius kernel of \((S, \preceq)\). That is, \[f(x) = \sum_{x \preceq y} M(x, y) F(y), \quad x \in S\]

Details:

This follows from the Möbius inversion formula in Section 2.
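On the standard discrete graph \((\N, \le)\), the Möbius kernel is \(M(x, x) = 1\), \(M(x, x + 1) = -1\), and \(0\) otherwise, so the inversion reduces to \(f(x) = F(x) - F(x + 1)\). A quick sketch (the geometric test distribution is our own choice):

```python
# Möbius inversion on (N, <=): f(x) = F(x) - F(x + 1).
# Note sum_x F(x) = 1 / (1 - p) < infinity, so the proposition applies.
p = 0.3
F = lambda x: p ** x                      # reliability function F(x) = P(X >= x)
f = lambda x: F(x) - F(x + 1)             # Möbius inversion
pmf = lambda x: (1 - p) * p ** x          # the known geometric density, for comparison
for x in range(10):
    assert abs(f(x) - pmf(x)) < 1e-12
```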

Suppose that \(f\) and \(g\) are probability density functions that have reliability functions \(F\) and \(G\) for \((S, \rta)\) respectively. If \(p \in (0, 1)\) then the density function \(h = p f + (1 - p) g\) has reliability function \(H = p F + (1 - p) G\) for \((S, \rta)\).

Details:

Let \(L\) denote the adjacency kernel of \((S, \rta)\) as before. Then \[H = L h = p L f + (1 - p) L g = p F + (1 - p) G\]

Rate Functions

Suppose again that \((S, \rta)\) is a measurable graph relative to the \(\sigma\)-finite measure space \((S, \ms S, \lambda)\) and that \(X\) is a random variable in \(S\) with density function \(f\) and reliability function \(F\). We assume that \(F(x) \gt 0\) for \(x \in S\). There is a natural definition for the rate function of \(X\).

The (right) rate function \(r\) of \(X\) for \((S, \rta)\) is defined by \[r(x) = \frac{f(x)}{F(x)}, \quad x \in S\]

As with the term reliability function, the term rate function is also used informally. The motivation comes again from reliability theory, when the graph is a partial order graph. In particular, for the standard continuous graph \(([0, \infty), \le)\), the rate function \(r\) is the classical failure rate function. The density function of \(X\) is determined only up to a set of measure 0, and so the same is true of the rate function. Trivially, the reliability function along with the rate function determines the distribution of \(X\), but the rate function on its own may or may not determine the distribution of \(X\). For the following definition, recall that the (left) walk function \(u_n\) of order \(n \in \N_+\) is given by \[u_n(x) = \lambda^n\{(x_1, x_2, \ldots, x_n): x_1 \rta x_2 \rta \cdots \rta x_n \rta x\}, \quad x \in S\] so that \(u_n(x)\) is the measure of the set of walks of length \(n\) terminating in \(x\).

Suppose that \(X\) has rate function \(r\) for the graph \((S, \rta)\) and that \(n \in \N_+\).

  1. The cumulative rate function \(R_n\) of order \(n\) is given by \[R_n(x) = \int_{x_1 \rta x_2 \rta \cdots \rta x_n \rta x} r(x_1) r(x_2) \cdots r(x_n)\, d\lambda^n(x_1, x_2, \ldots, x_n), \quad x \in S\]
  2. The average rate function \(\bar r_n\) of order \(n\) is defined by \[\bar r_n(x) = \frac{R_n(x)}{u_n(x)}, \quad x \in S, \, u_n(x) \ne 0\]

Note that if \(u_n(x) = 0\) for some \(x \in S\) and \(n \in \N_+\) then \(R_n(x) = 0\) since \(u_n(x)\) is the measure of the domain of integration in the definition of \(R_n(x)\). We will show later that \(R_n(x)\) is finite for almost all \(x \in S\), and so by a suitable choice of the density function \(f\), we can assume that \(R_n(x) \lt \infty\) for all \(x \in S\). Hence the definition of the average rate function in (b) makes sense. As before, the terms cumulative rate function and average rate function are used informally. For the standard graph \(([0, \infty), \le)\), the cumulative rate function \(R_1\) is the ordinary cumulative failure rate function and the average rate function \(\bar r_1\) is the average failure rate function. Aside from generalizing functions in classical reliability, the cumulative rate functions will turn out to be very useful in Section 4 on random walks. The following proposition gives a recursive method for computing the cumulative rate functions.

In the setting of the definition above, define \(R_0(x) = 1\) for \(x \in S\). Then for \(n \in \N\), \[R_{n + 1}(x) = \int_{t \rta x} R_n(t) r(t) \, d\lambda(t), \quad x \in S\]

Details:

By Fubini's theorem, we can write the multiple integral as an iterated integral: \begin{align*} R_{n + 1}(x) & = \int_{x_1 \rta \cdots \rta x_n \rta t \rta x} r(x_1) \cdots r(x_n) r(t) \, d\lambda^{n + 1}(x_1, \ldots, x_n, t) \\ & = \int_{t \rta x} \left[\int_{x_1 \rta \cdots \rta x_n \rta t} r(x_1) \cdots r(x_n) \, d\lambda^n(x_1, \ldots x_n)\right] r(t) \, d\lambda(t) \\ &= \int_{t \rta x} R_n(t) r(t) \, d\lambda(t), \quad x \in S \end{align*}
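As a numerical illustration of the recursion (the grid and the test distribution are our own choices): on the standard continuous graph \(([0, \infty), \le)\), an exponential distribution has constant rate \(r(t) = \alpha\), and iterating the recursion should reproduce \(R_n(x) = (\alpha x)^n / n!\).

```python
import math

# Iterate R_{n+1}(x) = integral over [0, x] of R_n(t) * alpha dt on a grid,
# using cumulative trapezoid steps, and compare with (alpha * x)^n / n!.
alpha, x_max, m = 2.0, 1.0, 100_000       # exponential rate, endpoint, grid size
h = x_max / m
R = [1.0] * (m + 1)                       # R_0 = 1 on the grid t_i = i * h
for n in range(1, 4):
    new_R = [0.0]                         # R_{n+1}(0) = 0
    for i in range(1, m + 1):
        new_R.append(new_R[-1] + 0.5 * h * alpha * (R[i - 1] + R[i]))
    R = new_R
    exact = (alpha * x_max) ** n / math.factorial(n)
    assert abs(R[-1] - exact) < 1e-6      # R_n(x_max) matches the closed form
```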

In terms of the adjacency operator \(L\), note that \(R_{n+1} = (r R_n) L\). We are particularly interested in the case where the rate function or the average rate function is constant; these will be studied in more detail in Section 5. If the relation is a partial order, the concepts of increasing and decreasing rate, and increasing and decreasing average rate, make sense. For the latter, we state the definition so that we avoid division by 0 problems. Once again, \(u_n\) is the left walk function of order \(n \in \N\).

Suppose that \((S, \preceq)\) is a partial order graph that supports \(X\), and that \(X\) has rate function \(r\) and cumulative rate function \(R_n\) of order \(n \in \N_+\) for \((S, \preceq)\).

  1. \(X\) has increasing rate if \(x \preceq y\) implies \(r(x) \le r(y)\).
  2. \(X\) has decreasing rate if \(x \preceq y\) implies \(r(x) \ge r(y)\).
  3. \(X\) has increasing average rate of order \(n\) if \(x \preceq y\) implies \(u_n(y) R_n(x) \le u_n(x) R_n(y)\).
  4. \(X\) has decreasing average rate of order \(n\) if \(x \preceq y\) implies \(u_n(y) R_n(x) \ge u_n(x) R_n(y)\).

Suppose now that \((S, \rta)\) is a transitive graph. Let \(x \in S\) and consider the conditional distribution of \(X\) given \(x \rta X\) for the subgraph \((A_x, \rta)\), where \(A_x = \{y \in S: x \rta y\}\) is the set of right neighbors of \(x\).

  1. The reliability function of the conditional distribution is \(y \mapsto F(y) / F(x)\) on \(A_x\).
  2. The rate function of the conditional distribution is the restriction of \(r\) to \(A_x\).
Details:
  1. Suppose that \(y \in S\) and that \(x \rta y\). Since the graph is transitive, \[\P(y \rta X \mid x \rta X) = \frac{\P(y \rta X, x \rta X)}{\P(x \rta X)} = \frac{\P(y \rta X)}{\P(x \rta X)} = \frac{F(y)}{F(x)}\]
  2. The conditional distribution has density function \(y \mapsto f(y) / \P(x \rta X) = f(y) / F(x)\) for \(x \rta y\). Hence from (a) the rate function of the conditional distribution is \(y \mapsto f(y) / F(y) = r(y)\) for \(x \rta y\).

In particular, the proposition above holds for a partial order graph \((S, \preceq)\).

Moments

The Basic Moment Formula

Suppose again that \((S, \rta)\) is a measurable graph relative to the \(\sigma\)-finite measure space \((S, \ms S, \lambda)\) and that \(X\) is a random variable in \(S\). Natural moments of \(X\) can be found by integrating functions of the form \(g L^n\), where \(L\) is the adjacency kernel, against the right reliability function \(F\).

If \(g: S \to [0, \infty)\) is measurable then \[\int_S (g L^n)(x) F(x) \, d\lambda(x) = \E[(g L^{n+1})(X)], \quad n \in \N\] where we interpret \(L^0\) as the identity operator, so that \(g L^0 = g\).

Details:

The main tool once again is Fubini's theorem. \begin{align*} \int_S (g L^n)(x) F(x) \, d\lambda(x) &= \int_S (g L^n)(x) \P(x \rta X) \, d\lambda(x) \\ &= \int_S (g L^n)(x) \E[\bs 1(x \rta X)] \, d\lambda(x) \\ &= \E\left[\int_{x \rta X} (g L^n)(x) \, d\lambda(x)\right] = \E[(g L^{n+1})(X)] \end{align*}

The result also holds for measurable \(g: S \to \R\), assuming the appropriate integrability. If \(X\) has density \(f\), the moment result is a simple consequence of the self-adjoint property: \[\langle g L^n, F \rangle = \langle g L^n, L f \rangle = \langle g L^{n+1}, f \rangle = \E[(g L^{n+1})(X)]\] For the following corollary, recall that \(u_n = \bs 1 L^n\) is the walk function of order \(n \in \N\) for the graph \((S, \rta)\) so that \(u_0(x) = 1\) for \(x \in S\), and for \(n \in \N_+\), \[u_n(x) = \lambda^n\{(x_1, x_2, \ldots, x_n) \in S^n: x_1 \rta \cdots \rta x_n \rta x\}, \quad x \in S\] the measure of the set of walks of length \(n\) that terminate in \(x\).

For \(n \in \N\), \[\int_S u_n(x) F(x) \, d\lambda(x) = \E[u_{n + 1}(X)]\] and in particular, \[\int_S F(x) \, d\lambda(x) = \E[u(X)] = \E[\lambda\{x \in S: x \rta X\}] \]

Details:

This follows from the basic moment formula above by taking \(g = \bs 1\).
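A quick numerical check of the special case on the standard discrete graph \((\N, \le)\), where \(u_n(x) = \binom{x + n}{n}\) (the test distribution on \(\{0, 1, 2\}\) is our own choice):

```python
from math import comb

# Check sum_x u_n(x) P(X >= x) = E[u_{n+1}(X)] with u_n(x) = C(x + n, n).
pmf = {0: 0.2, 1: 0.5, 2: 0.3}                       # arbitrary density on {0, 1, 2}
F = lambda x: sum(p for y, p in pmf.items() if y >= x)
for n in range(4):
    lhs = sum(comb(x + n, n) * F(x) for x in range(3))
    rhs = sum(comb(x + n + 1, n + 1) * p for x, p in pmf.items())
    assert abs(lhs - rhs) < 1e-12
```

For \(n = 0\) this is \(\sum_x \P(X \ge x) = \E(X + 1)\), the discrete identity noted later in the section.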

Note that the condition \(\sum_{x \in S} F(x) \lt \infty\) for recovering the probability density function from the reliability function via Möbius inversion can be restated as \(\E[u(X)] \lt \infty\), by the corollary above.

The Generating Function

Our next result involves the left generating function \(U\) of the graph \((S, \rta)\). Recall that \[U(x, t) = \sum_{n = 0}^\infty u_n(x) t^n, \quad x \in S, \, |t| \lt \rho(x)\] where \(\rho(x)\) is the radius of convergence of the power series for \(x \in S\). We can use this to define a generating function for the random variable \(X\).

The (left) moment generating function \(M\) of \(X\) with respect to \((S, \rta)\) is given by \[M(t) = \E[U(X, t)] = \sum_{n = 0}^\infty \E[u_n(X)] t^n, \quad |t| \lt \rho\] where \(\rho\) is the radius of convergence of the power series.

Note that \(M\) is the ordinary generating function of the sequence of moments \((\E[u_n(X)]: n \in \N)\), but of course in this abstract setting, these moments do not in general determine the distribution of \(X\). Nonetheless they are interesting since \(\E[u_n(X)]\) is the expected measure of the walks of length \(n\) that terminate in \(X\). We can get the generating function of \(X\) by integrating the (left) generating function of the graph by the (right) reliability function of \(X\) for the graph:

Suppose that \(X\) has generating function \(M\) and reliability function \(F\) for the graph \((S, \rta)\). Then \[M(t) = 1 + t \int_S U(x, t) F(x) \, d\lambda(x), \quad |t| \lt \rho\]

Details:

Let \(|t| \lt \rho\). Using Fubini's theorem and the basic moment formula, \begin{align*} 1 + t \int_S U(x, t) F(x) \, d\lambda(x) &= 1 + t \int_S \left[\sum_{n = 0}^\infty u_n(x) t^n\right] F(x) \, d\lambda(x) \\ &= 1 + t \sum_{n=0}^\infty t^n \left[\int_S u_n(x) F(x) \, d\lambda(x)\right] = 1 + t \sum_{n = 0}^\infty t^n \E[u_{n + 1}(X)] \\ &= 1 + \sum_{n = 0}^\infty t^{n + 1} \E[u_{n + 1}(X)] = \E \left[\sum_{k = 0}^\infty t^k u_k(X)\right] = \E[U(X, t)] = M(t) \end{align*}

Entropy

The following definition and theorem concern entropy. At this point, there is no connection to the underlying relation \(\rta\), but there soon will be.

Suppose that \(X\) is a random variable in \(S\) with density function \(f\). The entropy of \(X\) is \[H(X) = -\E(\ln[f(X)]) = -\int_S f(x) \ln[f(x)] d \lambda(x)\]

Suppose that \(X\) and \(Y\) are random variables in \(S\) with density functions \(f\) and \(g\) respectively. Then \[H(Y) = -\int_S g(x) \ln[g(x)] d\lambda(x) \leq -\int_S g(x) \ln[f(x)] d \lambda(x)\] with equality if and only if \(f(x) = g(x)\) almost everywhere (so that \(X\) and \(Y\) have the same distribution).

Details:

Note first that \(\ln t \le t - 1\) for \(t \gt 0\), so \(-\ln t \ge 1 - t\) for \(t \gt 0\), with equality only at \(t = 1\). Hence, \[-\ln\left(\frac{f(x)}{g(x)}\right) = -\ln[f(x)] + \ln[g(x)] \ge 1 - \frac{f(x)}{g(x)}\] for \(x \in S\) with \(g(x) \gt 0\). Multiplying by \(g(x)\) gives \[-g(x) \ln[f(x)] + g(x) \ln[g(x)] \ge g(x) - f(x), \quad x \in S\] The inequality is trivially true if \(g(x) = 0\) and hence holds on all of \(S\). Therefore \begin{align*} -\int_S g(x) \ln[f(x)] \, d \lambda(x) + \int_S g(x) \ln[g(x)] \, d\lambda(x) &\ge \int_S g(x) \, d \lambda(x) - \int_S f(x) \, d \lambda(x) \\ &= 1 - 1 = 0 \end{align*} Equality holds if and only if \(f(x) = g(x)\) almost everywhere.
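The inequality is easy to check numerically for densities on a finite set (the test densities below are our own):

```python
import math

# Gibbs-type inequality: -sum g ln g <= -sum g ln f, strict when f != g.
f = [0.5, 0.3, 0.2]                       # two densities on a 3-point set
g = [0.4, 0.4, 0.2]
H_g = -sum(gi * math.log(gi) for gi in g)               # entropy of g
cross = -sum(gi * math.log(fi) for fi, gi in zip(f, g)) # cross term with f
assert H_g < cross                        # strict, since f and g differ
H_f = -sum(fi * math.log(fi) for fi in f)
cross2 = -sum(fi * math.log(gi) for fi, gi in zip(f, g))
assert H_f < cross2                       # same inequality with roles swapped
```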

Right Walk Distributions

Suppose again \((S, \rta)\) is a measurable graph relative to the \(\sigma\)-finite measure space \((S, \ms S, \lambda)\). Let \(v_n\) denote the right walk function of order \(n \in \N\), so that \(v_0(x) = 1\) for \(x \in S\), and for \(n \in \N_+\), \(v_n(x) = \lambda^n\{(x_1, x_2, \ldots, x_n) \in S^n: x \rta x_1 \rta \cdots \rta x_n\}\) is the measure of the set of walks of length \(n\) starting in \(x \in S\). Let \(w_n\) denote the walk parameter of order \(n \in \N\) so that \(w_0 = \lambda(S)\) and for \(n \in \N_+\), \(w_n = \lambda^{n + 1}\{(x_1, x_2, \ldots, x_{n + 1}) \in S^{n + 1}: x_1 \rta x_2 \rta \cdots \rta x_{n + 1}\}\) is the measure of the set of all walks of length \(n\).

Suppose that \((S, \rta)\) is a finite graph so that \(\lambda(S) \in (0, \infty)\). Let \(f_n\) be the probability density function on \(S\) given by \(f_n(x) = v_n(x) / w_n\) for \(x \in S\). The corresponding distribution is the right walk distribution of order \(n\).

Details:

From Section 1, \(w_n\) is the normalizing constant for the function \(v_n\), and is finite since the graph is finite: \[w_n = \int_S v_n(x) \, d\lambda(x) \le [\lambda(S)]^{n + 1}\] So the density function \(f_n\) makes sense.

Note that the right walk distribution of order \(n = 0\) is the uniform distribution on \(S\), since \(v_0(x) = 1\) for \(x \in S\) and \(w_0 = \lambda(S)\). When \(n = 1\) we have \(v_1(x) = \lambda\{y \in S: x \rta y\}\), the measure of the set of right neighbors of \(x \in S\), and \(w_1 = \lambda^2\{(x, y) \in S^2: x \rta y\}\), the measure of the set of edges in the graph, or put another way, the measure of \(\rta\) as a set of ordered pairs.

Suppose again that \((S, \rta)\) is a finite graph and that \(f_n\) is the density function of the right walk distribution of order \(n \in \N\).

  1. The reliability function \(F_n\) of \(f_n\) is given by \[F_n(x) = \frac{v_{n + 1}(x)}{w_n} = \frac{w_{n + 1}}{w_n} f_{n + 1}(x), \quad x \in S\]
  2. The rate function \(r_n\) of \(f_n\) is given by \[r_n(x) = \frac{v_n(x)}{v_{n + 1}(x)}, \quad x \in S\]
Details:

From Section 1, the reliability function \(F_n\) is given by \[F_n(x) = \int_{x \rta y} f_n(y) \, d\lambda(y) = \frac{1}{w_n} \int_{x \rta y} v_n(y) \, d\lambda(y) = \frac{1}{w_n} v_{n + 1}(x), \quad x \in S\] and then the rate function \(r_n\) is given by \(r_n(x) = f_n(x) / F_n(x) = v_n(x) / v_{n + 1}(x)\) for \(x \in S\).
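For a finite graph given by its adjacency matrix, the walk quantities can be computed by iterating \(v_{n+1} = L v_n\). A sketch for the order graph on \(\{0, 1, 2\}\) (the example graph is our own choice, assuming NumPy):

```python
import numpy as np

L = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)    # adjacency matrix of ({0,1,2}, <=)
v = [np.ones(3)]                          # v_0 = 1
for _ in range(3):
    v.append(L @ v[-1])                   # v_{n+1}(x) = sum of v_n(y) over y with x -> y
n = 1
f_n = v[n] / v[n].sum()                   # walk density f_n = v_n / w_n
F_n = v[n + 1] / v[n].sum()               # reliability function F_n = v_{n+1} / w_n
r_n = v[n] / v[n + 1]                     # rate function r_n = v_n / v_{n+1}
assert np.allclose(f_n, [1/2, 1/3, 1/6])
assert np.allclose(F_n, [1.0, 1/2, 1/6])
assert np.allclose(r_n, [1/2, 2/3, 1.0])
```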

The results above suggest a possible convergence of \(f_n\), \(F_n\), and \(r_n\) as \(n \to \infty\). We will see this in some of the exercises below, and study this further in Section 5.

Suppose again that \((S, \rta)\) is a finite graph that is also right regular of degree \(k \in (0, \infty)\). Then the right walk distribution of order \(n \in \N\) is uniform on \(S\).

Details:

The right walk function of order \(n \in \N\) is \(v_n(x) = k^n\) for \(x \in S\). The normalizing constant is \(w_n = \lambda(S) k^n\).

For finite graphs, the right walk distributions will be important in the study of random walks in Section 4 and constant rate distributions in Section 5.

Examples

Recall again that we have an underlying \(\sigma\)-finite measure space \((S, \ms S, \lambda)\), and that graphs with base set \(S\) are assumed to be measurable. In particular, in the discrete case, the reference \(\sigma\)-algebra \(\ms P(S)\) is the collection of all subsets of \(S\), and the reference measure \(\#\) is counting measure.

The Standard Continuous Graph

The standard continuous graph is mentioned in several places above, but here is a summary.

Recall that \([0, \infty)\) is given the collection of Borel measurable subsets \(\ms B\) as the reference \(\sigma\)-algebra and Lebesgue measure \(\lambda\) as the reference measure. With this measure structure, the graph \(([0, \infty), \le)\) is the standard continuous graph.

  1. The associated \(\sigma\)-algebra is \(\ms B\).
  2. The graph is stochastic.
  3. If \(X\) is a random variable in \([0, \infty)\), then \[\int_0^\infty \frac{x^n}{n!} \P(X \ge x) \, dx = \E\left[\frac{X^{n + 1}}{(n + 1)!}\right], \quad n \in \N\]
  4. The generating function \(M\) of random variable \(X\) is given by \(M(t) = \E\left(e^{t X}\right)\) for \(t \in \R\), so \(M\) is the ordinary moment generating function of \(X\).
Details:

This is standard stuff in probability theory.

  1. The neighbor set at \(x \in [0, \infty)\) is \([x, \infty)\), so in particular \([a, b) = [a, \infty) - [b, \infty)\) for \(a, \, b \in [0, \infty)\) with \(a \lt b\). Hence the associated \(\sigma\)-algebra is the \(\sigma\)-algebra generated by the intervals, which is \(\ms B\).
  2. Suppose that \(P\) is a probability measure on \(([0, \infty), \ms B)\) that has reliability function \(F\) for \(([0, \infty), \le)\). Then \(F(x) = P[x, \infty)\) for \(x \in [0, \infty)\) and hence \(P[a, b) = F(a) - F(b)\) for \(a, \, b \in [0, \infty)\) with \(a \lt b\). So \(P\) is uniquely determined on \(\ms B\) by \(F\).
  3. This follows immediately from the moment formula above, since \(u_n(x) = x^n / n!\) for \(n \in \N\) and \(x \in [0, \infty)\).

Again, for the standard continuous graph, the reliability function, rate function, cumulative rate functions, and average rate functions have their classical meanings, with rate replaced by failure rate. As a special case of (c), we have the standard result from elementary probability: \[\int_0^\infty \P(X \ge x) \, dx = \E(X)\]

Suppose that random variable \(X\) in \([0, \infty)\) has density function \(f\) given by \[f(x) = \frac{\alpha}{(x + 1)^{\alpha + 1}}, \quad x \in [0, \infty)\] where \(\alpha \in (0, \infty)\) is a parameter. Relative to the standard graph \(([0, \infty), \le)\), find each of the following:

  1. The reliability function \(F\).
  2. The rate function \(r\)
  3. The cumulative rate function \(R_n\) of order \(n \in \N\).
  4. The average rate function \(\bar r_n\) of order \(n \in \N\).
Details:
  1. \[F(x) = \frac{1}{(x + 1)^\alpha}, \quad x \in [0, \infty)\]
  2. \[r(x) = \frac{\alpha}{x + 1}, \quad x \in [0, \infty)\]
  3. \[R_n(x) = \frac{\left[\alpha \ln(x + 1)\right]^n}{n!}, \quad x \in [0, \infty)\]
  4. \[\bar r_n(x) = \left[\frac{\alpha \ln(x + 1)}{x}\right]^n, \quad x \in [0, \infty)\]
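A numerical sanity check of parts (a)-(c) by quadrature (the parameter values and the quadrature setup are our own):

```python
import math

alpha, x0 = 1.5, 2.0
f = lambda x: alpha / (x + 1) ** (alpha + 1)       # Pareto density
r = lambda x: alpha / (x + 1)                      # claimed rate function

def trapz(fn, a, b, m):
    """Composite trapezoid rule for fn on [a, b] with m subintervals."""
    d = (b - a) / m
    return d * (sum(fn(a + i * d) for i in range(1, m)) + 0.5 * (fn(a) + fn(b)))

F_num = trapz(f, x0, 1_000.0, 200_000)             # ~ P(X >= x0), tail truncated
assert abs(F_num - 1 / (x0 + 1) ** alpha) < 1e-3   # part (a): F(x) = (x+1)^(-alpha)
R1_num = trapz(r, 0.0, x0, 200_000)                # cumulative rate of order 1
assert abs(R1_num - alpha * math.log(x0 + 1)) < 1e-6   # part (c) with n = 1
```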

Random variable \(X\) in the exercise above has a version of the Pareto distribution. The rate function is decreasing on \([0, \infty)\), as is the average rate function of order \(n \in \N\). The standard continuous graph is studied in detail in Chapter 3.

The Standard Discrete Graph

Recall that the standard discrete graph is \((\N, \le)\).

  1. The associated \(\sigma\)-algebra is \(\ms P(\N)\).
  2. The graph is stochastic.
  3. If \(X\) is a random variable in \(\N\), then \[\sum_{x = 0}^\infty \binom{x + n}{n} \P(X \ge x) = \E\left[\binom{X + n + 1}{n + 1}\right], \quad n \in \N\]
  4. The generating function \(M\) of random variable \(X\) is given by \(M(t) = \E\left[1 / (1 - t)^{X + 1}\right]\) for \(|t| \lt 1\).
Details:

Again, this is standard stuff in probability theory.

  1. The neighbor set of \(x \in \N\) is \(A_x = \{x, x + 1, x + 2, \ldots\}\), so \(\{x\} = A_x - A_{x + 1}\) is in the associated \(\sigma\)-algebra. Hence the associated \(\sigma\)-algebra is \(\ms P(\N)\).
  2. If \(X\) is a random variable in \(\N\) then \(\P(X = x) = \P(X \ge x) - \P(X \ge x + 1)\), so the reliability function determines the distribution.
  3. This follows immediately from the moment formula above, since \(u_n(x) = \binom{x + n}{n}\) for \(x, \, n \in \N\).

As a special case of (c) with \(n = 0\), we have another standard result from elementary probability theory: \(\sum_{x = 0}^\infty \P(X \ge x) = \E(X + 1)\). In (d), note that the generating function \(M\) is a close cousin of the probability generating function of \(X\). The standard discrete graph is studied in detail in Chapter 4.

Suppose that random variable \(X\) in \(\N\) has density function \(f\) given by \[f(x) = \frac{1}{(x + 1)^\alpha} - \frac{1}{(x + 2)^\alpha}, \quad x \in \N\] where \(\alpha \in (0, \infty)\) is a parameter. Relative to the standard graph \((\N, \le)\), find each of the following:

  1. The reliability function \(F\).
  2. The rate function \(r\)
Details:
  1. \[F(x) = \frac{1}{(x + 1)^\alpha}, \quad x \in \N\]
  2. \[r(x) = 1 - \left(\frac{x + 1}{x + 2}\right)^\alpha, \quad x \in \N\]
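Parts (a) and (b) follow by telescoping the sum for \(F\); a quick numerical check (the parameter value is our own):

```python
# Telescoping check: F(x) = sum_{y >= x} f(y) = 1 / (x + 1)^alpha,
# and then r(x) = f(x) / F(x) = 1 - ((x + 1) / (x + 2))^alpha.
alpha = 2.0
f = lambda x: 1 / (x + 1) ** alpha - 1 / (x + 2) ** alpha
for x in range(20):
    F = sum(f(y) for y in range(x, 10_000))          # truncated tail sum
    assert abs(F - 1 / (x + 1) ** alpha) < 1e-6
    r = f(x) / (1 / (x + 1) ** alpha)
    assert abs(r - (1 - ((x + 1) / (x + 2)) ** alpha)) < 1e-12
```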

Random variable \(X\) in the exercise above has, in a sense, a discrete version of the Pareto distribution in the earlier exercise. The rate function is decreasing on \(\N\).

Diamond Graphs

The diamond graph \((S, \lfrta)\) and a directed diamond graph \((S, \rta)\) were introduced in Section 1.

For the diamond graph,

  1. Find the associated \(\sigma\)-algebra.
  2. Is the graph stochastic?
  3. Is a probability measure on the reference \(\sigma\)-algebra \(\ms P(S)\) uniquely determined by the reliability function?
Details:
  1. The neighbor sets are \(A_1 = \{2, 3, 4\}\), \(A_2 = A_3 = \{1, 4\}\), \(A_4 = \{1, 2, 3\}\). So the associated \(\sigma\)-algebra is \(\ms A = \sigma(\{1\}, \{4\}, \{2, 3\})\). The 8 sets in \(\ms A\) are the possible unions of the disjoint sets \(\{1\}\), \(\{4\}\), and \(\{2, 3\}\).
  2. The graph is stochastic. Suppose that \(P\) is a probability measure on \(\ms A\). The reliability function \(F\) of \(P\) is given by \(F(1) = P(\{2, 3, 4\})\), \(F(2) = F(3) = P(\{1, 4\})\), \(F(4) = P(\{1, 2, 3\})\). So \(P\{1\} = 1 - F(1)\), \(P\{2, 3\} = 1 - F(2)\), \(P\{4\} = 1 - F(4)\). So the reliability function determines the probabilities of the three disjoint sets that generate the \(\sigma\)-algebra \(\ms A\).
  3. No. If \(P\) is a probability measure on \(\ms P(S)\), then we cannot determine \(P(\{2\})\) and \(P(\{3\})\) from the reliability function. Also, the adjacency matrix has eigenvalue 0 and the corresponding eigenvector sums to 0.
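Part (c) can be made concrete via the density-function proposition earlier in the section: \(h = (0, 1, -1, 0)\) satisfies \(L h = 0\) and sums to \(0\), so it yields two distinct densities on \(\ms P(S)\) with the same reliability function (the specific construction below is our own, assuming NumPy):

```python
import numpy as np

L = np.array([[0, 1, 1, 1],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [1, 1, 1, 0]], dtype=float)   # diamond graph adjacency matrix
h = np.array([0.0, 1.0, -1.0, 0.0])
assert np.allclose(L @ h, 0) and h.sum() == 0
f = np.abs(h) / 2                            # density (0, 1/2, 1/2, 0)
g = f - h / 2                                # density (0, 0, 1, 0)
assert np.allclose(L @ f, L @ g)             # same reliability function
assert not np.allclose(f, g)                 # but different distributions
```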

For the directed diamond graph,

  1. Find the associated right \(\sigma\)-algebra.
  2. Is the graph stochastic?
  3. Is a probability measure on the reference \(\sigma\)-algebra \(\ms P(S)\) uniquely determined by the reliability function?
Details:
  1. The right neighbor sets are \(A_1 = \{2, 3\}\), \(A_2 = A_3 = \{4\}\), \(A_4 = \{1\}\). So the associated right \(\sigma\)-algebra is \(\ms A = \sigma(\{1\}, \{4\}, \{2, 3\})\). The 8 sets in \(\ms A\) are the possible unions of the disjoint sets \(\{1\}\), \(\{4\}\), and \(\{2, 3\}\).
  2. The graph is (right) stochastic. Suppose that \(P\) is a probability measure on \(\ms A\). The reliability function \(F\) of \(P\) is given by \(F(1) = P(\{2, 3\})\), \(F(2) = F(3) = P(\{4\})\), \(F(4) = P(\{1\})\). So the reliability function determines the probability of the four disjoint sets that generate the \(\sigma\)-algebra \(\ms A\).
  3. No. If \(P\) is a probability measure on \(\ms P(S)\), then we cannot determine \(P(\{2\})\) and \(P(\{3\})\) from the reliability function. Also, the adjacency matrix has eigenvalue 0 and the corresponding eigenvector sums to 0.

For the diamond graph,

  1. Find the density function \(f_n\) of the walk distribution of order \(n \in \N\).
  2. Find \(\lim_{n \to \infty} f_n\).
Details:
  1. Using the walk functions of the diamond graph in Section 1, \begin{align*} f_n(1) = f_n(4) & = \frac{\left(1 - \sqrt{17}\right)^n \left(\sqrt{17} - 5\right) + \left(1 + \sqrt{17}\right)^n \left(\sqrt{17} + 5\right)} {4 \left[\left(1 - \sqrt{17}\right)^n \left(\sqrt{17} - 4\right) + \left(1 + \sqrt{17}\right)^n \left(\sqrt{17} + 4\right)\right]} \\ f_n(2) = f_n(3) & = \frac{\left(1 - \sqrt{17}\right)^n \left(\sqrt{17} - 3\right) + \left(1 + \sqrt{17}\right)^n \left(\sqrt{17} + 3\right)} {4 \left[\left(1 - \sqrt{17}\right)^n \left(\sqrt{17} - 4\right) + \left(1 + \sqrt{17}\right)^n \left(\sqrt{17} + 4\right)\right]} \end{align*}
  2. Taking limits in (a) gives \begin{align*} \lim_{n \to \infty} f_n(1) & = \lim_{n \to \infty} f_n(4) = \frac{\sqrt{17} - 3}{4} \\ \lim_{n \to \infty} f_n(2) & = \lim_{n \to \infty} f_n(3) = \frac{5 - \sqrt{17}}{4} \end{align*}
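These limits can be checked numerically, since \(f_n\) is proportional to \(A^n \bs{1}\) where \(A\) is the adjacency matrix. A quick sketch with numpy:

```python
import numpy as np

# The walk distribution f_n is proportional to A^n 1, where A is the
# adjacency matrix of the diamond graph (edges 1-2, 1-3, 1-4, 2-4, 3-4).
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [1, 1, 1, 0]], dtype=float)
w = np.ones(4)
for _ in range(60):
    w = A @ w
    w = w / w.sum()        # renormalize to keep the entries bounded
assert np.allclose(w, [(np.sqrt(17) - 3) / 4, (5 - np.sqrt(17)) / 4,
                       (5 - np.sqrt(17)) / 4, (np.sqrt(17) - 3) / 4])
```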

For the directed diamond graph,

  1. Find the density function \(f_n\) of the right walk distribution of order \(n \in \N\).
  2. Does \(f_n\) converge as \(n \to \infty\)?
Details:
  1. Using the right walk functions of the directed diamond graph in Section 1,
    • If \(n = 0 \bmod 3\) then \(f_n(x) = 1 / 4\) for \(x \in S\), so the distribution is uniform on \(S\).
    • If \(n = 1 \bmod 3\) then \(f_n(1) = 2 / 5\) and \(f_n(2) = f_n(3) = f_n(4) = 1 / 5\).
    • If \(n = 2 \bmod 3\) then \(f_n(1) = f_n(4) = 1 / 3\) and \(f_n(2) = f_n(3) = 1 / 6\).
  2. No.

So the right walk distribution of order \(n \in \N\) depends only on the remainder when \(n\) is divided by 3, which is the period of the graph. In particular, the distribution does not converge as \(n \to \infty\).
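The period-3 pattern can be verified numerically. A sketch, assuming the directed diamond has edges \(1 \rta 2\), \(1 \rta 3\), \(2 \rta 4\), \(3 \rta 4\), \(4 \rta 1\):

```python
import numpy as np

# Right walk counts: f_n is proportional to A^n 1, where A[x, y] = 1
# when x -> y (edges 1->2, 1->3, 2->4, 3->4, 4->1).
A = np.array([[0, 1, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
w = np.ones(4)
dists = []
for n in range(9):
    dists.append(w / w.sum())
    w = A @ w
assert np.allclose(dists[0], [1/4, 1/4, 1/4, 1/4])   # n = 0 mod 3
assert np.allclose(dists[1], [2/5, 1/5, 1/5, 1/5])   # n = 1 mod 3
assert np.allclose(dists[2], [1/3, 1/6, 1/6, 1/3])   # n = 2 mod 3
assert np.allclose(dists[3], dists[0])               # period 3
```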

The Bull Graph

Consider the bull graph \((S, \lfrta)\) studied in Section 1 and shown below.

  1. Find the associated \(\sigma\)-algebra.
  2. Is the graph stochastic?
Details:
  1. The neighbor sets for the graph are \(A_1 = \{2, 3\}\), \(A_2 = \{1, 3, 4\}\), \(A_3 = \{1, 2, 5\}\), \(A_4 = \{2\}\), \(A_5 = \{3\}\). So \(B = A_2 - A_5 = \{1, 4\}\) and \(C = A_3 - A_4 = \{1, 5\}\). Then \(B \cap C = \{1\}\), \(B \setminus C = \{4\}\), \(C \setminus B = \{5\}\). So \(\ms A = \ms P(S)\), the collection of all subsets of \(S\).
  2. Yes. If \(f\) is a probability density function with reliability function \(F\) then we can recover \(f\) from \(F\): \begin{align*} f(1) &= F(2) + F(3) - 1, \; f(2) = F(4), \; f(3) = F(5) \\ f(4) &= 1 - F(3) - F(5), \; f(5) = 1 - F(2) - F(4) \end{align*} Also, the adjacency matrix has eigenvalue 0 and so is not invertible but the corresponding eigenvector \((-1, 0, 0, 1, 1)\) does not sum to 0.
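The recovery formulas can be checked with any density. A sketch with exact arithmetic (the particular values of \(f\) below are just an example):

```python
from fractions import Fraction

# Check the recovery of f from F for the bull graph.
# Neighbor sets: A1={2,3}, A2={1,3,4}, A3={1,2,5}, A4={2}, A5={3}.
nbrs = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 5}, 4: {2}, 5: {3}}
f = {1: Fraction(1, 5), 2: Fraction(1, 10), 3: Fraction(3, 10),
     4: Fraction(1, 4), 5: Fraction(3, 20)}
assert sum(f.values()) == 1
F = {x: sum(f[y] for y in nbrs[x]) for x in nbrs}   # reliability function

assert F[2] + F[3] - 1 == f[1]
assert F[4] == f[2] and F[5] == f[3]
assert 1 - F[3] - F[5] == f[4]
assert 1 - F[2] - F[4] == f[5]
```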

For the bull graph,

  1. Find the density function \(f_n\) of the walk distribution of order \(n \in \N\).
  2. Find \(\lim_{n \to \infty} f_n\).
Details:
  1. Using the walk functions of the bull graph in Section 1, \(f_0\) is uniform on \(S\), while for \(n \in \N_+\), \begin{align*} f_n(1) & = \frac{2 \left[\left(1 - \sqrt{13}\right)^n \left(\sqrt{13} - 2\right) + \left(1 + \sqrt{13}\right)^n \left(\sqrt{13} + 2\right)\right]} {\left(1 - \sqrt{13}\right)^n \left(7 \sqrt{13} - 23\right) + \left(1 + \sqrt{13}\right)^n \left(7 \sqrt{13} + 23\right)} \\ f_n(2) = f_n(3) & = \frac{3 \left[\left(1 - \sqrt{13}\right)^n \left(\sqrt{13} - 5\right) + \left(1 + \sqrt{13}\right)^n \left(\sqrt{13} + 5\right)\right]} {2 \left[\left(1 - \sqrt{13}\right)^n \left(7 \sqrt{13} - 23\right) + \left(1 + \sqrt{13}\right)^n \left(7 \sqrt{13} + 23\right)\right]} \\ f_n(4) = f_n(5) & = \frac{\left(1 - \sqrt{13}\right)^n \left(\sqrt{13} - 2\right) + \left(1 + \sqrt{13}\right)^n \left(\sqrt{13} + 2\right)} {\left(1 - \sqrt{13}\right)^n \left(7 \sqrt{13} - 23\right) + \left(1 + \sqrt{13}\right)^n \left(7 \sqrt{13} + 23\right)} \end{align*}
  2. Taking limits in (a) gives \begin{align*} \lim_{n \to \infty} f_n(1) & = \frac{5 - \sqrt{13}}{6} \\ \lim_{n \to \infty} f_n(2) = \lim_{n \to \infty} f_n(3) & = \frac{\sqrt{13} - 2}{6} \\ \lim_{n \to \infty} f_n(4) = \lim_{n \to \infty} f_n(5) & = \frac{5 - \sqrt{13}}{12} \end{align*}
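As with the diamond, the limits can be confirmed by iterating the adjacency matrix. A numpy sketch:

```python
import numpy as np

# Adjacency matrix of the bull graph (edges 1-2, 1-3, 2-3, 2-4, 3-5);
# the normalized walk counts A^n 1 converge to the stated limits.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0]], dtype=float)
w = np.ones(5)
for _ in range(200):
    w = A @ w
    w = w / w.sum()
s = np.sqrt(13)
assert np.allclose(w, [(5 - s) / 6, (s - 2) / 6, (s - 2) / 6,
                       (5 - s) / 12, (5 - s) / 12])
```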

A Triangle Graph

Recall the triangle graph \((S, \rta)\) defined in Section 1. So \(S = \{1, 2, 3\}\) and \(\rta\) is defined by \(1 \rta 2\), \(2 \rta 1\), \(2 \rta 2\), \(2 \rta 3\), \(3 \rta 1\) and \(3 \rta 3\).

  1. Find the associated right \(\sigma\)-algebra.
  2. Is \((S, \rta)\) right stochastic?
  3. Is a probability measure \(P\) on the reference \(\sigma\)-algebra \(\ms P(S)\) uniquely determined by the reliability function?
Details:

  1. From Section 1, the associated right \(\sigma\)-algebra is \(\ms A = \sigma(\{2\}) = \{\emptyset, \{2\}, \{1, 3\}, S\}\).
  2. The graph is trivially stochastic. If \(P\) is a probability measure on \(\ms A\) then the associated reliability function \(F\) is given by \(F(1) = P(\{2\})\), \(F(2) = 1\), \(F(3) = P(\{1, 3\})\).
  3. No. We cannot determine \(P(\{2\})\) and \(P(\{3\})\). Also, 0 is an eigenvalue and the corresponding eigenvector \((1, 0, -1)\) sums to 0.

For the triangle graph,

  1. Find the density function \(f_n\) of the right walk distribution of order \(n \in \N\).
  2. Find \(\lim_{n \to \infty} f_n\).
Details:
  1. From the right walk functions in Section 1, \(f_0 = (1 / 3, 1 / 3, 1 / 3)\), \(f_1 = (1 / 6, 1 / 2, 1 / 3)\) and \(f_n = (1 / 4, 1 / 2, 1 / 4)\) for \(n \in \{2, 3, \ldots\}\).
  2. Trivially, \(f_n \to (1 / 4, 1 / 2, 1 / 4)\) as \(n \to \infty\).
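The walk distributions are easy to confirm directly from the adjacency matrix. A numpy sketch:

```python
import numpy as np

# Right adjacency matrix of the triangle graph: 1 -> 2, 2 -> 1, 2 -> 2,
# 2 -> 3, 3 -> 1, 3 -> 3; f_n is proportional to A^n 1.
A = np.array([[0, 1, 0],
              [1, 1, 1],
              [1, 0, 1]], dtype=float)
w = np.ones(3)
dists = [w / w.sum()]
for _ in range(5):
    w = A @ w
    dists.append(w / w.sum())
assert np.allclose(dists[0], [1/3, 1/3, 1/3])
assert np.allclose(dists[1], [1/6, 1/2, 1/3])
for n in range(2, 6):
    assert np.allclose(dists[n], [1/4, 1/2, 1/4])
```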

Complete and Equality Graphs

Suppose that \(S\) is a nonempty set and that \((S, \equiv)\) is the complete reflexive graph on \(S\) so that \(x \equiv y\) for all \(x, \, y \in S\).

  1. The associated \(\sigma\)-algebra is trivial: \(\ms A = \{\emptyset, S\}\).
  2. The graph is trivially stochastic.
Details:

The only reliability function \(F\) is the constant function 1 and the only probability measure \(P\) on \(\ms A\) is \(P(\emptyset) = 0\), \(P(S) = 1\).

So in example , if \(S\) has a reference \(\sigma\)-algebra \(\ms S\) that is nontrivial then a probability measure \(P\) on \(\ms S\) cannot be determined by \(F\).

Suppose that \(S\) is a nonempty set so that \((S, =)\) is the equality graph on \(S\) and \((S, \ne)\) is the complete irreflexive graph on \(S\).

  1. For both graphs, the associated \(\sigma\)-algebra consists of the countable and co-countable sets: \(\ms C = \{A \subseteq S: A \text{ is countable or } A^c \text{ is countable}\}\).
  2. Both graphs are stochastic.
Details:
  1. We have seen this before. The neighbor set of \(x \in S\) for \((S, =)\) is \(\{x\}\) while for \((S, \ne)\) the neighbor set is \(S - \{x\}\). So for both graphs, the associated \(\sigma\)-algebra is generated by the singleton sets.
  2. Suppose that \(P\) is a probability measure on \(\ms C\). For \((S, =)\) the reliability function of \(P\) is \(x \mapsto P\{x\}\) while for \((S, \ne)\) the reliability function is \(x \mapsto 1 - P\{x\}\). If \(A \in \ms C\) then \begin{align*} P(A) & = \sum_{x \in A} P\{x\}, \quad A \text{ countable} \\ P(A) & = 1 - \sum_{x \in A^c} P\{x\}, \quad A^c \text{ countable} \end{align*} So we can compute \(P(A)\) from either reliability function.

So in example , if \(S\) is countable then \(\ms C = \ms P(S)\), the reference \(\sigma\)-algebra of all subsets of \(S\). In this case, a probability density function \(f\) on \(S\) is uniquely determined by the reliability function \(F\). But if \(S\) is uncountable and the reference \(\sigma\)-algebra \(\ms S\) is larger than \(\ms C\) then it is usually not the case that a probability measure \(P\) on \(\ms S\) is determined by the reliability function \(F\). For example if \(A \in \ms S\) with both \(A\) and \(A^c\) uncountable, then we cannot determine \(P(A)\) from \(F\).

The Star Graph

For \(k \in \{3, 4, \ldots\}\), let \((S, \lfrta)\) denote the star graph of order \(k\) as defined in Section 1 and shown below. Recall that \(S = \{0, 1, \ldots, k\}\) where \(0\) is the center and \(\{1, 2, \ldots, k\}\) are the endpoints.

  1. The associated \(\sigma\)-algebra is \(\ms A = \{\emptyset, \{0\}, \{1, 2, \ldots, k\}, S\}\).
  2. The graph is trivially stochastic. If \(P\) is a probability measure on \(\ms A\) with reliability function \(F\) then \(P\{0\} = F(1)\).
  3. The eigenvalues of the graph are \(\pm \sqrt{k}\) and 0 (with multiplicity \(k - 1\)). The eigenvectors corresponding to 0 sum to 0.
Details:

The neighbor sets of the graph are \(A_0 = \{1, 2, \ldots, k\}\) and \(A_i = \{0\}\) for \(i \in \{1, 2, \ldots, k\}\).

So in example , a probability measure \(P\) on \(\ms P(S)\) cannot be determined from the reliability function \(F\). Specifically, we cannot determine the probability of an individual endpoint.

For the star of order \(k \in \{3, 4, \ldots\}\), find the density function \(f_n\) of the walk distribution of order \(n \in \N\).

Details:

From the walk functions in Section 1, \(f_n\) is uniform on \(S\) if \(n\) is even, while \(f_n(0) = 1 / 2\) and \(f_n(x) = 1 / (2 k)\) for \(x \in \{1, 2, \ldots, k\}\) if \(n\) is odd.

So the walk distribution of order \(n \in \N\) depends only on the parity of \(n\). In particular, the walk distribution of order \(n \in \N\) does not converge as \(n \to \infty\). Recall that the star is periodic with period 2.
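The parity pattern can be checked numerically; a sketch with \(k = 5\) as an illustrative value:

```python
import numpy as np

# Walk distributions for the star: center 0 joined to each endpoint
# 1, ..., k. We take k = 5 as an example.
k = 5
A = np.zeros((k + 1, k + 1))
A[0, 1:] = 1      # center adjacent to every endpoint
A[1:, 0] = 1
w = np.ones(k + 1)
for n in range(1, 7):
    w = A @ w
    f = w / w.sum()
    if n % 2 == 0:
        # even order: uniform on the k + 1 vertices
        assert np.allclose(f, np.full(k + 1, 1 / (k + 1)))
    else:
        # odd order: mass 1/2 at the center, 1/(2k) at each endpoint
        assert np.allclose(f, [1 / 2] + [1 / (2 * k)] * k)
```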

The Cycle Graph

For \(k \in \{3, 4, \ldots\}\), let \((S, \lfrta)\) denote the cycle graph on \(k\) vertices, as defined in Section 1 and shown below. Recall that \(S = \{1, 2, \ldots, k\}\) and that \(x \lfrta x - 1\) and \(x \lfrta x + 1\) where we interpret \(k + 1 = 1\) and \(1 - 1 = k\).

  1. If \(k = 4\), the associated \(\sigma\)-algebra is \(\ms A = \sigma\{\{1, 3\}\} = \{\emptyset, \{1, 3\}, \{2, 4\}, S\}\).
  2. If \(k \ne 4\), the associated \(\sigma\)-algebra \(\ms A = \ms P(S)\), the collection of all subsets of \(S\).
  3. The graph is stochastic except when \(k = 4 j\) with \(j \in \{2, 3, \ldots\}\).
Details:

Parts (a) and (b) were shown in Section 1. For part (c), the eigenvalues of the graph are \(2 \cos(2 \pi j / k)\) for \(j \in \{0, 1, \ldots, k - 1\}\). So \(0\) is an eigenvalue if and only if \(2 \pi j / k = \pi / 2\) or \(2 \pi j / k = 3 \pi / 2\) for some \(j \in \{0, 1, \ldots, k - 1\}\), or equivalently \(k\) is a multiple of \(4\). In this case 0 is an eigenvalue of multiplicity 2, corresponding to \(j = k / 4\) and \(j = 3 k / 4\). An eigenfunction \(g\) corresponding to \(0\) is given by \(g(2 x) = 0\) for \(x \in \{1, \ldots, k / 2\}\) and \(g(2 x + 1) = (-1)^x\) for \(x \in \{0, 1, \ldots, k / 2 - 1\}\). So a probability density function on \(S\) is uniquely determined by the reliability function except when \(k = 4 j\) for some \(j \in \N_+\). Again the case \(k = 4\) is exceptional (and trivial). In this case if \(P\) is a probability measure on the associated \(\sigma\)-algebra \(\ms A\) with reliability function \(F\), then \(F(1) = P\{2, 4\}\) and \(F(2) = P\{1, 3\}\).

Since the cycle is regular (of degree 2), the walk distributions are all uniform on \(S\).
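The eigenvalue criterion can be confirmed numerically over a range of cycle sizes. A numpy sketch:

```python
import numpy as np

# The eigenvalues of the cycle on k vertices are 2 cos(2 pi j / k),
# j = 0, ..., k - 1, so 0 occurs among them exactly when k is a
# multiple of 4.
for k in range(3, 13):
    A = np.zeros((k, k))
    for x in range(k):
        A[x, (x + 1) % k] = 1
        A[x, (x - 1) % k] = 1
    eigs = np.linalg.eigvalsh(A)
    has_zero = bool(np.any(np.abs(eigs) < 1e-9))
    assert has_zero == (k % 4 == 0)
```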

The Wheel Graph

For \(k \in \{3, 4, \ldots\}\), let \((S, \lfrta)\) denote the wheel graph of order \(k\) as defined in Section 1 and shown below. The wheel is the union of the star in and the cycle in .

  1. If \(k = 4\) then \(\ms A = \sigma\{\{0\}, \{1, 3\}\} = \{\emptyset, \{0\}, \{1, 3\}, \{2, 4\}, \{0, 1, 3\}, \{0, 2, 4\}, \{1, 2, 3, 4\}, S\}\).
  2. If \(k \ne 4\) then \(\ms A = \ms P(S)\).
  3. The graph is stochastic except when \(k = 4 j\) with \(j \in \{2, 3, \ldots\}\).
Details:

Parts (a) and (b) were shown in Section 1. For part (c), the eigenvalues of the wheel are \(1 \pm \sqrt{k + 1}\) together with \(2 \cos(2 j \pi / k)\) for \(j \in \{1, 2, \ldots, k - 1\}\). So \(0\) is an eigenvalue if and only if \(2 j \pi / k = \pi / 2\) or \(2 j \pi / k = 3 \pi / 2\) for some \(j \in \{1, 2, \ldots, k - 1\}\), or equivalently \(k\) is a multiple of \(4\). In this case 0 is an eigenvalue of multiplicity 2, corresponding to \(j = k / 4\) and \(j = 3 k / 4\). An eigenfunction \(g\) corresponding to \(0\) is given by \(g(2 x) = 0\) for \(x \in \{0, 1, \ldots, k / 2\}\) and \(g(2 x + 1) = (-1)^x\) for \(x \in \{0, 1, \ldots, k / 2 - 1\}\). As in , the case \(k = 4\) is exceptional. In this case if \(P\) is a probability measure on the associated \(\sigma\)-algebra \(\ms A = \sigma\{\{0\}, \{1, 3\}, \{2, 4\}\}\) with reliability function \(F\), then \(P\{0\} = 1 - F(0)\), \(P\{1, 3\} = F(2) + F(0) - 1\), and \(P\{2, 4\} = F(1) + F(0) - 1\).

For the wheel of order \(k \in \{3, 4, \ldots\}\),

  1. Find the density function \(f_n\) of the walk distribution of order \(n \in \N\).
  2. Find \(\lim_{n \to \infty} f_n\).
Details:
  1. For \(n \in \N\), using the walk functions of the wheel graph in Section 1, \begin{align*} f_n(0) & = \frac{\left(1 / 2 + 1 / \sqrt{k + 1}\right) \left(1 + \sqrt{k + 1}\right)^{n - 1} + \left(1 / 2 - 1 / \sqrt{k + 1}\right) \left(1 - \sqrt{k + 1}\right)^{n - 1}}{\left(1 / 2 + 1 / \sqrt{k + 1}\right) \left(1 + \sqrt{k + 1}\right)^{n - 1}\left(2 + \sqrt{k + 1}\right) + \left(1 / 2 - 1 / \sqrt{k + 1}\right) \left(1 - \sqrt{k + 1}\right)^{n - 1}\left(2 - \sqrt{k + 1}\right)} \\ f_n(x) & = \frac{\left(1 / 2 + 1 / \sqrt{k + 1}\right) \left(1 + \sqrt{k + 1}\right)^n + \left(1 / 2 - 1 / \sqrt{k + 1}\right) \left(1 - \sqrt{k + 1}\right)^n}{k \left[\left(1 / 2 + 1 / \sqrt{k + 1}\right) \left(1 + \sqrt{k + 1}\right)^{n - 1}\left(2 + \sqrt{k + 1}\right) + \left(1 / 2 - 1 / \sqrt{k + 1}\right) \left(1 - \sqrt{k + 1}\right)^{n - 1}\left(2 - \sqrt{k + 1}\right)\right]}, \quad x \in \{1, 2, \ldots, k\} \end{align*}
  2. Taking limits in (a) gives \begin{align*} \lim_{n \to \infty} f_n(0) & = \frac{1}{2 + \sqrt{k + 1}}\\ \lim_{n \to \infty} f_n(x) & = \frac{1 + \sqrt{k + 1}}{k \left(2 + \sqrt{k + 1}\right)}, \quad x \in \{1, 2, \ldots, k\} \end{align*}
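The limits can be checked numerically for a particular order; a sketch with \(k = 5\):

```python
import numpy as np

# Limiting walk distribution for the wheel: hub 0 joined to every rim
# vertex; rim vertices 1, ..., k joined in a cycle. We take k = 5.
k = 5
A = np.zeros((k + 1, k + 1))
A[0, 1:] = 1
A[1:, 0] = 1
for x in range(1, k + 1):
    y = x % k + 1              # next rim vertex, with k + 1 wrapping to 1
    A[x, y] = 1
    A[y, x] = 1
w = np.ones(k + 1)
for _ in range(100):
    w = A @ w
    w = w / w.sum()
s = np.sqrt(k + 1)
assert np.allclose(w[0], 1 / (2 + s))
assert np.allclose(w[1:], (1 + s) / (k * (2 + s)))
```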