Our starting point is a measurable space \((S, \ms S)\). Recall that \(S^n\) is given the product \(\sigma\)-algebra \(\ms S^n\) for \(n \in \N_+\), and in the discrete case, when \(S\) is countable, \(\ms S = \ms P(S)\) the collection of all subsets of \(S\). Graphs with base set \(S\) are assumed to be measurable. In this section we are interested in basic probability on the space that relates to the structure of a graph. We assume that we have a probability space \((\Omega, \ms F, \P)\) in the background and that \(X\) is a random variable in \(S\).
The (right) reliability function \(F\) of \(X\) for the graph \((S, \rta)\) is the measurable function defined by \[F(x) = \P(x \rta X), \quad x \in S \]
By definition, \((x, y) \mapsto \bs{1}\{(x, y) \in S^2: x \rta y\}\) is measurable as a function from \(S^2\) to \(\{0, 1\}\). Hence \((x, \omega) \mapsto \bs{1}\{(x, \omega): x \rta X(\omega)\}\) is measurable as a function from \(S \times \Omega\) to \(\{0, 1\}\). In turn, this implies that \(x \mapsto \P(x \rta X)\) is measurable as a function from \(S\) to \([0, 1]\).
The term reliability function is used informally, since at this level of abstraction we are far removed from classical reliability theory. However, for a partial order graph \((S, \preceq)\) (our most important special case), the reliability function \(F\) has the more familiar form \[F(x) = \P(X \succeq x), \quad x \in S\] and in particular for the standard continuous graph \(([0, \infty), \le)\), the reliability function is the classical one: \[F(t) = \P(X \ge t), \quad t \in [0, \infty)\] In general, if it helps, you can think of the event \(\{x \rta X\}\) as success for random variable \(X\) at \(x \in S\), just as survival up to time \(t\) is success at \(t \in [0, \infty)\) for a random lifetime \(X\) in the classical setting. With this interpretation, \(F(x)\) is the probability of success at \(x \in S\). We usually assume that the graph supports \(X\) so that \(F(x) \gt 0\) for \(x \in S\).
Suppose that \(X\) and \(Y\) are random variables in \(S\) with reliability functions \(F\) and \(G\) for \((S, \rta)\), respectively. Then relative to \((S, \rta)\),
For a partial order graph \((S, \preceq)\), dominance reduces to the usual definition of (first order) stochastic dominance: \(\P(X \succeq x) \ge \P(Y \succeq x)\) for all \(x \in S\). Clearly equivalence relative to \((S, \rta)\) really does define an equivalence relation on the collection of probability distributions on \(S\), and then dominance can be extended to a partial order on the equivalence classes. Our next result generalizes a theorem by Goldie and Resnick.
Suppose that the graph \((S, \rta)\) is anti-symmetric and that \(X\) is a random variable in \(S\) with reliability function \(F\). Let \(S_0 = \{x \in S: F(x) \lt 1\}\). Then \(\P(X \in S_0) = 1\).
Let \(C = S_0^c\) and let \(P\) denote the probability distribution of \(X\). For \(x \in C\), \(F(x) = \P(x \rta X) = 1\) so \(\P(\{X \in C\} \cap \{x \rta X\}) = P(C)\). Hence \[\int_C \P(\{X \in C\} \cap \{x \rta X\}) \, dP(x) = \int_C P(C) \, dP(x) = [P(C)]^2\] Equivalently \[\int_C \int_C L(x, y) \, dP(y) dP(x) = [P(C)]^2\] where as usual, \(L\) is the adjacency kernel of the graph. By Fubini's theorem and a change of variables, \[\int_C \int_C L(y, x) \, dP(y) dP(x) = [P(C)]^2\] Adding the last two equations we have \[2 [P(C)]^2 = \int _C \int_C [L(x, y) + L(y, x)] \, dP(y) dP(x)\] Since \(\rta\) is anti-symmetric, almost no \((x, y) \in S^2\) satisfies \(x \rta y\) and \(y \rta x\), so \(L(x, y) + L(y, x) \le 1\) for almost all \((x, y) \in S^2\). Hence we have \(2 [P(C)]^2 \le [P(C)]^2\) or \([P(C)]^2 \le 0\). Hence \(P(C) = 0\).
Suppose that \((S, \preceq)\) is a discrete, uniform partial order graph, and let \(\upa\) denote the covering relation. As above, let \(F\) denote the reliability function of \(X\) for \((S, \preceq)\), and for \(n \in \N\), let \(G_n\) denote the reliability function of \(X\) for \((S, \upa^n)\). Then \[F = \sum_{n=0}^\infty G_n\]
Recall that \(\upa^n\) is the \(n\)-fold composition power of \(\upa\) for \(n \in \N\), and in particular, \(\upa^0\) is simply the equality relation \(=\). Since \((S, \preceq)\) is uniform, \(\preceq\) is the disjoint union of \(\upa^n\) over \(n \in \N\) (as sets of ordered pairs). Hence \[F(x) = \P(x \preceq X) = \sum_{n=0}^\infty \P(x \upa^n X) = \sum_{n=0}^\infty G_n(x), \quad x \in S\]
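For the standard discrete graph \((\N, \le)\), the covering relation is \(x \upa x + 1\), so \(G_n(x) = \P(X = x + n)\), and the decomposition can be checked numerically. A minimal sketch, assuming counting measure and a geometric density as the example distribution (truncated to \(\{0, \ldots, 39\}\)):

```python
import numpy as np

# On (N, <=) the covering relation is x ^ (x + 1), so x ^^n y iff y = x + n,
# and G_n(x) = P(X = x + n).  Assumed example: a geometric density for X.
p = 0.3
x = np.arange(40)
f = (1 - p) * p**x                                  # density of X
F = np.array([f[k:].sum() for k in range(40)])      # F(x) = P(X >= x)

# G_n(x) = f(x + n) (zero beyond the truncation); summing over n recovers F
G = [np.concatenate([f[n:], np.zeros(n)]) for n in range(40)]
G_sum = np.sum(G, axis=0)
print(np.allclose(G_sum, F))                        # True: F = sum_n G_n
```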
Recall that the (right) \(\sigma\)-algebra associated with the graph \((S, \rta)\) is the \(\sigma\)-algebra generated by the collection of (right) neighbor sets: \(\ms {A} = \sigma\{A_x: x \in S\}\) where \(A_x = \{y \in S: x \rta y\}\) for \(x \in S\). So by definition, the reliability function \(F\) of a probability distribution \(P\) on \(\ms A\) is simply the function that assigns probabilities to the right neighbor sets: \(F(x) = P(A_x)\) for \(x \in S\), and this is the reason for our interest in \(\ms A\) in the first place. In the general setting that we have here, \(F\) may or may not uniquely determine \(P\), so here is the appropriate definition:
The graph \((S, \rta)\) is (right) stochastic if the reliability function of a probability measure \(P\) on the associated \(\sigma\)-algebra \(\ms A\) uniquely determines \(P\). That is, if \(P\) and \(Q\) are probability measures on \(\ms A\) with the same reliability function \(F\) then \(P = Q\).
This really is a graph-theoretic concept, since the associated \(\sigma\)-algebra \(\ms A\) and the reliability function \(F\) depend only on the graph. However, usually the reference \(\sigma\)-algebra \(\ms S\) is of primary interest. When \(\ms A\) is a proper subset of \(\ms S\), then in general there is little chance that the reliability function \(F\) of a probability measure \(P\) on \(\ms S\) uniquely determines \(P\). In the general setting, the existence problem is also quite complicated. That is, what conditions on a function \(F: S \to [0, 1]\) imply that \(F\) is the reliability function of a probability measure \(P\) on the associated \(\sigma\)-algebra \(\ms A\)? We will consider the existence and uniqueness problems for some special classes of graphs.
Suppose that \((S, \rta)\) is a graph and that the collection of neighbor sets \(\{A_x: x \in S\}\) is closed under intersection and that there exists a countable \(C \subseteq S\) such that \(\bigcup_{x \in C} A_x = S\). Then \((S, \rta)\) is stochastic.
This is simply Dynkin's \((\pi, \lambda)\) theorem rephrased in our setting.
Suppose that \((S, \preceq)\) is a discrete partial order graph that is also an upper semi-lattice. Then \((S, \preceq)\) is stochastic.
Suppose now that \(\lambda\) is a fixed, \(\sigma\)-finite reference measure for \((S, \ms S)\). Probability density functions of random variables are with respect to \(\lambda\). As usual, in the discrete case, the reference measure is counting measure \(\#\) on \(\ms S = \ms P(S)\). Here is our first very simple result.
Suppose that \(X\) has density function \(f\) and has reliability function \(F\) for \((S, \rta)\). Then \(F = L f\) where \(L\) is the adjacency kernel of \((S, \rta)\).
Note that \[F(x) = \P(x \rta X) = \int_{x \rta y} f(y) \, d\lambda(y) = L f(x), \quad x \in S\]
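On a finite graph with counting measure, \(F = L f\) is just a matrix-vector product. A minimal sketch, assuming a hypothetical directed graph on three vertices and an arbitrary density:

```python
import numpy as np

# Hypothetical directed graph on S = {0, 1, 2}: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0.
# L is the adjacency kernel (here, the adjacency matrix).
L = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)

f = np.array([0.5, 0.3, 0.2])   # assumed probability density of X

# Reliability function: F(x) = P(x -> X) = sum over {y : x -> y} of f(y),
# which is exactly the matrix-vector product L f.
F = L @ f
print(F)   # F(0) = f(1) + f(2), F(1) = f(2), F(2) = f(0)
```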
So if \(L\), operating on the right on the space \(\ms L_1\), has a left inverse then the reliability function determines the density function. However, that condition is not necessary. Here is a more complete result.
Suppose again that \((S, \rta)\) is a graph with adjacency kernel \(L\). Then the following statements are equivalent:
We prove that (b) implies (a) and that the negation of (b) implies the negation of (a). So suppose first that (b) holds and that \(f\) and \(g\) are probability density functions with the same reliability function \(F\), so that \(L f = L g\). Then \(f - g \in \ms L_1\) with \(L(f - g) = 0\) and \(\int_S (f - g) \, d \lambda = 0\). Hence \(f - g = 0\) almost everywhere and so \(f = g\) almost everywhere. Conversely, suppose that (b) does not hold, so that there exists \(h \in \ms L_1\) with \(L h = 0\), \(\int_S h \, d \lambda = 0\), and with \(h\) nonzero on a set of positive measure. Then \(\|h\|_1 \gt 0\) and hence \(f = |h| / \|h\|_1\) is a probability density function on \(S\). Moreover, \(f \ge h / \|h\|_1\) and strict inequality must hold on a set of positive measure since \(\int_S h \, d\lambda = 0\) (\(h\) must be positive on a set of positive measure and negative on a set of positive measure). Hence \(g := f - h / \|h\|_1\) differs from \(f\) on a set of positive measure. Note that \(g \ge 0\) and \[\int_S g \, d \lambda = \int_S f \, d \lambda - \frac{1}{\|h\|_1} \int_S h \, d\lambda = 1 - 0 = 1\] so \(g\) is a probability density function. Moreover, \[L g = L f - \frac{1}{\|h\|_1} L h = L f - 0 = L f\] Hence (a) does not hold.
For a finite graph, \(L\) is the adjacency matrix and the result can be restated very simply in terms of the spectrum of the graph.
Suppose that \((S, \rta)\) is a finite graph with adjacency matrix \(L\). Then a reliability function on \((S, \rta)\) uniquely determines the density function except when \(0\) is a (right) eigenvalue of \(L\) and there is a corresponding eigenfunction that sums to \(0\).
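For a small finite graph, the exceptional condition can be tested numerically: uniqueness fails exactly when \(L h = 0\) has a nonzero solution \(h\) with \(\sum_x h(x) = 0\). A sketch, assuming counting measure, with the cycle graphs on 4 and 5 vertices as test cases:

```python
import numpy as np

# Uniqueness test for a finite graph: the reliability function fails to
# determine the density exactly when L h = 0 has a nonzero solution h that
# sums to 0, i.e. 0 is an eigenvalue with an eigenfunction summing to 0.
def reliability_determines_density(L, tol=1e-10):
    L = np.asarray(L, dtype=float)
    n = L.shape[0]
    # Solutions of L h = 0 with 1^T h = 0 form the null space of L stacked
    # with the all-ones row; a nonzero solution exists iff rank < n.
    A = np.vstack([L, np.ones(n)])
    return np.linalg.matrix_rank(A, tol=tol) == n

# Cycle on 4 vertices: 0 is an eigenvalue with eigenvector (1, 0, -1, 0),
# which sums to 0, so uniqueness fails.
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)
print(reliability_determines_density(C4))   # False

# Cycle on 5 vertices: 0 is not an eigenvalue, so uniqueness holds.
C5 = np.array([[0, 1, 0, 0, 1],
               [1, 0, 1, 0, 0],
               [0, 1, 0, 1, 0],
               [0, 0, 1, 0, 1],
               [1, 0, 0, 1, 0]], dtype=float)
print(reliability_determines_density(C5))   # True
```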
Examples that illustrate these results are given below. In the case of a discrete, partial order graph \((S, \preceq)\), the probability density function can sometimes be recovered from the reliability function via Möbius inversion as studied in Section 2. Recall that the Möbius kernel \(M\) is the inverse of the adjacency kernel \(L\).
Suppose that \((S, \preceq)\) is a discrete, left finite partial order graph and that \(f\) is a probability density function on \(S\) with reliability function \(F\) for \((S, \preceq)\). If \(\sum_{x \in S} F(x) \lt \infty\) then \(f = M F\) where \(M\) is the Möbius kernel of \((S, \preceq)\). That is, \[f(x) = \sum_{x \preceq y} M(x, y) F(y), \quad x \in S\]
This follows from the Möbius inversion formula in Section 2.
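For the standard discrete graph \((\N, \le)\), the Möbius kernel is \(M(x, x) = 1\), \(M(x, x + 1) = -1\), and \(0\) otherwise, so the inversion reduces to \(f(x) = F(x) - F(x + 1)\). A quick numerical check, assuming a geometric density as the example:

```python
import numpy as np

# Mobius inversion on (N, <=): M(x, x) = 1, M(x, x + 1) = -1, 0 otherwise,
# so f(x) = F(x) - F(x + 1).  Assumed example: a geometric density for X.
p = 0.4
x = np.arange(50)
f = (1 - p) * p**x
F = np.array([f[k:].sum() for k in range(50)])   # F(x) = P(X >= x)

f_recovered = F[:-1] - F[1:]                     # f = M F
print(np.allclose(f_recovered, f[:-1]))          # True
```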
Suppose that \(f\) and \(g\) are probability density functions that have reliability functions \(F\) and \(G\) for \((S, \rta)\) respectively. If \(p \in (0, 1)\) then the density function \(h = p f + (1 - p) g\) has reliability function \(H = p F + (1 - p) G\) for \((S, \rta)\).
Let \(L\) denote the adjacency kernel of \((S, \rta)\) as before. Then \[H = L h = p L f + (1 - p) L g = p F + (1 - p) G\]
Suppose again \((S, \rta)\) is a measurable graph relative to the \(\sigma\)-finite measure space \((S, \ms S, \lambda)\) and that \(X\) is a random variable in \(S\) with density function \(f\) and the reliability function \(F\). We assume that \(F(x) \gt 0\) for \(x \in S\). There is a natural definition for the rate function of \(X\).
The (right) rate function \(r\) of \(X\) for \((S, \rta)\) is defined by \[r(x) = \frac{f(x)}{F(x)}, \quad x \in S\]
As with the term reliability function, the term rate function is also used informally. The motivation comes again from reliability theory, when the graph is a partial order graph. In particular, for the standard continuous graph \(([0, \infty), \le)\), the rate function \(r\) is the failure rate function. The density function of \(X\) is determined for almost all \(x \in S\), and so the same is true of the rate function. Trivially, the reliability function along with the rate function determine the distribution of \(X\), but the rate function on its own may or may not determine the distribution of \(X\). For the following definition, recall that the (left) walk function \(u_n\) of order \(n \in \N_+\) is given by \[u_n(x) = \lambda^n\{(x_1, x_2, \ldots, x_n): x_1 \rta x_2 \rta \cdots \rta x_n \rta x\}, \quad x \in S\] so that \(u_n(x)\) is the measure of the set of walks of length \(n\) terminating in \(x\).
Suppose that \(X\) has rate function \(r\) for the graph \((S, \rta)\) and that \(n \in \N_+\).
Note that if \(u_n(x) = 0\) for some \(x \in S\) and \(n \in \N_+\) then \(R_n(x) = 0\) since \(u_n(x)\) is the measure of the domain of integration in the definition of \(R_n(x)\). We will show later that \(R_n(x)\) is finite for almost all \(x \in S\), and so by a suitable choice of the density function \(f\), we can assume that \(R_n(x) \lt \infty\) for all \(x \in S\). Hence the definition of the average rate function in (b) makes sense. As before, the terms cumulative rate function and average rate function are used informally. For the standard graph \(([0, \infty), \le)\), the cumulative rate function \(R_1\) is the ordinary cumulative failure rate function and the average rate function \(\bar r_1\) is the average failure rate function. Aside from generalizing functions in classical reliability, the cumulative rate functions will turn out to be very useful in Section 4 on random walks. The following proposition gives a recursive method for computing the cumulative rate functions.
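For the standard continuous graph \(([0, \infty), \le)\), the exponential distribution gives the simplest check of these definitions: the rate is the constant \(a\) and the cumulative rate is \(R_1(x) = a x\), the classical cumulative hazard. A sketch, assuming the exponential density as the example:

```python
import numpy as np

# Standard continuous graph ([0, inf), <=): the rate function is the classical
# failure rate f / F.  Assumed example: exponential density f(t) = a exp(-a t),
# for which F(t) = exp(-a t), r(t) = a, and R_1(x) = integral of r on [0, x] = a x.
a = 2.0
t = np.linspace(0, 5, 1001)
f = a * np.exp(-a * t)
F = np.exp(-a * t)
r = f / F

# Cumulative rate by the trapezoid rule (exact here since r is constant)
R1 = np.concatenate([[0], np.cumsum((r[1:] + r[:-1]) / 2 * np.diff(t))])

print(np.allclose(r, a))                    # constant rate a
print(np.allclose(R1, a * t, atol=1e-6))    # cumulative rate a x
```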
In the setting of the definition above, define \(R_0(x) = 1\) for \(x \in S\). Then for \(n \in \N\), \[R_{n + 1}(x) = \int_{t \rta x} R_n(t) r(t) \, d\lambda(t), \quad x \in S\]
By Fubini's theorem, we can write the multiple integral as an iterated integral: \begin{align*} R_{n + 1}(x) & = \int_{x_1 \rta \cdots \rta x_n \rta t \rta x} r(x_1) \cdots r(x_n) r(t) \, d\lambda^{n + 1}(x_1, \ldots, x_n, t) \\ & = \int_{t \rta x} \left[\int_{x_1 \rta \cdots \rta x_n \rta t} r(x_1) \cdots r(x_n) \, d\lambda^n(x_1, \ldots x_n)\right] r(t) \, d\lambda(t) \\ &= \int_{t \rta x} R_n(t) r(t) \, d\lambda(t), \quad x \in S \end{align*}
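With counting measure on a finite graph, the recursion is a matrix-vector iteration: \(R_{n+1}(x) = \sum_{t \rta x} R_n(t) r(t)\). A sketch, assuming a hypothetical directed graph on three vertices and an arbitrary density:

```python
import numpy as np

# Cumulative rate functions on a small directed graph with counting measure,
# computed by the recursion R_{n+1}(x) = sum over {t : t -> x} of R_n(t) r(t).
L = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)   # hypothetical adjacency matrix
f = np.array([0.5, 0.3, 0.2])            # assumed density of X
F = L @ f                                 # reliability function (positive here)
r = f / F                                 # rate function

R = np.ones(3)                            # R_0 = 1
for n in range(1, 4):
    R = L.T @ (R * r)                     # R_n(x) = sum_{t -> x} R_{n-1}(t) r(t)
    print(n, R)
```

The update `L.T @ (R * r)` is the left adjacency operator applied to \(r R_n\), matching the operator form \(R_{n+1} = (r R_n) L\) noted below.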
In terms of the adjacency operator \(L\) note that \(R_{n+1} = (r R_n) L\). We are particularly interested in the case where the rate function or the average rate function is constant; these will be studied in more detail in Section 5. If the relation is a partial order, the concepts of increasing and decreasing rate, and increasing and decreasing average rate make sense. For the latter, we state the definition so that we avoid division by 0 problems. Once again, \(u_n\) is the left walk function of order \(n \in \N\).
Suppose that \((S, \preceq)\) is a partial order graph that supports \(X\), and that \(X\) has rate function \(r\) and cumulative rate function \(R_n\) of order \(n \in \N_+\) for \((S, \preceq)\).
Suppose now that \((S, \rta)\) is a transitive graph. Let \(x \in S\) and consider the conditional distribution of \(X\) given \(x \rta X\) for the subgraph \((A_x, \rta)\), where \(A_x = \{y \in S: x \rta y\}\) is the set of right neighbors of \(x\).
In particular, the proposition above holds for a partial order graph \((S, \preceq)\).
Suppose again \((S, \rta)\) is a measurable graph relative to the \(\sigma\)-finite measure space \((S, \ms S, \lambda)\) and that \(X\) is a random variable in \(S\). Natural moments of \(X\) can be found by integrating the left operator associated with the adjacency kernel \(L\) by the right reliability function \(F\).
If \(g: S \to [0, \infty)\) is measurable then \[\int_S (g L^n)(x) F(x) \, d\lambda(x) = \E[(g L^{n+1})(X)], \quad n \in \N\] where we interpret \(L^0\) as the identity operator, so that \(g L^0 = g\).
The main tool once again is Fubini's theorem. \begin{align*} \int_S (g L^n)(x) F(x) \, d\lambda(x) &= \int_S (g L^n)(x) \P(x \rta X) \, d\lambda(x) \\ &= \int_S (g L^n)(x) \E[\bs 1(x \rta X)] \, d\lambda(x) \\ &= \E\left[\int_{x \rta X} (g L^n)(x) \, d\lambda(x)\right] = \E[(g L^{n+1})(X)] \end{align*}
The result also holds for measurable \(g: S \to \R\), assuming the appropriate integrability. If \(X\) has density \(f\), the moment result is a simple consequence of the self-adjoint property: \[\langle g L^n, F \rangle = \langle g L^n, L f \rangle = \langle g L^{n+1}, f \rangle = \E[(g L^{n+1})(X)]\] For the following corollary, recall that \(u_n = \bs 1 L^n\) is the walk function of order \(n \in \N\) for the graph \((S, \rta)\) so that \(u_0(x) = 1\) for \(x \in S\), and for \(n \in \N_+\), \[u_n(x) = \lambda^n\{(x_1, x_2, \ldots, x_n) \in S^n: x_1 \rta \cdots \rta x_n \rta x\}, \quad x \in S\] the measure of the set of walks of length \(n\) that terminate in \(x\).
For \(n \in \N\), \[\int_S u_n(x) F(x) \, d\lambda(x) = \E[u_{n + 1}(X)]\] and in particular, \[\int_S F(x) \, d\lambda(x) = \E[u(X)] = \E[\lambda\{x \in S: x \rta X\}] \]
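On a finite graph with counting measure, the moment identity can be verified directly, since \(u_{n+1}(x) = \sum_{t \rta x} u_n(t)\). A sketch, again assuming a hypothetical three-vertex directed graph and an arbitrary density:

```python
import numpy as np

# Check of the moment identity  sum_x u_n(x) F(x) = E[u_{n+1}(X)]  on a small
# directed graph with counting measure (hypothetical example).
L = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
f = np.array([0.5, 0.3, 0.2])    # assumed density of X
F = L @ f                         # reliability function

u = np.ones(3)                    # u_0 = 1
for n in range(4):
    u_next = L.T @ u              # u_{n+1}(x) = sum over {t : t -> x} of u_n(t)
    lhs = np.dot(u, F)            # integral of u_n F
    rhs = np.dot(u_next, f)       # E[u_{n+1}(X)]
    print(n, lhs, rhs)            # the two columns agree
    u = u_next
```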
Note that the condition for recovering the probability density function from the reliability function via Möbius inversion in the proposition above can be restated as \(\E[u(X)] \lt \infty\), by the moment result above.
Our next result involves the left generating function \(U\) of the graph \((S, \rta)\). Recall that \[U(x, t) = \sum_{n = 0}^\infty u_n(x) t^n, \quad x \in S, \, |t| \lt \rho(x)\] where \(\rho(x)\) is the radius of convergence of the power series for \(x \in S\). We can use this to define a generating function for the random variable \(X\).
The (left) moment generating function \(M\) of \(X\) with respect to \((S, \rta)\) is given by \[M(t) = \E[U(X, t)] = \sum_{n = 0}^\infty \E[u_n(X)] t^n, \quad |t| \lt \rho\] where \(\rho\) is the radius of convergence of the power series.
Note that \(M\) is the ordinary generating function of the sequence of moments \((\E[u_n(X)]: n \in \N)\), but of course in this abstract setting, these moments do not in general determine the distribution of \(X\). Nonetheless they are interesting since \(\E[u_n(X)]\) is the expected measure of the walks of length \(n\) that terminate in \(X\). We can get the generating function of \(X\) by integrating the (left) generating function of the graph by the (right) reliability function of \(X\) for the graph:
Suppose that \(X\) has generating function \(M\) and reliability function \(F\) for the graph \((S, \rta)\). Then \[M(t) = 1 + t \int_S U(x, t) F(x) \, d\lambda(x), \quad |t| \lt \rho\]
Let \(|t| \lt \rho\). Using Fubini's theorem and the moment result above, \begin{align*} 1 + t \int_S U(x, t) F(x) \, d\lambda(x) &= 1 + t \int_S \left[\sum_{n = 0}^\infty u_n(x) t^n\right] F(x) \, d\lambda(x) \\ &= 1 + t \sum_{n=0}^\infty t^n \left[\int_S u_n(x) F(x) \, d\lambda(x)\right] = 1 + t \sum_{n = 0}^\infty t^n \E[u_{n + 1}(X)] \\ &= 1 + \sum_{n = 0}^\infty t^{n + 1} \E[u_{n + 1}(X)] = \E \left[\sum_{k = 0}^\infty t^k u_k(X)\right] = \E[U(X, t)] = M(t) \end{align*}
The following definition and theorem concern entropy. At this point, there is no connection to the underlying relation \(\rta\), but there soon will be.
Suppose that \(X\) is a random variable in \(S\) with density function \(f\). The entropy of \(X\) is \[H(X) = -\E(\ln[f(X)]) = -\int_S f(x) \ln[f(x)] d \lambda(x)\]
Suppose that \(X\) and \(Y\) are random variables in \(S\) with density functions \(f\) and \(g\) respectively. Then \[H(Y) = -\int_S g(x) \ln[g(x)] d\lambda(x) \leq -\int_S g(x) \ln[f(x)] d \lambda(x)\] with equality if and only if \(f(x) = g(x)\) almost everywhere (so that \(X\) and \(Y\) have the same distribution).
Note first that \(\ln t \le t - 1\) for \(t > 0\), so \(-\ln t \ge 1 - t\) for \(t > 0\), with equality only at \(t = 1\). Hence, \[-\ln\left(\frac{f(x)}{g(x)}\right) = -\ln[f(x)] + \ln[g(x)] \ge 1 - \frac{f(x)}{g(x)}\] for \(x \in S\) with \(g(x) \gt 0\). Multiplying by \(g(x)\) gives \[-g(x) \ln[f(x)] + g(x) \ln[g(x)] \ge g(x) - f(x), \quad x \in S\] The inequality is trivially true if \(g(x) = 0\) and hence holds on all of \(S\). Therefore \begin{align*} -\int_S g(x) \ln[f(x)] d \lambda(x) + \int_S g(x) \ln[g(x)] d \lambda(x) &\ge \int_S g(x) d \lambda(x) - \int_S f(x) d \lambda(x) \\ &= 1 - 1 = 0 \end{align*} Equality holds if and only if \(f(x) = g(x)\) almost everywhere.
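The inequality in the theorem is Gibbs' inequality, and the gap between the two sides is the Kullback-Leibler divergence. A quick numerical illustration on a finite set, with two arbitrary densities as assumptions:

```python
import numpy as np

# Numerical illustration of the entropy inequality (Gibbs' inequality):
# -sum g ln g  <=  -sum g ln f, with equality iff f = g.
rng = np.random.default_rng(0)
f = rng.random(6); f /= f.sum()       # two arbitrary densities on a 6-point set
g = rng.random(6); g /= g.sum()

H_g = -np.sum(g * np.log(g))          # entropy of the distribution with density g
cross = -np.sum(g * np.log(f))        # cross term with density f
print(H_g <= cross)                   # True
print(np.isclose(cross - H_g, np.sum(g * np.log(g / f))))  # gap = KL divergence
```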
Suppose again \((S, \rta)\) is a measurable graph relative to the \(\sigma\)-finite measure space \((S, \ms S, \lambda)\). Let \(v_n\) denote the right walk function of order \(n \in \N\), so that \(v_0(x) = 1\) for \(x \in S\), and for \(n \in \N_+\), \(v_n(x) = \lambda^n\{(x_1, x_2, \ldots, x_n) \in S^n: x \rta x_1 \rta \cdots \rta x_n\}\) is the measure of the set of walks of length \(n\) starting in \(x \in S\). Let \(w_n\) denote the walk parameter of order \(n \in \N\) so that \(w_0 = \lambda(S)\) and for \(n \in \N_+\), \(w_n = \lambda^{n + 1}\{(x_1, x_2, \ldots, x_{n + 1}) \in S^{n + 1}: x_1 \rta x_2 \rta \cdots \rta x_{n + 1}\}\) is the measure of the set of all walks of length \(n\).
Suppose that \((S, \rta)\) is a finite graph so that \(\lambda(S) \in (0, \infty)\). Let \(f_n\) be the probability density function on \(S\) given by \(f_n(x) = v_n(x) / w_n\) for \(x \in S\). The corresponding distribution is the right walk distribution of order \(n\).
From Section 1, \(w_n\) is the normalizing constant for the function \(v_n\), and is finite since the graph is finite: \[w_n = \int_S v_n(x) \, d\lambda(x) \le [\lambda(S)]^{n + 1}\] So the density function \(f_n\) makes sense.
Note that the right walk distribution of order \(n = 0\) is the uniform distribution on \(S\), since \(v_0(x) = 1\) for \(x \in S\) and \(w_0 = \lambda(S)\). When \(n = 1\) we have \(v_1(x) = \lambda\{y \in S: x \rta y\}\), the measure of the set of right neighbors of \(x \in S\), and \(w_1 = \lambda^2\{(x, y) \in S^2: x \rta y\}\), the measure of the set of edges in the graph, or put another way, the measure of \(\rta\) as a set of ordered pairs.
Suppose again that \((S, \rta)\) is a finite graph and that \(f_n\) is the density function of the right walk distribution of order \(n \in \N\).
From Section 1, the reliability function \(F_n\) is given by \[F_n(x) = \int_{x \rta y} f_n(y) \, d\lambda(y) = \frac{1}{w_n} \int_{x \rta y} v_n(y) \, d\lambda(y) = \frac{1}{w_n} v_{n + 1}(x), \quad x \in S\] and then the rate function \(r_n\) is given by \(r_n(x) = f_n(x) / F_n(x) = v_n(x) / v_{n + 1}(x)\) for \(x \in S\).
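The walk densities, reliability functions, and rate functions can be computed iteratively on any finite graph from \(v_{n+1}(x) = \sum_{x \rta y} v_n(y)\). A sketch, assuming counting measure and a hypothetical three-vertex directed graph:

```python
import numpy as np

# Right walk distributions on a finite graph: f_n = v_n / w_n, where
# v_{n+1}(x) = sum over {y : x -> y} of v_n(y) and w_n = sum_x v_n(x).
L = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)   # hypothetical adjacency matrix

v = np.ones(3)                            # v_0 = 1
for n in range(4):
    w = v.sum()                           # normalizing constant w_n
    f_n = v / w                           # right walk density of order n
    v_next = L @ v                        # v_{n+1}(x) = sum_{x -> y} v_n(y)
    F_n = v_next / w                      # reliability function F_n = v_{n+1} / w_n
    r_n = v / v_next                      # rate function r_n = v_n / v_{n+1}
    print(n, f_n, r_n)
    v = v_next
```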
The results above suggest a possible convergence of \(f_n\), \(F_n\), and \(r_n\) as \(n \to \infty\). We will see this in some of the exercises below, and study this further in Section 5.
Suppose again that \((S, \rta)\) is a finite graph that is also right regular of degree \(k \in (0, \infty)\). The right walk distribution of order \(n \in \N\) is uniform on \(S\).
The right walk function of order \(n \in \N\) is \(v_n(x) = k^n\) for \(x \in S\). The normalizing constant is \(w_n = \lambda(S) k^n\).
For finite graphs, the right walk distributions will be important in the study of random walks in Section 4 and constant rate distributions in Section 5.
Recall again that we have an underlying \(\sigma\)-finite measure space \((S, \ms S, \lambda)\), and that graphs with base set \(S\) are assumed to be measurable. In particular, in the discrete case, the reference \(\sigma\)-algebra \(\ms P(S)\) is the collection of all subsets of \(S\), and the reference measure \(\#\) is counting measure.
The standard continuous graph is mentioned in several places above, but here is a summary.
Recall that \([0, \infty)\) is given the collection of Borel measurable subsets \(\ms B\) as the reference \(\sigma\)-algebra and Lebesgue measure \(\lambda\) as the reference measure. With this measure structure, the graph \(([0, \infty), \le)\) is the standard continuous graph.
This is standard stuff in probability theory.
Again, for the standard continuous graph, the reliability function, rate function, cumulative rate functions, and average rate functions have their classical meanings, with rate replaced by failure rate. As a special case of (c), we have the standard result from elementary probability: \[\int_0^\infty \P(X \ge x) \, dx = \E(X)\]
Suppose that random variable \(X\) in \([0, \infty)\) has density function \(f\) given by \[f(x) = \frac{\alpha}{(x + 1)^{\alpha + 1}}, \quad x \in [0, \infty)\] where \(\alpha \in (0, \infty)\) is a parameter. Relative to the standard graph \(([0, \infty), \le)\), find each of the following:
Random variable \(X\) in the exercise above has a version of the Pareto distribution. The rate function is decreasing on \([0, \infty)\), as is the average rate function of order \(n \in \N\). The standard continuous graph is studied in detail in Chapter 3.
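The Pareto claims can be checked by simulation: inverting the distribution function gives \(X = (1 - U)^{-1/\alpha} - 1\) for \(U\) uniform on \((0, 1)\), and for \(\alpha \gt 1\) the sample mean should be close to \(\int_0^\infty F(x) \, dx = 1/(\alpha - 1)\). A Monte Carlo sketch with an assumed value of \(\alpha\):

```python
import numpy as np

# Monte Carlo check for the Pareto-type density f(x) = alpha / (x + 1)^(alpha + 1):
# the reliability function is F(x) = 1 / (x + 1)^alpha, and for alpha > 1,
# E(X) = integral of F = 1 / (alpha - 1).
alpha = 2.5
rng = np.random.default_rng(1)
U = 1 - rng.random(200_000)            # uniform on (0, 1]
X = U ** (-1 / alpha) - 1              # inverse-CDF sampling

print(np.mean(X))                      # approx 1 / (alpha - 1)
print(np.mean(X >= 1.0))               # approx F(1) = 2 ** (-alpha)
```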
Recall that the standard discrete graph is \((\N, \le)\).
Again, this is standard stuff in probability theory.
As a special case of (c) with \(n = 0\), we have another standard result from elementary probability theory: \(\sum_{x = 0}^\infty \P(X \ge x) = \E(X + 1)\). In (d), note that the generating function \(M\) is a close cousin of the probability generating function of \(X\). The standard discrete graph is studied in detail in Chapter 4.
Suppose that random variable \(X\) in \(\N\) has density function \(f\) given by \[f(x) = \frac{1}{(x + 1)^\alpha} - \frac{1}{(x + 2)^\alpha}, \quad x \in \N\] where \(\alpha \in (0, \infty)\) is a parameter. Relative to the standard graph \((\N, \le)\), find each of the following:
Random variable \(X\) in the exercise above has, in a sense, a discrete version of the Pareto distribution in the earlier exercise. The rate function is decreasing on \(\N\).
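A quick numerical check of the discrete case: the sum for \(F\) telescopes to \(F(x) = 1/(x + 1)^\alpha\), so the rate function is \(r(x) = 1 - [(x + 1)/(x + 2)]^\alpha\), which is decreasing. A sketch with an assumed value of \(\alpha\):

```python
import numpy as np

# Discrete Pareto-type density on N: f(x) = (x + 1)^(-alpha) - (x + 2)^(-alpha).
alpha = 1.5
x = np.arange(30)
f = 1 / (x + 1) ** alpha - 1 / (x + 2) ** alpha

F = 1 / (x + 1) ** alpha                 # telescoping sum: F(x) = P(X >= x)
r = f / F                                # rate function

print(np.allclose(r, 1 - ((x + 1) / (x + 2)) ** alpha))  # True
print(np.all(np.diff(r) < 0))                            # True: r is decreasing
```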
The diamond graph \((S, \lfrta)\) and a directed diamond graph \((S, \rta)\) were introduced in Section 1:
For the diamond graph,
For the directed diamond graph,
For the diamond graph,
For the directed diamond graph, find the density function \(f_n\) of the right walk distribution of order \(n \in \N\).
So the right walk distribution of order \(n \in \N\) depends only on the remainder when \(n\) is divided by 3, which is the period of the graph. In particular, the distribution does not converge as \(n \to \infty\).
Consider the bull graph \((S, \lfrta)\) studied in Section 1 and shown below.
For the bull graph,
Recall the triangle graph \((S, \rta)\) defined in Section 1. So \(S = \{1, 2, 3\}\) and \(\rta\) is defined by \(1 \rta 2\), \(2 \rta 1\), \(2 \rta 2\), \(2 \rta 3\), \(3 \rta 1\) and \(3 \rta 3\).
For the triangle graph,
Suppose that \(S\) is a nonempty set and that \((S, \equiv)\) is the complete reflexive graph on \(S\) so that \(x \equiv y\) for all \(x, \, y \in S\).
The only reliability function \(F\) is the constant function 1 and the only probability measure \(P\) on \(\ms A\) is \(P(\emptyset) = 0\), \(P(S) = 1\).
So in the example above, if \(S\) has a reference \(\sigma\)-algebra \(\ms S\) that is nontrivial, then a probability measure \(P\) on \(\ms S\) cannot be determined by \(F\).
Suppose that \(S\) is a nonempty set so that \((S, =)\) is the equality graph on \(S\) and \((S, \ne)\) is the complete irreflexive graph on \(S\).
So in the example above, if \(S\) is countable then \(\ms C = \ms P(S)\), the reference \(\sigma\)-algebra of all subsets of \(S\). In this case, a probability density function \(f\) on \(S\) is uniquely determined by the reliability function \(F\). But if \(S\) is uncountable and the reference \(\sigma\)-algebra \(\ms S\) is larger than \(\ms C\) then it is usually not the case that a probability measure \(P\) on \(\ms S\) is determined by the reliability function \(F\). For example, if \(A \in \ms S\) with both \(A\) and \(A^c\) uncountable, then we cannot determine \(P(A)\) from \(F\).
For \(k \in \{3, 4, \ldots\}\), let \((S, \lfrta)\) denote the star graph of order \(k\) as defined in Section 1 and shown below. Recall that \(S = \{0, 1, \ldots, k\}\) where \(0\) is the center and \(\{1, 2, \ldots, k\}\) are the endpoints.
The neighbor sets of the graph are \(A_0 = \{1, 2, \ldots, k\}\) and \(A_i = \{0\}\) for \(i \in \{1, 2, \ldots, k\}\).
So in the example above, a probability measure \(P\) on \(\ms P(S)\) cannot be determined from the reliability function \(F\). Specifically, we cannot determine the probability of an individual endpoint.
For the star of order \(k \in \{3, 4, \ldots\}\), find the density function \(f_n\) of the walk distribution of order \(n \in \N\).
From the walk functions in Section 1,
So the walk distribution of order \(n \in \N\) depends only on the parity of \(n\). In particular, the walk distribution of order \(n \in \N\) does not converge as \(n \to \infty\). Recall that the star is periodic with period 2.
For \(k \in \{3, 4, \ldots\}\), let \((S, \lfrta)\) denote the cycle graph on \(k\) vertices, as defined in Section 1 and shown below. Recall that \(S = \{1, 2, \ldots, k\}\) and that \(x \lfrta x - 1\) and \(x \lfrta x + 1\) where we interpret \(k + 1 = 1\) and \(1 - 1 = k\).
Parts (a) and (b) were shown in Section 1. For part (c), the eigenvalues of the graph are \(2 \cos(2 \pi j / k)\) for \(j \in \{0, 1, \ldots, k - 1\}\). So \(0\) is an eigenvalue if and only if \(2 \pi j / k = \pi / 2\) or \(2 \pi j / k = 3 \pi / 2\) for some \(j \in \{0, 1, \ldots, k - 1\}\), or equivalently \(j = k / 4\) or \(j = 3 k / 4\), so that \(k\) is a multiple of \(4\). In this case \(0\) is an eigenvalue of multiplicity 2, corresponding to \(j = k / 4\) and \(j = 3 k / 4\). An eigenfunction \(g\) corresponding to \(0\) is given by \(g(2 x) = 0\) for \(x \in \{1, \ldots, k / 2\}\) and \(g(2 x + 1) = (-1)^x\) for \(x \in \{0, 1, \ldots, k / 2 - 1\}\). So a probability density function on \(S\) is uniquely determined by the reliability function except when \(k = 4 j\) for some \(j \in \N_+\). Again the case \(k = 4\) is exceptional (and trivial). In this case if \(P\) is a probability measure on the associated \(\sigma\)-algebra \(\ms A\) with reliability function \(F\), then \(F(1) = P\{2, 4\}\) and \(F(2) = P\{1, 3\}\).
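The eigenvalue computation for the cycle is easy to confirm numerically. A sketch that builds the adjacency matrix of the cycle on \(k\) vertices and tests whether \(0\) is an eigenvalue:

```python
import numpy as np

# Eigenvalues of the cycle graph on k vertices are 2 cos(2 pi j / k);
# 0 appears exactly when k is a multiple of 4.
def cycle_adjacency(k):
    L = np.zeros((k, k))
    for x in range(k):
        L[x, (x + 1) % k] = L[x, (x - 1) % k] = 1
    return L

for k in range(3, 13):
    eigs = np.linalg.eigvalsh(cycle_adjacency(k))
    has_zero = bool(np.any(np.isclose(eigs, 0, atol=1e-9)))
    print(k, has_zero)    # True exactly for k = 4, 8, 12
```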
Since the cycle is regular (of degree 2), the walk distributions are all uniform on \(S\).
For \(k \in \{3, 4, \ldots\}\), let \((S, \lfrta)\) denote the wheel graph of order \(k\) as defined in Section 1 and shown below. The wheel is the union of the star and the cycle in the previous examples.
Parts (a) and (b) were shown in Section 1. For part (c), the eigenvalues of the wheel are \(1 \pm \sqrt{k + 1}\) together with \(2 \cos(2 j \pi / k)\) for \(j \in \{1, 2, \ldots, k - 1\}\). So \(0\) is an eigenvalue if and only if \(2 j \pi / k = \pi / 2\) or \(2 j \pi / k = 3 \pi / 2\) for some \(j \in \{1, 2, \ldots, k - 1\}\), or equivalently \(j = k / 4\) or \(j = 3 k / 4\), so that \(k\) is a multiple of \(4\). In this case \(0\) is an eigenvalue of multiplicity 2, corresponding to \(j = k / 4\) and \(j = 3 k / 4\). An eigenfunction \(g\) corresponding to \(0\) is given by \(g(2 x) = 0\) for \(x \in \{0, 1, \ldots, k / 2\}\) and \(g(2 x + 1) = (-1)^x\) for \(x \in \{0, 1, \ldots, k / 2 - 1\}\). As in the cycle graph, the case \(k = 4\) is exceptional. In this case if \(P\) is a probability measure on the associated \(\sigma\)-algebra \(\ms A = \sigma\{\{0\}, \{1, 3\}, \{2, 4\}\}\) with reliability function \(F\), then \(P\{0\} = 1 - F(0)\), \(P\{1, 3\} = F(2) + F(0) - 1\), and \(P\{2, 4\} = F(1) + F(0) - 1\).
For the wheel of order \(k \in \{3, 4, \ldots\}\),