The purpose of this text is to study abstract settings that generalize many of the basic concepts in classical reliability theory. So we will start with a brief review of the classical theory followed by an overview of the abstract setting presented in this text. Please see the Preface for notational conventions and mathematical prerequisites. For brevity and to make this introduction more readable, we have left out some of the technical assumptions.
The set \(S = [0, \infty)\) or the set \(S = \N\), along with the ordinary order \(\le\), and the addition operation \(+\) form the standard models of continuous time and discrete time, respectively. The addition operator allows us to shift forward in time, and the order relation allows for comparisons, both crucial for various aging concepts. Of course, \(+\) and \(\le\) are intimately related since for \(x, \, y \in S\), \(x \le y\) if and only if \(x + t = y\) for some \(t \in S\). Another essential component is measure: Lebesgue measure \(\lambda\) on the \(\sigma\)-algebra \(\ms S\) of Borel measurable subsets of \([0, \infty)\) or counting measure \(\lambda\) on the \(\sigma\)-algebra \(\ms S\) of all subsets of \(\N\). Crucially, both are translation invariant: \[\lambda(x + A) = \lambda(A), \quad x \in S, \, A \in \ms S\] providing another critical connection between the various structures. Suppose now that \(X\) is a random variable in \(S\), representing the lifetime of a device (or an organism).
The reliability function \(F\) of \(X\) is the right distribution function, defined by \[F(x) = \P(X \ge x), \quad x \in S\]
So \(F(x)\) is the probability that the lifetime is at least \(x\). We assume that \(F(x) \gt 0\) for \(x \in S\) and we assume that \(X\) has probability density function \(f\) with respect to the underlying reference measure \(\lambda\).
The failure rate function \(r\) of \(X\) is defined by \(r(x) = f(x) / F(x)\) for \(x \in S\).
Roughly speaking, \(r(x)\) is the rate (conditional density) of failure, given a lifetime of at least \(x\). The device might have increasing, decreasing, or constant failure rate, depending on whether \(r\) is increasing, decreasing, or constant on \(S\). Of course, \(r\) might have a more complicated behavior, for example decreasing initially, then constant for a while, and then increasing.
The cumulative failure rate function \(R\) of \(X\) is given by \[R(x) = \int_0^x r(t) \, d\lambda(t), \quad x \in S\] and then the average failure rate function \(\bar r\) of \(X\) is given by \(\bar r(x) = R(x) / x\) for \(x \in S_+ = S - \{0\}\).
Once again, \(X\) might have increasing, decreasing, or constant average failure rate, or might have a more complicated behavior. Note that only the order relation on \(S\) (and the measure structure) are necessary for these concepts. Each of the functions \(F\), \(r\), \(R\), and \(\bar r\) completely determines the distribution of \(X\).
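To make these definitions concrete, here is a minimal numerical sketch in Python (the Weibull family is an assumed test case, not part of the discussion above). The failure rate is \(r(x) = k x^{k-1}\), which is decreasing for \(k \lt 1\), constant for \(k = 1\) (the exponential case), and increasing for \(k \gt 1\).

```python
import numpy as np

# A sketch with the Weibull family: density f(x) = k x^(k-1) exp(-x^k)
# and reliability function F(x) = exp(-x^k) on [0, infinity).
def reliability(x, k):
    return np.exp(-x**k)

def density(x, k):
    return k * x**(k - 1) * np.exp(-x**k)

x = np.linspace(0.01, 3, 300)
for k in (0.5, 1.0, 2.0):
    r = density(x, k) / reliability(x, k)   # failure rate r = f / F
    R = np.cumsum(r) * (x[1] - x[0])        # cumulative failure rate
    avg = R / x                             # average failure rate
    print(k, float(r[0].round(3)), float(r[-1].round(3)))
```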
Other aging properties arise by comparing the conditional distribution of \(X - x\) given \(X \ge x\) with the distribution of \(X\) for \(x \in S\). That is, how does the distribution of the remaining life, given survival up to time \(x\), compare with the original life distribution? Since the reliability function determines the distribution, one simple approach is to compare \(\P(X \ge x + y \mid X \ge x)\), with \(\P(X \ge y)\) for \(x, \, y \in S\). The first is the probability that the device will last at least another \(y\) units given survival up to time \(x\) while the second is simply the probability of survival up to time \(y\). From the simple definition of conditional probability, this is equivalent to comparing \(F(x + y)\) with \(F(x) F(y)\) for \(x, \, y \in S\).
Suppose that \(X\) has reliability function \(F\). Then \(X\) is memoryless if \(F(x + y) = F(x) F(y)\) for all \(x, \, y \in S\), while \(X\) is new better than used if \(F(x + y) \le F(x) F(y)\) for all \(x, \, y \in S\), and new worse than used if \(F(x + y) \ge F(x) F(y)\) for all \(x, \, y \in S\).
Again, of course, none of these might apply, so that the distribution of \(X\) is more complicated. Note that the addition operation on \(S\), as well as the associated order relation, are necessary for these concepts.
Suppose now that \(X\) is a random variable in \(S\) with density function \(f\) and reliability function \(F\). Let \( \bs{X} = (X_1, X_2, \ldots) \) be a sequence of independent copies of \(X\). In the reliability context, we can think of \(\bs X\) as the lifetimes of a sequence of identical devices, operating independently. There are two simple ways to construct a new sequence of variables.
The sequence of record variables \(\bs Y = (Y_1, Y_2, \ldots)\) associated with \(\bs X\) is defined as follows: Let \(Y_1 = X_1\). Next let \(Y_2 = X_{N_2}\) where \(N_2 = \min\{n \gt 1: X_n \ge Y_1\}\) and continue in this way. The sequence \(\bs Y\) is a discrete-time Markov process with initial density \(f\) and transition density \(P\) given by \(P(x, y) = f(y) / F(x)\) when \(x \le y\).
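Here is a minimal simulation sketch in Python (the exponential sampler is an assumed example). The rejection scheme, scanning fresh copies until one is at least the current record, realizes exactly the transition density \(P\).

```python
import random

def record_walk(sample, n):
    # simulate Y_1, ..., Y_n: scan iid draws from sample() until one is
    # at least the current record, as in the definition above
    records = [sample()]
    while len(records) < n:
        x = sample()
        if x >= records[-1]:
            records.append(x)
    return records

# example: record values from exponential lifetimes with rate 1
print(record_walk(lambda: random.expovariate(1.0), 5))
```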
Note that only the order relation on \(S\) is essential for this construction, so we can think of \(\bs Y\) as a random walk in the space \((S, \le)\). In the reliability context, suppose that all of the devices are operating initially, and that device 1 is designated as the primary device. As soon as this device fails, we immediately (and with no loss of time) scan the devices in order until we find the first that is still operating. We designate this new device as the primary device, and continue in this fashion. Here is another construction:
Let \(\bs Z = (Z_1, Z_2, \ldots)\) be the partial sum sequence associated with \(\bs X\) so that \(Z_n = \sum_{i=1}^n X_i\) for \(n \in \N_+\). The sequence \(\bs Z\) is a discrete-time Markov process with initial density \(f\) and transition density \(Q\) given by \(Q(x, y) = f(y - x)\) when \(x \le y\).
Note that the addition operator on \(S\) is essential for this construction, so we can think of \(\bs Z\) as a random walk in the space \((S, +)\). If we think of \(\bs X\) as the sequence of interarrival times in a renewal process then \(\bs Z\) is the corresponding sequence of arrival times. In the reliability context, the first device is operating initially. As soon as it fails (and with no loss of time), we turn on device 2, and continue in this manner.
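A matching sketch for the partial-sum walk, again with an assumed exponential sampler:

```python
import random
from itertools import accumulate

# Z_n = X_1 + ... + X_n: the arrival times of a renewal process whose
# interarrival times are independent copies of X
x = [random.expovariate(1.0) for _ in range(10)]
z = list(accumulate(x))
print(z)   # an increasing sequence of arrival times
```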
For both random walks the corresponding point process is \(\bs N = \{N_A: A \in \ms S\}\) where \(N_A\) is the number of random points in \(A \in \ms S\).
In the case of continuous time, we can give some specific and well-known results. Again, let \(X\) be a random variable in \(S = [0, \infty)\) with density function \(f\) (with respect to Lebesgue measure \(\lambda\)) and reliability function \(F\). As is well known in elementary probability, the expected value of \(X\) can be obtained by integrating the reliability function: \[\int_0^\infty F(x) \, dx = \E(X) \] The following generalization is less well known.
For \(n \in \N\), \[\int_0^\infty \frac{x^n}{n!} F(x) \, dx = \E\left[\frac{X^{n+1}}{(n+1)!}\right]\]
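A numerical check of this identity in Python, assuming an exponential lifetime with rate \(\alpha\), in which case both sides equal \(\alpha^{-(n+1)}\):

```python
import math, random

alpha, n, N = 2.0, 3, 200_000

# right side by Monte Carlo
xs = [random.expovariate(alpha) for _ in range(N)]
rhs = sum(x**(n + 1) for x in xs) / (math.factorial(n + 1) * N)

# left side by a Riemann sum, with F(x) = exp(-alpha x)
dx = 0.001
lhs = sum((i * dx)**n / math.factorial(n) * math.exp(-alpha * i * dx)
          for i in range(20_000)) * dx

print(lhs, rhs, alpha**-(n + 1))   # all approximately equal
```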
Here are a couple of fundamental results from reliability theory.
Random variable \(X\) has constant failure rate if and only if it is memoryless if and only if it has constant average failure rate if and only if the distribution is exponential: \[f(x) = \alpha e^{-\alpha x}, \; F(x) = e^{-\alpha x}, \quad x \in [0, \infty)\] where \(\alpha \in (0, \infty)\) is the failure rate constant.
Suppose that \(X\) has the exponential distribution with parameter \(\alpha \in (0, \infty)\).
The associated random walk \(\bs Y\) of record variables gives the most random way to put points in \([0, \infty)\) in the sense that given \(Y_{n+1} = x \in [0, \infty)\), the random vector \((Y_1, Y_2, \ldots, Y_n)\) is uniformly distributed on \[\{(x_1, x_2, \ldots, x_n) \in [0, \infty)^n: x_1 \le x_2 \le \cdots \le x_n \le x\}\] a set that has measure \(x^n / n!\).
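A Monte Carlo sketch of this fact (the parameter values are arbitrary assumptions). Since the exponential distribution is memoryless, the record walk \(\bs Y\) has the same distribution as the partial-sum walk \(\bs Z\), so we can simulate arrival times directly; given \(Y_{n+1}\), the first coordinate \(Y_1\) should behave like the minimum of \(n\) uniform points on \([0, Y_{n+1}]\), with conditional mean \(Y_{n+1} / (n + 1)\).

```python
import random

n, N = 4, 100_000
total = 0.0
for _ in range(N):
    t, arrivals = 0.0, []
    for _ in range(n + 1):
        t += random.expovariate(1.0)
        arrivals.append(t)
    total += arrivals[0] / arrivals[-1]   # Y_1 / Y_{n+1}
print(total / N, 1 / (n + 1))             # approximately equal
```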
Note the repeated appearance of the expression \(x^k / k!\) for various \(x \in [0, \infty)\) and \(k \in \N\) in the propositions above.
In the case of discrete time, we can also give some specific and well-known results. Again, let \(X\) be a random variable in \(S = \N\) with density function \(f\) (with respect to counting measure \(\lambda\)) and reliability function \(F\). Summing the reliability function gives a standard moment result: \[\sum_{x=0}^\infty F(x) = \E(X) + 1 \] The following generalization is not as well known.
For \(n \in \N\), \[\sum_{x = 0}^\infty \binom{x + n}{n} F(x) = \E\left[\binom{X + n + 1}{n + 1}\right]\]
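A numerical check in Python, assuming a geometric lifetime with parameter \(\alpha\) (written a in the code), in which case both sides equal \(\alpha^{-(n+1)}\):

```python
import math, random

a, n, N = 0.3, 2, 200_000

# left side: the series converges quickly, so truncate it
lhs = sum(math.comb(x + n, n) * (1 - a)**x for x in range(400))

def geom():   # inverse-transform sample from f(x) = a (1 - a)^x on N
    return int(math.log(random.random()) / math.log(1 - a))

# right side by Monte Carlo
rhs = sum(math.comb(geom() + n + 1, n + 1) for _ in range(N)) / N
print(lhs, rhs, a**-(n + 1))   # all approximately equal
```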
The following fundamental result parallels those in continuous time.
Random variable \(X\) has constant failure rate if and only if it is memoryless if and only if it has constant average failure rate if and only if the distribution is geometric: \[f(x) = \alpha (1 - \alpha)^x, \; F(x) = (1 - \alpha)^x, \quad x \in \N\] where \(\alpha \in (0, 1)\) is the rate constant.
Suppose that \(X\) has the geometric distribution with parameter \(\alpha \in (0, 1)\).
The associated random walk \(\bs Y\) of record variables gives the most random way to put points in \(\N\) in the sense that given \(Y_{n+1} = x \in \N\), the random vector \((Y_1, Y_2, \ldots, Y_n)\) is uniformly distributed on \[\{(x_1, x_2, \ldots, x_n) \in \N^n: x_1 \le x_2 \le \cdots \le x_n \le x\}\] a set that has measure (cardinality) \(\binom{x + n}{n}\).
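A Monte Carlo sketch of this proposition (the parameter values are arbitrary assumptions): simulate the record walk by rejection and condition on \(Y_3 = 4\); the conditional density of \(Y_1\) should be \(v \mapsto (4 - v + 1) / \binom{6}{2}\).

```python
import math, random

a, x, N = 0.4, 4, 200_000

def geom():
    return int(math.log(random.random()) / math.log(1 - a))

counts, hits = [0] * (x + 1), 0
for _ in range(N):
    y1 = geom()
    y2 = geom()
    while y2 < y1:          # rejection realizes P(y1, y2) = f(y2) / F(y1)
        y2 = geom()
    y3 = geom()
    while y3 < y2:
        y3 = geom()
    if y3 == x:
        hits += 1
        counts[y1] += 1

print([round(c / hits, 3) for c in counts])                      # empirical
print([round((x - v + 1) / math.comb(x + 2, 2), 3) for v in range(x + 1)])
```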
Note the repeated appearance of the expression \(\binom{x + k}{k}\) for various \(x \in \N\) and \(k \in \N\) in the propositions above.
Many of the results above generalize to a surprisingly abstract setting, and exploring this generalization is the main goal of this text.
In the abstract setting, (a) \(S\) is a set with a \(\sigma\)-algebra \(\ms S\) of subsets and a \(\sigma\)-finite reference measure \(\lambda\) on \(\ms S\), (b) the relation \(\rta\) on \(S\) is measurable, and (c) in the algebraic case, the operation \((x, y) \mapsto x y\) from \(S^2\) to \(S\) is measurable.
In parts (b) and (c), measurability is with respect to the product \(\sigma\)-algebra on \(S^2 = S \times S\). Almost all of the basic theory goes through with just the measurability and \(\sigma\)-finite assumptions, and so we are able to avoid technical assumptions regarding measure theory and topology.
The generalized graph \((S, \rta)\) is the natural home for generalizations of concepts that rely only on the relation \(\le\) in the classical setting. The functions \(x \mapsto x^n/n!\) and \(x \mapsto \binom{n + x}{n}\) for \(n \in \N\) that are fundamental in the classical continuous and discrete settings, respectively, are replaced by the function that gives the measure of the set of paths in the graph of length \(n\) terminating in \(x \in S\).
For \(n \in \N\), define \(\gamma_n: S \to [0, \infty]\) by \[\gamma_n(x) = \lambda^n\{(x_1, x_2, \ldots, x_n) \in S^n: x_1 \rta x_2 \rta \cdots \rta x_n \rta x\}, \quad x \in S\]
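For a finite graph with counting measure, \(\gamma_n\) is simple to compute: it is the vector of column sums of \(A^n\), where \(A\) is the adjacency matrix of the relation. A sketch in Python (the graph is an arbitrary assumed example):

```python
import numpy as np

# A[i, j] = 1 iff i -> j; gamma_n(x) counts paths of length n ending at x
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]])

def gamma(n):
    return np.linalg.matrix_power(A, n).sum(axis=0)

for n in range(4):
    print(n, gamma(n))   # gamma_0 is identically 1
```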
In terms of probability, many of the definitions in the classical setting have natural generalizations:
Suppose that \(X\) is a random variable in \(S\) with density function \(f\) with respect to the reference measure \(\lambda\). Relative to the graph \((S, \rta)\), the reliability function \(F\) of \(X\) is defined by \[F(x) = \P(x \rta X), \quad x \in S\] and the rate function \(r\) is defined by \(r(x) = f(x) / F(x)\) for \(x \in S\) (assuming \(F(x) \gt 0\)). As in the classical setting, \(X\) has constant rate \(\alpha \in (0, \infty)\) if \(r(x) = \alpha\) for \(x \in S\).
In this general setting, the reliability function does not always uniquely determine the distribution, but we can give conditions under which it does. Constant rate distributions make sense in the general setting, and when the relation is a partial order, the concepts of increasing and decreasing rate, and increasing and decreasing average rate, make sense as well. Under mild conditions, generalizations of the moment results above hold:
Suppose that \(X\) has reliability function \(F\) for the graph \((S, \rta)\). Then \[\int_S \gamma_n(x) F(x) \, d\lambda(x) = \E\left[\gamma_{n+1}(X)\right], \quad n \in \N\]
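In the finite case with counting measure the identity is exact, since \(F = A f\) and \(\gamma_n\) is the vector of column sums of \(A^n\). A quick check in Python on a randomly generated example graph and density (both arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
m = 6
A = (rng.random((m, m)) < 0.5).astype(float)   # a random example relation
f = rng.random(m); f /= f.sum()                # an arbitrary density on S
F = A @ f                                      # F(x) = sum of f(y) over x -> y

for n in range(4):
    g_n  = np.linalg.matrix_power(A, n).sum(axis=0)
    g_n1 = np.linalg.matrix_power(A, n + 1).sum(axis=0)
    print(g_n @ F, g_n1 @ f)                   # equal up to rounding
```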
The random walk associated with random variable \(X\) can be constructed from a sequence of independent copies of \(X\) by means of record variables, as in the classical setting.
Suppose again that \(X\) has reliability function \(F\) for the graph \((S, \rta)\). The random walk \(\bs Y = (Y_1, Y_2, \ldots)\) in \((S, \rta)\) associated with \(X\) has initial density \(f\) and transition density \(P\) given by \(P(x, y) = f(y) / F(x)\) when \(x \rta y\).
When a constant rate distribution exists, generalizations of the propositions above hold.
Suppose that \(X\) has constant rate \(\alpha \in (0, \infty)\) for the graph \((S, \rta)\) and has reliability function \(F\) (and hence density \(f = \alpha F\)). Let \(\bs Y = (Y_1, Y_2, \ldots)\) be the random walk in \((S, \rta)\) associated with \(X\).
The random walk \(\bs Y\) gives the most random way to put points in the graph, in the sense that given \(Y_{n+1} = x \in S\), the random vector \((Y_1, Y_2, \ldots, Y_n)\) is uniformly distributed on the set of paths of length \(n\) terminating in \(x\): \[\{(x_1, x_2, \ldots, x_n) \in S^n: x_1 \rta x_2 \rta \cdots \rta x_n \rta x\}\]
In terms of existence, the following result is the Perron-Frobenius theorem in disguise.
A finite, strongly connected graph has a unique constant rate distribution.
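A sketch of the computation in Python (the graph is an assumed example). Constant rate means \(f = \alpha F\) with \(F = A f\), so \(A f = (1/\alpha) f\): the density is the Perron eigenvector of the adjacency matrix, and the rate constant is the reciprocal of the spectral radius.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)   # a strongly connected example

vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)                    # the Perron root is real and simple
f = np.abs(vecs[:, i].real); f /= f.sum()   # normalize the eigenvector to a density
alpha = 1 / vals[i].real                    # rate constant

print(alpha, f)
print(np.allclose(f, alpha * (A @ f)))      # True: f = alpha F with F = A f
```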
A semigroup \((S, \cdot)\) satisfying the left cancellation property is the natural home for (left) invariance of the reference measure \(\lambda\) so that \(\lambda(x A) = \lambda(A)\) for \(x \in S\) and \(A \in \ms S\). A semigroup has a natural relation \(\rta\) associated with it, defined by \(x \rta y\) if and only if there exists (a unique) \(t \in S\) with \(x t = y\), and then \(t\) is denoted \(x^{-1} y\). The brief sketch in the previous subsection applies to the graph \((S, \rta)\), of course, but in addition, we can study concepts that involve the comparison of a shifted probability distribution to the original distribution.
Suppose that \(X\) is a random variable in \(S\) with reliability function \(F\) for the graph \((S, \rta)\). Then \(X\) is memoryless if \(F(x y) = F(x) F(y)\) for \(x, \, y \in S\), and \(X\) is exponential if, for every \(x \in S\), the conditional distribution of \(x^{-1} X\) given \(x \rta X\) is the same as the distribution of \(X\).
The memoryless property is generally weaker than the full exponential property, but under mild conditions, a distribution is exponential if and only if it is memoryless and has constant rate. With slightly more restrictions, the memoryless property implies the full exponential property. Given a random variable \(X\) in \(S\) with density \(f\), the random walk on \((S, \rta)\) discussed above makes sense, but there is another random walk that can be constructed as the partial product sequence corresponding to a sequence of independent copies of \(X\).
The random walk \(\bs Z = (Z_1, Z_2, \ldots)\) on \((S, \cdot)\) associated with \(X\) has initial density \(f\) and transition density \(Q\) given by \(Q(x, y) = f(x^{-1} y)\) when \(x \rta y\).
The random walk on \((S, \rta)\) associated with \(X\) is the same as the random walk on \((S, \cdot)\) associated with \(X\) if and only if \(X\) is exponential.
Here are a few examples from the applications that suggest that the theory is worthwhile:
As noted above, the geometric distribution is the exponential distribution for the standard discrete semigroup \((\N, +)\) and hence has constant rate for the graph \((\N, \le)\). But it also has constant rate (with different constants) for the graphs \((\N, \lt)\) and \((\N, \rta)\), where \(\rta\) is the covering relation of \((\N, \le)\), and for the graph \((\N, \Rta)\) obtained by adding a loop to the graph \((\N, \rta)\) at each vertex.
This is a pattern that repeats for other partial order graphs.
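A quick numerical check of the four rate constants in the example above, using closed-form reliability functions for the geometric distribution (the parameter value is an arbitrary assumption):

```python
a, q = 0.3, 0.7                             # q = 1 - a
f = lambda x: a * q**x                      # geometric density on N

F_le    = lambda x: q**x                    # graph (N, <=)
F_lt    = lambda x: q**(x + 1)              # graph (N, <)
F_cover = lambda x: f(x + 1)                # covering relation: x -> x + 1
F_loop  = lambda x: f(x) + f(x + 1)         # covering relation plus loops

for x in range(3):
    print([round(f(x) / F(x), 6) for F in (F_le, F_lt, F_cover, F_loop)])
# each column is constant in x: a, a/(1-a), 1/(1-a), 1/(2-a)
```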
The unique constant rate distribution for a simple, undirected, finite path is a discrete version of Gilbert's sine distribution.
The app below is a simulation of the discrete sine distribution. The length of the path \(m\) can be varied with the scrollbar, and the rate constant \(\alpha\) is shown to the right of the scrollbar.
It seems surprising that a sine distribution would govern the most random way to put points in a path, but seen through the lens of spectral graph theory, the result is much clearer. As is so often the case, other areas of mathematics inform the theory presented here. As the simulation suggests, the discrete sine distribution, properly scaled, converges to the continuous sine distribution as \(m \to \infty\).
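Behind the simulation is a computation like the following sketch in Python (the path length is an arbitrary assumption): the constant rate density on a path with \(m\) vertices is the Perron eigenvector of the path's adjacency matrix, and its components follow a sine curve.

```python
import numpy as np

m = 10
A = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)   # path adjacency

vals, vecs = np.linalg.eigh(A)              # symmetric, eigenvalues ascending
f = np.abs(vecs[:, -1]); f /= f.sum()       # Perron eigenvector as a density
alpha = 1 / vals[-1]                        # rate constant 1 / (2 cos(pi / (m+1)))

sine = np.sin(np.arange(1, m + 1) * np.pi / (m + 1))
print(np.allclose(f, sine / sine.sum()))    # True: a discrete sine distribution
print(alpha)
```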
The exponential distributions for the standard arithmetic semigroup \((\N_+, \cdot)\) are the Dirichlet distributions corresponding to completely multiplicative functions.
Some of the known properties of Dirichlet distributions that seem rather mysterious in the context of analytic number theory are much clearer in the context of exponential distributions. So, conversely to the previous example, sometimes the theory presented here informs other areas of mathematics.
For \(n, \, k \in \N_+\), the relation \(\rta_k\) associated with the standard \(k\)-norm on \(\R^n\) is given by \(x \rta_k y\) if and only if \(\|x\|_k \le \|y\|_k\) for \(x, \, y \in \R^n\). The constant rate distributions for the graph \((\R^n, \rta_k)\) form a class of generalized multivariate normal distributions. A random vector in \(\R^n\) with a constant rate distribution has independent components only in the case \(k = n\), and in this case, the common distribution reduces to a known class of generalized normal distributions. In particular, the Laplace distribution has constant rate when \(n = k = 1\), and a class of ordinary normal distributions has constant rate when \(n = k = 2\).
The app below is a simulation of the generalized normal distributions. The dimension \(n\) and the rate constant \(\alpha\) can be varied with the scrollbars.
As the simulation suggests, the generalized normal distributions converge to the uniform distribution on \(\left[-\frac{1}{2}, \frac{1}{2}\right]\) as \(n \to \infty\), and this uniform distribution corresponds to the case \(n = k = \infty\), properly interpreted.
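Finally, here is a Monte Carlo sketch of the Laplace case \(n = k = 1\) (the rate parameter is an arbitrary assumption): with density \(f(x) = (c/2) e^{-c |x|}\), the reliability function for the graph \((\R, \rta_1)\) is \(F(x) = \P(|X| \ge |x|) = e^{-c |x|}\), so the rate is constant at \(c/2\).

```python
import math, random

c, N = 1.5, 400_000
# sample the Laplace distribution: exponential magnitude with a random sign
xs = [random.expovariate(c) * random.choice((-1, 1)) for _ in range(N)]

for x in (-1.0, 0.5, 2.0):
    F_hat = sum(abs(v) >= abs(x) for v in xs) / N     # empirical F(x)
    f_x = (c / 2) * math.exp(-c * abs(x))
    print(x, f_x / F_hat)   # approximately c/2 = 0.75 for every x
```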