\(\newcommand{\P}{\mathbb{P}}\) \(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\R}{\mathbb{R}}\) \(\newcommand{\N}{\mathbb{N}}\) \(\newcommand{\ms}{\mathscr}\) \(\newcommand{\bs}{\boldsymbol}\)

1. The Standard Space

The standard space \(([0, \infty), +)\) with the usual Borel \(\sigma\)-algebra \(\ms B\) is a positive semigroup. The identity element is \(0\) and the corresponding partial order is the ordinary total order \(\le\). Our reference measure is Lebesgue measure \(\lambda\), which is the unique invariant measure, up to multiplication by positive constants. Of course, this is the setting of classical reliability theory, with \([0, \infty)\) representing continuous time, and so it is one of the main motivations for the general theory presented in this text. For the most part, the results in this section are very well known and need no proofs.

The path function \(\gamma_n\) of order \(n \in \N\) for \(([0, \infty), \le)\) is given by \[\gamma_n(x) = \frac{x^n}{n!}, \quad x \in [0, \infty)\]

The path generating function \(\Gamma\) of \(([0, \infty), \le)\) is given by \[\Gamma(x, t) = \sum_{n=0}^\infty \frac{x^n}{n!} t^n = e^{t x}; \quad x \in [0, \infty), \, t \in \R\]
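
As a quick numerical illustration (not part of the original text), the following Python sketch sums the series above and compares it with \(e^{t x}\); the values of \(x\) and \(t\) are arbitrary choices.

```python
import math

def gamma_n(x, n):
    """Path function of order n for ([0, infinity), <=): x^n / n!."""
    return x ** n / math.factorial(n)

x, t = 2.5, 0.8
partial = sum(gamma_n(x, n) * t ** n for n in range(50))
print(partial, math.exp(t * x))  # both approximately 7.3891
```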

The semigroup \(([0, \infty), +)\) and the ordered space \(([0, \infty), \le)\) have dimension 1.

Suppose now that \(X\) is a random variable with a continuous distribution on \([0, \infty)\), thought of as the lifetime of a device or an organism.

The reliability function \(F\) of \(X\) for \(([0, \infty), \le)\) is the ordinary reliability function: \[F(x) = \P(X \ge x), \quad x \in [0, \infty)\]

Thus \(F(x)\) is the probability that the device lasts at least until time \(x \in [0, \infty)\). As usual, we assume that \(([0, \infty), +)\) supports \(X\) so that \(F(x) \gt 0\) for \(x \in [0, \infty)\).

The semigroup \(([0, \infty), +)\) is stochastic. The (right) neighbor set at \(x \in [0, \infty)\) is \([x, \infty)\). The associated \(\sigma\)-algebra \(\sigma\{[x, \infty): x \in [0, \infty)\}\) is the reference \(\sigma\)-algebra \(\ms B\). If \(P\) and \(Q\) are probability measures on \(\ms B\) with the same reliability function \(F\) then \(P = Q\).

Of course, density functions are with respect to \(\lambda\).

If \(X\) has density function \(f\) then the rate function of \(X\) for \(([0, \infty), \le)\) is the usual failure rate function \(r = f / F\).

Intuitively, \(r(x) \, dx\) is the probability that the device will fail in the infinitesimal time interval from \(x\) to \(x + dx\), given survival up to time \(x\).

The cumulative rate function \(R\) of \(X\) for \(([0, \infty), \le)\) is the usual cumulative failure rate function: \[R(x) = \int_0^x r(t) \, dt, \quad x \in [0, \infty)\]

If \(X\) has a piecewise continuous density function, then the reliability function \(F\) and the cumulative failure rate function \(R\) are related by \(R = - \ln F\) or equivalently \(F = e^{-R}\).

Details:

If \(f\) is a piecewise continuous density then \(f(x) = -F^\prime(x)\) at the points of continuity \(x\) of \(f\). Hence \[R(x) = \int_0^x r(t) \, dt = \int_0^x -\frac{F^\prime(t)}{F(t)} \, dt = -\ln[F(x)], \quad x \in (0, \infty)\]

Using this result, note that the cumulative rate function can be defined as \(R = - \ln F\) even in the absence of a density function (and hence a rate function).
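
As a numerical illustration (not part of the original text), the sketch below checks \(R = -\ln F\) for a concrete case: a Weibull distribution with shape parameter 2, so that \(F(x) = e^{-x^2}\) and \(r(x) = 2x\).

```python
import math

def F(x): return math.exp(-x * x)   # reliability function exp(-x^2)
def r(x): return 2.0 * x            # failure rate f / F = 2x

def R(x, steps=10_000):
    """Cumulative failure rate via the trapezoid rule on [0, x]."""
    h = x / steps
    return h * ((r(0.0) + r(x)) / 2 + sum(r(i * h) for i in range(1, steps)))

x = 1.7
print(R(x), -math.log(F(x)))  # both approximately 2.89
```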

The average rate function \(\bar r\) is the usual average failure rate function; \(\bar r(x) = R(x) / x\) for \(x \in (0, \infty)\).

The definitions of increasing or decreasing or constant failure rate and the definitions of increasing or decreasing or constant average failure rate have their usual meanings. More generally, recall that the cumulative rate functions \(R_n\) of order \(n \in \N\) for \(X\) on \(([0, \infty), \le)\) can be defined recursively by \(R_0 = 1\) and \[R_{n+1}(x) = \int_0^x R_n(t) r(t) dt, \quad x \in [0, \infty), \, n \in \N\] The order \(n\) cumulative rate function has a simple representation in terms of the basic (order 1) cumulative rate function.

If \(X\) has a piecewise continuous density then \(R_n = \gamma_n(R) = R^n / n!\) for \(n \in \N\).

Details:

Note from the definition that \(R^\prime = r\). Hence, repeated integration (or induction) gives \(R_n = R^n / n!\) for \(n \in \N\).

So from the two preceding results, we could define \(R_n = (-\ln F)^n / n!\) without reference to a density function or a rate function.
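
The representation can also be checked numerically. Below is a short sketch, assuming NumPy is available, that runs the recursion for the Weibull example above (\(r(x) = 2x\), \(R(x) = x^2\)) and compares the result with \(R^n / n!\).

```python
import math
import numpy as np

xs = np.linspace(0.0, 2.0, 20_001)
dx = xs[1] - xs[0]
r = 2.0 * xs            # failure rate for the Weibull example above
R = xs ** 2             # basic cumulative rate R(x) = x^2

Rn = np.ones_like(xs)   # R_0 = 1
for n in range(1, 4):
    integrand = Rn * r
    # cumulative trapezoid rule for R_n(x) = int_0^x R_{n-1}(t) r(t) dt
    Rn = np.concatenate(([0.0], dx * np.cumsum((integrand[:-1] + integrand[1:]) / 2)))
    assert np.allclose(Rn, R ** n / math.factorial(n), atol=1e-3)
print("R_n = R^n / n! verified for n = 1, 2, 3")
```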

Recall that the average rate function \(\bar r_n\) of order \(n \in \N\) for \(X\) on \(([0, \infty), \le)\) is defined by \[\bar r_n(x) = \frac{R_n(x)}{\gamma_n(x)}, \quad x \in (0, \infty)\]

The average rate function of order \(n\) has a simple representation in terms of the average rate function (order 1).

\(\bar r_n = \bar r^n\) for \(n \in \N\).

Details:

From the two preceding results we have \[\bar r_n(x) = \frac{\gamma_n[R(x)]}{\gamma_n(x)} = \frac{R^n(x) / n!}{x^n / n!} = \left[\frac{R(x)}{x}\right]^n = \bar r^n(x), \quad x \in (0, \infty)\]

In particular, \(X\) has increasing (decreasing) (constant) average failure rate of order \(n\) if and only if \(X\) has increasing (decreasing) (constant) average failure rate, respectively. Next is the basic moment result from Section 1.3.

Suppose again that \(X\) has reliability function \(F\). Then \[\int_0^\infty \frac{x^n}{n!} F(x) \, dx = \E\left[\frac{X^{n+1}}{(n + 1)!}\right], \quad n \in \N\]

When \(n = 0\) this reduces to the standard result from elementary probability: \[\int_0^\infty F(x) \, dx = \E(X)\] Next is one of the main results from classical reliability theory:
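
As a numerical spot check (not from the text), the sketch below compares a Riemann-sum estimate of the left side with a Monte Carlo estimate of the right side when \(X\) is exponential with rate \(\alpha\); both should be approximately \(1 / \alpha^{n+1}\). The parameter values are illustrative.

```python
import math
import random

random.seed(0)
alpha, n = 1.5, 3
# Right side: Monte Carlo estimate of E[X^(n+1) / (n+1)!]
samples = [random.expovariate(alpha) for _ in range(200_000)]
rhs = sum(x ** (n + 1) for x in samples) / len(samples) / math.factorial(n + 1)
# Left side: Riemann sum of (x^n / n!) F(x) with F(x) = exp(-alpha x)
h = 0.001
lhs = h * sum((i * h) ** n / math.factorial(n) * math.exp(-alpha * i * h)
              for i in range(1, 30_000))
print(lhs, rhs, 1 / alpha ** (n + 1))  # all approximately 0.1975
```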

Random variable \(X\) is memoryless for \(([0, \infty), +)\) if and only if \(X\) is exponential for \(([0, \infty), +)\) if and only if \(X\) has constant rate for \(([0, \infty), \le)\) if and only if \(X\) has an exponential distribution in the ordinary sense. The exponential distribution with constant rate \(\alpha \in (0, \infty)\) has density function \(f\) and reliability function \(F\) given by \[f(x) = \alpha e^{-\alpha x}, \; F(x) = e^{-\alpha x}, \quad x \in [0, \infty)\]

Open the simulation of the exponential distribution on \(([0, \infty), +)\). Vary the rate parameter \(\alpha\) with the scrollbar and note the shape of the probability density function. Run the simulation and compare the empirical density function to the probability density function.

The memoryless property has the familiar form \[\P(X \ge x + y \mid X \ge x) = \P(X \ge y), \quad x, \, y \in [0, \infty)\] That is, as long as the device has not failed, it is just as good as new.
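
The memoryless property is easy to illustrate by simulation; here is a minimal sketch with arbitrary values of \(\alpha\), \(x\), and \(y\).

```python
import random

random.seed(1)
alpha, x, y = 2.0, 0.5, 0.3
samples = [random.expovariate(alpha) for _ in range(200_000)]
num = sum(1 for s in samples if s >= x + y)
den = sum(1 for s in samples if s >= x)
print(num / den, sum(1 for s in samples if s >= y) / len(samples))
# both approximately exp(-alpha * y) = 0.5488
```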

If \(X\) has the exponential distribution with rate parameter \(\alpha \in (0, \infty)\) then \(X\) maximizes entropy over all random variables with \(\E(Y) = \E(X) = 1 / \alpha\); the maximum entropy is \[H(X) = 1 - \ln \alpha\]
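
As a quick check (not part of the text), the entropy integral \(H(X) = -\int_0^\infty f(x) \ln f(x) \, dx\) can be approximated by a Riemann sum; \(\alpha\) is an illustrative choice.

```python
import math

alpha, h = 2.0, 1e-4
def f(x): return alpha * math.exp(-alpha * x)   # exponential density
H = -h * sum(f(i * h) * math.log(f(i * h)) for i in range(1, 200_000))
print(H, 1 - math.log(alpha))  # both approximately 0.3069
```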

If \(X\) has an exponential distribution then \(X\) has a compound Poisson distribution and so is infinitely divisible.

If \(\bs{Y} = (Y_1, Y_2, \ldots)\) is the random walk on \(([0, \infty), +)\) associated with random variable \(X\) then \(\bs Y\) can be constructed as the partial sum process corresponding to a sequence \(\bs X = (X_1, X_2, \ldots)\) of independent copies of \(X\). That is, \(Y_n = \sum_{i=1}^n X_i\) for \(n \in \N_+\). If \(\bs Y\) is the random walk on \(([0, \infty), \le)\) associated with \(X\) then \(\bs Y\) can be constructed as the sequence of record values associated with \(\bs X\). That is, \(Y_1 = X_1\) and then \(Y_n\) is the \(n\)th record in the sequence \(\bs X\) for \(n \in \{2, 3, \ldots\}\). The following results follow from the general theory in Section 2.5, but in this context are well known.

Suppose \(X\) has the exponential distribution with rate parameter \(\alpha \in (0, \infty)\).

  1. The random walk \(\bs Y = (Y_1, Y_2, \ldots)\) on \(([0, \infty), +)\) associated with \(X\) is also the random walk on \(([0, \infty), \le)\) associated with \(X\). The transition density \(P\) is given by \[P(x, y) = \alpha e^{-\alpha (y - x)}, \quad 0 \le x \le y\]
  2. For \(n \in \N_+\), the sequence \((Y_1, Y_2, \ldots, Y_n)\) has density \(g_n\) defined by \[g_n(x_1, x_2, \ldots, x_n) = \alpha^n e^{-\alpha x_n}, \quad 0 \le x_1 \le x_2 \le \cdots \le x_n\]
  3. For \(n \in \N_+\), random variable \(Y_n\) has probability density function \(f_n\) defined by \[f_n(x) = \alpha^n \gamma_{n-1}(x) F(x) = \alpha^n \frac{x^{n-1}}{(n-1)!} e^{-\alpha x}, \quad x \in [0, \infty)\]

Of course, \(\bs Y\) is the sequence of arrival times for the Poisson process with rate \(\alpha\), and so \(f_n\) is the ordinary gamma density function with parameters \(n\) and \(\alpha\).
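
As a small sanity check (not part of the text), the following sketch constructs \(\bs Y\) as a partial sum process and compares the empirical mean of \(Y_n\) with the gamma mean \(n / \alpha\); the parameter values are illustrative.

```python
import random

random.seed(2)
alpha, n, reps = 1.5, 4, 100_000
# Y_n as the sum of n independent exponential(alpha) variables
Yn = [sum(random.expovariate(alpha) for _ in range(n)) for _ in range(reps)]
print(sum(Yn) / reps, n / alpha)  # both approximately 2.667
```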

Open the simulation of the gamma distribution on \(([0, \infty), +)\), with order \(n\) and rate \(\alpha\). Vary the parameters with the scrollbar and note the shape of the density function. Run the simulation and compare the empirical density function to the probability density function.

Now let \(\bs N = \{N_x: x \in [0, \infty)\}\) denote the counting process associated with the random walk \(\bs Y\), where \(N_x = \#\{n \in \N_+: Y_n \le x\}\) for \(x \in [0, \infty)\). Then \(\bs N\) is the ordinary Poisson counting process. To check our results, we will compute the renewal function via the formula in Section 1.4.

The renewal function \(m\) of \(\bs N\) is given by \(m(x) = \E(N_x) = \alpha x\) for \(x \in [0, \infty)\).

Details:

\begin{align*} m(x) &= \E(N_x) = \E[\Gamma(X, \alpha), X \le x] = \E(e^{\alpha X}, X \le x)\\ &= \int_0^x e^{\alpha t} \alpha e^{-\alpha t} dt = \alpha x, \quad x \in [0, \infty) \end{align*}
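
The renewal function can also be checked by simulation. The sketch below (with illustrative parameter choices) estimates \(\E(N_x)\) directly by generating the arrival times of the Poisson process.

```python
import random

random.seed(3)
alpha, x, reps = 2.0, 3.0, 50_000
total = 0
for _ in range(reps):
    t, n = 0.0, 0
    while True:
        t += random.expovariate(alpha)  # next arrival time
        if t > x:
            break
        n += 1
    total += n
print(total / reps, alpha * x)  # both approximately 6.0
```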

We also check the thinning result from Section 1.4.

Consider the process obtained by thinning \(\bs Y\) with parameter \(p \in (0, 1)\). The distribution of \(Y_N\), the first accepted point, is exponential with parameter \(p \alpha\).

Details:

The probability density function \(h\) of \(Y_N\), the first accepted point, is given by \[h(x) = p \alpha \Gamma[x, (1 - p) \alpha] F(x) = p \alpha e^{(1 - p) \alpha x} e^{-\alpha x} = p \alpha e^{-p \alpha x}, \quad x \in [0, \infty)\]
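
A quick simulation check of the thinning result, with illustrative values of \(\alpha\) and \(p\): accept each arrival independently with probability \(p\), and compare the empirical mean of the first accepted point with \(1 / (p \alpha)\).

```python
import random

random.seed(4)
alpha, p, reps = 2.0, 0.25, 100_000
firsts = []
for _ in range(reps):
    t = 0.0
    while True:
        t += random.expovariate(alpha)  # next arrival
        if random.random() < p:         # accepted with probability p
            firsts.append(t)
            break
print(sum(firsts) / reps, 1 / (p * alpha))  # both approximately 2.0
```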

For \(t \in (0, \infty)\), the sub-semigroup generated by \(t\) is \(\{n t: n \in \N\}\) and the corresponding quotient space as defined in Section 2.8 is \([0, t)\). The basic assumptions in that section are satisfied since \(\{n \in \N: n t \le x\}\) is finite for every \(x \in [0, \infty)\). So \(x \in [0, \infty)\) can be written uniquely as \[x = t n_t(x) + z_t(x)\] where \(n_t(x) = \lfloor x / t \rfloor \in \N\) and \(z_t(x) = x - t n_t(x) = x \bmod t \in [0, t)\), and of course \(x \mapsto n_t(x)\) is measurable. The following is a summary of results from Section 2.8.

Suppose that \(X\) has the exponential distribution on \(([0, \infty), +)\) with rate parameter \(\alpha \in (0, \infty)\). For \(t \in (0, \infty)\), we can write \(X = t N_t + Z_t\) where \(N_t\) takes values in \(\N\) and \(Z_t\) takes values in \([0, t)\).

  1. \(N_t\) and \(Z_t\) are independent.
  2. \(N_t\) has the exponential distribution on \((\N, +)\) (that is, the geometric distribution) with rate parameter \(p_t = 1 - e^{-\alpha t}\).
  3. The distribution of \(Z_t\) is the same as the conditional distribution of \(X\) given \(X \lt t\) and has density function \(z \mapsto \alpha e^{-\alpha z} / (1 - e^{-\alpha t})\) on \([0, t)\).
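
The decomposition is easy to verify by simulation. The sketch below (illustrative parameters) checks that \(N_t = \lfloor X / t \rfloor\) has the geometric distribution with rate parameter \(p_t = 1 - e^{-\alpha t}\), by comparing the empirical mean with the geometric mean \((1 - p_t) / p_t\).

```python
import math
import random

random.seed(5)
alpha, t, reps = 1.0, 0.7, 200_000
# N_t = floor(X / t) for exponential X
N = [math.floor(random.expovariate(alpha) / t) for _ in range(reps)]
p_t = 1 - math.exp(-alpha * t)
print(sum(N) / reps, (1 - p_t) / p_t)  # both approximately 0.987
```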

In this standard setting, we can do better than the general converse in Section 2.8 (see the paper by Galambos and Kotz and the book by Azlarov and Volodin).

Suppose that \(X\) is a random variable taking values in \([0, \infty)\). If \(N_t\) has a geometric distribution for all \(t \in (0, \infty)\) then \(X\) has an exponential distribution.

We now explore a converse based on independence properties of \(N_t\) and \(Z_t\). Suppose that \(X\) has a continuous distribution on \([0, \infty)\) with density function \(f\) and with reliability function \(F\) for \(([0, \infty), \le)\).

If \(Z_t\) and \(\{N_t = 0\}\) are independent for each \(t \in (0, \infty)\) then we can assume (by an appropriate choice of the density function) that \[f(s) = [1 - F(t)] \sum_{n=0}^\infty f(nt + s)\] for \(t \in (0, \infty)\) and \(s \in [0, t)\).

However, it is easy to see that if \(X\) has an exponential distribution, then the condition above holds for all \(t \in (0, \infty)\) and \(s \in [0, \infty)\). Thus, our converse is best stated as follows:
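
For instance, the condition can be verified numerically for an exponential density; in the sketch below, the values of \(\alpha\), \(t\), and \(s\) are arbitrary choices.

```python
import math

alpha, t, s = 1.3, 0.9, 0.4
f = lambda x: alpha * math.exp(-alpha * x)   # exponential density
F = lambda x: math.exp(-alpha * x)           # reliability function
rhs = (1 - F(t)) * sum(f(n * t + s) for n in range(500))
print(f(s), rhs)  # both approximately 0.7727
```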

Suppose that the condition above holds for \(s = 0\) and for \(s = t\), for all \(t \in (0, \infty)\). Then \(X\) has an exponential distribution.

Details:

The hypotheses are that \begin{align*} f(0) &= [1 - F(t)] \sum_{n=0}^\infty f(nt), \quad t \in (0, \infty) \\ f(t) &= [1 - F(t)] \sum_{n=0}^\infty f[(n + 1)t], \quad t \in (0, \infty) \end{align*} Subtracting the second equation from the first gives \(f(0) - f(t) = [1 - F(t)] f(0)\), so \(f(t) = f(0) F(t)\) for \(t \in [0, \infty)\). Hence \(X\) has a constant rate distribution on \(([0, \infty), \le)\) with rate \(\alpha = f(0)\). Equivalently, \(X\) has an exponential distribution on \(([0, \infty), +)\) with rate \(\alpha\).

The quotient space here can also be viewed as a lexicographic product. That is, \(([0, \infty), \le)\) is isomorphic to the lexicographic product of \((t \N, \lt)\) with \(([0, t), \le)\), as explored in Section 1.9.