\(\newcommand{\P}{\mathbb{P}}\) \(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\R}{\mathbb{R}}\) \(\newcommand{\N}{\mathbb{N}}\) \(\newcommand{\ms}{\mathscr}\) \(\newcommand{\bs}{\boldsymbol}\) \(\newcommand{\var}{\text{var}}\) \(\newcommand{\cor}{\text{cor}}\) \(\newcommand{\cov}{\text{cov}}\) \(\newcommand{\sd}{\text{sd}}\) \(\newcommand{\hn}{\text{hn}}\)

4. Marshall-Olkin Distributions

Preliminaries

Quite a large number of multivariate reliability models, and in particular multivariate exponential distributions, have been formulated over the years. Our goal in this section and the next is not an exhaustive review of this literature, but rather to consider a few models that can be generalized to, or have special connections with, the semigroup and graph theory in this text. We start once again with the standard space \(([0, \infty), +, \lambda)\) where \(\lambda\) is Lebesgue measure on the \(\sigma\)-algebra \(\ms B\) of Borel subsets of \([0, \infty)\). The corresponding graph, of course, is \(([0, \infty), \le)\). For \(n \in \N_+\), let \(\left([0, \infty)^n, +, \lambda^n\right)\) denote the power space of order \(n\). The corresponding graph \(\left([0, \infty)^n, \le^n\right)\) is the power graph of \(([0, \infty), \le)\) of order \(n\). Since \(\le\) is a total order on \([0, \infty)\), our usual lattice notation becomes \(x \wedge y = \min\{x, y\}\) and \(x \vee y = \max\{x, y\}\) for \(x, \, y \in [0, \infty)\).

In the most general sense, a random vector \(\bs{X} = (X_1, X_2, \ldots, X_n)\) has a multivariate exponential distribution on \([0, \infty)^n\) if \(X_i\) has an exponential distribution on \(([0, \infty), +)\) for each \(i \in \{1, 2, \ldots, n\}\).

That is, \(X_i\) has an ordinary exponential distribution on \([0, \infty)\) for each \(i \in \{1, 2, \ldots, n\}\). Of course, the statement that \(\bs{X}\) has a multivariate exponential distribution on \([0, \infty)^n\) does not mean that \(\bs{X}\) has an exponential distribution on the semigroup \( \left([0, \infty)^n, +\right) \), except when \(n = 1\). We know the full story:

Random vector \(\bs{X}\) has an exponential distribution on \( \left([0, \infty)^n, +\right) \) if and only if \((X_1, X_2, \ldots, X_n)\) are independent and \(X_i\) has an exponential distribution on \(([0, \infty), +)\) for each \(i \in \{1, 2, \ldots, n\}\).

The definition is too weak and the proposition is too strong for an interesting multivariate reliability model. Other conditions need to be imposed on \(\bs{X}\), often multivariate memoryless conditions of some sort. Among the best known of the multivariate exponential distributions are the Marshall-Olkin distributions. These distributions were discussed in Section 2.7 because they can be defined in the setting of a general positive semigroup \((S, \cdot)\) whose partial order graph \((S, \preceq)\) is a lattice. In this section we review the results for the standard setting above (the original setting of Marshall and Olkin), including results that do not make sense in the general semigroup setting. We start with the bivariate case, where the notation and results are simplest, before moving on to the general multivariate case.

Bivariate Distributions

Suppose that \(U, \, V, \, W\) are independent and have exponential distributions on \(([0, \infty), +)\), with rates \(\alpha, \, \beta, \, \delta \in (0, \infty)\) respectively. Let \(X = U \wedge W\) and \(Y = V \wedge W\). Then \((X, Y)\) has the Marshall-Olkin distribution on \( \left([0, \infty)^2, +\right) \) with parameters \(\alpha, \, \beta, \, \delta\).
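The construction in the definition is easy to simulate. A minimal sketch (plain Python; the function name and parameter values are illustrative, not from the text) draws \((X, Y) = (U \wedge W, V \wedge W)\) and uses the sample to estimate \(\P(X = Y)\), which, as shown later in this section, equals \(\delta / (\alpha + \beta + \delta)\).

```python
import random

def sim_marshall_olkin(alpha, beta, delta, n, seed=0):
    """Draw n samples of (X, Y) = (min(U, W), min(V, W)), where U, V, W
    are independent exponentials with rates alpha, beta, delta."""
    rng = random.Random(seed)
    sample = []
    for _ in range(n):
        u = rng.expovariate(alpha)
        v = rng.expovariate(beta)
        w = rng.expovariate(delta)
        sample.append((min(u, w), min(v, w)))
    return sample

# With alpha = beta = delta = 1, the fraction of draws with X == Y
# estimates P(X = Y) = delta / (alpha + beta + delta) = 1/3.
sample = sim_marshall_olkin(1.0, 1.0, 1.0, 100_000)
tie_frac = sum(x == y for x, y in sample) / len(sample)
```

Note that ties occur with positive probability here (exactly when \(W \lt U \wedge V\), so that both coordinates equal \(W\)), unlike for a jointly continuous distribution.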

The motivation for the definition is a two-component system. Random variable \(U\) is the arrival time of a shock that is fatal for component 1, but not component 2; random variable \(V\) is the arrival time of a shock that is fatal for component 2, but not component 1; and random variable \(W\) is the arrival time of an event that is fatal for both components.

Suppose that \((X, Y)\) has the Marshall-Olkin distribution on \( \left([0, \infty)^2, +\right) \) with parameters \( \alpha, \, \beta, \, \delta \in (0, \infty) \). Then each of the following has an exponential distribution on \( ([0, \infty), +) \).

  1. \( X = U \wedge W \) with rate \(\alpha + \delta\)
  2. \( Y = V \wedge W \) with rate \(\beta + \delta\)
  3. \( X \wedge Y = U \wedge V \wedge W \) with rate \(\alpha + \beta + \delta\)

In particular, \( (X, Y) \) has a bivariate exponential distribution in the sense of the definition.
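The three rates in the proposition are easy to confirm by simulation. A Monte Carlo sketch (illustrative parameter values, not from the text): the sample means of \(X\), \(Y\), and \(X \wedge Y\) should be near \(1/(\alpha + \delta)\), \(1/(\beta + \delta)\), and \(1/(\alpha + \beta + \delta)\).

```python
import random

# Illustrative parameters: alpha = 2, beta = 3, delta = 1, so the
# expected means are 1/3, 1/4, and 1/6 respectively.
alpha, beta, delta, n = 2.0, 3.0, 1.0, 200_000
rng = random.Random(1)
sx = sy = sm = 0.0
for _ in range(n):
    u = rng.expovariate(alpha)
    v = rng.expovariate(beta)
    w = rng.expovariate(delta)
    x, y = min(u, w), min(v, w)
    sx += x
    sy += y
    sm += min(x, y)
mean_x, mean_y, mean_min = sx / n, sy / n, sm / n
```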

Suppose that \((X, Y)\) has the Marshall-Olkin distribution on \( \left([0, \infty)^2, +\right) \) with parameters \(\alpha, \, \beta, \, \delta \in (0, \infty)\). Then \((X, Y)\) has reliability function \(H\) for \( \left([0, \infty)^2, +\right) \) given by \[H(x, y) = \exp[-\alpha x - \beta y - \delta (x \vee y)], \quad (x, y) \in [0, \infty)^2 \]
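The closed form above is straightforward to code. In the sketch below (the function name is a placeholder), two consistency checks with the earlier proposition are recorded: setting \(y = 0\) recovers the marginal survival function of \(X\) (rate \(\alpha + \delta\)), and on the diagonal \(H(t, t) = e^{-(\alpha + \beta + \delta) t}\).

```python
import math

def rel_fn(x, y, alpha, beta, delta):
    """P(X > x, Y > y) = exp(-alpha*x - beta*y - delta*max(x, y))."""
    return math.exp(-alpha * x - beta * y - delta * max(x, y))

# Illustrative parameter values for the two consistency checks.
alpha, beta, delta = 2.0, 3.0, 1.0
marginal_ok = math.isclose(rel_fn(1.5, 0.0, alpha, beta, delta),
                           math.exp(-(alpha + delta) * 1.5))
diagonal_ok = math.isclose(rel_fn(0.7, 0.7, alpha, beta, delta),
                           math.exp(-(alpha + beta + delta) * 0.7))
```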

Suppose that \((X, Y)\) has reliability function \(H\) on \( \left([0, \infty)^2, + \right) \). Then \((X, Y)\) has a Marshall-Olkin distribution if and only if \((X, Y)\) has a bivariate exponential distribution as in the definition and satisfies the partial memoryless property

\[ H(t + x, t + y) = H(t, t) H(x, y), \quad x, \, y, \, t \in [0, \infty) \]
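The partial memoryless identity holds exactly for the closed-form reliability function, since \(\max(t + x, t + y) = t + \max(x, y)\). A quick numerical spot-check on a small grid (illustrative parameter values):

```python
import math

def rel_fn(x, y, alpha=2.0, beta=3.0, delta=1.0):
    # Closed-form reliability function from the preceding proposition.
    return math.exp(-alpha * x - beta * y - delta * max(x, y))

# Verify H(t + x, t + y) == H(t, t) * H(x, y) at a grid of points.
memoryless_ok = all(
    math.isclose(rel_fn(t + x, t + y), rel_fn(t, t) * rel_fn(x, y))
    for t in (0.0, 0.5, 1.7)
    for x in (0.0, 0.3, 2.0)
    for y in (0.1, 1.0, 4.0)
)
```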

Let \(\Delta = \{(x, x): x \in [0, \infty)\}\) denote the diagonal of \([0, \infty)^2\). The sub-semigroup \((\Delta, +)\) is isomorphic to \(([0, \infty), +)\) with invariant measure \(\lambda_1\) given by \(\lambda_1(A) = \lambda(A_1)\) for measurable \(A \subseteq \Delta\), where \(A_1 = \{x \in [0, \infty): (x, x) \in A\}\). A Marshall-Olkin distribution on \([0, \infty)^2\) is mixed, with positive probability on \(\Delta\).

Suppose that \((X, Y)\) has the Marshall-Olkin distribution on \([0, \infty)^2\) with parameters \(\alpha, \, \beta, \, \delta \in (0, \infty)\). Then

  1. \((X, Y)\) has density \(h_a\) on \([0, \infty)^2 - \Delta\) with respect to \(\lambda^2\) given by \[h_a(x, y) = \begin{cases} \alpha (\beta + \delta) \exp[-\alpha x - (\beta + \delta) y], \quad 0 \le x \lt y \lt \infty \\ \beta (\alpha + \delta) \exp[-\beta y - (\alpha + \delta) x], \quad 0 \le y \lt x \lt \infty \end{cases}\]
  2. \((X, Y)\) has density \(h_s\) on \(\Delta\) with respect to \(\lambda_1\) given by \[h_s(x, x) = \delta \exp[-(\alpha + \beta + \delta) x], \quad (x, x) \in \Delta \]
Details:

This follows from standard results on independent exponential variables, by considering the six possible orderings of \((U, V, W)\); the two orderings in which \(W\) is smallest are combined in case 5:

  1. The event \(\{U \lt V \lt W\}\) has probability \[\P(U \lt V \lt W) = \frac{\alpha}{\alpha + \beta + \delta} \frac{\beta}{\beta + \delta}\] Moreover, given this event we have \(X = U\) and \(Y = V\) so the conditional density of \((X, Y)\) given the event is \[(x, y) \mapsto (\alpha + \beta + \delta) \exp[-(\alpha + \beta + \delta) x] (\beta + \delta) \exp[-(\beta + \delta)(y - x)], \quad 0 \le x \lt y \lt \infty\]
  2. The event \(\{V \lt U \lt W\}\) has probability \[\P(V \lt U \lt W) = \frac{\beta}{\alpha + \beta + \delta} \frac{\alpha}{\alpha + \delta}\] Moreover, given this event we have \(X = U\) and \(Y = V\) so the conditional density of \((X, Y)\) given the event is \[(x, y) \mapsto (\alpha + \beta + \delta) \exp[-(\alpha + \beta + \delta) y] (\alpha + \delta) \exp[-(\alpha + \delta)(x - y)], \quad 0 \le y \lt x \lt \infty\]
  3. The event \(\{U \lt W \lt V\}\) has probability \[\P(U \lt W \lt V) = \frac{\alpha}{\alpha + \beta + \delta} \frac{\delta}{\beta + \delta}\] Moreover, given this event we have \(X = U\) and \(Y = W\) so the conditional density of \((X, Y)\) given the event is \[(x, y) \mapsto (\alpha + \beta + \delta) \exp[-(\alpha + \beta + \delta) x] (\beta + \delta) \exp[-(\beta + \delta)(y - x)], \quad 0 \le x \lt y \lt \infty\]
  4. The event \(\{V \lt W \lt U\}\) has probability \[\P(V \lt W \lt U) = \frac{\beta}{\alpha + \beta + \delta} \frac{\delta}{\alpha + \delta}\] Moreover, given this event we have \(X = W\) and \(Y = V\) so the conditional density of \((X, Y)\) given the event is \[(x, y) \mapsto (\alpha + \beta + \delta) \exp[-(\alpha + \beta + \delta) y] (\alpha + \delta) \exp[-(\alpha + \delta)(x - y)], \quad 0 \le y \lt x \lt \infty\]
  5. Finally, the event \(\{W \lt U, W \lt V\} = \{W \lt U \wedge V\}\) has probability \[\P(W \lt U \wedge V) = \frac{\delta}{\alpha + \beta + \delta}\] Moreover, given this event we have \(X = Y = W\) and so the conditional density of \((X, Y)\) given the event is \[(x, x) \mapsto (\alpha + \beta + \delta) \exp[-(\alpha + \beta + \delta) x], \quad (x, x) \in \Delta \]

So in particular \[\P(X = Y) = \frac{\delta}{\alpha + \beta + \delta}\]
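As a crude numerical sanity check on the density pieces (midpoint Riemann sums on a truncated grid, with illustrative parameter values), the absolutely continuous part should have total mass \((\alpha + \beta)/(\alpha + \beta + \delta)\) and the singular part mass \(\delta/(\alpha + \beta + \delta)\). The grid skips the diagonal itself, so a small amount of \(h_a\) mass near \(\Delta\) is lost.

```python
import math

# Illustrative parameters; expected masses are 5/6 and 1/6.
alpha, beta, delta = 2.0, 3.0, 1.0
lam = alpha + beta + delta
h, top = 0.01, 8.0   # grid step and truncation point
m = int(top / h)

# Absolutely continuous part: sum h_a over midpoints off the diagonal.
mass_a = 0.0
for i in range(m):
    x = h / 2 + i * h
    for j in range(m):
        y = h / 2 + j * h
        if x < y:
            mass_a += alpha * (beta + delta) * math.exp(-alpha * x - (beta + delta) * y)
        elif y < x:
            mass_a += beta * (alpha + delta) * math.exp(-beta * y - (alpha + delta) * x)
mass_a *= h * h

# Singular part: one-dimensional sum of h_s along the diagonal.
mass_s = h * sum(delta * math.exp(-lam * (h / 2 + k * h)) for k in range(m))
```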

Suppose again that \((X, Y)\) has the Marshall-Olkin distribution on \(([0, \infty)^2, +)\) with parameters \(\alpha, \, \beta, \, \delta \in (0, \infty)\). Then

  1. \(\E(X) = 1 / (\alpha + \delta)\), \(\var(X) = 1 / (\alpha + \delta)^2\)
  2. \(\E(Y) = 1 / (\beta + \delta)\), \(\var(Y) = 1 / (\beta + \delta)^2\)
  3. \(\cor(X, Y) = \delta / (\alpha + \beta + \delta)\)
Details:

Parts (a) and (b) follow immediately from the marginal distributions above. Part (c) can be computed using the density function, but a conditioning argument, as in that proposition, also works: \begin{align*} \E(XY, U \lt V \lt W) &= \frac{\alpha^2 \beta + 3 \alpha \beta^2 + 3 \alpha \beta \delta}{(\alpha + \beta + \delta)^3 (\beta + \delta)^2} \\ \E(XY, V \lt U \lt W) &= \frac{\alpha \beta^2 + 3 \alpha^2 \beta + 3 \alpha \beta \delta}{(\alpha + \beta + \delta)^3 (\alpha + \delta)^2} \\ \E(XY, U \lt W \lt V) &= \frac{\alpha^2 \delta + 3 \alpha \delta^2 + 3 \alpha \beta \delta}{(\alpha + \beta + \delta)^3 (\beta + \delta)^2} \\ \E(XY, V \lt W \lt U) &= \frac{\beta^2 \delta + 3 \beta \delta^2 + 3 \alpha \beta \delta}{(\alpha + \beta + \delta)^3 (\alpha + \delta)^2} \\ \E(XY, W \lt U \wedge V) &= \frac{2 \delta}{(\alpha + \beta + \delta)^3} \end{align*} Summing and simplifying gives \[\E(X Y) = \frac{1}{\alpha + \beta + \delta} \left(\frac{1}{\alpha + \delta} + \frac{1}{\beta + \delta}\right)\] and then part (c) follows.

It's interesting that \(\cor(X, Y) = \P(X = Y)\).
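This coincidence is easy to see empirically. A Monte Carlo sketch (illustrative parameters, chosen so that \(\delta / (\alpha + \beta + \delta) = 1/2\)) estimates both the sample correlation and the tie frequency:

```python
import math
import random

# With alpha = 1, beta = 2, delta = 3, both cor(X, Y) and P(X = Y)
# should be delta / (alpha + beta + delta) = 1/2.
alpha, beta, delta, n = 1.0, 2.0, 3.0, 200_000
rng = random.Random(7)
xs, ys, ties = [], [], 0
for _ in range(n):
    u = rng.expovariate(alpha)
    v = rng.expovariate(beta)
    w = rng.expovariate(delta)
    x, y = min(u, w), min(v, w)
    xs.append(x)
    ys.append(y)
    ties += (x == y)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
sdx = math.sqrt(sum((a - mx) ** 2 for a in xs) / n)
sdy = math.sqrt(sum((b - my) ** 2 for b in ys) / n)
corr = cov / (sdx * sdy)
tie_frac = ties / n
```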

Multivariate Distributions

The extension of the Marshall-Olkin distribution to higher dimensions is a bit complicated and requires some additional notation to state the definition and results cleanly, just as in Section 2.7. For \(n \in \N_+\) let \(B_n\) denote the set of bit strings of length \(n\), excluding the all 0 string.

Suppose that \(\{Z_b: b \in B_n\}\) is a collection of independent variables, and that \(Z_b\) has the exponential distribution on \(([0, \infty), +)\) with rate \(\alpha_b \in (0, \infty)\). Define \[ X_i = \min\{Z_b: b \in B_n, b_i = 1\}, \quad i \in \{1, 2, \ldots, n\} \] Then \((X_1, X_2, \ldots, X_n)\) has the Marshall-Olkin distribution on \(([0, \infty)^n, +)\) with parameters \(\{\alpha_b: b \in B_n\}\).

So a collection of \(2^n - 1\) independent exponential variables on \(([0, \infty), +)\) is required for the construction of the Marshall-Olkin variable on \(([0, \infty)^n, +)\). The marginal distributions are of the same type.
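The construction translates directly into code. In the sketch below (function name and parameters are illustrative), a bit string is represented as a 0/1 tuple, and the parameter set as a dictionary mapping each nonzero bit string to its rate \(\alpha_b\).

```python
import random
from itertools import product

def sim_mo(rates, n_draws, seed=0):
    """Simulate the Marshall-Olkin construction: `rates` maps each
    nonzero bit string b (a 0/1 tuple of length n) to the rate of the
    shock Z_b, and X_i is the smallest Z_b among the b with b_i = 1."""
    n = len(next(iter(rates)))
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        z = {b: rng.expovariate(a) for b, a in rates.items()}
        draws.append(tuple(min(z[b] for b in z if b[i] == 1)
                           for i in range(n)))
    return draws

# All 2^3 - 1 = 7 nonzero bit strings of length 3, each with rate 1;
# then X_0 is exponential with rate 4 (four bit strings have b_0 = 1),
# so its sample mean should be near 1/4.
rates = {b: 1.0 for b in product((0, 1), repeat=3) if any(b)}
draws = sim_mo(rates, 50_000)
mean0 = sum(x[0] for x in draws) / len(draws)
```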

Suppose that \(\bs{X} = (X_1, X_2, \ldots, X_n)\) has a Marshall-Olkin distribution on \(([0, \infty)^n, +)\) with parameters \(\{\alpha_b \in (0, \infty): b \in B_n\}\), and that \((j_1, j_2, \ldots, j_k)\) is a subsequence of \((1, 2, \ldots, n)\). Then

  1. \( \left(X_{j_1}, X_{j_2}, \ldots, X_{j_k}\right) \) has a Marshall-Olkin distribution on \(([0, \infty)^k, +)\). If we let \(C_k\) denote the bit strings indexed by \(\{j_1, j_2, \ldots, j_k\}\) (except the all 0 string) then the new set of parameters is given by \[\beta_c = \sum\left\{\alpha_b: b \in B_n, \, b_i = c_i \text{ for } i \in \{j_1, j_2, \ldots, j_k\}\right\}, \quad c \in C_k\]
  2. \( \min\left\{X_{j_1}, X_{j_2}, \ldots, X_{j_k}\right\} \) has the exponential distribution on \(([0, \infty), +)\) with rate \( \sum \left\{\alpha_b: b \in B_n, b_{j_1} = b_{j_2} = \cdots = b_{j_k} = 1\right\} \)
  3. \[ \P\left(X_{j_1} = X_{j_2} = \cdots = X_{j_k}\right) = \frac{\sum\left\{\alpha_b: b \in B_n, b_{j_i} = 1 \text{ for all } i \in \{1, 2, \ldots, k\}\right\}} {\sum\left\{\alpha_b: b \in B_n, b_{j_i} = 1 \text{ for some } i \in \{1, 2, \ldots, k\}\right\}} \]

In particular, \(X_i\) has an exponential distribution on \(([0, \infty), +)\) with rate \(\sum\{\alpha_b: b \in B_n, b_i = 1\}\), so \(\bs{X}\) has a multivariate exponential distribution in the sense of the definition. From part (c), \(\bs{X}\) assigns positive probability to each of the diagonal sets \(\left\{x \in [0, \infty)^n: x_{j_1} = x_{j_2} = \cdots = x_{j_k}\right\}\).
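Parts (b) and (c) reduce to sums over bit strings and are simple to compute from the parameter dictionary. A sketch with placeholder helper names, checked against the all-rates-equal-to-1 case in dimension 3:

```python
from itertools import product

def min_rate(rates, J):
    """Part (b): the rate of min over the coordinates in J is the sum
    of alpha_b over bit strings b with b_j = 1 for every j in J."""
    return sum(a for b, a in rates.items() if all(b[j] for j in J))

def equality_prob(rates, J):
    """Part (c): P(X_j equal for all j in J) is the 'all ones on J'
    mass divided by the 'some one on J' mass."""
    num = sum(a for b, a in rates.items() if all(b[j] for j in J))
    den = sum(a for b, a in rates.items() if any(b[j] for j in J))
    return num / den

# All 7 nonzero bit strings of length 3 with rate 1: min over all three
# coordinates has rate 1, each X_i has rate 4, and
# P(X_0 = X_1 = X_2) = 1/7 while P(X_0 = X_1) = 2/6 = 1/3.
rates = {b: 1.0 for b in product((0, 1), repeat=3) if any(b)}
```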

Suppose that \(\bs{X} = (X_1, X_2, \ldots, X_n)\) has a Marshall-Olkin distribution on \(([0, \infty)^n, +)\) with parameters \(\{\alpha_b \in (0, \infty): b \in B_n\}\). Let \(H\) denote the reliability function of \(\bs{X}\) on \(([0, \infty)^n, +)\). Then \[H(x_1, x_2, \ldots, x_n) = \exp\left(-\sum_{b \in B_n} \alpha_b \, \max\{x_i: b_i = 1\}\right), \quad (x_1, x_2, \ldots, x_n) \in [0, \infty)^n \]
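The multivariate reliability function is a one-liner given the same dictionary representation of the parameters (the function name is a placeholder). With \(n = 2\) and bit strings \(10, 01, 11\) carrying rates \(\alpha, \beta, \delta\), it reduces to the bivariate formula \(\exp[-\alpha x - \beta y - \delta (x \vee y)]\):

```python
import math

def rel_fn(x, rates):
    """exp(-sum over b of alpha_b * max{x_i : b_i = 1}), where x is a
    point of [0, inf)^n and rates maps nonzero bit strings to alpha_b."""
    return math.exp(-sum(a * max(xi for xi, bi in zip(x, b) if bi)
                         for b, a in rates.items()))

# Check the reduction to the bivariate case at one (arbitrary) point.
alpha, beta, delta = 2.0, 3.0, 1.0
rates2 = {(1, 0): alpha, (0, 1): beta, (1, 1): delta}
x, y = 0.4, 1.3
bivariate_ok = math.isclose(rel_fn((x, y), rates2),
                            math.exp(-alpha * x - beta * y - delta * max(x, y)))
```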

The generalization of the partial memoryless property is straightforward.

Suppose again that \(\bs{X} = (X_1, X_2, \ldots, X_n)\) has a Marshall-Olkin distribution on \(([0, \infty)^n, +)\) with reliability function \(H\). Then \(\bs{X}\) has the partial memoryless property \[ H(t + x_1, t + x_2, \ldots, t + x_n) = H(t, t, \ldots, t) H(x_1, x_2, \ldots, x_n), \quad t \in [0, \infty), \, (x_1, x_2, \ldots, x_n) \in [0, \infty)^n \]
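As in the bivariate case, the identity holds exactly for the closed form, since \(\max\{t + x_i: b_i = 1\} = t + \max\{x_i: b_i = 1\}\) for each \(b\). A numerical spot-check in dimension 3, with arbitrary positive rates on all \(2^3 - 1\) nonzero bit strings:

```python
import math
from itertools import product

def rel_fn(x, rates):
    # Closed-form reliability function from the proposition above.
    return math.exp(-sum(a * max(xi for xi, bi in zip(x, b) if bi)
                         for b, a in rates.items()))

# Arbitrary illustrative rates, indexed by the nonzero bit strings.
rates = {b: 0.5 + sum(b) for b in product((0, 1), repeat=3) if any(b)}
pts = [(0.1, 0.2, 0.3), (1.0, 0.0, 2.5), (3.0, 3.0, 0.2)]
memoryless_ok = all(
    math.isclose(rel_fn(tuple(t + xi for xi in x), rates),
                 rel_fn((t, t, t), rates) * rel_fn(x, rates))
    for t in (0.0, 0.4, 2.0) for x in pts
)
```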

Suppose again that \(\bs{X} = (X_1, X_2, \ldots, X_n)\) has a Marshall-Olkin distribution on \(([0, \infty)^n, +)\) with parameters \(\{\alpha_b \in (0, \infty): b \in B_n\}\). For distinct \(i, \, j \in \{1, 2, \ldots, n\}\), \[\cor(X_i, X_j) = \P(X_i = X_j) = \frac{\sum\left\{\alpha_b: b \in B_n, b_i = b_j = 1\right\}} {\sum\left\{\alpha_b: b \in B_n, b_i = 1 \text{ or } b_j = 1\right\}}\]

Details:

This follows from the previous propositions, since \((X_i, X_j)\) has the bivariate Marshall-Olkin distribution with parameters \( \sum\{\alpha_b: b \in B_n, b_i = 1, b_j = 0\} \), \( \sum\{\alpha_b: b \in B_n, b_i = 0, b_j = 1\} \), and \( \sum\{\alpha_b: b \in B_n, b_i = b_j = 1\} \).