\(\newcommand{\P}{\mathbb{P}}\) \(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\R}{\mathbb{R}}\) \(\newcommand{\N}{\mathbb{N}}\) \(\newcommand{\ms}{\mathscr}\) \(\newcommand{\rta}{\rightarrow}\) \(\newcommand{\Rta}{\Rightarrow}\) \(\newcommand{\upa}{\uparrow}\) \(\newcommand{\nea}{\nearrow}\) \(\newcommand{\bs}{\boldsymbol}\)

7. Semigroup Products

Basics

In this section we consider semigroups relative to underlying measurable spaces \((S, \ms S)\) and \((T, \ms T)\). Recall that the product measurable space is \((S \times T, \ms S \times \ms T)\). We will use \(\cdot\) (and concatenation) generically as the semigroup operator, regardless of the underlying base set, but the semigroup under consideration should be clear from context.

Suppose that \((S, \cdot)\) and \((T, \cdot)\) are measurable semigroups. The direct product is the semigroup \((S \times T, \cdot)\) with the binary operation \(\cdot\) defined by \[(x, y)(u, v) = (xu, yv)\]
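The componentwise definition is easy to illustrate concretely. The sketch below is an illustration, not part of the text: it takes \((S, \cdot) = ([0, \infty), +)\) and \((T, \cdot) = ((0, \infty), \cdot)\) as the factors and checks associativity numerically, mirroring the componentwise argument in the details.

```python
# Illustrative sketch: the direct product operation is componentwise.
# Here (S, .) = ([0, inf), +) and (T, .) = ((0, inf), *).

def op_S(x, u):
    """Operation in (S, .) = ([0, inf), +)."""
    return x + u

def op_T(y, v):
    """Operation in (T, .) = ((0, inf), *)."""
    return y * v

def op_product(p, q):
    """Componentwise operation in the direct product (S x T, .)."""
    (x, y), (u, v) = p, q
    return (op_S(x, u), op_T(y, v))

# Associativity holds componentwise, as in the proof.
a, b, c = (1.0, 2.0), (3.0, 4.0), (5.0, 0.5)
assert op_product(op_product(a, b), c) == op_product(a, op_product(b, c))
```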

Details:

We need to show that the product space is a semigroup and satisfies the assumptions we have imposed. Let \((u, v), \, (x, y), \, (z, w) \in S \times T\). Then \[[(x, y) (u, v)](z, w) = (x u, y v)(z, w) = (x u z, y v w) = (x, y) (u z, v w) = (x, y) [(u, v) (z, w)]\] so the associative property holds. Next suppose that \((u, v)(x, y) = (u, v)(z, w)\). Then \(u x = u z\) and \(v y = v w\). Hence \(x = z\) and \(y = w\) so \((x, y) = (z, w)\). Therefore the left cancellation law holds. Finally, \[[(x, y), (u, v)] \mapsto (x, y)(u, v) = (x u, y v)\] is measurable since \((x, u) \mapsto x u\) and \((y, v) \mapsto y v\) are measurable. The graph \((S \times T, =)\) is measurable since \((S, =)\) and \((T, =)\) are measurable.

Note that if \((x, y) \in S \times T\), \(A \in \ms S\), and \(B \in \ms T\) then \begin{align*} (x, y) (A \times B) & = (x A) \times (y B) \\ (x, y)^{-1}(A \times B) & = (x^{-1} A) \times (y^{-1} B) \end{align*} Of special importance is the case where \((S, \ms S, \cdot) = (T, \ms T, \cdot)\) so that the direct product \((S^2, \cdot)\) is the direct power of order 2.
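As a concrete illustration, consider the direct power \(([0, \infty)^2, +)\) of the standard semigroup \(([0, \infty), +)\), writing the operation additively. Then \begin{align*} (x, y) + \left([a, \infty) \times [b, \infty)\right) & = [x + a, \infty) \times [y + b, \infty) \\ (x, y)^{-1}\left([a, \infty) \times [b, \infty)\right) & = [(a - x) \vee 0, \infty) \times [(b - y) \vee 0, \infty) \end{align*}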

Suppose again that \((S, \cdot)\) and \((T, \cdot)\) are semigroups with associated graphs \((S, \rta)\) and \((T, \upa)\), respectively. Then the graph \((S \times T, \nea)\) associated with the product semigroup \((S \times T, \cdot)\) is the direct product of the graphs \((S, \rta)\) and \((T, \upa)\).

Details:

Let \((u, v), \, (x, y) \in S \times T\). By definition, \((u, v) \nea (x, y)\) if and only if \((x, y) \in (u, v)(S \times T) = (u S) \times (v T)\) if and only if \(x \in u S\) and \(y \in v T\) if and only if \(u \rta x\) and \(v \upa y\).

So all of the results in Section 1.8 on direct products of graphs apply to direct products of semigroups.

Suppose again that \((S, \cdot)\) and \((T, \cdot)\) are semigroups with direct product \((S \times T, \cdot)\).

  1. If \((S, \cdot)\) and \((T, \cdot)\) are positive semigroups, then so is \((S \times T, \cdot)\).
  2. If \((S, \cdot)\) or \((T, \cdot)\) is a strict positive semigroup, then so is \((S \times T, \cdot)\).
Details:
  1. Let \(e \in S\) and \(\epsilon \in T\) denote the identity elements of \((S, \cdot)\) and \((T, \cdot)\) respectively. Then for \((x, y) \in S \times T\), \((x, y) (e, \epsilon) = (x e, y \epsilon) = (x, y)\) and \((e, \epsilon) (x, y) = (e x, \epsilon y) = (x, y)\). So \((e, \epsilon)\) is the identity for \((S \times T, \cdot)\). Suppose now that \((u, v), \, (x, y) \in (S \times T)_+ = (S \times T) \setminus \{(e, \epsilon)\}\). Then either \(x \ne e\) or \(y \ne \epsilon\). In the first case, \(u x \ne u\) and in the second case \(v y \ne v\). In both cases, \((u, v) (x, y) = (u x, v y) \ne (u, v)\).
  2. Suppose again that \((x, y), \, (u, v) \in S \times T\). Then \((u, v) (x, y) = (u x, v y)\). Since one of the semigroups is strictly positive, either \(u x \ne u\) or \(v y \ne v\). In both cases, \((u, v) (x, y) \ne (u, v)\).

In part (b), the strict positive semigroup \((S \times T, \cdot)\) can be made into a positive semigroup with the addition of an identity element \((e, \epsilon)\) as described in Section 1.

If \((S, \cdot)\) and \((T, \cdot)\) are right zero semigroups then so is the direct product \((S \times T, \cdot)\).

Details:

By definition, \[(u, v) (x, y) = (u x, v y) = (x, y), \quad (u, v), \, (x, y) \in S \times T\]

For the next result, suppose that \(\mu\) and \(\nu\) are \(\sigma\)-finite measures on \((S, \ms S)\) and \((T, \ms T)\) respectively. Recall that \(\mu \times \nu\) denotes the product measure on \((S \times T, \ms S \times \ms T)\), also \(\sigma\)-finite.

If \(\mu\) and \(\nu\) are left-invariant for \((S, \cdot)\) and \((T, \cdot)\), respectively, then \(\mu \times \nu\) is left-invariant for \((S \times T, \cdot)\).

Details:

For \(x \in S\), \(y \in T\), \(A \in \ms S\), and \(B \in \ms T\), \begin{align*} (\mu \times \nu)[(x, y) (A \times B)] &= (\mu \times \nu)(xA \times yB) \\ &= \mu(xA) \nu(yB) = \mu(A) \nu(B) = (\mu \times \nu)(A \times B) \end{align*} Therefore, for fixed \((x, y) \in S \times T\), the measures \(C \mapsto (\mu \times \nu)[(x, y) C]\) and \(C \mapsto (\mu \times \nu)(C)\) on \((S \times T, \ms S \times \ms T)\) agree on the measurable rectangles \(A \times B\) where \(A \in \ms S\) and \(B \in \ms T\). Hence, these measures must agree on all of \(\ms S \times \ms T\), and hence \(\mu \times \nu\) is left-invariant for \((S \times T, \cdot)\).
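As a familiar special case, Lebesgue measure \(\lambda\) on \([0, \infty)\) is left-invariant for the standard semigroup \(([0, \infty), +)\), since \(\lambda(x + A) = \lambda(A)\). Hence two-dimensional Lebesgue measure \(\lambda \times \lambda\) is left-invariant for \(([0, \infty)^2, +)\): \[(\lambda \times \lambda)[(x, y) + (A \times B)] = \lambda(x + A) \lambda(y + B) = \lambda(A) \lambda(B) = (\lambda \times \lambda)(A \times B)\]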

Suppose now that \((S, \cdot)\) and \((T, \cdot)\) are positive semigroups with identity elements \(e\) and \(\epsilon\), respectively, and that the left-invariant measures \(\mu\) and \(\nu\) are unique, up to multiplication by positive constants. We show that \(\mu \times \nu\) has the same property. Let \(\ms C(T) = \{B \in \ms T: \nu(B) \in (0, \infty)\}\) and suppose that \(\lambda\) is a \(\sigma\)-finite, left-invariant measure for \((S \times T, \cdot)\). For \(C \in \ms C(T)\), define \[\mu_C(A) = \lambda(A \times C), \quad A \in \ms S\] Then \(\mu_C\) is a regular measure on \(S\) (although it may not have support \(S\)). Moreover, for \(x \in S\) and \(A \in \ms S\), \[\mu_C(x A) = \lambda(x A \times C) = \lambda[(x, \epsilon) (A \times C)] = \lambda(A \times C) = \mu_C(A)\] so \(\mu_C\) is left-invariant for \((S, \cdot)\). It follows that for each \(C \in \ms C(T)\), there exists \(\rho(C) \in [0, \infty)\) such that \(\mu_C = \rho(C) \mu\); that is, \[ \lambda(A \times C) = \mu(A) \rho(C), \quad A \in \ms S, \, C \in \ms C(T) \] Fix \(A \in \ms S\) with \(\mu(A) \in (0, \infty)\). If \(C, \, D \in \ms C(T)\) and \(C \subseteq D\) then \[\mu(A) \rho(C) = \lambda(A \times C) \le \lambda(A \times D) = \mu(A) \rho(D)\] so \(\rho(C) \le \rho(D)\). If \(C, \, D \in \ms C(T)\) are disjoint then \begin{align*} \mu(A) \rho(C \cup D) &= \lambda[A \times (C \cup D)] = \lambda[(A \times C) \cup (A \times D)] \\ &= \lambda(A \times C) + \lambda(A \times D) = \mu(A) \rho(C) + \mu(A) \rho(D) \end{align*} so \(\rho(C \cup D) = \rho(C) + \rho(D)\). If \(C, \, D \in \ms C(T)\) then \begin{align*} \mu(A) \rho(C \cup D) &= \lambda[A \times (C \cup D)] = \lambda[(A \times C) \cup (A \times D)] \\ &\le \lambda(A \times C) + \lambda(A \times D) = \mu(A) \rho(C) + \mu(A) \rho(D) \end{align*} so \(\rho(C \cup D) \le \rho(C) + \rho(D)\).
Thus, \(\rho\) is a content in the sense of Halmos, and hence can be extended to a regular measure on \(T\) (which we will continue to call \(\rho\)). From the equation above we have \[\lambda(A \times C) = (\mu \times \rho)(A \times C), \quad A \in \ms S, \, C \in \ms C(T)\] By regularity, it follows that \(\lambda = \mu \times \rho\). Again fix \(A \in \ms S\) with \(0 \lt \mu(A) \lt \infty\). If \(y \in T\) and \(B \in \ms T\) then \[\mu(A) \rho(y B) = \lambda(A \times y B) = \lambda[(e, y) (A \times B)] = \lambda(A \times B) = \mu(A) \rho(B)\] so it follows that \(\rho(y B) = \rho(B)\) and hence \(\rho\) is left-invariant for \((T, \cdot)\). Thus, \(\rho = c \nu\) for some positive constant \(c\) and so \(\lambda = c (\mu \times \nu)\). Therefore \(\mu \times \nu\) is the unique left-invariant measure for \((S \times T, \cdot)\), up to multiplication by positive constants.

Probability

Naturally, our interest is in the relationship between memoryless and exponential distributions for the individual semigroups \((S, \cdot)\) and \((T, \cdot)\), and for the product semigroup \((S \times T, \cdot)\).

Suppose that random variable \(X\) has an exponential distribution for \((S, \cdot)\), random variable \(Y\) has an exponential distribution for \((T, \cdot)\), and that \(X\) and \(Y\) are independent. Then \((X, Y)\) has an exponential distribution for the product semigroup \((S \times T, \cdot)\).

Details:

If \(A \in \ms S\), \(B \in \ms T\), and \((x, y) \in S \times T\) then \begin{align*} \P[(X, Y) \in (x, y) (A \times B)] &= \P(X \in x A, Y \in y B) = \P(X \in x A) \P(Y \in y B) \\ &= \P(X \in x S) \P(X \in A) \P(Y \in y T) \P(Y \in B) \\ &= \P(X \in x S, Y \in y T) \P(X \in A, Y \in B) \\ &= \P[(X, Y) \in (x, y) (S \times T)] \P[(X, Y) \in A \times B] \end{align*} Hence for fixed \((x, y) \in S \times T\), the finite measures on \(\ms S \times \ms T\) given by \begin{align*} C &\mapsto \P[(X, Y) \in (x, y) C] \\ C &\mapsto \P[(X, Y) \in (x, y) (S \times T)] \P[(X, Y) \in C] \end{align*} agree on the measurable rectangles \(A \times B\) where \(A \in \ms S\) and \(B \in \ms T\). Hence these measures agree on \(\ms S \times \ms T\) and so \((X, Y)\) is exponential for \((S \times T, \cdot)\).
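In the standard continuous semigroup \(([0, \infty), +)\), exponential distributions have reliability functions of the form \(F(x) = e^{-\alpha x}\). The sketch below (the rates \(\alpha\), \(\beta\) are illustrative assumptions) checks numerically that the product reliability function \(H(x, y) = F(x) G(y)\) factors over the product operation, which is the memoryless property for \(([0, \infty)^2, +)\).

```python
import math

# Illustrative sketch: for independent exponentials on ([0, inf), +),
# F(x) = exp(-alpha*x), G(y) = exp(-beta*y), and H(x, y) = F(x) G(y)
# satisfies H((u, v)(x, y)) = H(u + x, v + y) = H(u, v) H(x, y).

alpha, beta = 1.5, 0.7  # assumed rates, for illustration only

def F(x): return math.exp(-alpha * x)
def G(y): return math.exp(-beta * y)
def H(x, y): return F(x) * G(y)

u, v, x, y = 0.3, 1.2, 2.0, 0.5
assert math.isclose(H(u + x, v + y), H(u, v) * H(x, y))
```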

Suppose that \((S, \cdot)\) has identity element \(e\) and that \((T, \cdot)\) has identity element \(\epsilon\). Then \((X, Y)\) is memoryless for \((S \times T, \cdot)\) if and only if \(X\) is memoryless for \((S, \cdot)\), \(Y\) is memoryless for \((T, \cdot)\), and \(X, \, Y\) are right independent.

Details:

Let \(F\), \(G\), and \(H\) denote the reliability functions for \(X\), \(Y\), and \((X, Y)\) with respect to the semigroups \((S, \cdot)\), \((T, \cdot)\) and \((S \times T, \cdot)\) respectively. Suppose first that \((X, Y)\) is memoryless for \((S \times T, \cdot)\). Then from Section 1.8, \[F(u x) = H(u x, \epsilon) = H[(u, \epsilon) (x, \epsilon)] = H(u, \epsilon) H(x, \epsilon) = F(u) F(x), \quad u, \, x \in S\] So \(X\) is memoryless for \((S, \cdot)\). By a symmetric argument, \(Y\) is memoryless for \((T, \cdot)\). Next note that \[H(x, y) = H[(x, \epsilon) (e, y)] = H(x, \epsilon) H(e, y) = F(x) G(y), \quad x \in S, \, y \in T\] so \(X\) and \(Y\) are right independent. Conversely, suppose that \(X\) and \(Y\) are memoryless and are right independent. Then \begin{align*} H[(u, v) (x, y)] &= H(u x, v y) = F(u x) G(v y) = F(u) F(x) G(v) G(y) \\ &= [F(u) G(v)][F(x) G(y)] = H(u, v) H(x, y), \quad (u, v), \, (x, y) \in S \times T \end{align*} Hence \((X, Y)\) is memoryless for \((S \times T, \cdot)\).

Suppose again that \((S, \cdot)\) has identity element \(e\) and that \((T, \cdot)\) has identity element \(\epsilon\). If \((X, Y)\) is exponential for \((S \times T, \cdot)\) then \(X\) is exponential for \((S, \cdot)\), \(Y\) is exponential for \((T, \cdot)\), and \(X\) and \(Y\) are right independent.

Details:

Since \((X, Y)\) is exponential for \((S \times T, \cdot)\), \begin{align*} \P(X \in x A) &= \P[(X, Y) \in x A \times T] = \P[(X, Y) \in (x, \epsilon) (A \times T)] \\ &= \P[(X, Y) \in (x, \epsilon) (S \times T)] \P[(X, Y) \in A \times T] = \P(X \in x S) \P(X \in A), \quad x \in S, \, A \in \ms S \end{align*} Hence \(X\) is exponential for \((S, \cdot)\). By a symmetric argument, \(Y\) is exponential for \((T, \cdot)\). Finally, since \((X, Y)\) is exponential for \((S \times T, \cdot)\), it is also memoryless and hence \(X\) and \(Y\) are right independent by the previous result.

From Section 5, the random variable \((X, Y)\) in the previous result has constant rate \(\delta \in (0, \infty)\) for \((S \times T, \cdot)\) with respect to a left-invariant measure \(\lambda\) on \((S \times T, \ms S \times \ms T)\). Hence \((X, Y)\) has density function \(h\) given by \(h(x, y) = \delta H(x, y) = \delta F(x) G(y)\). But we cannot conclude that \(X\) and \(Y\) are fully independent since we don't know that \(\lambda\) is a product measure on \((S \times T, \ms S \times \ms T)\). Note that the canonical such measure \(\lambda\) is given by \[\lambda(A \times B) = \E\left[\frac{1}{H(X, Y)}; (X, Y) \in A \times B\right] = \E\left[\frac{1}{F(X) G(Y)}; X \in A, Y \in B\right]\] But we cannot factor the expression further without full independence of \(X\) and \(Y\). However, we have the following corollary:

Suppose that \(\lambda = \mu \times \nu\) is the unique left-invariant measure for \((S \times T, \ms S \times \ms T)\), up to multiplication by positive constants, as in Proposition . Then \((X, Y)\) is exponential for \((S \times T, \cdot)\) if and only if \(X\) is exponential for \((S, \cdot)\), \(Y\) is exponential for \((T, \cdot)\), and \(X\) and \(Y\) are independent.

The direct product \((S \times T, \, \cdot)\) has several natural sub-semigroups. First, \((\{(x, \epsilon): x \in S\}, \cdot)\) is a complete sub-semigroup isomorphic to \((S, \cdot)\). Similarly, \((\{(e, y): y \in T\}, \cdot)\) is a complete sub-semigroup isomorphic to \((T, \cdot)\). If \((S, \cdot) = (T, \cdot)\), then the diagonal \((\{(x, x): x \in S\}, \cdot)\) is a complete sub-semigroup isomorphic to \((S, \cdot)\). The results of this subsection apply to positive semigroups of course, since such semigroups have identities.

Higher Order Products

Naturally, the results above can be extended to the direct product of \(n\) semigroups \((S_1, \cdot), \, (S_2, \cdot), \ldots, (S_n, \cdot)\) for \(n \in \N_+\), and in particular to the \(n\)-fold direct power \((S^n, \cdot)\) of a semigroup \((S, \cdot)\). In the latter case, if \(\lambda\) is left invariant for \((S, \cdot)\) then \(\lambda^n\) is left invariant for \((S^n, \cdot)\) for each \(n \in \N_+\). The following definition gives an infinite construction that will be useful.

Suppose that \((S_i, \cdot)\) is a discrete semigroup with identity element \(e_i\) for \(i \in \N_+\). Let \[T = \{ (x_1, x_2, \ldots): x_i \in S_i \text{ for each } i \text{ and } x_i = e_i \text{ for all but finitely many } i \in \N_+ \}\] As before, we define the component-wise operation: \[\bs{x} \cdot \bs{y} = (x_1 y_1, x_2 y_2, \ldots), \quad \bs{x} = (x_1, x_2, \ldots), \, \bs{y} = (y_1, y_2, \ldots) \in T\] Then \((T, \cdot)\) is a discrete semigroup with identity \(e = (e_1, e_2, \ldots)\).

In particular, if \((S_i, \cdot)\) is a positive semigroup for each \(i \in \N_+\) then \((T, \cdot)\) is also a positive semigroup.
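The finite-support condition makes this construction easy to represent concretely. The sketch below is an illustration, not from the text: each \((S_i, \cdot)\) is taken to be \((\N, +)\) with identity \(0\), and an element of \(T\) is stored as a dict listing only its finitely many non-identity coordinates.

```python
# Hypothetical representation: each S_i is the discrete semigroup
# (N, +) with identity 0. An element of T is a dict {i: x_i} that
# lists only the finitely many coordinates with x_i != 0.

def t_op(x, y):
    """Componentwise operation in (T, .); identity coordinates are implicit."""
    out = dict(x)
    for i, yi in y.items():
        out[i] = out.get(i, 0) + yi
    return out

e = {}                 # the identity (0, 0, ...)
x = {1: 2, 5: 3}       # the sequence (0, 2, 0, 0, 0, 3, 0, ...) indexed by i
y = {5: 1, 7: 4}
assert t_op(x, e) == x and t_op(e, x) == x
assert t_op(x, y) == {1: 2, 5: 4, 7: 4}
```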

Marshall-Olkin Distributions

In this subsection we generalize the multivariate exponential distribution defined and studied by Marshall and Olkin. To set the stage, suppose that \((S, \cdot)\) is a positive semigroup with identity \(e\) whose associated partial order graph \((S, \preceq)\) is a lattice. For \(n \in \N_+\), \((S^n, \cdot)\) denotes the power semigroup of \((S, \cdot)\) of order \(n\), whose partial order graph \((S^n, \preceq_n)\) is the power of \((S, \preceq)\) of order \(n\), also a lattice. Once again, \((S, \cdot)\) is measurable with respect to underlying reference space \((S, \ms S)\), so that \((S^n, \cdot)\) and the graph \((S^n, \preceq_n)\) are measurable with respect to \((S^n, \ms S^n)\). We start with our generalized definition in the bivariate case.

Suppose that \(U\), \(V\), and \(W\) are right independent and have memoryless distributions on \((S, \cdot)\). Let \(X = U \wedge W\) and \(Y = V \wedge W\). Then \((X, Y)\) has a Marshall-Olkin distribution on \((S^2, \cdot)\).

Our first result follows immediately from the definition and a basic result from Section 6.

Suppose that \((X, Y)\) has a Marshall-Olkin distribution on \((S^2, \cdot)\) as in definition . Let \(F_1\), \(F_2\), and \(F_3\) denote the reliability functions of \(U\), \(V\), and \(W\) on \((S, \cdot)\) respectively. Then

  1. \(X = U \wedge W\) is memoryless with reliability function \(F_1 F_3\).
  2. \(Y = V \wedge W\) is memoryless with reliability function \(F_2 F_3\).
  3. \(X \wedge Y = U \wedge V \wedge W\) is memoryless with reliability function \(F_1 F_2 F_3\).

But of course, \(X\) and \(Y\) are dependent. Moreover, a Marshall-Olkin distribution places positive probability on the diagonal.

Suppose that \((X, Y)\) has a Marshall-Olkin distribution on \((S^2, \cdot)\). Then \(\P(X = Y) \gt 0\).

Details: Suppose that \(X = U \wedge W\) and \(Y = V \wedge W\) as in definition . Then \[W \preceq U, \, W \preceq V \implies X = W, \, Y = W \implies X = Y \] Hence \[ \P(X = Y) \ge \P(U \succeq W, V \succeq W) = \E[\P(U \succeq W, V \succeq W \mid W)] = \E[\P(U \succeq W \mid W) \P(V \succeq W \mid W)] = \E[F_1(W) F_2(W)] \] From our usual support assumption, the right hand side is positive.
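In the classical setting \(([0, \infty), +)\), with \(U\), \(V\), \(W\) independent exponentials, \(X = Y\) exactly when \(W \le U\) and \(W \le V\), which has probability \(c / (a + b + c)\). A quick Monte Carlo check (a sketch; the rates \(a, b, c\) are illustrative assumptions) confirms that the diagonal carries positive probability.

```python
import random

# Monte Carlo sketch for the classical Marshall-Olkin construction on
# ([0, inf), +): U, V, W independent exponentials with assumed rates
# a, b, c. Then X = min(U, W), Y = min(V, W), and X = Y exactly when
# W <= U and W <= V, an event of probability c / (a + b + c).

random.seed(42)
a, b, c = 1.0, 2.0, 3.0
n = 50_000
hits = 0
for _ in range(n):
    u = random.expovariate(a)
    v = random.expovariate(b)
    w = random.expovariate(c)
    if min(u, w) == min(v, w):  # equivalent (a.s.) to w <= u and w <= v
        hits += 1
p_hat = hits / n
assert abs(p_hat - c / (a + b + c)) < 0.02  # c/(a+b+c) = 0.5 here
```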

Suppose that \(\lambda\) is the left-invariant reference measure for \((S, \cdot)\), so that \(\lambda^2\) is the left-invariant measure for \((S^2, \cdot)\). In the continuous case with \(S\) uncountable, we typically have \(\lambda^2\{(x, x): x \in S\} = 0\), so a Marshall-Olkin distribution has an absolutely continuous part and a singular part.

Suppose that \((X, Y)\) has a Marshall-Olkin distribution on \((S^2, \cdot)\) as in definition , with reliability function \(H\). Then \[ H(x, y) = F_1(x) F_2(y) F_3(x \vee y), \quad (x, y) \in S^2 \]

Details:

By definition and right independence \begin{align*} H(x, y) &= \P(U \wedge W \succeq x, V \wedge W \succeq y) = \P(U \succeq x, W \succeq x, V \succeq y, W \succeq y) \\ &= \P(U \succeq x, V \succeq y, W \succeq x \vee y) = \P(U \succeq x) \P(V \succeq y) \P(W \succeq x \vee y) = F_1(x) F_2(y) F_3(x \vee y), \quad (x, y) \in S^2 \end{align*}

Our next result is the abstract version of one of the original characterizations of the Marshall-Olkin distribution.

Suppose again that \((X, Y)\) has a Marshall-Olkin distribution on \((S^2, \cdot)\) as in definition , with reliability function \(H\). Then \((X, Y)\) satisfies the partial memoryless property \[ H[(t, t) (x, y)] = H(t, t) H(x, y), \quad x, \, y, \, t \in S \]

Details:

Let \(t, \, x, \, y \in S\). Using the previous result we have \[ H[(t, t) (x, y)] = H(t x, t y) = F_1(t x) F_2(t y) F_3[(t x) \vee (t y)] = F_1(t x) F_2(t y) F_3[t (x \vee y)] \] But since \(U\), \(V\), and \(W\) are memoryless, \[ H[(t, t) (x, y)] = F_1(t) F_1(x) F_2(t) F_2(y) F_3(t) F_3(x \vee y) = H(t, t) H(x, y) \]
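In the classical setting \(([0, \infty), +)\), the reliability function takes the form \(H(x, y) = e^{-a x - b y - c (x \vee y)}\), and the partial memoryless property can be checked numerically. The rates below are illustrative assumptions.

```python
import math

# Sketch for the classical case ([0, inf), +) with assumed rates a, b, c:
# H(x, y) = F1(x) F2(y) F3(x v y) = exp(-a*x - b*y - c*max(x, y)).

a, b, c = 0.5, 1.0, 2.0  # illustrative rates

def H(x, y):
    return math.exp(-a * x - b * y - c * max(x, y))

# Partial memoryless property: H(t + x, t + y) = H(t, t) * H(x, y),
# using max(t + x, t + y) = t + max(x, y).
t, x, y = 0.7, 1.3, 0.4
assert math.isclose(H(t + x, t + y), H(t, t) * H(x, y))
```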

Stated in terms of conditional probability, the partial memoryless property has the form \[ \P(X \succeq t x, Y \succeq t y \mid X \succeq t, Y \succeq t) = \P(X \succeq x, Y \succeq y), \quad x, \, y, \, t \in S \]

The extension of the Marshall-Olkin distribution to higher dimensions is a bit complicated and requires some additional notation to state the definition and results cleanly. For \(n \in \N_+\) let \(B_n\) denote the bit strings of length \(n\), excluding the 0 string \(0 0 \cdots 0\).

Suppose that \(n \in \N_+\) and that \(\{Z_b: b \in B_n\}\) is a collection of right independent variables, each memoryless on \((S, \cdot)\). Define \[ X_i = \inf\{Z_b: b \in B_n, b_i = 1\}, \quad i \in \{1, 2, \ldots, n\} \] Then \((X_1, X_2, \ldots, X_n)\) has the Marshall-Olkin distribution on \((S^n, \cdot)\).

So a collection of \(2^n - 1\) right independent, memoryless variables on \((S, \cdot)\) is required for the construction of the Marshall-Olkin variable on \((S^n, \cdot)\). The marginal distributions are of the same type. For the following results, let \(F_b\) denote the reliability function of \(Z_b\) for \(b \in B_n\).

Suppose again that \(n \in \N_+\) and that \(\bs{X} = (X_1, X_2, \ldots, X_n)\) has a Marshall-Olkin distribution on \((S^n, \cdot)\) as in definition . For \(k \in \N_+\) with \(k \le n\), let \((j_1, j_2, \ldots, j_k)\) be a subsequence of \((1, 2, \ldots, n)\). Then

  1. \( \left(X_{j_1}, X_{j_2}, \ldots, X_{j_k}\right) \) has a Marshall-Olkin distribution on \((S^k, \cdot)\).
  2. \( \inf\left\{X_{j_1}, X_{j_2}, \ldots, X_{j_k}\right\} \) is memoryless on \((S, \cdot)\) with reliability function \( \prod \left\{F_b: b \in B_n, \, b_{j_i} = 1 \text{ for some } i \in \{1, 2, \ldots, k\}\right\} \)

Suppose again that \(n \in \N_+\) and that \(\bs{X} = (X_1, X_2, \ldots, X_n)\) has a Marshall-Olkin distribution on \((S^n, \cdot)\) as in definition . Let \(H\) denote the reliability function of \(\bs{X}\) on \((S^n, \cdot)\). Then \[H(x_1, x_2, \ldots, x_n) = \prod_{b \in B_n} F_b(\sup\{x_i: b_i = 1\}), \quad (x_1, x_2, \ldots, x_n) \in S^n \]
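The product formula is easy to evaluate directly. The sketch below (an illustration, not from the text) takes \(n = 3\) in the classical setting \(([0, \infty), +)\), indexing the \(2^n - 1\) nonzero bit strings \(b\) by the integers \(1, \ldots, 7\), with assumed rates so that \(F_b(x) = e^{-\text{rate}_b \, x}\). It then checks that \(H\) is multiplicative on the diagonal, reflecting the memoryless property of \(X_1 \wedge X_2 \wedge X_3\).

```python
import math

# Illustrative check for n = 3 in ([0, inf), +): bit string b is the
# integer b in 1..7, with bit i of b playing the role of b_i, and
# F_b(x) = exp(-rate[b] * x) for hypothetical rates.

n = 3
rate = {b: 0.1 * b for b in range(1, 2 ** n)}  # assumed rates

def F(b, x):
    return math.exp(-rate[b] * x)

def H(x):
    """H(x_1, ..., x_n) = prod over b of F_b(max{x_i : b_i = 1})."""
    out = 1.0
    for b in range(1, 2 ** n):
        out *= F(b, max(x[i] for i in range(n) if b >> i & 1))
    return out

# On the diagonal, H(t, ..., t) is the reliability function of the
# memoryless minimum X_1 ^ ... ^ X_n, so it is multiplicative.
s, t = 0.4, 1.1
diag = lambda t: tuple([t] * n)
assert math.isclose(H(diag(s + t)), H(diag(s)) * H(diag(t)))
```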

The generalization of the partial memoryless property is straightforward.

Suppose again that \(n \in \N_+\) and that \(\bs{X} = (X_1, X_2, \ldots, X_n)\) has a Marshall-Olkin distribution on \((S^n, \cdot)\) as in definition . Then \(\bs{X}\) has the partial memoryless property \[ H[(t, t, \ldots, t) (x_1, x_2, \ldots, x_n)] = H(t, t, \ldots t) H(x_1, x_2, \ldots, x_n), \quad t \in S, \, (x_1, x_2, \ldots, x_n) \in S^n \]

We will revisit Marshall-Olkin distributions for the standard continuous semigroup \(([0, \infty), +)\) in Section 3.4.