Most characterizations of the exponential distribution (and its generalizations) in the classical setting are based on the equivalence of the time-shifted distribution with the original distribution, in some sense. In the semigroup setting (and particularly in the positive semigroup setting), there are natural generalizations of these concepts. Once again we start with a measurable space \((S, \ms S)\) and a measurable semigroup \((S, \cdot)\) as discussed in Section 1. The relation \(\rta\) associated with \((S, \cdot)\) is given by \(x \rta y\) if and only if \(y \in x S\). So if \(X\) is a random variable in \(S\) then the reliability function \(F\) of \(X\) for the graph \((S, \rta)\) is given by \[F(x) = \P(x \rta X) = \P(X \in x S), \quad x \in S\] We assume that \(X\) is supported by \((S, \cdot)\) so that \(F(x) \gt 0\) for \(x \in S\).
Suppose that \(X\) is a random variable in \(S\) with reliability function \(F\) for \((S, \rta)\).
As with the other terms from reliability that we have used, exponential distribution and memoryless distribution are used in an abstract sense. Recall that a positive measure \(\mu\) on a measurable group \((S, \cdot)\) is relatively invariant if \(\mu(x A) = F(x) \mu(A)\) for \(x \in S\) and \(A \in \ms{S}\) where \(F: S \to [0, \infty)\) is measurable (see the book by Halmos). So an exponential distribution is simply a relatively invariant probability measure, but on a semigroup rather than a group.
If \(X\) has an exponential distribution on \((S, \cdot)\) then \(X\) has a memoryless distribution on \((S, \cdot)\).
Generalizing the relation \(\rta\), recall that the relation \(\rta_A\) associated with \(A \in \ms S\) is defined by \(x \rta_A y\) if and only if \(y \in x A\), so that \(y = x a \) for some \(a \in A\). In terms of the relations associated with \((S, \cdot)\), the exponential and memoryless properties have the form \begin{align*} \P(x \rta_A X \mid x \rta X) &= \P(X \in A), \quad x \in S, \, A \in \ms S \\ \P(x y \rta X \mid x \rta X) & = \P(y \rta X), \quad x, \, y \in S \end{align*} Specializing further, if \((S, \cdot)\) is a positive semigroup, so that the associated relation is a partial order \(\preceq\), the exponential and memoryless properties have the more familiar form \begin{align*} \P(X \in x A \mid X \succeq x) &= \P(X \in A), \quad x \in S, \, A \in \ms S \\ \P(X \succeq x y \mid X \succeq x) &= \P(X \succeq y), \quad x, \, y \in S \end{align*} Here are alternative formulations of the exponential and memoryless properties:
Suppose again that \(X\) is a random variable in \(S\).
The proofs rely on basic algebraic properties of the semigroup.
As a simple corollary we can answer the question of when the random walk on the semigroup \((S, \cdot)\) associated with a random variable \(X\) is the same as the random walk on the graph \((S, \rta)\) associated with \(X\).
Suppose again that \(X\) is a random variable in \(S\). The random walk on the semigroup \((S, \cdot)\) associated with \(X\) is the same as the random walk on the graph \((S, \rta)\) associated with \(X\) if and only if \(X\) has an exponential distribution.
Let \(\bs{Y} = (Y_1, Y_2, \ldots)\) be a discrete-time, homogeneous Markov process in \(S\). For both random walks, the distribution of \(Y_1\) is the same as the distribution of \(X\). For the random walk on the graph \((S, \rta)\), the conditional distribution of \(Y_{n + 1}\) given \(Y_n = x\) is the same as the distribution of \(X\) given \(x \rta X\) (that is, \(X \in x S\)). For the random walk on the semigroup \((S, \cdot)\), the conditional distribution of \(Y_{n + 1}\) given \(Y_n = x\) is the same as the distribution of \(x X\). Hence the two random walks are the same if and only if the conditional distribution of \(X\) given \(X \in x S\) is the same as the distribution of \(x X\). By the alternative formulations above, this is the case if and only if \(X\) has an exponential distribution.
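As a concrete sanity check, anticipating the standard continuous semigroup \(([0, \infty), +)\) treated below, the following sketch simulates two steps of both random walks for the classical exponential distribution. The conditional step for the graph walk uses generic inverse-CDF truncation rather than the memoryless property, so the agreement of the two simulated distributions is not built in; the parameter values and sample sizes are illustrative only.

```python
import numpy as np
from scipy.stats import expon

rng = np.random.default_rng(0)
alpha, n_paths = 1.0, 200_000
dist = expon(scale=1 / alpha)  # classical exponential distribution with rate alpha

# Random walk on the semigroup ([0, inf), +): Y_2 = Y_1 + X with X an independent copy.
Y2_semigroup = dist.rvs(size=n_paths, random_state=rng) + dist.rvs(size=n_paths, random_state=rng)

# Random walk on the graph ([0, inf), <=): given Y_1 = x, the next state is a draw from
# the conditional distribution of X given X >= x (generic inverse-CDF truncation).
Y1 = dist.rvs(size=n_paths, random_state=rng)
u = rng.uniform(size=n_paths)
Y2_graph = dist.ppf(dist.cdf(Y1) + u * (1.0 - dist.cdf(Y1)))

# If the two walks agree, Y_2 has the gamma distribution with parameters 2 and alpha
# in both cases: mean 2 / alpha and variance 2 / alpha^2.
print(Y2_semigroup.mean(), Y2_graph.mean())  # both approximately 2.0
print(Y2_semigroup.var(), Y2_graph.var())    # both approximately 2.0
```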
Our next result gives expected value characterizations of the exponential property.
Suppose again that \(X\) is a random variable in \(S\) with reliability function \(F\) for \((S, \rta)\). The following are equivalent:
If \(X\) has an exponential distribution on \((S, \cdot)\) then (b) and (c) hold more generally for measurable \(g: S \to \R\), assuming that the expected values exist.
The following two results deal with the set of points and the collection of sets satisfying the exponential property.
Suppose again that \(X\) is a random variable in \(S\) with reliability function \(F\) for \((S, \rta)\). Define \[S_X = \{x \in S: \P(X \in x A) = F(x) \P(X \in A) \text{ for all } A \in \ms S\}\] If \(S_X \ne \emptyset\) then \((S_X, \cdot)\) is a complete sub-semigroup of \((S, \cdot)\).
We first show closure. Suppose that \(x, \, y \in S_X\) and \(A \in \ms S\). Then \[\P[X \in (x y) A] = \P[X \in x (y A)] = F(x) \P(X \in y A) = F(x) F(y) \P(X \in A)\] In particular, letting \(A = S\) we have \(F(x y) = F(x) F(y)\), so substituting back we have \[\P[X \in (x y) A] = F(x y) \P(X \in A)\] and so \(x y \in S_X\). Next we show completeness. Suppose that \(x, \, y \in S_X\) and that \(x \rta y\) so that \(y = x u\) for some \(u \in S\). We need to show that \(u = x^{-1} y \in S_X\). Let \(A \in \ms S\). First, since \(x \in S_X\) we have \[\P[X \in (x u) A] = \P[X \in x (u A)] = F(x) \P(X \in u A)\] On the other hand, since \(y = x u \in S_X\) we have \[\P[X \in (x u) A] = F(x u) \P(X \in A)\] Again since \(x \in S_X\) we have \[F(x u) = \P[X \in (x u) S] = \P[X \in x (u S)] = F(x) \P(X \in u S) = F(x) F(u)\] Combining the displayed equations we have \[F(x) \P(X \in u A) = F(x) F(u) \P(X \in A)\] Since \(F(x) \gt 0\) we have \(\P(X \in u A) = F(u) \P(X \in A)\), so \(u \in S_X\).
In the case that \((S, \cdot)\) is a positive semigroup, note that the identity element \(e \in S_X\) so \((S_X, \cdot)\) is a complete positive sub-semigroup. Of course in general, \(S_X\) may be empty, or in the case of a positive semigroup we could have \(S_X = \{e\}\). These cases aside, every distribution satisfies the exponential property on some complete sub-semigroup.
Suppose again that \(X\) is a random variable in \(S\) with reliability function \(F\) for \((S, \rta)\). Define \[\ms S_X = \{A \in \ms S: \P(X \in x A) = F(x) \P(X \in A) \text{ for all } x \in S\}\] Then \(\ms S_X\) is closed under countable disjoint unions, proper differences, countable increasing unions, and countable decreasing intersections.
Recall that for \(x \in S\), the mapping \(A \mapsto x A\) from \(\ms S\) into \(\ms S\) preserves all of the set operations. Let \((A_1, A_2, \ldots)\) be a sequence of disjoint sets in \(\ms S_X\) and let \(x \in S\). Then \((x A_1, xA_2, \ldots)\) is a disjoint sequence and \begin{align*} \P\left(X \in x \bigcup_{i = 1}^\infty A_i \right) &= \P\left(X \in \bigcup_{i = 1}^\infty x A_i\right) = \sum_{i = 1}^\infty \P(X \in x A_i) = \sum_{i=1}^\infty F(x) \P(X \in A_i)\\ &= F(x) \sum_{i=1}^\infty \P(X \in A_i) = F(x) \P\left(X \in \bigcup_{i=1}^\infty A_i \right) \end{align*} Hence \(\bigcup_{i=1}^\infty A_i \in \ms S_X\). Next let \(A, \, B \in \ms S_X\) with \(A \subseteq B\) and let \(x \in S\). Then \(x A \subseteq x B\) and \(x (B - A) = x B - x A\). Hence \begin{align*} \P[X \in x (B - A)] &= \P[X \in (x B - x A)] = \P(X \in x B) - \P(X \in x A) \\ &= F(x) \P(X \in B) - F(x) \P(X \in A) = F(x)[\P(X \in B) - \P(X \in A)] \\ &= F(x) \P(X \in B - A) \end{align*} Hence \(B - A \in \ms S_X\). Of course \(S \in \ms S_X\), and the results for countable increasing unions and decreasing intersections follow, since these can be constructed from countable disjoint unions and proper differences.
From the previous result it follows that \(\ms S_X\) is a \(\lambda\)-system and hence also a monotone class (in the standard terminology of measure theory). From the monotone class theorem, it follows that if \(\ms A\) is an algebra of sets that generates the reference \(\sigma\)-algebra \(\ms S\) and \(\ms A \subseteq \ms S_X\) then \(\ms S_X = \ms S\) and hence \(X\) has an exponential distribution. For the following related result, recall that the (right) \(\sigma\)-algebra \(\ms S_0\) associated with \((S, \cdot)\) is the \(\sigma\)-algebra generated by the collection of right neighbor sets. That is, \(\ms S_0 = \sigma(\ms A)\) where \(\ms A = \{x S: x \in S\}\).
Suppose that the collection of right neighbor sets \(\ms A\) is closed under intersection. If random variable \(X\) in \(S\) is memoryless, then \(X\) is exponential relative to the associated \(\sigma\)-algebra \(\ms S_0\).
Let \(F\) denote the reliability function of \(X\). Since \(X\) is memoryless, \[\P(X \in x y S) = F(x y) = F(x) F(y) = F(x) \P(X \in y S), \quad x, \, y \in S\] It follows that \(y S \in \ms S_X\) for \(y \in S\) and hence \(\ms A \subseteq \ms S_X\). Since \(\ms A\) is closed under intersection, it is a \(\pi\)-system, again in the standard terminology of measure theory. Since \(\ms S_X\) is a \(\lambda\)-system, a basic result in measure theory states that \(\ms S_0 = \sigma(\ms A) \subseteq \ms S_X\).
Here is the most important special case of the previous result.
Suppose that \((S, \cdot)\) is a positive semigroup and that the associated partial order \(\preceq\) is an upper semi-lattice, so that \(x S \cap y S = (x \vee y) S\) for \(x, \, y \in S\) and hence the collection of right neighbor sets is closed under intersection. If random variable \(X\) in \(S\) is memoryless then \(X\) is exponential relative to the associated \(\sigma\)-algebra \(\ms S_0\).
Of course, the two propositions above are most interesting when the associated \(\sigma\)-algebra \(\ms S_0\) is the reference \(\sigma\)-algebra \(\ms S\). In these cases, the memoryless property implies the full exponential property.
Suppose that \((S, \cdot)\) is a topological semigroup with the property that \(\{t \in S: t \rta x\}\) and \(\{y \in S: x \rta y\}\) have nonempty interiors for each \(x \in S\). If \(X\) has an exponential distribution on \((S, \cdot)\) then the reliability function \(F\) of \(X\) is continuous.
Recall that by definition, \(S\) has a locally compact Hausdorff topology with a countable base, and \((x, y) \mapsto x y\) is continuous. If \((S, \cdot)\) is discrete (so that \(S\) is countable with the discrete topology), then all functions, including \(F\), are continuous. So assume that \(S\) is uncountable. Suppose that \((x_n: n \in \N_+)\) is a sequence in \(S\) converging to \(x \in S\). Let \(y \in x S\) and \(z \in y S\) where \(x, \, y , \, z\) are distinct. Then \(\{t \in S: y \in t S\}\) is a neighborhood of \(x\) so \(y \in x_n S\) for \(n\) sufficiently large. Also, \(y S\) is a neighborhood of \(z\) so there exists a compact neighborhood \(K\) of \(z\) and an open neighborhood \(U\) of \(z\) with \(K \subset U \subseteq y S\). By Urysohn's lemma, there exists a continuous function \(\varphi: S \to [0, 1]\) with \(\varphi = 1\) on \(K\) and \(\varphi = 0\) on \(U^c\). By the continuity of multiplication, \(x_n X \to x X\) as \(n \to \infty\) and by the continuity of \(\varphi\), \(\varphi(x_n X) \to \varphi(x X)\) as \(n \to \infty\). By the bounded convergence theorem, \( \E[\varphi(x_n X)] \to \E[\varphi(x X)] \) as \(n \to \infty\). Since \(U \subseteq y S \subseteq x S\) and \(\varphi = 0\) on \(U^c\), note that \(\E[\varphi(X), X \in x S] = \E[\varphi(X), X \in U]\). Similarly, \(\E[\varphi(X), X \in x_n S] = \E[\varphi(X), X \in U]\) for \(n\) sufficiently large. By the exponential property, \(\E[\varphi(X), X \in x S] = F(x) \E[\varphi(x X)] \) and similarly, \(\E[\varphi(X), X \in x_n S] = F(x_n) \E[\varphi(x_n X)]\) for \(n\) sufficiently large. Therefore we have \[ F(x) = \frac{\E[\varphi(X), X \in U]}{\E[\varphi(x X)]}\] Similarly, for \(n\) sufficiently large, \[ F(x_n) = \frac{\E[\varphi(X), X \in U]}{\E[\varphi(x_n X)]}\] Hence \(F(x_n) \to F(x)\) as \(n \to \infty\).
Of course, \(\{t \in S: t \rta x\}\) and \(\{y \in S: x \rta y\}\) are the left and right neighbor sets of \(x \in S\), respectively.
In the setting of the proposition above, \(x \mapsto \P(X \in x A)\) is continuous for every \(A \in \ms{S}\).
So far, we have not needed to refer to a reference measure \(\lambda\) on \((S, \ms S)\) or a density function of \(X\) with respect to \(\lambda\), as we did for constant rate distributions. The following theorem bridges the gap and gives one of the main characterizations of exponential distributions.
Suppose again that \(X\) is a random variable in \(S\). Then \(X\) has an exponential distribution on \((S, \cdot)\) if and only if \(X\) is memoryless on \((S, \cdot)\) and has constant rate on \((S, \rta)\) with respect to a \(\sigma\)-finite measure that is left invariant for \((S, \cdot)\).
Let \(F\) denote the reliability function of \(X\) on \((S, \rta)\). Suppose first that \(X\) has an exponential distribution on \((S, \cdot)\). Then by the result above, \(X\) has a memoryless distribution. Now let \(\mu\) be the \(\sigma\)-finite measure defined by \[\mu(A) = \E\left[\frac{1}{F(X)}, X \in A\right], \quad A \in \ms S\] Then from Section 1.5, \(X\) has density \(F\) with respect to \(\mu\) and hence has constant rate 1 with respect to \(\mu\): \[\P(X \in A) = \int_A F(x) d \mu(x), \quad A \in \ms S\] Let \(x \in S\) and \(A \in \ms S\). By the integral version of the exponential property and by the memoryless property, \begin{align*} \mu(x A) &= \E\left[\frac{1}{F(X)}, X \in x A\right] = F(x) \E\left[\frac{1}{F(x X)}, X \in A\right] \\ &= F(x) \E\left[\frac{1}{F(x)F(X)}, X \in A\right] = \E\left[\frac{1}{F(X)}, X \in A\right] = \mu(A) \end{align*} so \(\mu\) is left invariant on \((S, \cdot)\). Conversely, suppose that the distribution of \(X\) is memoryless on \((S, \cdot)\) and has constant rate \(\alpha \in (0, \infty)\) on \((S, \rta)\) with respect to a \(\sigma\)-finite measure \(\lambda\) that is left invariant on \((S, \cdot)\). Thus \(f = \alpha F\) is a density function of \(X\) with respect to \(\lambda\). Let \(x \in S\) and \(A \in \ms S\). Using the memoryless property and the integral version of left invariance in Section 3, \begin{align*} \P(X \in x A) &= \int_{x A} \alpha F(y) d \lambda(y) = \int_A \alpha F(x z) d\lambda(z) = \int_A \alpha F(x) F(z) d \lambda(z) \\ &= F(x) \int_A \alpha F(z) d \lambda(z) = F(x) \P(X \in A) \end{align*} Hence \(X\) has an exponential distribution.
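As a numerical illustration of the first part of the proof, in the classical case \(([0, \infty), +)\) with \(F(x) = e^{-\alpha x}\), the canonical measure \(\mu(A) = \E[1/F(X), X \in A]\) works out to \(\alpha\) times Lebesgue measure, which is indeed left (translation) invariant. A minimal Monte Carlo sketch (the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.5
X = rng.exponential(1 / alpha, size=1_000_000)  # samples from the exponential distribution
F = lambda x: np.exp(-alpha * x)                # reliability function

# Canonical measure mu(A) = E[1/F(X), X in A], estimated by Monte Carlo on intervals.
def mu(a, b):
    return np.mean((1.0 / F(X)) * ((X >= a) & (X <= b)))

# mu assigns alpha * (length) to each interval, so it is translation invariant.
print(mu(0.0, 1.0), mu(2.0, 3.0), alpha)  # all approximately 1.5
```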
In particular, if there exists an exponential distribution on \((S, \cdot)\) then \((S, \cdot)\) must have a left-invariant measure. This is not surprising, since the existence of an exponential distribution requires somewhat more of the semigroup \((S, \cdot)\) than the existence of a left-invariant measure.
Suppose that \(\lambda\) is the unique left-invariant measure for \((S, \cdot)\) up to multiplication by positive constants. Then \(X\) has an exponential distribution on \((S, \cdot)\) if and only if \(X\) is memoryless and has constant rate with respect to \(\lambda\).
Suppose that \(\lambda\) is a left-invariant measure on \((S, \cdot)\) and that \(F: S \to (0, 1]\) is measurable. Then \(F\) is the reliability function of an exponential distribution on \((S, \cdot)\) that has constant rate with respect to \(\lambda\) if and only if (a) \(F(x y) = F(x) F(y)\) for \(x, \, y \in S\), and (b) \(\int_S F(x) \, d\lambda(x) \lt \infty\).
Suppose first that \(F\) is the reliability function of an exponential distribution for \((S, \cdot)\) that has constant rate \(\alpha \in (0, \infty)\) with respect to \(\lambda\). By the result above, the distribution is memoryless, so (a) holds. Also \(\alpha F\) is a probability density function so \[\int_S F(x) d\lambda(x) = \frac{1}{\alpha} \lt \infty\] and hence (b) holds. Conversely, suppose that (a) and (b) hold. Let \(f = \alpha F\) where \(\alpha = 1 \big / \int_S F(x) d\lambda(x)\). Then by (b), \(f\) is a probability density function. Let \(X\) be a random variable with density \(f\), and let \(x \in S\) and \(A \in \ms S\). Using (a) and the integral version of the left invariance property in Section 3, \begin{align*} \P(X \in x A) &= \int_{x A} f(y) d\lambda(y) = \int_{x A} \alpha F(y) d \lambda(y) = \int_A \alpha F(x z) d \lambda(z) \\ &= F(x) \int_A \alpha F(z) d\lambda(z) = F(x) \int_A f(z) d\lambda(z) = F(x) \P(X \in A) \end{align*} Letting \(A = S\) we see that \(F\) is the reliability function of \(X\), and so it then follows that \(X\) has an exponential distribution with rate \(\alpha\).
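For instance, anticipating the standard continuous semigroup \(([0, \infty), +)\) treated below, with Lebesgue measure as the left-invariant measure, condition (a) is the functional equation \(F(x + y) = F(x) F(y)\). The measurable solutions taking values in \((0, 1]\) are \(F(x) = e^{-\alpha x}\) for some \(\alpha \in [0, \infty)\), and condition (b) holds precisely when \(\alpha \gt 0\), so in that case the result recovers the classical exponential distributions.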
If \(\lambda\) is the unique left invariant measure for \((S, \cdot)\), up to multiplication by positive constants, then the result above gives a method for finding all exponential distributions. It also follows that the memoryless property almost implies the constant rate property (and hence the full exponential property). More specifically, if \(F\) is a reliability function satisfying (a) and (b) of that result, then \(f = \alpha F\) is the probability density function of an exponential distribution with reliability function \(F\) (where again, \(\alpha = 1 \big / \int_S F(x) d\lambda(x)\)). But in general, there may be other probability density functions with the same reliability function \(F\) that do not have constant rate. It may also be possible for \(F\) to satisfy (a) but with \(\int_S F(x) d\lambda(x) = \infty\). But to emphasize, we do have the following:
Suppose that \((S, \cdot)\) is a semigroup in which the reliability function uniquely determines the underlying distribution. Then a distribution is exponential on \((S, \cdot)\) if and only if it is memoryless.
Section 4.4 gives an example of a discrete, positive semigroup where the reliability function does not determine the distribution and where there are memoryless distributions that are not exponential. Conversely, the constant rate property does not imply the memoryless property. The following general example shows that mixtures of distinct exponential distributions with the same constant rate will still have the constant rate property, but not the memoryless property. The free semigroup studied in Chapter 5 gives a specific example where there are different exponential distributions with the same rate.
Suppose that \((S, \cdot)\) is a semigroup with a fixed left-invariant measure \(\lambda\). Suppose that \(F\) and \(G\) are reliability functions for distinct exponential distributions on \((S, \cdot)\), each having constant rate \(\alpha \in (0, \infty)\) with respect to \(\lambda\). Let \(p \in (0, 1)\) and \(H = p F + (1 - p) G\). Then \(H\) is also the reliability function for a distribution with constant rate \(\alpha\). The distributions corresponding to \(F\) and \(G\) are memoryless, but not the distribution corresponding to \(H\).
From Section 1.5, \(H\) is the reliability function for a distribution with constant rate \(\alpha\). On the other hand, \[H(x y) = p F(x y) + (1 - p) G(x y) = p F(x) F(y) + (1 - p) G(x) G(y), \quad x, \, y \in S\] while \begin{align*} H(x) H(y) &= [p F(x) + (1 - p) G(x)][p F(y) + (1 - p) G(y)] \\ &= p^2 F(x) F(y) + p (1 - p) [F(x) G(y) + F(y) G(x)] + (1 - p)^2 G(x) G(y), \quad x, \, y \in S \end{align*} So \[H(x y) - H(x) H(y) = p (1 - p)[F(x) F(y) - F(x) G(y) - F(y) G(x) + G(x) G(y)], \quad x, \, y \in S\] With \(x = y\) we have \[H(x^2) - H^2(x) = p (1 - p)[F(x) - G(x)]^2, \quad x \in S\] So if \(F(x) \ne G(x)\) for some \(x \in S\) then \(H(x^2) \ne H^2(x)\).
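For a concrete instance of this phenomenon, consider the direct product semigroup \(([0, \infty)^2, +)\): the distributions with independent exponential coordinates having rates \(1, 2\) and \(2, 1\) respectively are both exponential on this semigroup with constant rate \(2\) (taking the product construction for granted here), but their reliability functions differ. The following sketch checks numerically that a mixture has constant rate \(2\) but is not memoryless; the mixing probability is illustrative.

```python
import numpy as np

# Reliability functions and densities of two exponential distributions on
# ([0, inf)^2, +): independent Exp(1) x Exp(2) and Exp(2) x Exp(1).
F = lambda x, y: np.exp(-x - 2 * y)
G = lambda x, y: np.exp(-2 * x - y)
f = lambda x, y: 2 * F(x, y)   # density = 2 * reliability: constant rate 2
g = lambda x, y: 2 * G(x, y)

p = 0.3
H = lambda x, y: p * F(x, y) + (1 - p) * G(x, y)   # reliability function of the mixture
h = lambda x, y: p * f(x, y) + (1 - p) * g(x, y)   # density of the mixture

for (x, y) in [(0.5, 1.0), (1.0, 0.2), (2.0, 3.0)]:
    print(h(x, y) / H(x, y))              # always 2: the mixture has constant rate 2
    print(H(2 * x, 2 * y), H(x, y) ** 2)  # unequal: the mixture is not memoryless
```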
Suppose that \(F\) is the reliability function of an exponential distribution on \((S, \cdot)\) that has constant rate with respect to the left invariant measure \(\lambda\). If \(m \in (0, \infty)\) and \[\frac{1}{\alpha_m} := \int_S F^m(x) d\lambda(x) \lt \infty\] then \(F^m\) is the reliability function of an exponential distribution on \((S, \cdot)\) that has rate \(\alpha_m\) with respect to \(\lambda\).
In particular, the integrability condition above holds if \(m \ge 1\), since in that case \(F^m \le F\) and \(\int_S F(x) \, d\lambda(x) \lt \infty\).
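For example, in the standard continuous semigroup \(([0, \infty), +)\) below, if \(F(x) = e^{-\alpha x}\) then \(F^m(x) = e^{-m \alpha x}\) and \[\frac{1}{\alpha_m} = \int_0^\infty e^{-m \alpha x} \, dx = \frac{1}{m \alpha}\] so \(F^m\) is the reliability function of the exponential distribution with rate \(m \alpha\), for every \(m \in (0, \infty)\).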
Suppose that \((S, \cdot)\) is a positive semigroup with left invariant measure \(\lambda\) and that \(f\) is a probability density function with respect to \(\lambda\) satisfying \[f(x) f(y) = G(x y), \quad x, \, y \in S\] for some measurable function \(G: S \to (0, \infty)\). Then \(f\) is the density of an exponential distribution on \((S, \cdot)\).
Let \(e\) denote the identity element of \((S, \cdot)\). Letting \(y = e\) in the equation above gives \(G(x) = \alpha f(x)\) where \(\alpha = f(e) \in (0, \infty)\). Let \(F\) denote the reliability function of \(f\) on \((S, \cdot)\). Then using the integral version of the left-invariance property in Section 3, \begin{align*} F(x) &= \int_{x S} f(y) d\lambda(y) = \int_{x S} \frac{1}{\alpha} G(y) d\lambda(y) = \frac{1}{\alpha} \int_S G(x u) d\lambda(u) \\ &= \frac{1}{\alpha} \int_S f(x) f(u) d\lambda(u) = \frac{1}{\alpha} f(x), \quad x \in S \end{align*} Thus the distribution has constant rate \(\alpha\). Finally, \[F(x y) = \frac{1}{\alpha} f(x y) = \frac{1}{\alpha^2} G(x y) = \frac{1}{\alpha^2} f(x) f(y) = F(x) F(y)\] so the distribution is memoryless. Hence \(f\) is the density of an exponential distribution by the characterization above.
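As a simple illustration in the standard continuous semigroup \(([0, \infty), +)\) below, the density \(f(x) = \alpha e^{-\alpha x}\) satisfies \[f(x) f(y) = \alpha^2 e^{-\alpha (x + y)} = \alpha f(x + y), \quad x, \, y \in [0, \infty)\] so the hypothesis holds with \(G = \alpha f\), consistent with \(\alpha = f(e) = f(0)\).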
The following definition gives the abstract version of the new better than used and new worse than used properties. Once again, we have a measurable semigroup \((S, \cdot)\) with associated relation \(\rta\).
Suppose that \(X\) is a random variable in \(S\).
In terms of the relation \(\rta\), the NBU and NWU properties are, respectively \begin{align*} \P(y \rta x^{-1} X \mid x \rta X) &\le \P(y \rta X), \quad x, \, y \in S\\ \P(y \rta x^{-1} X \mid x \rta X) &\ge \P(y \rta X), \quad x, \, y \in S \end{align*} Once again, in the case of a positive semigroup, with a partial order \(\preceq\) as the relation, these properties take the more recognizable form \begin{align*} \P(x^{-1} X \succeq y \mid X \succeq x) &\le \P(X \succeq y), \quad x, \, y \in S \\ \P(x^{-1} X \succeq y \mid X \succeq x) &\ge \P(X \succeq y), \quad x, \, y \in S \end{align*} We will concentrate primarily on the memoryless and exponential properties in this text.
Suppose that \((S, \cdot)\) is a semigroup with left-invariant measure \(\lambda\) and whose associated relation is the complete reflexive relation \(\equiv\), so that \(x \equiv y\) for every \(x, \, y \in S\). If \(\lambda(S) \lt \infty\) then the uniform distribution on \(S\) (with respect to \(\lambda\)) is exponential for \((S, \cdot)\).
Suppose that \(\lambda(S) \lt \infty\) and let \(X\) have the uniform distribution on \(S\). Since the relation associated with \((S, \cdot)\) is complete, (equivalently \(x S = S\) for all \(x \in S\)), the reliability function \(F\) of \(X\) is the constant 1: \(\P(X \in x S) = 1\) for \(x \in S\). Hence \[\P(X \in x A) = \frac{\lambda(x A)}{\lambda(S)} = \frac{\lambda(A)}{\lambda(S)} = \P(X \in x S) \P(X \in A), \quad x \in S, \, A \in \ms S \]
Here is a concrete example:
Suppose that \(S\) has an LCCB topology as defined in the Preface and that \((S, \cdot)\) is a topological group. Let \(\lambda\) denote the left invariant measure for \((S, \cdot)\) (unique up to multiplication by positive constants).
Since \((S, \cdot)\) is a group, the associated relation is the complete reflexive relation \(\equiv\).
In particular, the unique exponential distribution for a finite group \((S, \cdot)\) is the uniform distribution on \(S\) (with respect to counting measure \(\#\)). But once again, we see that a full group is not particularly interesting in the reliability context. Rather, a maximal positive sub-semigroup is the appropriate object of study.
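As an elementary illustration, here is a brute-force check of the exponential property for the uniform distribution on the cyclic group \(\Z_6\) (any small finite group would do); note that \(\P(X \in x S) = 1\) since \(x S = S\) in a group.

```python
import itertools

# The finite group (Z_6, +), with the uniform distribution.
n = 6
S = range(n)
P = lambda A: len(set(A)) / n                 # uniform probability of a subset
shift = lambda x, A: {(x + a) % n for a in A}  # the set x A

# Check P(X in x A) = P(X in x S) P(X in A) for every x in S and every A.
assert all(
    abs(P(shift(x, A)) - P(shift(x, S)) * P(A)) < 1e-12
    for x in S
    for r in range(n + 1)
    for A in itertools.combinations(S, r)
)
print("uniform distribution on Z_6 is exponential")
```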
As usual, the right zero semigroup on a set provides an extreme example.
Suppose that \((S, \cdot)\) is the right zero semigroup on \(S\), so that \(x y = y\) for \(x, \, y \in S\). It follows that \(x A = A\) for \(x \in S\) and \(A \in \ms S\) so if \(X\) is a random variable in \(S\) then \[\P(X \in x A) = \P(X \in A) = \P(X \in S) \P(X \in A) = \P(X \in x S) \P(X \in A), \quad x \in S, \, A \in \ms S\] So every probability distribution is exponential on \((S, \cdot)\).
The proposition above is not surprising, since we also know that every \(\sigma\)-finite measure on \((S, \cdot)\) is left invariant. The right zero semigroup is also a trivial example of a semigroup in which the exponential property does not necessarily imply the constant rate property.
Suppose again that \((S, \cdot)\) is the right zero semigroup on \(S\) and that \(\lambda\) is a \(\sigma\)-finite reference measure on \(S\).
The corresponding relation is the complete reflexive relation \(\equiv\) so that \(x \equiv y\) for every \((x, y) \in S^2\). Hence the reliability function on \((S, \equiv)\) of every distribution is the constant function 1, as noted earlier. So this is a trivial example of a semigroup that has exponential (and hence memoryless) distributions that do not have constant rate with respect to a given left-invariant measure. We can also view this example through the lens of the reliability function characterization above. If \(F(x y) = F(x) F(y)\) then \(F(y) = F(x) F(y)\) for \(x, \, y \in S\). From our support assumption, \(F(y) \gt 0\) so \(F(x) = 1\) for \(x \in S\). So \(\int_S F(x) \, d\lambda(x) = \lambda(S)\) and hence condition (b) holds if and only if \(\lambda\) is a finite measure, in which case the corresponding constant rate distribution is simply the uniform distribution on \(S\) (with respect to \(\lambda\)): \[\P(X \in A) = \frac{\lambda(A)}{\lambda(S)}, \quad A \in \ms S\] On the other hand, if \(\lambda(S) = \infty\) then there is no distribution that has constant rate with respect to \(\lambda\). So to summarize, a distribution on \(S\) (which is necessarily exponential for \((S, \cdot)\)) has constant rate with respect to a measure \(\lambda\) (which is necessarily left invariant) if and only if \(\lambda\) is a finite measure, in which case the distribution is uniform with respect to \(\lambda\). This statement is not as restrictive as it might seem. The canonical measure associated with \(X\) is simply the distribution of \(X\): \[\E\left[\frac{1}{F(X)}; X \in A\right] = \P(X \in A), \quad A \in \ms S\] and trivially, \(X\) has the uniform distribution with respect to itself: \[\P(X \in A) = \frac{\P(X \in A)}{\P(X \in S)}, \quad A \in \ms S\]
The corollary above applies to discrete positive semigroups, with counting measure \(\#\) as the left invariant measure.
Suppose that \((S, \cdot)\) is a (nontrivial) discrete positive semigroup with associated partial order \(\preceq\). Then \(X\) has an exponential distribution on \((S, \cdot)\) if and only if \(X\) is memoryless on \((S, \cdot)\) and has constant rate on \((S, \preceq)\), with rate constant \(\alpha \in (0, 1)\).
From Section 3, \(\#\) is the unique left invariant measure for \((S, \cdot)\), up to multiplication by positive constants. From Section 1.5, if \(X\) has constant rate \(\alpha\) for a discrete partial order graph \((S, \preceq)\) then \(\alpha \in (0, 1]\), and \(\alpha = 1\) if and only if the graph is \((S, =)\). But a discrete positive semigroup \((S, \cdot)\) has associated partial order graph \((S, =)\) if and only if the semigroup is trivial: \(S = \{e\}\).
Suppose that \((S, \cdot)\) is a discrete positive semigroup with \(I\) as the set of irreducible elements and with \((S, \preceq)\) as the associated partial order. Suppose that \(X\) has an exponential distribution on \((S, \cdot)\) with constant rate \(\alpha \in (0, 1)\) for \((S, \preceq)\) and with probability density function \(f\). Then \(X\) also has constant rate for the covering graph \((S, \upa)\), with rate \[\beta := \frac{\alpha}{\sum_{i \in I} f(i)}\]
We assume of course that \(I \ne \emptyset\) so that \((S, \cdot)\) is nontrivial. Let \(F\) and \(F_1\) denote the reliability functions of \(X\) for \((S, \preceq)\) and \((S, \upa)\) respectively. Then since \(X\) has constant rate \(\alpha\) for \((S, \preceq)\) and is memoryless for \((S, \cdot)\), \begin{align*} F_1(x) & = \sum_{x \upa y} f(y) = \sum_{i \in I} f(x i) = \sum_{i \in I} \alpha F(x i) \\ & = \sum_{i \in I} \alpha F(x) F(i) = \alpha F(x) \sum_{i \in I} F(i) = f(x) \sum_{i \in I} \frac{1}{\alpha} f(i) \end{align*} Hence \(f(x) = \beta F_1(x)\) for \(x \in S\), so \(X\) has constant rate \(\beta\) for \((S, \upa)\).
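For example, in the standard discrete semigroup \((\N, +)\) below, \(I = \{1\}\) and the exponential (geometric) distribution with rate \(p\) has \(f(x) = p (1 - p)^x\). Here \(\sum_{i \in I} f(i) = f(1) = p(1 - p)\), so \(\beta = p / [p(1 - p)] = 1/(1 - p)\), and indeed \(F_1(x) = f(x + 1) = p(1 - p)^{x + 1} = f(x) / \beta\) for \(x \in \N\).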
In the context of the previous proposition, suppose that \((S, \cdot)\) is completely uniform with \(c_n\) as the number of factorings of \(x \in S\) over \(I\) when the length of a factoring is \(d(x) = n \in \N\). Then \(X\) has constant rate for \((S, \upa^n)\). The rate constant is \[c_n \beta^n = c_n \left(\frac{\alpha}{\sum_{i \in I} f(i)}\right)^n\]
This follows from a general result in Section 1.5.
In particular, this result applies to the free semigroup studied in Chapter 5 and the subset semigroup studied in Chapter 8.
Recall that the standard continuous semigroup is \(([0,\,\infty), +)\) with the usual Borel \(\sigma\)-algebra \(\ms B\) and with Lebesgue measure \(\lambda\) as the reference measure. It is a positive semigroup with identity element \(0\), and the corresponding partial order is the ordinary total order \(\leq\). Also \(\lambda\) is the only invariant measure, up to multiplication by positive constants. Of course, this is the setting of classical reliability theory, with \([0, \infty)\) representing continuous time, and so is one of the main motivations for the general theory presented in this text.
Random variable \(X\) is memoryless for \(([0, \infty), +)\) if and only if \(X\) is exponential for \(([0, \infty), +)\) if and only if \(X\) has constant rate for \(([0, \infty), \le)\) if and only if \(X\) has an exponential distribution in the ordinary sense. The exponential distribution with constant rate \(\alpha \in (0, \infty)\) has density function \(f\) and reliability function \(F\) given by \[f(x) = \alpha e^{-\alpha x}, \; F(x) = e^{-\alpha x}, \quad x \in [0, \infty)\]
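A minimal Monte Carlo sketch of the memoryless property in this classical setting (the rate, time points, and sample size are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 0.5
X = rng.exponential(1 / alpha, size=1_000_000)  # samples from Exp(alpha)

s, t = 1.0, 2.0
lhs = np.mean(X >= s + t) / np.mean(X >= s)  # P(X >= s + t | X >= s)
rhs = np.mean(X >= t)                        # P(X >= t)
print(lhs, rhs, np.exp(-alpha * t))          # all approximately 0.368
```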
Suppose \(X\) has the exponential distribution with rate parameter \(\alpha \in (0, \infty)\).
Of course, \(\bs Y\) is the sequence of arrival times for the Poisson process with rate \(\alpha\), and so \(f_n\) is the ordinary gamma density function with parameters \(n\) and \(\alpha\). The standard continuous space, and related spaces, are studied in more detail in Chapter 3.
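For reference, with \(Y_n = X_1 + \cdots + X_n\) the sum of \(n\) independent copies of \(X\) (the usual construction of the arrival times), the density \(f_n\) is \[f_n(x) = \frac{\alpha^n x^{n - 1}}{(n - 1)!} e^{-\alpha x}, \quad x \in [0, \infty)\]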
Recall that the standard discrete semigroup is \((\N, +)\). As with all discrete spaces, the reference \(\sigma\)-algebra is \(\ms P(\N)\) and the reference measure is counting measure \(\#\). It is a positive semigroup with identity element \(0\), and the corresponding partial order is the ordinary total order \(\leq\). Also \(\#\) is the only invariant measure, up to multiplication by positive constants.
Random variable \(X\) is memoryless for \((\N, +)\) if and only if \(X\) is exponential for \((\N, +)\) if and only if \(X\) has constant rate for \((\N, \le)\) if and only if \(X\) has a geometric distribution in the ordinary sense. The exponential distribution with constant rate \(p \in (0, 1)\) for \((\N, +)\) has density function \(f\) and reliability function \(F\) given by \[f(x) = p (1 - p)^x, \; F(x) = (1 - p)^x, \quad x \in \N\]
Of course, \(X\) can be interpreted as the number of failures before the first success in a sequence of Bernoulli trials with success probability \(p\).
Suppose \(X\) has the exponential distribution on \((\N, +)\) with rate \(p \in (0, 1)\).
Of course, \(Y_n\) is the number of failures before the \(n\)th success in the Bernoulli trials sequence with success probability \(p\), and so \(f_n\) is the ordinary negative binomial density function with parameters \(n\) and \(p\). The standard discrete space, and related spaces, are studied in more detail in Chapter 4.
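For reference, with \(Y_n = X_1 + \cdots + X_n\) the sum of \(n\) independent copies of \(X\) (the usual construction), the density \(f_n\) is \[f_n(x) = \binom{n + x - 1}{x} p^n (1 - p)^x, \quad x \in \N\]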