For \(k \in \N_+\), let \(\N_k = \{1, 2, \ldots, k\}\). For \(m, \, n \in \N_+\), let \(\R^{m \times n}\) denote the set of \(m \times n\) matrices with entries in \(\R\). The set \(\R^{m \times n}\) is given the usual Euclidean topology (the same topology as \(\R^{m n}\)), with corresponding Borel \(\sigma\)-algebra \(\ms{R}^{m \times n}\) and Lebesgue measure \(\lambda^{m \times n}\). For \(i \in \N_m\) and \(j \in\N_n\), the \(i j\) entry of \(\bs{x} \in \R^{m \times n}\) is denoted \(x_{i j}\). For \(n \in \N_+\), the determinant of \(\bs{x} \in \R^{n \times n}\) is denoted \(\det(\bs{x})\).
For \(m, \, n \in \N_+\), the space \((\R^{m \times n}, +)\) is a group, where \(+\) is ordinary (component-wise) matrix addition and the identity \(\bs{0}\) is the \(m \times n\) zero matrix. The natural positive semigroup is \((S, +)\) where \(S = \left\{\bs{x} \in \R^{m \times n}: x_{i j} \ge 0 \text{ for all } i \in \N_m, \, j \in \N_n \right\}\). This semigroup is isomorphic to \(([0, \infty)^{m n}, +)\), so the main result from Section 2.7 applies.
A random variable \(\bs{X}\) in \(S\) has an exponential distribution for \((S, +)\) if and only if the entries \(X_{i j}\), \(i \in \N_m\), \(j \in \N_n\), are independent and each has an exponential distribution on \([0, \infty)\). In this case, the rate constant of \(\bs{X}\) is the product of the rate constants of the entries.
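As a numerical illustration of this characterization, the following minimal sketch (NumPy assumed; the rate matrix and test point are illustrative) compares the empirical reliability function of a matrix of independent exponential entries with the product form \(\prod_{i, j} e^{-\lambda_{i j} x_{i j}}\), where \(\lambda_{i j}\) denotes the rate constant of \(X_{i j}\).

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 3
lam = rng.uniform(0.5, 2.0, size=(m, n))          # illustrative rate constants, one per entry

# X has an exponential distribution for (S, +): independent exponential entries
X = rng.exponential(scale=1.0 / lam, size=(100_000, m, n))

# reliability function: P(X >= x entrywise) = prod_ij exp(-lam_ij * x_ij)
x = rng.uniform(0.0, 1.0, size=(m, n))            # an illustrative test point in S
empirical = np.mean(np.all(X >= x, axis=(1, 2)))
theoretical = np.exp(-np.sum(lam * x))
print(empirical, theoretical)                     # close, up to Monte Carlo error
print(np.prod(lam))                               # the rate constant: product of the entry rates
```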
Other types of multivariate exponential distributions are studied in Section 4 and Section 5.
The general linear group of order \(n \in \N_+\) is the group \((M_n, \cdot)\) where \(M_n = \left\{\bs{x} \in \R^{n \times n}: \det(\bs{x}) \gt 0\right\}\) and where \(\cdot\) is matrix multiplication. The identity \(\bs{e}\) is the \(n \times n\) identity matrix and the invariant measure \(\mu\) (unique up to multiplication by positive constants) is given by \[ d\mu(\bs{x}) = \frac{1}{\det^n(\bs{x})} d\bs{x}\]
The definition and results above are well known. Recall that if \(\bs{x}, \, \bs{y} \in M_n\) then \(\det(\bs{x} \bs{y}) = \det(\bs{x}) \det(\bs{y})\), so \(\bs{x} \bs{y} \in M_n\). If \(\bs{x} \in M_n\) then the inverse matrix \(\bs{x}^{-1}\) exists and \(\det(\bs{x}^{-1}) = 1 / \det(\bs{x})\), so \(\bs{x}^{-1} \in M_n\) as well.
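The form of the invariant measure can be checked directly when \(n = 2\): the left translation \(\bs{x} \mapsto \bs{a} \bs{x}\) is a linear map on \(\R^{2 \times 2}\) with Jacobian determinant \(\det^2(\bs{a})\), so the measure \(\mu\) is preserved. The following is a brief symbolic sketch with SymPy assumed; the same computation works for right translations.

```python
import sympy as sp

# left translation y = a*x on R^(2x2), viewed as a linear map on R^4
a = sp.Matrix(2, 2, sp.symbols('a11 a12 a21 a22'))
x = sp.Matrix(2, 2, sp.symbols('x11 x12 x21 x22'))
y = a * x

# the Jacobian determinant of the translation is det(a)^2 ...
J = y.reshape(4, 1).jacobian(x.reshape(4, 1))
print(sp.expand(J.det() - a.det() ** 2))   # 0
# ... so det(y)^(-2) dy = det(a)^(-2) det(x)^(-2) det(a)^2 dx = det(x)^(-2) dx
```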
Often the general linear group of order \(n \in \N_+\) is defined to include all \(\bs x \in \R^{n \times n}\) with \(\det(\bs x) \ne 0\). However, the definition above is better suited to our reliability purposes. There are two natural strict positive sub-semigroups of \((M_n, \cdot)\).
For \(n \in \N_+\), the sets \(\left\{\bs{x} \in M_n: \det(\bs{x}) \lt 1\right\}\) and \(\left\{\bs{x} \in M_n: \det(\bs{x}) \gt 1\right\}\) are strict positive sub-semigroups of \((M_n, \cdot)\).
When \(n = 1\), these reduce to the strict versions of the positive semigroups \(((0, 1], \cdot)\) and \(([1, \infty), \cdot)\) that we studied in Section 2. For general \(n \in \N_+\), the study of exponential distributions on positive sub-semigroups of \((M_n, \cdot)\) is complicated, so we will consider a few examples in the simplest case when \(n = 2\).
Matrix multiplication can reduce to simple addition and simple multiplication on appropriate sub-semigroups.
Let \(S = \left\{\begin{bmatrix} 1 & x \\ 0 & 1 \end{bmatrix}: x \in [0, \infty) \right\}\). Then \((S, \cdot)\) is a positive sub-semigroup of \((M_2, \cdot)\) that is isomorphic to \(([0, \infty), +)\).
The results follow immediately from the fact that \(\begin{bmatrix} 1 & x \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & y \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & x + y \\ 0 & 1 \end{bmatrix} \).
Let \(D = \left\{\begin{bmatrix} x & 0 \\ 0 & 1 \end{bmatrix}: x \in [1, \infty) \right\}\). Then \((D, \cdot)\) is a positive sub-semigroup of \((M_2, \cdot)\) that is isomorphic to \(([1, \infty), \cdot)\).
The results follow immediately from the fact that \(\begin{bmatrix} x & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} u & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} x u & 0 \\ 0 & 1 \end{bmatrix} \).
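Both identities are trivial to confirm numerically; here is a minimal sketch with NumPy assumed and arbitrary illustrative parameter values.

```python
import numpy as np

def upper(x):
    # element of S: unit upper triangular matrix with parameter x >= 0
    return np.array([[1.0, x], [0.0, 1.0]])

def diag(x):
    # element of D: diagonal matrix with parameter x >= 1
    return np.array([[x, 0.0], [0.0, 1.0]])

# matrix multiplication in S is addition of the parameters ...
assert np.allclose(upper(2.0) @ upper(3.5), upper(2.0 + 3.5))
# ... and matrix multiplication in D is multiplication of the parameters
assert np.allclose(diag(2.0) @ diag(3.5), diag(2.0 * 3.5))
```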
Our next example provides some negative results.
Let \(T = \left\{\begin{bmatrix} x & y \\ 0 & 1 \end{bmatrix}: x \in [1, \infty), y \in [0, \infty) \right\}\). Then \((T, \cdot)\) is a positive sub-semigroup of \((M_2, \cdot)\). The set \(T\) can be identified with \([1, \infty) \times [0, \infty)\), with the operation given by \((x, y) (u, v) = (x u, x v + y)\).
Note first that if \(\begin{bmatrix}x & y \\ 0 & 1\end{bmatrix}, \, \begin{bmatrix}u & v \\ 0 & 1\end{bmatrix} \in T\) then \(\begin{bmatrix}x & y \\ 0 & 1\end{bmatrix} \begin{bmatrix}u & v \\ 0 & 1\end{bmatrix} = \begin{bmatrix}x u & x v + y \\ 0 & 1\end{bmatrix} \in T\), so the representation as ordered pairs is correct. The ordered pair \((1, 0) \in T\) represents \(\begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}\), the identity matrix. Next we show that the partial order \(\preceq\) associated with \((T, \cdot)\) is simply the componentwise order. Suppose that \((x, y) \preceq (z, w)\). Then there exists \((s, t) \in T\) such that \((x, y) (s, t) = (z, w)\). That is, \(x s = z\) and \(x t + y = w\). But \(s \ge 1\) so \(x \le z\), and \(x t \ge 0\) so \(y \le w\). Conversely, suppose that \((x, y), \, (z, w) \in T\) with \(x \le z\) and \(y \le w\). Let \(s = z / x\) and \(t = (w - y) / x\). Then \(s \ge 1\), \(t \ge 0\), and \(x s = z\), \(x t + y = w\). Thus \((s, t) \in T\) and \((x, y) (s, t) = (z, w)\), so \((x, y) \preceq (z, w)\).
So the partial order graph \((T, \preceq)\) is associated with two very different positive semigroups. One is the direct product of \(([1, \infty), \cdot)\) and \(([0, \infty), +)\), that is, \[(x, y) (u, v) = (xu, y + v)\] The other is the semigroup corresponding to matrix multiplication \[(x, y)(u, v) = (xu, xv + y)\]
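The order characterization can also be checked numerically by solving \((x, y)(s, t) = (z, w)\) for \((s, t)\), as in the argument above. A small randomized sketch, with NumPy assumed and randomly chosen test points:

```python
import numpy as np

def precedes_matmul(p, q):
    # (x, y) precedes (z, w) for the matrix semigroup: (z, w) = (x, y)(s, t)
    # for some (s, t) in T, i.e. s = z/x >= 1 and t = (w - y)/x >= 0
    (x, y), (z, w) = p, q
    return z / x >= 1 and (w - y) / x >= 0

def precedes_product(p, q):
    # same test for the direct product of ([1, oo), *) and ([0, oo), +)
    (x, y), (z, w) = p, q
    return z / x >= 1 and w - y >= 0

rng = np.random.default_rng(1)
for _ in range(10_000):
    x, z = 1 + rng.exponential(size=2)
    y, w = rng.exponential(size=2)
    componentwise = bool(x <= z) and bool(y <= w)
    # both operations induce the same (componentwise) partial order on T
    assert precedes_matmul((x, y), (z, w)) == componentwise
    assert precedes_product((x, y), (z, w)) == componentwise
```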
\((T, \cdot)\) has no exponential distributions.
Note that \((1, k) \prec (1, k + 1)\) for \(k \in \N\) and hence \((1, k) T \downarrow \emptyset \text{ as } k \to \infty\). To see this, note that if \((x, y) \in (1, k)T\) for all \(k \in \N\) then \(y \ge k\) for all \(k \in \N\), which is impossible. Now suppose that \(F\) is a nontrivial continuous homomorphism from \((T, \cdot)\) into \(((0,\,1], \cdot)\). Then \(F\) must satisfy \[F(x u, x v + y) = F(x, y) F(u, v), \quad (x, y), \, (u, v) \in T\] Letting \(x = u = 1\) we have \(F(1, y + v) = F(1, y) F(1, v)\) for \(y \ge 0\), \(v \ge 0\), so there exists \(\beta \ge 0\) such that \(F(1, y) = e^{-\beta y}\) for \(y \ge 0\). Next, letting \(v = y = 0\) we have \(F(xu, 0) = F(x, 0) F(u, 0)\) for \(x \ge 1\), \(u \ge 1\), so there exists \(\alpha \ge 0\) such that \(F(x, 0) = x^{-\alpha}\) for \(x \ge 1\). But \((x, y) = (1, y) (x, 0)\), so \[F(x, y) = F(1, y) F(x, 0) = x^{-\alpha} e^{-\beta y}, \quad x \ge 1, \, y \ge 0\] One more application of the first displayed equation then gives \[(x u)^{-\alpha} e^{-\beta(x v + y)} = \left[x^{-\alpha} e^{-\beta y}\right] \left[u^{-\alpha} e^{-\beta v}\right] = (xu)^{-\alpha} e^{-\beta(y + v)}\] and this forces \(\beta = 0\). Therefore \(F(x, y) = x^{-\alpha}\) for \((x, y) \in T\), and in particular \(F(1, k) = 1\) for \(k \in \N_+\). But since \((1, k) T \downarrow \emptyset\), in order for \(F\) to be the reliability function of a probability measure on \(T\) we must have \(F(1, k) \to 0\) as \(k \to \infty\). Therefore there are no memoryless distributions, and hence no exponential distributions, on \((T, \cdot)\).
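The key step, that the homomorphism equation forces \(\beta = 0\), can also be checked symbolically. Here is a brief sketch, with SymPy assumed, using the candidate reliability function \(F(x, y) = x^{-\alpha} e^{-\beta y}\) from the proof:

```python
import sympy as sp

x, y, u, v, alpha, beta = sp.symbols('x y u v alpha beta', positive=True)

def F(x_, y_):
    # candidate memoryless reliability function derived in the proof
    return x_ ** (-alpha) * sp.exp(-beta * y_)

# ratio of F((x, y)(u, v)) to F(x, y) F(u, v); a homomorphism requires this to be 1
ratio = sp.simplify(F(x * u, x * v + y) / (F(x, y) * F(u, v)))
print(ratio)   # mathematically exp(-beta*v*(x - 1)), so identically 1 only if beta = 0
```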
Any positive sub-semigroup of \((M_2, \cdot)\) that contains \((T, \cdot)\) as a sub-semigroup will also fail to have exponential distributions.
Let \(S = \left\{\begin{bmatrix} x & y \\ w & z \end{bmatrix} \colon x \geq 1, \, y \geq 0, \, w \geq 0, \, xz - wy \geq 1 \right\}\). Then \((S, \cdot)\) is a sub-semigroup of \((M_2, \cdot)\) that contains \((T, \cdot)\), and hence \((S, \cdot)\) has no exponential distributions.
\((T, \preceq)\) has no constant rate distributions with respect to the left-invariant measure \(\mu\).
Let \(F\) denote the reliability function of a distribution with constant rate \(\alpha\). Then the density function relative to Lebesgue measure is given by \(f(x, y) = \partial^2 F(x, y) / \partial x \partial y\). Hence the density function relative to \(\mu\) is \(g(x, y) = x^2 f(x, y)\), since \(d\mu(x, y) = dx \, dy / x^2\). So \(F\) must satisfy \[\frac{\partial^2}{\partial x \partial y} F(x, y) = \frac{\alpha}{x^2} F(x, y), \quad x \gt 1, y \gt 0\] This is a second order, linear hyperbolic partial differential equation. The conditions on \(F\) are \(0 \le F \le 1\), \(F(1, 0) = 1\), \(F(x, y) \to 0\) as \(x \to \infty\) or as \(y \to \infty\), and the positivity condition \[F(x, y) - F(u, y) - F(x, v) + F(u, v) \ge 0, \quad 1 \le x \le u \le \infty, \, 0 \le y \le v \le \infty\] We use separation of variables to find a basic solution to the PDE. Assume that \(F(x, y) = A(x) B(y)\) for \(x \in [1, \infty)\) and \(y \in [0, \infty)\). Then we have \begin{align*} A^\prime(x) &= -\frac{a}{x^2} A(x), \quad x \in [1, \infty)\\ B^\prime(y) &= -b \, B(y), \quad y \in [0, \infty) \end{align*} where \(a, \, b \in (0, \infty)\) with \(a b = \alpha\). Solving gives \(A(x) = e^{a / x}\) for \(x \in [1, \infty)\) and \(B(y) = e^{-b y}\) for \(y \in [0, \infty)\), so our basic solution is \(F(x, y) = e^{a / x - b y}\) for \((x, y) \in T\). Hence linearly independent solutions are \(F_1(x, y) = e^{a_1 / x - b_1 y}\) and \(F_2(x, y) = e^{a_2 / x - b_2 y}\) for \((x, y) \in T\), where \(a_1, \, b_1, \, a_2, \, b_2 \in (0, \infty)\) with \(a_1 b_1 = a_2 b_2 = \alpha\) but \(a_1 \ne a_2\) (and therefore \(b_1 \ne b_2\)). So a general solution is \[F(x, y) = c_1 F_1(x, y) + c_2 F_2(x, y) = c_1 e^{a_1 / x - b_1 y} + c_2 e^{a_2 / x - b_2 y}, \quad (x, y) \in T\] But then the condition that \(F(x, y) \to 0\) as \(x \to \infty\) for fixed \(y \in [0, \infty)\) gives \[c_1 e^{-b_1 y} + c_2 e^{-b_2 y} = 0, \quad y \in [0, \infty)\] and hence \(c_1 = c_2 = 0\).
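The separated solution can be verified symbolically; here is a brief sketch with SymPy assumed, using symbols \(a\) and \(b\) with \(\alpha = a b\):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)

# separated solution found above, with alpha = a*b
F = sp.exp(a / x - b * y)

# constant rate PDE: F_xy = (a*b / x**2) * F on x > 1, y > 0
residual = sp.diff(F, x, y) - (a * b / x ** 2) * F
print(sp.simplify(residual))   # 0
```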
On the other hand, \((T, \preceq)\) has constant rate distributions with respect to other natural measures.
Suppose that \(X\) and \(Y\) are independent, \(X\) has the Pareto distribution on \([1, \infty)\) with parameter \(a \in (0, \infty)\), and \(Y\) has the ordinary exponential distribution on \([0, \infty)\) with parameter \(b \in (0, \infty)\). Then \((X, Y)\) has constant rate \(a b\) for \((T, \preceq)\) with respect to the measure \(\nu\) given by \(d\nu(x, y) = dx \, dy / x\).
This is a simple consequence of results in Section 2.7 and Section 3.2. The measure \(\nu\) is invariant for the direct product of \(([1, \infty), \cdot)\) (ordinary multiplication) with \(([0, \infty), +)\) (ordinary addition). As noted earlier, the graph associated with the product semigroup is also \((T, \preceq)\). Moreover, \(X\) is exponential for \(([1, \infty), \cdot)\) and \(Y\) is exponential for \(([0, \infty), +)\), and since the variables are independent, \((X, Y)\) is exponential for the product semigroup. The reliability function \(F\) of \((X, Y)\) is given by \(F(x, y) = x^{-a} e^{-b y}\) for \((x, y) \in T\). The density function \(g\) of \((X, Y)\) with respect to Lebesgue measure is given by \(g(x, y) = a b x^{-(a + 1)} e^{-b y}\) for \((x, y) \in T\). The density function \(f\) of \((X, Y)\) with respect to the invariant measure \(\nu\) is given by \(f(x, y) = x g(x, y) = a b F(x, y) = a b x^{-a} e^{-b y}\) for \((x, y) \in T\). Note that the rate function of \((X, Y)\) with respect to Lebesgue measure is \((x, y) \mapsto a b / x\), so \((X, Y)\) has decreasing failure rate relative to this measure.
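As a sanity check, the reliability function \(F(x, y) = x^{-a} e^{-b y}\) can be verified by simulation. A minimal sketch, with NumPy assumed and illustrative values for the parameters, sample size, and test point:

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, N = 1.5, 2.0, 200_000                   # illustrative parameters and sample size

X = rng.pareto(a, size=N) + 1.0               # Pareto on [1, oo) with parameter a
Y = rng.exponential(scale=1.0 / b, size=N)    # exponential on [0, oo) with rate b

# reliability function for (T, <=): F(x, y) = P(X >= x, Y >= y) = x**(-a) * exp(-b*y)
x0, y0 = 2.0, 0.5                             # illustrative test point
empirical = np.mean((X >= x0) & (Y >= y0))
theoretical = x0 ** (-a) * np.exp(-b * y0)
print(empirical, theoretical)                 # should agree up to Monte Carlo error
```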
Suppose that \(X\) and \(Y\) are independent, \(X\) has the shifted exponential distribution on \([1, \infty)\) with parameter \(a \in (0, \infty)\), and \(Y\) has the exponential distribution on \([0, \infty)\) with parameter \(b \in (0, \infty)\). Then \((X, Y)\) has constant rate \(a b\) for \((T, \preceq)\) with respect to the Lebesgue measure \(\lambda\).
This is a simple consequence of results in Section 2.7 and Section 3.2. The measure \(\lambda\) is invariant for the direct product of \(([1, \infty), \oplus)\) with \(([0, \infty), +)\), where \(x \oplus y = x + y - 1\) for \(x, \, y \in [1, \infty)\). The graph associated with this product semigroup is also \((T, \preceq)\). Moreover, \(X\) is exponential for \(([1, \infty), \oplus)\) and \(Y\) is exponential for \(([0, \infty), +)\), and since the variables are independent, \((X, Y)\) is exponential for the product semigroup. The reliability function \(F\) of \((X, Y)\) is given by \(F(x, y) = e^{-a (x - 1)} e^{-b y}\) for \((x, y) \in T\). The density function \(f\) of \((X, Y)\) with respect to Lebesgue measure is given by \(f(x, y) = a b e^{-a (x - 1)} e^{-b y}\) for \((x, y) \in T\), so \((X, Y)\) has constant rate \(a b\) with respect to \(\lambda\).
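A similar simulation sketch (NumPy assumed, with illustrative parameters) checks the reliability function in the shifted exponential case:

```python
import numpy as np

rng = np.random.default_rng(3)
a, b, N = 1.5, 2.0, 200_000                        # illustrative parameters

X = 1.0 + rng.exponential(scale=1.0 / a, size=N)   # shifted exponential on [1, oo)
Y = rng.exponential(scale=1.0 / b, size=N)         # exponential on [0, oo)

# reliability function F(x, y) = exp(-a (x - 1) - b y); the Lebesgue density is a*b*F
x0, y0 = 2.0, 0.5                                  # illustrative test point
print(np.mean((X >= x0) & (Y >= y0)), np.exp(-a * (x0 - 1) - b * y0))
```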
Let \(\C = \{x + y i: x, \, y \in \R\}\) denote the set of complex numbers where \(i\) is the imaginary unit; \(\C\) is endowed with the same Euclidean topology and measure structure as \(\R^2\). As an algebraic field, \((\C, +, \cdot)\) is isomorphic to a field of matrices in \(\R^{2 \times 2}\).
\((\C, +, \cdot)\) is isomorphic to \((C, +, \cdot)\) where \(C = \left\{\begin{bmatrix} x & -y \\ y & x \end{bmatrix}: x, \, y \in \R \right\}\).
Note that addition and multiplication behave properly: \begin{align*} \begin{bmatrix} u & -v \\ v & u \end{bmatrix} + \begin{bmatrix} x & -y \\ y & x \end{bmatrix} &= \begin{bmatrix} u + x & -(v + y) \\ v + y & u + x \end{bmatrix} \\ \begin{bmatrix} u & -v \\ v & u \end{bmatrix} \cdot \begin{bmatrix} x & -y \\ y & x \end{bmatrix} &= \begin{bmatrix} u x - v y & -(u y + v x) \\ u y + v x & u x - v y \end{bmatrix} \end{align*} So \(0 \in \C\) is identified with the zero matrix \(\begin{bmatrix} 0 & 0 \\ 0 & 0\end{bmatrix} \in C\), \(1 \in \C\) is identified with the identity matrix \(\begin{bmatrix} 1 & 0 \\ 0 & 1\end{bmatrix} \in C\), and \(i \in \C\) is identified with the matrix \(\begin{bmatrix} 0 & -1 \\ 1 & 0\end{bmatrix} \in C\). Note also that the squared absolute value of \(z = x + y i \in \C\) corresponds to the determinant in \(C\): \(|z|^2 = x^2 + y^2 = \det \begin{bmatrix} x & -y \\ y & x \end{bmatrix}\).
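The identification is easy to test numerically. A minimal sketch with NumPy assumed, checking addition, multiplication, and the determinant identity for randomly chosen complex numbers:

```python
import numpy as np

def as_matrix(z):
    # matrix in C corresponding to the complex number z = x + y*i
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

rng = np.random.default_rng(4)
z = complex(*rng.normal(size=2))    # random illustrative complex numbers
w = complex(*rng.normal(size=2))

# the identification respects addition and multiplication, and |z|^2 maps to the determinant
assert np.allclose(as_matrix(z) + as_matrix(w), as_matrix(z + w))
assert np.allclose(as_matrix(z) @ as_matrix(w), as_matrix(z * w))
assert np.isclose(np.linalg.det(as_matrix(z)), abs(z) ** 2)
```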
So it makes sense to study complex semigroups in this section, although of course we will use the standard notation of complex numbers rather than matrix notation. The first result is yet another repeat of the direct product theorem that we have seen several times before.
Let \(S = \{x + y i \in \C: x, \, y \in [0, \infty)\}\). Then a random variable \(Z = X + Y i\) in \(S\) has an exponential distribution for \((S, +)\) if and only if \(X\) and \(Y\) are independent and each has an exponential distribution on \([0, \infty)\).
Of course \((S, +)\) is isomorphic to \(([0, \infty)^2, +)\).
A multiplicative sub-semigroup is a bit more interesting.
Let \(T = \{z \in \C: |z| \gt 1\}\).
In terms of the matrix representation given above, \((T, \cdot)\) is a sub-semigroup of the general linear group \((M_2, \cdot)\). Expressed in terms of polar coordinates, \((T, \cdot)\) is isomorphic to a product semigroup.
The semigroup \((T, \cdot)\) is isomorphic to the product of \(((1, \infty), \cdot)\) and \(([0, 2 \pi), \oplus)\) where \(\cdot\) is ordinary multiplication and where \(\oplus\) is addition modulo \(2 \pi\).
The polar coordinate function \((r, \theta) \mapsto r e^{i \theta}\) maps \((1, \infty) \times [0, 2 \pi)\) one-to-one onto \(T\). If \(z = r e^{i \theta} \in T\) and \(w = s e^{i \phi} \in T\) where \(r, \, s \in (1, \infty)\) and \(\theta, \, \phi \in [0, 2 \pi)\) then \[z w = r s e^{i (\theta + \phi)} = r s e^{i (\theta \oplus \phi)}\]
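A quick numerical illustration of the isomorphism (NumPy assumed; the moduli and angles are randomly chosen illustrative values):

```python
import numpy as np

rng = np.random.default_rng(5)
r, s = 1 + rng.exponential(size=2)               # moduli in (1, oo)
theta, phi = rng.uniform(0, 2 * np.pi, size=2)   # angles in [0, 2*pi)

z, w = r * np.exp(1j * theta), s * np.exp(1j * phi)

# under multiplication, moduli multiply and angles add modulo 2*pi
assert np.isclose(abs(z * w), r * s)
angle_diff = (np.angle(z * w) - (theta + phi)) % (2 * np.pi)
assert np.isclose(angle_diff, 0) or np.isclose(angle_diff, 2 * np.pi)
```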
Exponential variables for \((T, \cdot)\) can easily be expressed in terms of polar coordinates.
Suppose that \(Z = R e^{i \Theta}\) is a random variable in \(T\) where \(R \in (1, \infty)\) and \(\Theta \in [0, 2 \pi)\). Then \(Z\) has an exponential distribution on \((T, \cdot)\) if and only if \(R\) and \(\Theta\) are independent, \(R\) has a Pareto distribution on \((1, \infty)\), and \(\Theta\) is uniformly distributed on \([0, 2 \pi)\).
From Section 2.7, \((R, \Theta)\) is exponential for the product space if and only if \(R\) is exponential for \(((1, \infty), \cdot)\), \(\Theta\) is exponential for \(([0, 2 \pi), \oplus)\), and \(R\) and \(\Theta\) are independent. Exponential distributions on \(((1, \infty), \cdot)\) are Pareto distributions and the uniform distribution is the only distribution that is exponential for \(([0, 2 \pi), \oplus)\).
Here is the result stated in terms of the complex variable.
Suppose that \(Z\) has the exponential distribution on \((T, \cdot)\), with rate constant in the form \(\beta / 2 \pi\) where \(\beta \in (0, \infty)\). Then the reliability function \(F\) of \(Z\) for \((T, \preceq)\) is given by \(F(z) = |z|^{-\beta}\) for \(z \in T\).
These results follow from the polar coordinate characterization above. In terms of polar coordinates, \(R\) has the Pareto distribution on \((1, \infty)\) with parameter \(\beta\) for some \(\beta \in (0, \infty)\), and then \(\beta\) is the rate constant of \(R\) as an exponential variable on \(((1, \infty), \cdot)\). Similarly, \(\Theta\) is uniformly distributed on \([0, 2 \pi)\), and then \(1 / 2 \pi\) is the rate constant of \(\Theta\) as an exponential variable on \(([0, 2 \pi), \oplus)\). So \(Z\) has reliability function \(F\) for \((T, \preceq)\) given by \(F(z) = |z|^{-\beta}\) for \(z \in T\), and the rate constant of \(Z\) is \(\beta / 2 \pi\).
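Finally, the memoryless property \(\Pr(Z \in z A) = F(z) \Pr(Z \in A)\) that underlies the exponential distribution can be illustrated by simulation. The following is a minimal sketch, with NumPy assumed; the parameter \(\beta\), the test point \(z_0\), and the test set \(A\) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)
beta, N = 1.5, 200_000                       # illustrative parameter and sample size

R = rng.pareto(beta, size=N) + 1.0           # Pareto on (1, oo) with parameter beta
Theta = rng.uniform(0, 2 * np.pi, size=N)    # uniform on [0, 2*pi)
Z = R * np.exp(1j * Theta)                   # exponential variable on (T, .)

def in_A(W, r0=1.5):
    # A = {w in T : |w| >= r0 and arg(w) in [0, pi)}, an illustrative test set
    return (np.abs(W) >= r0) & (np.angle(W) % (2 * np.pi) < np.pi)

z0 = 1.4 * np.exp(1j * 1.0)                  # an illustrative test point in T
lhs = np.mean(in_A(Z / z0))                  # P(Z in z0 * A) = P(Z / z0 in A)
rhs = abs(z0) ** (-beta) * np.mean(in_A(Z))  # F(z0) * P(Z in A)
print(lhs, rhs)                              # should agree up to Monte Carlo error
```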