\(\newcommand{\P}{\mathbb{P}}\) \(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\var}{\text{var}}\) \(\newcommand{\sd}{\text{sd}}\) \(\newcommand{\N}{\mathbb{N}}\) \(\newcommand{\R}{\mathbb{R}}\) \( \newcommand{\Li}{\text{Li}} \) \( \newcommand{\bs}{\boldsymbol} \) \( \newcommand{\skw}{\text{skew}} \)

The Exponential-Logarithmic Distribution

The exponential-logarithmic distribution arises when the rate parameter of the exponential distribution is randomized by the logarithmic distribution. The exponential-logarithmic distribution has applications in reliability theory in the context of devices or organisms that improve with age, due to hardening or immunity.

The Standard Exponential-Logarithmic Distribution

Distribution Functions

The standard exponential-logarithmic distribution with shape parameter \( p \in (0, 1) \) is a continuous distribution on \( [0, \infty) \) with probability density function \( g \) given by \[ g(x) = -\frac{(1 - p) e^{-x}}{\ln(p)[1 - (1 - p) e^{-x}]}, \quad x \in [0, \infty) \]

  1. \( g \) is decreasing on \( [0, \infty) \) with mode \( x = 0 \).
  2. \( g \) is concave upward on \( [0, \infty) \).
Details:

Substituting \( u = (1 - p) e^{-x} \), \( du = -(1 - p) e^{-x} dx \) gives \[ \int_0^\infty \frac{(1 - p) e^{-x}}{1 - (1 - p) e^{-x}} dx = \int_0^{1-p} \frac{du}{1 - u} = -\ln(p) \] so it follows that \(g\) is a PDF. For the shape of the graph of \(g\) note that \begin{align} g^\prime(x) & = \frac{(1 - p) e^{-x}}{\ln(p) [1 - (1 - p) e^{-x}]^2}, \quad x \in [0, \infty) \\ g^{\prime\prime}(x) & = -\frac{(1 - p) e^{-x} [1 + (1 - p) e^{-x}]}{\ln(p) [1 - (1 - p) e^{-x}]^3}, \quad x \in [0, \infty) \end{align} Since \( \ln(p) \lt 0 \), we have \( g^\prime \lt 0 \) and \( g^{\prime\prime} \gt 0 \) on \( [0, \infty) \).
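As a quick numerical check, here is a minimal sketch in Python (the helper name `g` is just for illustration) that evaluates the density and verifies that it integrates to 1 and is decreasing:

```python
import numpy as np
from scipy.integrate import quad

def g(x, p):
    """PDF of the standard exponential-logarithmic distribution, shape p in (0, 1)."""
    u = (1 - p) * np.exp(-x)
    return -u / (np.log(p) * (1 - u))

p = 0.3
total, _ = quad(g, 0, np.inf, args=(p,))
print(total)                               # approximately 1.0
print(g(0.0, p) > g(1.0, p) > g(2.0, p))   # True: g is decreasing
```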

Open the special distribution simulator and select the exponential-logarithmic distribution. Vary the shape parameter and note the shape of the probability density function. For selected values of the shape parameter, run the simulation 1000 times and compare the empirical density function to the probability density function.

The distribution function \( G \) is given by \[ G(x) = 1 - \frac{\ln\left[1 - (1 - p) e^{-x}\right]}{\ln(p)}, \quad x \in [0, \infty) \]

Details:

This follows from the same integral substitution used in the details for the probability density function above.

The quantile function \( G^{-1} \) is given by \[ G^{-1}(u) = \ln\left(\frac{1 - p}{1 - p^{1 - u}}\right) = \ln(1 - p) - \ln\left(1 - p^{1 - u}\right), \quad u \in [0, 1) \]

  1. The first quartile is \( q_1 = \ln(1 - p) - \ln\left(1 - p^{3/4}\right) \).
  2. The median is \( q_2 = \ln(1 - p) - \ln\left(1 - p^{1/2}\right) = \ln\left(1 + \sqrt{p}\right)\).
  3. The third quartile is \( q_3 = \ln(1 - p) - \ln\left(1 - p^{1/4}\right)\).
Details:

The formula for \( G^{-1} \) follows from the distribution function above by solving \( u = G(x) \) for \( x \) in terms of \( u \).
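Since the quantile function has a simple closed form, it is easy to evaluate numerically. A small sketch in Python (the name `G_inv` is just for illustration), which also confirms the simplified form of the median:

```python
import numpy as np

def G_inv(u, p):
    """Quantile function of the standard exponential-logarithmic distribution."""
    return np.log(1 - p) - np.log(1 - p ** (1 - u))

p = 0.3
q1, q2, q3 = G_inv(0.25, p), G_inv(0.5, p), G_inv(0.75, p)
print(q1, q2, q3)
print(np.isclose(q2, np.log(1 + np.sqrt(p))))   # median simplifies to ln(1 + sqrt(p))
```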

Open the quantile app and select the exponential-logarithmic distribution. Vary the shape parameter and note the shape of the distribution and probability density functions. For selected values of the shape parameter, compute the quantiles of order 0.1 and 0.9.

The reliability function \( G^c \) is given by \[ G^c(x) = \frac{\ln\left[1 - (1 - p) e^{-x}\right]}{\ln(p)}, \quad x \in [0, \infty) \]

Details:

This follows trivially from the distribution function above, since \( G^c = 1 - G \).

The standard exponential-logarithmic distribution has decreasing failure rate.

The failure rate function \( r \) is given by \[ r(x) = -\frac{(1 - p) e^{-x}}{\left[1 - (1 - p) e^{-x}\right] \ln\left[1 - (1 - p) e^{-x}\right]}, \quad x \in [0, \infty) \]

  1. \( r \) is decreasing on \( [0, \infty) \).
  2. \( r \) is concave upward on \( [0, \infty) \).
Details:

Recall that \( r(x) = g(x) \big/ G^c(x) \), so the formula follows from the probability density function and the reliability function above.

The Polylogarithm

The moments of the standard exponential-logarithmic distribution cannot be expressed in terms of the usual elementary functions, but can be expressed in terms of a special function known as the polylogarithm.

The polylogarithm of order \( s \in \R \) is defined by \[ \Li_s(x) = \sum_{k=1}^\infty \frac{x^k}{k^s}, \quad x \in (-1, 1) \] The polylogarithm is a power series in \( x \) with radius of convergence 1 for each \( s \in \R \).

Details:

To show that the radius of convergence is 1, we use the ratio test from calculus. For \( s \in \R \), \[ \frac{|x|^{k+1} / (k + 1)^s}{|x|^k / k^s} = |x| \left(\frac{k}{k + 1}\right)^s \to |x| \text{ as } k \to \infty \] Hence the series converges absolutely for \( |x| \lt 1 \) and diverges for \( |x| \gt 1 \).

In this section, we are only interested in nonnegative integer orders, but the polylogarithm will show up again, for non-integer orders, in the study of the zeta distribution.

The polylogarithm functions of orders 0, 1, 2, and 3 are as follows:

  1. The polylogarithm of order 0 is \[ \Li_0(x) = \sum_{k=1}^\infty x^k = \frac{x}{1 - x}, \quad x \in (-1, 1) \]
  2. The polylogarithm of order 1 is \[ \Li_1(x) = \sum_{k=1}^\infty \frac{x^k}{k} = -\ln(1 - x), \quad x \in (-1, 1) \]
  3. The polylogarithm of order 2 is known as the dilogarithm.
  4. The polylogarithm of order 3 is known as the trilogarithm.

Thus, the polylogarithm of order 0 is a simple geometric series, and the polylogarithm of order 1 is the standard power series for the natural logarithm. Note that the probability density function \( g \) above can be written in terms of the polylogarithms of orders 0 and 1: \[ g(x) = -\frac{\Li_0\left[(1 - p) e^{-x}\right]}{\ln(p)} = \frac{\Li_0\left[(1 - p) e^{-x}\right]}{\Li_1(1 - p)}, \quad x \in [0, \infty) \] The most important property of the polylogarithm is given next.

The polylogarithm satisfies the following recursive integral formula: \[ \Li_{s+1}(x) = \int_0^x \frac{\Li_s(t)}{t} dt; \quad s \in \R, \; x \in (-1, 1) \] Equivalently, \( x \, \Li_{s+1}^\prime(x) = \Li_s(x) \) for \( x \in (-1, 1) \) and \( s \in \R \).

Details:

Recall that a power series may be integrated term by term, and the integrated series has the same radius of convergence. Hence for \(s \in \R \), \[ \int_0^x \frac{\Li_s(t)}{t} dt = \sum_{k=1}^\infty \int_0^x \frac{t^{k-1}}{k^s} dt = \sum_{k=1}^\infty \frac{x^k}{k^{s+1}} = \Li_{s+1}(x), \quad x \in (-1, 1) \]
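The truncated series converges quickly for moderate \( |x| \), so the closed forms for orders 0 and 1 and the recursive formula are easy to verify numerically. A rough sketch in Python (the helper name `polylog` is just for illustration), assuming a fixed number of series terms:

```python
import numpy as np

def polylog(s, x, terms=200):
    """Truncated series for Li_s(x), valid for |x| < 1 (a rough numerical sketch)."""
    k = np.arange(1, terms + 1)
    return np.sum(x ** k / k ** s)

x = 0.6
print(np.isclose(polylog(0, x), x / (1 - x)))       # geometric series
print(np.isclose(polylog(1, x), -np.log(1 - x)))    # power series for -ln(1 - x)

# Check the recursion x * Li_{s+1}'(x) = Li_s(x) with a central-difference derivative
h = 1e-6
deriv = (polylog(2, x + h) - polylog(2, x - h)) / (2 * h)
print(np.isclose(x * deriv, polylog(1, x)))
```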

When \( s \gt 1 \), the polylogarithm series converges at \( x = 1 \) also, and \[ \Li_s(1) = \zeta(s) = \sum_{k=1}^\infty \frac{1}{k^s} \] where \( \zeta \) is the Riemann zeta function, named for Bernhard Riemann. The polylogarithm can be extended to complex orders and defined for complex \( z \) with \( |z| \lt 1 \), but the simpler version suffices for our work here.

Moments

We assume that \( X \) has the standard exponential-logarithmic distribution with shape parameter \( p \in (0, 1) \).

The moments of \( X \) (about 0) are \[ \E(X^n) = -n! \frac{\Li_{n+1}(1 - p)}{\ln(p)} = n! \frac{\Li_{n+1}(1 - p)}{\Li_1(1 - p)}, \quad n \in \N \]

  1. \( \E(X^n) \to 0 \) as \( p \downarrow 0 \)
  2. \( \E(X^n) \to n! \) as \( p \uparrow 1 \)
Details:

As noted earlier in the discussion of the polylogarithm, the PDF of \( X \) can be written as \[ g(x) = -\frac{1}{\ln(p)} \sum_{k=1}^\infty (1 - p)^k e^{-kx}, \quad x \in [0, \infty) \] Hence \[ \E(X^n) = -\frac{1}{\ln(p)} \int_0^\infty \sum_{k=1}^\infty (1 - p)^k x^n e^{-k x} dx = -\frac{1}{\ln(p)} \sum_{k=1}^\infty (1 - p)^k \int_0^\infty x^n e^{-k x} dx \] But \( \int_0^\infty x^n e^{-k x} dx = n! \big/ k^{n + 1} \) and hence \[ \E(X^n) = -\frac{1}{\ln(p)} n! \sum_{k=1}^\infty \frac{(1 - p)^k}{k^{n+1}} = - n! \frac{\Li_{n+1}(1 - p)}{\ln(p)}\]

  1. As \( p \downarrow 0 \), the numerator in the last expression for \( \E(X^n) \) converges to \( n! \zeta(n + 1) \) while the denominator diverges to \( \infty \).
  2. As \( p \uparrow 1 \), the expression for \( \E(X^n) \) has the indeterminate form \( \frac{0}{0} \). An application of L'Hospital's rule and the derivative rule for the polylogarithm above gives \[ \lim_{p \uparrow 1} \E(X^n) = \lim_{p \uparrow 1} n! p \frac{\Li_n(1 - p)}{1 - p} \] But from the definition of the polylogarithm, \( \Li_n(x) \big/ x \to 1 \) as \( x \to 0 \), so the limit is \( n! \).
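As a sanity check, the moment formula can be compared with direct numerical integration against the density. A sketch in Python, using truncated-series and density helpers like those above (the names `g` and `moment` are illustrative):

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def g(x, p):
    """PDF of the standard exponential-logarithmic distribution."""
    u = (1 - p) * np.exp(-x)
    return -u / (np.log(p) * (1 - u))

def moment(n, p, terms=500):
    """E(X^n) = -n! Li_{n+1}(1 - p) / ln(p), via the truncated polylogarithm series."""
    k = np.arange(1, terms + 1)
    li = np.sum((1 - p) ** k / k ** (n + 1.0))
    return -factorial(n) * li / np.log(p)

p, n = 0.3, 2
direct, _ = quad(lambda x: x ** n * g(x, p), 0, np.inf)
print(moment(n, p), direct)   # the two values should agree closely
```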

We will get some additional insight into these asymptotics below, where we consider the limiting distribution as \( p \downarrow 0 \) and \( p \uparrow 1 \). The mean and variance of the standard exponential-logarithmic distribution follow easily from the general moment formula.

The mean and variance of \( X \) are

  1. \( \E(X) = - \Li_2(1 - p) \big/ \ln(p) \)
  2. \( \var(X) = -2 \Li_3(1 - p) \big/ \ln(p) - \left[\Li_2(1 - p) \big/ \ln(p)\right]^2 \)

From the asymptotics of the general moments above, note that \( \E(X) \to 0 \) and \( \var(X) \to 0 \) as \( p \downarrow 0 \), and \( \E(X) \to 1 \) and \( \var(X) \to 1 \) as \( p \uparrow 1 \).

Open the special distribution simulator and select the exponential-logarithmic distribution. Vary the shape parameter and note the size and location of the mean \( \pm \) standard deviation bar. For selected values of the shape parameter, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.

Related Distributions

The standard exponential-logarithmic distribution has the usual connections to the standard uniform distribution by means of the distribution function and the quantile function above.

Suppose that \( p \in (0, 1) \).

  1. If \( U \) has the standard uniform distribution then \[ X = \ln\left(\frac{1 - p}{1 - p^U}\right) = \ln(1 - p) - \ln\left(1 - p^U \right) \] has the standard exponential-logarithmic distribution with shape parameter \( p \).
  2. If \( X \) has the standard exponential-logarithmic distribution with shape parameter \( p \) then \[ U = \frac{\ln\left[1 - (1 - p) e^{-X}\right]}{\ln(p)} \] has the standard uniform distribution.
Details:
  1. Recall that if \( U \) has the standard uniform distribution, then \( G^{-1}(U) \) has the standard exponential-logarithmic distribution with shape parameter \( p \). But \( 1 - U \) also has the standard uniform distribution and hence \( X = G^{-1}(1 - U) \) also has the standard exponential-logarithmic distribution with shape parameter \( p \).
  2. Similarly, if \( X \) has the standard exponential-logarithmic distribution with shape parameter \( p \) then \( G(X) \) has the standard uniform distribution. Hence \( U = 1 - G(X) \) also has the standard uniform distribution.

Since the quantile function of the standard exponential-logarithmic distribution has a simple closed form, the distribution can be simulated using the random quantile method.
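Here is a minimal sketch of the random quantile method in Python (the helper name `rand_exp_log` is just for illustration): generate standard uniform variables and apply the quantile function.

```python
import numpy as np

rng = np.random.default_rng()

def rand_exp_log(size, p):
    """Simulate the standard exponential-logarithmic distribution via random quantiles."""
    u = rng.random(size)
    return np.log(1 - p) - np.log(1 - p ** (1 - u))

p = 0.3
sample = rand_exp_log(10_000, p)
print(sample.mean())   # compare with -Li_2(1 - p) / ln(p)
```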

Open the random quantile experiment and select the exponential-logarithmic distribution. Vary the shape parameter and note the shape of the distribution and probability density functions. For selected values of the parameter, run the simulation 1000 times and compare the empirical density function to the probability density function.

As the name suggests, the standard exponential-logarithmic distribution arises from the exponential distribution and the logarithmic distribution via a certain type of randomization.

Suppose that \( \bs{T} = (T_1, T_2, \ldots) \) is a sequence of independent random variables, each with the standard exponential distribution. Suppose also that \( N \) has the logarithmic distribution with parameter \( 1 - p \in (0, 1) \) and is independent of \( \bs T \). Then \( X = \min\{T_1, T_2, \ldots, T_N\} \) has the standard exponential-logarithmic distribution with shape parameter \( p \).

Details:

It's best to work with reliability functions. For \( n \in \N_+ \), \( \min\{T_1, T_2, \ldots, T_n\} \) has the exponential distribution with rate parameter \( n \), and hence \( \P(\min\{T_1, T_2, \ldots, T_n\} \gt x) = e^{-n x} \) for \( x \in [0, \infty) \). Recall also that \[ \P(N = n) = -\frac{(1 - p)^n}{n \ln(p)}, \quad n \in \N_+ \] Hence, using the polylogarithm of order 1 (the standard power series for the logarithm), \[ \P(X \gt x) = \E[\P(X \gt x \mid N)] = -\frac{1}{\ln(p)} \sum_{n=1}^\infty e^{-n x} \frac{(1 - p)^n}{n} = -\frac{1}{\ln(p)} \sum_{n=1}^\infty \frac{\left[e^{-x}(1 - p)\right]^n}{n} = \frac{\ln\left[1 - e^{-x} (1 - p)\right]}{\ln(p)}\] As a function of \( x \), this is the reliability function of the exponential-logarithmic distribution with shape parameter \( p \).
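This representation also gives a second way to simulate the distribution. A sketch in Python, using `scipy.stats.logser` for the logarithmic distribution, comparing the empirical reliability at a point with the formula above:

```python
import numpy as np
from scipy.stats import logser

rng = np.random.default_rng()
p, reps, x = 0.3, 20_000, 1.0

# Simulate X = min(T_1, ..., T_N) with N logarithmic(1 - p) and T_i standard exponential
n_vals = logser.rvs(1 - p, size=reps)
x_vals = np.array([rng.exponential(size=n).min() for n in n_vals])

empirical = np.mean(x_vals > x)
theoretical = np.log(1 - (1 - p) * np.exp(-x)) / np.log(p)
print(empirical, theoretical)   # the two should be close
```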

Also of interest, of course, are the limiting distributions of the standard exponential-logarithmic distribution as \(p \to 0\) and as \( p \to 1 \).

The standard exponential-logarithmic distribution with shape parameter \( p \in (0, 1) \) converges to

  1. Point mass at 0 as \( p \to 0 \).
  2. The standard exponential distribution as \( p \to 1 \).
Details:

It's slightly easier to work with the reliability function \( G^c \) above rather than the ordinary (left) distribution function \( G \).

  1. Note that \( G^c(0) = 1 \) for every \( p \in (0, 1) \). On the other hand, if \( x \gt 0 \) then \( G^c(x) \to 0 \) as \( p \to 0 \).
  2. \( G^c(x) \) has the indeterminate form \( \frac{0}{0} \) as \( p \to 1 \). An application of L'Hospital's rule shows that \[ \lim_{p \to 1} G^c(x) = \lim_{p \to 1} \frac{p e^{-x}}{1 - (1 - p) e^{-x}} = e^{-x}, \quad x \in [0, \infty) \] As a function of \( x \), this is the reliability function of the standard exponential distribution.
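The convergence in part (b) is easy to see numerically. A brief sketch in Python (the helper name `G_c` is just for illustration), evaluating the reliability function for values of \( p \) near 1:

```python
import numpy as np

def G_c(x, p):
    """Reliability function of the standard exponential-logarithmic distribution."""
    return np.log(1 - (1 - p) * np.exp(-x)) / np.log(p)

x = 1.5
for p in (0.9, 0.99, 0.999):
    print(G_c(x, p), np.exp(-x))   # converges to e^{-x} as p -> 1
```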

The General Exponential-Logarithmic Distribution

The standard exponential-logarithmic distribution is generalized, like so many distributions on \( [0, \infty) \), by adding a scale parameter. Scale transformations often correspond to a change of units (hours into minutes, for example).

Suppose that \( Z \) has the standard exponential-logarithmic distribution with shape parameter \( p \in (0, 1) \). If \( b \in (0, \infty) \), then \( X = b Z \) has the exponential-logarithmic distribution with shape parameter \( p \) and scale parameter \( b \).

Using the same terminology as the exponential distribution, \( 1 / b \) is called the rate parameter.

Distribution Functions

Suppose that \( X \) has the exponential-logarithmic distribution with shape parameter \( p \in (0, 1) \) and scale parameter \( b \in (0, \infty) \).

\( X \) has probability density function \( f \) given by \[ f(x) = -\frac{(1 - p) e^{-x / b}}{b \ln(p)[1 - (1 - p) e^{-x / b}]}, \quad x \in [0, \infty) \]

  1. \( f \) is decreasing on \( [0, \infty) \) with mode \( x = 0 \).
  2. \( f \) is concave upward on \( [0, \infty) \).
Details:

Recall that \( f(x) = \frac{1}{b}g\left(\frac{x}{b}\right) \) for \( x \in [0, \infty) \), where \( g \) is the PDF of the standard distribution above.
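In code, the general density is obtained from the standard one by the usual scale transformation. A minimal sketch in Python (the helper names `g` and `f` are illustrative):

```python
import numpy as np

def g(z, p):
    """PDF of the standard exponential-logarithmic distribution."""
    u = (1 - p) * np.exp(-z)
    return -u / (np.log(p) * (1 - u))

def f(x, p, b):
    """PDF with shape p and scale b, via f(x) = g(x / b) / b."""
    return g(x / b, p) / b

print(f(2.0, 0.3, 2.0), g(1.0, 0.3) / 2.0)   # equal by construction
```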

Open the special distribution simulator and select the exponential-logarithmic distribution. Vary the shape and scale parameters and note the shape and location of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.

\( X \) has distribution function \( F \) given by \[ F(x) = 1 - \frac{\ln\left[1 - (1 - p) e^{-x / b}\right]}{\ln(p)}, \quad x \in [0, \infty) \]

Details:

Recall that \( F(x) = G(x / b) \) for \( x \in [0, \infty) \), where \( G \) is the CDF of the standard distribution above.

\( X \) has quantile function \( F^{-1} \) given by \[ F^{-1}(u) = b \ln\left(\frac{1 - p}{1 - p^{1 - u}}\right) = b \left[\ln(1 - p) - \ln\left(1 - p^{1 - u}\right)\right], \quad u \in [0, 1) \]

  1. The first quartile is \( q_1 = b \left[\ln(1 - p) - \ln\left(1 - p^{3/4}\right)\right] \).
  2. The median is \( q_2 = b \left[\ln(1 - p) - \ln\left(1 - p^{1/2}\right)\right] = b \ln\left(1 + \sqrt{p}\right)\).
  3. The third quartile is \( q_3 = b \left[\ln(1 - p) - \ln\left(1 - p^{1/4}\right) \right]\).
Details:

Recall that \( F^{-1}(u) = b G^{-1}(u) \), where \( G^{-1} \) is the quantile function of the standard distribution above.

Open the quantile app and select the exponential-logarithmic distribution. Vary the shape and scale parameters and note the shape and location of the probability density and distribution functions. For selected values of the parameters, compute the quantiles of order 0.1 and 0.9.

\( X \) has reliability function \( F^c \) given by \[ F^c(x) = \frac{\ln\left[1 - (1 - p) e^{-x / b}\right]}{\ln(p)}, \quad x \in [0, \infty) \]

Details:

This follows trivially from the distribution function above, since \( F^c = 1 - F \).

The exponential-logarithmic distribution has decreasing failure rate.

The failure rate function \( R \) of \( X \) is given by \[ R(x) = -\frac{(1 - p) e^{-x / b}}{b \left[1 - (1 - p) e^{-x / b}\right] \ln\left[1 - (1 - p) e^{-x / b}\right]}, \quad x \in [0, \infty) \]

  1. \( R \) is decreasing on \( [0, \infty) \).
  2. \( R \) is concave upward on \( [0, \infty) \).
Details:

Recall that \( R(x) = \frac{1}{b} r\left(\frac{x}{b}\right) \) for \( x \in [0, \infty) \), where \( r \) is the failure rate function of the standard distribution above. Alternately, \( R(x) = f(x) \big/ F^c(x) \).

Moments

Suppose again that \( X \) has the exponential-logarithmic distribution with shape parameter \( p \in (0, 1) \) and scale parameter \( b \in (0, \infty) \). The moments of \( X \) can be computed easily from the representation \( X = b Z \) where \( Z \) has the standard exponential-logarithmic distribution.

The moments of \( X \) (about 0) are \[ \E(X^n) = -b^n n! \frac{\Li_{n+1}(1 - p)}{\ln(p)}, \quad n \in \N \]

  1. \( \E(X^n) \to 0 \) as \( p \downarrow 0 \)
  2. \( \E(X^n) \to b^n n! \) as \( p \uparrow 1 \)
Details:

These results follow from basic properties of expected value and the corresponding results for the standard distribution above. We can write \( X = b Z \) where \( Z \) has the standard exponential-logarithmic distribution with shape parameter \( p \). Hence \( \E(X^n) = b^n \E(Z^n) \).

The mean and variance of \( X \) are

  1. \( \E(X) = - b \Li_2(1 - p) \big/ \ln(p) \)
  2. \( \var(X) = b^2 \left(-2 \Li_3(1 - p) \big/ \ln(p) - \left[\Li_2(1 - p) \big/ \ln(p)\right]^2 \right)\)

From the general moment results above, note that \( \E(X) \to 0 \) and \( \var(X) \to 0 \) as \( p \downarrow 0 \), while \( \E(X) \to b \) and \( \var(X) \to b^2 \) as \( p \uparrow 1 \).

Open the special distribution simulator and select the exponential-logarithmic distribution. Vary the shape and scale parameters and note the size and location of the mean \( \pm \) standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.

Related Distributions

Since the exponential-logarithmic distribution is a scale family for each value of the shape parameter, it is trivially closed under scale transformations.

Suppose that \( X \) has the exponential-logarithmic distribution with shape parameter \( p \in (0, 1) \) and scale parameter \( b \in (0, \infty) \). If \( c \in (0, \infty) \), then \( Y = c X \) has the exponential-logarithmic distribution with shape parameter \( p \) and scale parameter \( b c \).

Details:

By definition, we can take \( X = b Z \) where \( Z \) has the standard exponential-logarithmic distribution with shape parameter \( p \). But then \( Y = c X = (b c) Z \).

Once again, the exponential-logarithmic distribution has the usual connections to the standard uniform distribution by means of the distribution function and quantile function above.

Suppose that \( p \in (0, 1) \) and \( b \in (0, \infty) \).

  1. If \( U \) has the standard uniform distribution then \[ X = b \left[\ln\left(\frac{1 - p}{1 - p^U}\right)\right] = b \left[\ln(1 - p) - \ln\left(1 - p^U \right)\right] \] has the exponential-logarithmic distribution with shape parameter \( p \) and scale parameter \( b \).
  2. If \( X \) has the exponential-logarithmic distribution with shape parameter \( p \) and scale parameter \( b \), then \[ U = \frac{\ln\left[1 - (1 - p) e^{-X / b}\right]}{\ln(p)} \] has the standard uniform distribution.
Details:

These results follow from the representation \( X = b Z \), where \( Z \) has the standard exponential-logarithmic distribution with shape parameter \( p \), and the corresponding result for \( Z \) above.

Again, since the quantile function of the exponential-logarithmic distribution has a simple closed form, the distribution can be simulated using the random quantile method.

Open the random quantile experiment and select the exponential-logarithmic distribution. Vary the shape and scale parameters and note the shape and location of the distribution and probability density functions. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.

Suppose that \( \bs{T} = (T_1, T_2, \ldots) \) is a sequence of independent random variables, each with the exponential distribution with scale parameter \( b \in (0, \infty) \). Suppose also that \( N \) has the logarithmic distribution with parameter \( 1 - p \in (0, 1) \) and is independent of \( \bs{T} \). Then \( X = \min\{T_1, T_2, \ldots, T_N\} \) has the exponential-logarithmic distribution with shape parameter \( p \) and scale parameter \( b \).

Details:

Note that \( V_i = T_i / b \) has the standard exponential distribution. Hence by the corresponding result above, \( Z = \min\{V_1, V_2, \ldots, V_N\} \) has the standard exponential-logarithmic distribution with shape parameter \( p \). Hence \( X = b Z \) has the exponential-logarithmic distribution with shape parameter \( p \) and scale parameter \( b \).

The limiting distributions as \( p \downarrow 0 \) and as \( p \uparrow 1 \) also follow easily from the corresponding results for the standard case.

For fixed \( b \in (0, \infty) \), the exponential-logarithmic distribution with shape parameter \( p \in (0, 1) \) and scale parameter \( b \) converges to

  1. Point mass at 0 as \( p \downarrow 0 \).
  2. The exponential distribution with scale parameter \( b \) as \( p \uparrow 1 \).
Details:

Suppose that \( X \) has the exponential-logarithmic distribution with shape parameter \( p \) and scale parameter \( b \), so that \( X = b Z \) where \( Z \) has the standard exponential-logarithmic distribution with shape parameter \( p \). Using the corresponding result above,

  1. The distribution of \( Z \) converges to point mass at 0 as \( p \downarrow 0 \) and hence so does the distribution of \( X \).
  2. The distribution of \( Z \) converges to the standard exponential distribution as \( p \uparrow 1 \) and hence the distribution of \( X \) converges to the exponential distribution with scale parameter \( b \).