\(\newcommand{\R}{\mathbb{R}}\) \(\newcommand{\N}{\mathbb{N}}\) \(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\P}{\mathbb{P}}\) \(\newcommand{\var}{\text{var}}\) \(\newcommand{\sd}{\text{sd}}\) \(\newcommand{\cov}{\text{cov}}\) \(\newcommand{\cor}{\text{cor}}\) \(\newcommand{\skw}{\text{skew}}\) \(\newcommand{\kur}{\text{kurt}}\)

The Logistic Distribution

The logistic distribution is used for various growth models, and in a certain type of regression known, appropriately enough, as logistic regression.

The Standard Logistic Distribution

Distribution Functions

The standard logistic distribution is a continuous distribution on \( \R \) with distribution function \( G \) given by \[ G(z) = \frac{e^z}{1 + e^z}, \quad z \in \R \]

Details:

Note that \( G \) is continuous, and \( G(z) \to 0 \) as \( z \to -\infty \) and \( G(z) \to 1 \) as \( z \to \infty \). Moreover, \[ G^\prime(z) = \frac{e^z}{\left(1 + e^z\right)^2} \gt 0, \quad z \in \R \] so \( G \) is increasing.

The probability density function \(g\) of the standard logistic distribution is given by \[ g(z) = \frac{e^z}{\left(1 + e^z\right)^2}, \quad z \in \R \]

  1. \(g\) is symmetric about \(z = 0\).
  2. \(g\) increases and then decreases, with mode \(z = 0\).
  3. \( g \) is concave upward, then downward, then upward again, with inflection points at \( z = \pm \ln\left(2 + \sqrt{3}\right) \approx \pm 1.317 \).
Details:

These results follow from standard calculus. First recall that \( g = G^\prime \) was computed in the details above. A brief numerical check is sketched after the list.

  1. The symmetry of \( g \) is not obvious at first, but note that \[ g(-z) = \frac{e^{-z}}{\left(1 + e^{-z}\right)^2} \frac{e^{2z}}{e^{2z}} = \frac{e^z}{\left(1 + e^z\right)^2} = g(z) \]
  2. The first derivative of \( g \) is \[ g^\prime(z) = \frac{e^z (1 - e^z)}{(1 + e^z)^3} \] which is positive for \( z \lt 0 \) and negative for \( z \gt 0 \), so \( z = 0 \) is the mode.
  3. The second derivative of \( g \) is \[ g^{\prime \prime}(z) = \frac{e^z \left(1 - 4 e^z + e^{2z}\right)}{(1 + e^z)^4} \] The roots of \( e^{2z} - 4 e^z + 1 = 0 \) are \( e^z = 2 \pm \sqrt{3} \), so the inflection points are \( z = \pm \ln\left(2 + \sqrt{3}\right) \), since \( \ln\left(2 - \sqrt{3}\right) = -\ln\left(2 + \sqrt{3}\right) \).
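Here is a minimal numerical check of these shape properties, a sketch assuming NumPy (the function names are for illustration only):

```python
import numpy as np

def g(z):
    """Standard logistic PDF."""
    return np.exp(z) / (1 + np.exp(z)) ** 2

def g2(z):
    """Second derivative of g, from the formula above."""
    return np.exp(z) * (1 - 4 * np.exp(z) + np.exp(2 * z)) / (1 + np.exp(z)) ** 4

z_star = np.log(2 + np.sqrt(3))        # claimed inflection point, about 1.317
print(g(-0.1) < g(0.0) > g(0.1))       # True: z = 0 is the mode
print(np.isclose(g2(z_star), 0.0))     # True: g'' vanishes at z_star
print(np.isclose(g2(-z_star), 0.0))    # True: and at -z_star, by symmetry
```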

In the special distribution simulator, select the logistic distribution. Keep the default parameter values and note the shape of the probability density function. Run the simulation 1000 times and compare the empirical density function to the probability density function.

The quantile function \( G^{-1} \) of the standard logistic distribution is given by \[ G^{-1}(p) = \ln \left( \frac{p}{1 - p} \right), \quad p \in (0, 1) \]

  1. The first quartile is \( -\ln 3 \approx -1.0986\).
  2. The median is 0.
  3. The third quartile is \( \ln 3 \approx 1.0986 \)
Details:

The formula for \( G^{-1} \) follows by solving \( p = G(z) \) for \( z \) in terms of \( p \).

Recall that the odds in favor of an event with probability \(p\) are \(p : (1 - p)\). Thus, the logistic distribution has the interesting property that the quantiles are the logarithms of the corresponding odds; indeed, this function of \(p\) is sometimes called the logit function. The fact that the median is 0 also follows from symmetry, of course.
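Since the quantile function has the simple logit form, the quartiles (and the quantiles of order 0.1 and 0.9 asked for in the next exercise) are easy to compute directly. A minimal sketch, assuming NumPy:

```python
import numpy as np

def logit(p):
    """Quantile function G^{-1} of the standard logistic distribution."""
    return np.log(p / (1 - p))

print(logit(0.25))                    # first quartile: -ln 3, about -1.0986
print(logit(0.50))                    # median: 0
print(logit(0.75))                    # third quartile: ln 3, about 1.0986
print(logit(np.array([0.1, 0.9])))    # quantiles of order 0.1 and 0.9: about [-2.197, 2.197]
```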

In the quantile app, select the logistic distribution. Keep the default parameter values and note the shape of the probability density function and the distribution function. Find the quantiles of order 0.1 and 0.9.

Moments

Suppose that \( Z \) has the standard logistic distribution. The moment generating function of \( Z \) has a simple representation in terms of the beta function \( B \), and hence also in terms of the gamma function \( \Gamma \).

The moment generating function \( m \) of \( Z \) is given by

\[ m(t) = B(1 + t, 1 - t) = \Gamma(1 + t) \, \Gamma(1 - t), \quad t \in (-1, 1) \]
Details:

Note that \[ m(t) = \int_{-\infty}^\infty e^{t z} \frac{e^z}{\left(1 + e^z\right)^2} dz \] Let \(u = \frac{e^z}{1 + e^z}\) so that \( du = \frac{e^z}{\left(1 + e^z\right)^2} dz \) and \( e^z = \frac{u}{1 - u} \). Hence \[ m(t) = \int_0^1 \left(\frac{u}{1 - u}\right)^t du = \int_0^1 u^t (1 - u)^{-t} \, du \] The last integral, by definition, is \( B(1 + t, 1 - t) \) for \( t \in (-1, 1) \).
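The identity \( m(t) = \Gamma(1 + t) \, \Gamma(1 - t) \) can also be checked numerically by integrating \( e^{t z} g(z) \) directly. A sketch, assuming SciPy; the integrand is rewritten with `logaddexp` purely for numerical stability:

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

def mgf(t):
    """Numerically integrate e^{t z} g(z) over the real line."""
    # e^{t z} g(z) = exp((t + 1) z - 2 log(1 + e^z)), written in a stable form
    integrand = lambda z: np.exp((t + 1) * z - 2 * np.logaddexp(0.0, z))
    value, _ = quad(integrand, -np.inf, np.inf)
    return value

for t in [-0.5, 0.0, 0.3]:
    print(mgf(t), gamma(1 + t) * gamma(1 - t))   # the two columns agree
```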

Since the moment generating function is finite on an open interval containing 0, the random variable \( Z \) has moments of all orders. By symmetry, the odd order moments are 0. The even order moments can be represented in terms of the Bernoulli numbers, named of course for Jacob Bernoulli. Let \( \beta_n \) denote the Bernoulli number of order \( n \in \N \).

Let \( n \in \N \).

  1. If \( n \) is odd then \( \E(Z^n) = 0 \).
  2. If \( n \) is even then \( \E\left(Z^n\right) = (2^n - 2) \pi^n \left|\beta_n\right| \)
Details:
  1. Again, this follows from symmetry
  2. Recall that the moments of \( Z \) can be computed by integrating powers of the quantile function. Hence \[ \E\left(Z^n\right) = \int_0^1 \left[\ln\left(\frac{p}{1 - p}\right)\right]^n dp \] This integral evaluates to the expression above involving the Bernoulli numbers. A numerical check is sketched below.
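The even moment formula can be compared against numerical integration of \( z^n g(z) \). A sketch, assuming SciPy (its `scipy.special.bernoulli` returns the Bernoulli numbers \( \beta_0, \ldots, \beta_n \) as an array):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import bernoulli

def moment(n):
    """E(Z^n) by numerical integration of z^n g(z)."""
    integrand = lambda z: z ** n * np.exp(z - 2 * np.logaddexp(0.0, z))
    value, _ = quad(integrand, -np.inf, np.inf)
    return value

for n in [2, 4]:
    beta_n = abs(bernoulli(n)[-1])                         # |Bernoulli number of order n|
    print(moment(n), (2 ** n - 2) * np.pi ** n * beta_n)   # the two columns agree
```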

In particular, we have the mean and variance.

The mean and variance of \( Z \) are

  1. \(\E(Z) = 0\)
  2. \(\var(Z) = \frac{\pi^2}{3}\)
Details:
  1. Again, \( \E(Z) = 0 \) by symmetry.
  2. The second Bernoulli number is \( \beta_2 = \frac{1}{6} \). Hence \( \var(Z) = \E\left(Z^2\right) = (2^2 - 2) \pi^2 \frac{1}{6} = \frac{\pi^2}{ 3 } \).

In the special distribution simulator, select the logistic distribution. Keep the default parameter values and note the shape and location of the mean \( \pm \) standard deviation bar. Run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.

The skewness and kurtosis of \( Z \) are

  1. \( \skw(Z) = 0 \)
  2. \( \kur(Z) = \frac{21}{5} \)
Details:
  1. Again, \( \skw(Z) = 0 \) by the symmetry of the distribution.
  2. Recall that by symmetry, \( \E(Z) = \E\left(Z^3\right) = 0 \). Also, \( \left|\beta_4\right| = \frac{1}{30} \), so \( \E\left(Z^4\right) = (2^4 - 2) \pi^4 \frac{1}{30} = \frac{7 \pi^4}{15} \). Hence from the usual computational formula for kurtosis, \[ \kur(Z) = \frac{\E\left(Z^4\right)}{[\var(Z)]^2} = \frac{7 \pi^4 / 15}{\pi^4 / 9} = \frac{21}{5} \]

It follows that the excess kurtosis of \( Z \) is \( \kur(Z) - 3 = \frac{6}{5} \).

Related Distributions

The standard logistic distribution has the usual connections with the standard uniform distribution by means of the distribution function and the quantile function given above. Recall that the standard uniform distribution is the continuous uniform distribution on the interval \( (0, 1) \).

Connections with the standard uniform distribution:

  1. If \( Z \) has the standard logistic distribution then \[ U = G(Z) = \frac{e^Z}{1 + e^Z} \] has the standard uniform distribution.
  2. If \( U \) has the standard uniform distribution then \[ Z = G^{-1}(U) = \ln\left(\frac{U}{1 - U}\right) = \ln(U) - \ln(1 - U) \] has the standard logistic distribution.

Since the quantile function has a simple closed form, we can use the usual random quantile method to simulate the standard logistic distribution.
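A minimal sketch of the random quantile method, assuming NumPy; the seed and sample size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
u = rng.uniform(size=100_000)      # standard uniform sample
z = np.log(u / (1 - u))            # apply the logistic quantile function

print(z.mean())                    # near 0, the distribution mean
print(z.var())                     # near pi^2 / 3, about 3.29
print(np.mean(z <= 0))             # near 0.5, since the median is 0
```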

Open the random quantile experiment and select the logistic distribution. Keep the default parameter values and note the shape of the probability density and distribution functions. Run the simulation 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.

The standard logistic distribution also has several simple connections with the standard exponential distribution (the exponential distribution with rate parameter 1).

Connections with the standard exponential distribution:

  1. If \( Z \) has the standard logistic distribution, then \( Y = \ln\left(e^Z + 1\right) \) has the standard exponential distribution.
  2. If \( Y \) has the standard exponential distribution then \( Z = \ln\left(e^Y - 1\right) \) has the standard logistic distribution.
Details:

These results follow from the standard change of variables formula. The transformations, inverses of each other of course, are \( y = \ln\left(e^z + 1\right) \) and \( z = \ln\left(e^y - 1\right) \) for \( z \in \R \) and \( y \in (0, \infty) \). Let \( g \) and \( h \) denote the PDFs of \( Z \) and \( Y \) respectively.

  1. By definition, \( g(z) = e^z \big/ (1 + e^z)^2 \) for \( z \in \R \) so \[ h(y) = g(z) \frac{dz}{dy} = \frac{\exp\left[\ln\left(e^y - 1\right)\right]}{\left(1 + \exp\left[\ln\left(e^y - 1\right)\right]\right)^2} \frac{e^y}{e^y - 1} = e^{-y}, \quad y \in (0, \infty) \] which is the PDF of the standard exponential distribution.
  2. By definition, \( h(y) = e^{-y} \) for \( y \in (0, \infty) \) so \[ g(z) = h(y) \frac{dy}{dz} = \exp\left[-\ln\left(e^z + 1\right)\right] \frac{e^z}{e^z + 1} = \frac{e^z}{\left(e^z + 1\right)^2}, \quad z \in \R \] which is the PDF of the standard logistic distribution.

Suppose that \( X \) and \( Y \) are independent random variables, each with the standard exponential distribution. Then \( Z = \ln(X / Y) \) has the standard logistic distribution.

Details:

For \( z \in \R \), \[ \P(Z \le z) = \P[\ln(X / Y) \le z] = \P\left(X / Y \le e^z\right) = \P\left(Y \ge e^{-z} X\right) \] Recall that \( \P(Y \ge y) = e^{-y} \) for \( y \in (0, \infty) \) and \( X \) has PDF \( x \mapsto e^{-x} \) on \( (0, \infty) \). We condition on \( X \): \[ \P(Z \le z) = \E\left[\P\left(Y \ge e^{-z} X \mid X\right)\right] = \int_0^\infty e^{-e^{-z} x} e^{-x} dx = \int_0^\infty e^{-(e^{-z} + 1)x} dx = \frac{1}{e^{-z} + 1} = \frac{e^z}{1 + e^z} \] As a function of \( z \), this is the distribution function of the standard logistic distribution.
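This result is also easy to check by simulation. A sketch assuming NumPy; the seed and sample size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
x = rng.exponential(size=100_000)   # standard exponential samples
y = rng.exponential(size=100_000)
z = np.log(x / y)

# compare the empirical CDF of Z with G(t) = e^t / (1 + e^t) at a few points
for t in [-1.0, 0.0, 1.5]:
    print(np.mean(z <= t), np.exp(t) / (1 + np.exp(t)))
```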

There are also simple connections between the standard logistic distribution and the Pareto distribution.

Connections with the Pareto distribution:

  1. If \( Z \) has the standard logistic distribution, then \( Y = e^Z + 1 \) has the Pareto distribution with shape parameter 1.
  2. If \(Y\) has the Pareto distribution with shape parameter 1, then \(Z = \ln(Y - 1)\) has the standard logistic distribution.
Details:

These results follow from the basic change of variables theorem. The transformations, inverses of one another of course, are \( y = e^z + 1 \) and \( z = \ln(y - 1) \) for \( z \in \R \) and \( y \in (1, \infty) \). Let \( g \) and \( h \) denote the PDFs of \( Z \) and \( Y \) respectively.

  1. By definition, \( g(z) = e^z \big/ \left(1 + e^z\right)^2 \) for \( z \in \R \). Hence \[ h(y) = g(z) \frac{dz}{dy} = \frac{\exp[\ln(y - 1)]}{\left(1 + \exp[\ln(y - 1)]\right)^2} \frac{1}{y - 1} = \frac{1}{y^2}, \quad y \in (1, \infty) \] which is the PDF of the Pareto distribution with shape parameter 1.
  2. By definition, \( h(y) = 1 / y^2 \) for \( y \in (1, \infty) \). Hence \[ g(z) = h(y) \frac{dy}{dz} = \frac{1}{(e^z + 1)^2} e^z, \quad z \in \R\] which is the PDF of the standard logistic distribution.

Finally, there are simple connections to the extreme value distribution.

If \( X \) and \( Y \) are independent and each has the standard Gumbel distribution, then \( Z = Y - X \) has the standard logistic distribution.

Details:

The distribution function of \( Y \) is \( G(y) = \exp\left(-e^{-y}\right) \) for \( y \in \R \) and the density function of \( X \) is \( g(x) = e^{-x} \exp\left(-e^{-x}\right) \) for \( x \in \R \). For \( z \in \R \), conditioning on \( X \) gives \[ \P(Z \le z) = \P(Y \le X + z) = \E[\P(Y \le X + z \mid X)] = \int_{-\infty}^\infty \exp\left(-e^{-(x + z)}\right) e^{-x} \exp\left(-e^{-x}\right) dx\] Substituting \( u = -e^{-(x + z)} \) gives \[ \P(Z \le z) = \int_{-\infty}^0 e^u \exp(e^z u) e^z du = e^z \int_{-\infty}^0 \exp\left[u(1 + e^z)\right] du = \frac{e^z}{1 + e^z}, \quad z \in \R \] As a function of \( z \), this is the standard logistic distribution function.
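Again, a quick simulation check. This sketch assumes NumPy, whose `gumbel` sampler (with location 0 and scale 1) uses the same CDF \( \exp\left(-e^{-y}\right) \) as above:

```python
import numpy as np

rng = np.random.default_rng(seed=3)
x = rng.gumbel(size=100_000)        # standard Gumbel (location 0, scale 1)
y = rng.gumbel(size=100_000)
z = y - x

# compare the empirical CDF of Z with the standard logistic CDF
for t in [-2.0, 0.0, 1.0]:
    print(np.mean(z <= t), np.exp(t) / (1 + np.exp(t)))
```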

The General Logistic Distribution

The general logistic distribution is the location-scale family associated with the standard logistic distribution.

Suppose that \(Z\) has the standard logistic distribution. For \(a \in \R\) and \( b \in (0, \infty) \), random variable \( X = a + b Z \) has the logistic distribution with location parameter \(a\) and scale parameter \(b\).

Distribution Functions

Analogues of the results above for the general logistic distribution follow easily from basic properties of the location-scale transformation. Suppose that \( X \) has the logistic distribution with location parameter \( a \in \R \) and scale parameter \( b \in (0, \infty) \).

The probability density function \( f \) of \( X \) is given by \[ f(x) = \frac{\exp \left(\frac{x - a}{b} \right)}{b \left[1 + \exp \left(\frac{x - a}{b} \right) \right]^2}, \quad x \in \R \]

  1. \( f \) is symmetric about \( x = a \).
  2. \( f \) increases and then decreases, with mode \( x = a \).
  3. \( f \) is concave upward, then downward, then upward again, with inflection points at \( x = a \pm \ln\left(2 + \sqrt{3}\right) b \).
Details:

Recall that \[ f(x) = \frac{1}{b} g\left(\frac{x - a}{b}\right), \quad x \in \R \] where \( g \) is the standard logistic PDF given above.
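For reference, SciPy's `scipy.stats.logistic` appears to use this same location-scale parameterization (loc = \( a \), scale = \( b \)). A short sketch with arbitrary parameter values:

```python
import numpy as np
from scipy.stats import logistic

# assuming SciPy's loc/scale matches the parameterization in the text
a, b = 2.0, 0.5                     # arbitrary location and scale
X = logistic(loc=a, scale=b)

print(X.pdf(a))                     # 1 / (4 b) = 0.5, the value of f at the mode
print(X.cdf(a))                     # 0.5, since the median is a
print(X.ppf([0.25, 0.75]))          # quartiles a -/+ b ln 3
print(X.mean(), X.var())            # a and b^2 pi^2 / 3
```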

In the special distribution simulator, select the logistic distribution. Vary the parameters and note the shape and location of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.

The distribution function \( F \) of \( X \) is given by \[ F(x) = \frac{\exp \left( \frac{x - a}{b} \right)}{1 + \exp \left( \frac{x - a}{b} \right)}, \quad x \in \R \]

Details:

Recall that \[ F(x) = G\left(\frac{x - a}{b}\right), \quad x \in \R \] where \( G \) is the standard logistic CDF given above.

The quantile function \( F^{-1} \) of \( X \) is given by \[ F^{-1}(p) = a + b \ln \left( \frac{p}{1 - p} \right), \quad p \in (0, 1) \]

  1. The first quartile is \( a - b \ln 3\).
  2. The median is \( a \).
  3. The third quartile is \( a + b \ln 3 \)
Details:

Recall that \( F^{-1}(p) = a + b G^{-1}(p) \) for \( p \in (0, 1) \), where \( G^{-1} \) is the standard logistic quantile function given above.

In the quantile app, select the logistic distribution. Vary the parameters and note the shape and location of the probability density function and the distribution function. For selected values of the parameters, find the quantiles of order 0.1 and 0.9.

Moments

Suppose again that \( X \) has the logistic distribution with location parameter \( a \in \R \) and scale parameter \( b \in (0, \infty) \). Recall that \( B \) denotes the beta function and \( \Gamma \) the gamma function.

The moment generating function \( M \) of \( X \) is given by \[ M(t) = e^{a t} B(1 + b t, 1 - b t) = e^{a t} \Gamma(1 + b t) \, \Gamma(1 - b t), \quad t \in (-1, 1) \]

Details:

Recall that \( M(t) = e^{a t} m(b t) \) where \( m \) is the standard logistic MGF given above.

The mean and variance of \( X \) are

  1. \(\E(X) = a\)
  2. \(\var(X) = b^2 \frac{\pi^2}{3}\)
Details:

By definition we can assume \( X = a + b Z \) where \( Z \) has the standard logistic distribution. Using the mean and variance of \( Z \) given above, we have

  1. \( \E(X) = a + b \E(Z) = a \)
  2. \( \var(X) = b^2 \var(Z) = b^2 \frac{\pi^2}{3} \)

In the special distribution simulator, select the logistic distribution. Vary the parameters and note the shape and location of the mean \( \pm \) standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.

The skewness and kurtosis of \( X \) are

  1. \( \skw(X) = 0 \)
  2. \( \kur(X) = \frac{21}{5} \)
Details:

Recall that skewness and kurtosis are defined in terms of the standard score, and hence are invariant under location-scale transformations. So the skewness and kurtosis of \( X \) are the same as the skewness and kurtosis of \( Z \), given above.

Once again, it follows that the excess kurtosis of \( X \) is \( \kur(X) - 3 = \frac{6}{5} \). The central moments of \( X \) can be given in terms of the Bernoulli numbers. As before, let \( \beta_n \) denote the Bernoulli number of order \( n \in \N \).

Let \( n \in \N \).

  1. If \( n \) is odd then \( \E\left[(X - a)^n\right] = 0 \).
  2. If \( n \) is even then \( \E\left[(X - a)^n\right] = (2^n - 2) \pi^n b^n \left|\beta_n\right| \)
Details:

Again by definition we can take \( X = a + b Z \) where \( Z \) has the standard logistic distribution. Then \( \E\left[(X - a)^n\right] = b^n \E(Z^n) \), so the results follow from the moments of \( Z \) given above.

Related Distributions

The general logistic distribution is a location-scale family, so it is trivially closed under location-scale transformations.

Suppose that \( X \) has the logistic distribution with location parameter \( a \in \R \) and scale parameter \( b \in (0, \infty) \), and that \( c \in \R \) and \( d \in (0, \infty) \). Then \( Y = c + d X \) has the logistic distribution with location parameter \( c + a d \) and scale parameter \( b d \).

Details:

Again by definition we can take \( X = a + b Z \) where \( Z \) has the standard logistic distribution. Then \( Y = c + d X = (c + a d) + (b d) Z \).