
The Cauchy Distribution

The Cauchy distribution, named of course for the ubiquitous Augustin Cauchy, is interesting for a couple of reasons. First, it is a simple family of distributions for which the expected value (and other moments) do not exist. Second, the family is closed under the formation of sums of independent variables, and hence is an infinitely divisible family of distributions.

The Standard Cauchy Distribution

Distribution Functions

The standard Cauchy distribution is a continuous distribution on $\mathbb{R}$ with probability density function $g$ given by $$g(x) = \frac{1}{\pi (1 + x^2)}, \quad x \in \mathbb{R}$$

  1. $g$ is symmetric about $x = 0$.
  2. $g$ increases and then decreases, with mode $x = 0$.
  3. $g$ is concave upward, then downward, and then upward again, with inflection points at $x = \pm \frac{1}{\sqrt{3}}$.
  4. $g(x) \to 0$ as $x \to \infty$ and as $x \to -\infty$.
Details:

Note that $$\int_{-\infty}^{\infty} \frac{1}{1 + x^2} \, dx = \arctan x \Big|_{-\infty}^{\infty} = \frac{\pi}{2} - \left(-\frac{\pi}{2}\right) = \pi$$ and hence $g$ is a PDF. Parts (a)–(d) follow from basic calculus.
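
For readers who want to check the normalization numerically rather than by calculus, here is a minimal Python sketch (assuming NumPy and SciPy are available; the helper name `g` is just our choice, not part of the text):

```python
# Numerical check that the standard Cauchy density integrates to 1.
import numpy as np
from scipy.integrate import quad

def g(x):
    """Standard Cauchy probability density function."""
    return 1.0 / (np.pi * (1.0 + x ** 2))

total, abs_err = quad(g, -np.inf, np.inf)
print(total)  # approximately 1.0
```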

Thus, the graph of $g$ has a simple, symmetric, unimodal shape that is qualitatively (but certainly not quantitatively) like the standard normal probability density function. The probability density function $g$ is obtained by normalizing the function $$x \mapsto \frac{1}{1 + x^2}, \quad x \in \mathbb{R}$$ The graph of this function is known as the witch of Agnesi, named for the Italian mathematician Maria Agnesi.

Open the special distribution simulator and select the Cauchy distribution. Keep the default parameter values for the standard Cauchy distribution, and note the shape and location of the probability density function. Run the simulation 1000 times and compare the empirical density function to the probability density function.

The standard Cauchy distribution function $G$ is given by $$G(x) = \frac{1}{2} + \frac{1}{\pi} \arctan x$$ for $x \in \mathbb{R}$.

Details:

For $x \in \mathbb{R}$, $$G(x) = \int_{-\infty}^{x} g(t) \, dt = \frac{1}{\pi} \arctan t \Big|_{-\infty}^{x} = \frac{1}{\pi} \arctan x + \frac{1}{2}$$

The standard Cauchy quantile function $G^{-1}$ is given by $$G^{-1}(p) = \tan\left[\pi\left(p - \frac{1}{2}\right)\right]$$ for $p \in (0, 1)$. In particular,

  1. The first quartile is $G^{-1}\left(\frac{1}{4}\right) = -1$
  2. The median is $G^{-1}\left(\frac{1}{2}\right) = 0$
  3. The third quartile is $G^{-1}\left(\frac{3}{4}\right) = 1$
Details:

As usual, $G^{-1}$ is computed from the CDF $G$ in [3] by solving $G(x) = p$ for $x$ in terms of $p$.

Of course, the fact that the median is 0 also follows from the symmetry of the distribution, as does the fact that $G^{-1}(1 - p) = -G^{-1}(p)$ for $p \in (0, 1)$.

Open the quantile app and select the Cauchy distribution. Keep the default parameter values and note the shape of the distribution and probability density functions. Compute the quantiles of order 0.1 and 0.9.
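
If you prefer to check the quantiles outside the app, the following sketch computes the quantiles of order 0.1 and 0.9 from the closed-form quantile function, with scipy.stats.cauchy used only as a cross-check (NumPy/SciPy assumed; the helper name `G_inv` is ours):

```python
# Quantiles of the standard Cauchy distribution from the quantile function G^{-1}.
import math
from scipy.stats import cauchy

def G_inv(p):
    """Standard Cauchy quantile function: tan(pi * (p - 1/2))."""
    return math.tan(math.pi * (p - 0.5))

for p in (0.1, 0.9):
    print(p, G_inv(p), cauchy.ppf(p))  # approximately -3.0777 and 3.0777
```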

Moments

Suppose that random variable X has the standard Cauchy distribution. As we noted in the introduction, part of the fame of this distribution comes from the fact that the expected value does not exist.

$\mathbb{E}(X)$ does not exist.

Details:

By definition, $\mathbb{E}(X) = \int_{-\infty}^{\infty} x g(x) \, dx$. For the improper integral to exist, even as an extended real number, at least one of the integrals $\int_{a}^{\infty} x g(x) \, dx$ and $\int_{-\infty}^{a} x g(x) \, dx$ must be finite, for some (and hence every) $a \in \mathbb{R}$. But by a simple substitution, $$\int_{a}^{\infty} x g(x) \, dx = \int_{a}^{\infty} \frac{x}{\pi(1 + x^2)} \, dx = \frac{1}{2 \pi} \ln\left(1 + x^2\right) \Big|_{a}^{\infty} = \infty$$ and similarly, $\int_{-\infty}^{a} x g(x) \, dx = -\infty$.

By symmetry, if the expected value did exist, it would have to be 0, just like the median and the mode, but alas the mean does not exist. Moreover, this is not just an artifact of how mathematicians define improper integrals, but has real consequences. Recall that if we think of the probability distribution as a mass distribution, then the mean is the center of mass, the balance point where the moment (in the sense of physics) to the right is balanced by the moment to the left. But as the details of [6] show, the moments to the right and to the left at any point $a \in \mathbb{R}$ are infinite. In this sense, 0 is no more important than any other $a \in \mathbb{R}$. Finally, if you are not convinced by the argument from physics, the next exercise may convince you that the law of large numbers fails as well.

Open the special distribution simulator and select the Cauchy distribution. Keep the default parameter values, which give the standard Cauchy distribution. Run the simulation 1000 times and note the behavior of the sample mean.
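
If you do not have the simulator at hand, the following sketch (assuming NumPy; the seed and sample size are arbitrary choices) illustrates the same phenomenon: the running sample mean of standard Cauchy variables refuses to settle down.

```python
# Running sample means of standard Cauchy variables: the law of large numbers fails.
import numpy as np

rng = np.random.default_rng(seed=3)
x = rng.standard_cauchy(size=100_000)
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)

for n in (10, 100, 1_000, 10_000, 100_000):
    print(n, running_mean[n - 1])
# The printed means typically jump around erratically instead of converging.
```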

Earlier we noted some superficial similarities between the standard Cauchy distribution and the standard normal distribution (unimodal, symmetric about 0). But clearly there are huge quantitative differences. The Cauchy distribution is a heavy-tailed distribution because the probability density function $g(x)$ decreases at a polynomial rate as $x \to \infty$ and $x \to -\infty$, as opposed to an exponential rate. This is yet another way to understand why the expected value does not exist.

In terms of the higher moments, $\mathbb{E}(X^n)$ does not exist if $n$ is odd, and is $\infty$ if $n$ is even. It follows that the moment generating function $m(t) = \mathbb{E}\left(e^{t X}\right)$ cannot be finite in an interval about 0. In fact, $m(t) = \infty$ for every $t \neq 0$, so this generating function is of no use to us. But every distribution on $\mathbb{R}$ has a characteristic function, and for the Cauchy distribution, this generating function will be quite useful.

$X$ has characteristic function $\chi_0$ given by $\chi_0(t) = \exp(-|t|)$ for $t \in \mathbb{R}$.

Details:

By definition, $$\chi_0(t) = \mathbb{E}\left(e^{i t X}\right) = \int_{-\infty}^{\infty} e^{i t x} \frac{1}{\pi(1 + x^2)} \, dx$$ We will compute this integral by evaluating a related contour integral in the complex plane using, appropriately enough, Cauchy's integral formula (named for you know who).

Suppose first that $t \ge 0$. For $r > 1$, let $\Gamma_r$ denote the curve in the complex plane consisting of the line segment $L_r$ on the $x$-axis from $-r$ to $r$ and the upper half circle $C_r$ of radius $r$ centered at the origin. We give $\Gamma_r$ the usual counter-clockwise orientation. On the one hand we have $$\int_{\Gamma_r} \frac{e^{i t z}}{\pi(1 + z^2)} \, dz = \int_{L_r} \frac{e^{i t z}}{\pi(1 + z^2)} \, dz + \int_{C_r} \frac{e^{i t z}}{\pi(1 + z^2)} \, dz$$ On $L_r$, $z = x$ and $dz = dx$ so $$\int_{L_r} \frac{e^{i t z}}{\pi(1 + z^2)} \, dz = \int_{-r}^{r} \frac{e^{i t x}}{\pi(1 + x^2)} \, dx$$ On $C_r$, let $z = x + i y$. Then $e^{i t z} = e^{-t y + i t x} = e^{-t y}\left[\cos(t x) + i \sin(t x)\right]$. Since $y \ge 0$ on $C_r$ and $t \ge 0$, we have $\left|e^{i t z}\right| \le 1$. Also, $\left|\frac{1}{1 + z^2}\right| \le \frac{1}{r^2 - 1}$ on $C_r$. It follows that $$\left|\int_{C_r} \frac{e^{i t z}}{\pi(1 + z^2)} \, dz\right| \le \frac{1}{\pi(r^2 - 1)} \pi r = \frac{r}{r^2 - 1} \to 0 \text{ as } r \to \infty$$ On the other hand, $e^{i t z} / \left[\pi(1 + z^2)\right]$ has one singularity inside $\Gamma_r$, at $i$. The residue is $$\lim_{z \to i} (z - i) \frac{e^{i t z}}{\pi(1 + z^2)} = \lim_{z \to i} \frac{e^{i t z}}{\pi(z + i)} = \frac{e^{-t}}{2 \pi i}$$ Hence by Cauchy's integral formula, $\int_{\Gamma_r} \frac{e^{i t z}}{\pi(1 + z^2)} \, dz = 2 \pi i \frac{e^{-t}}{2 \pi i} = e^{-t}$. Putting the pieces together we have $$e^{-t} = \int_{-r}^{r} \frac{e^{i t x}}{\pi(1 + x^2)} \, dx + \int_{C_r} \frac{e^{i t z}}{\pi(1 + z^2)} \, dz$$ Letting $r \to \infty$ gives $$\int_{-\infty}^{\infty} \frac{e^{i t x}}{\pi(1 + x^2)} \, dx = e^{-t}$$ For $t < 0$, we can use the substitution $u = -x$ and our previous result to get $$\int_{-\infty}^{\infty} \frac{e^{i t x}}{\pi(1 + x^2)} \, dx = \int_{-\infty}^{\infty} \frac{e^{i (-t) u}}{\pi(1 + u^2)} \, du = e^{t}$$
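
Unlike the mean, the characteristic function is the expected value of a bounded function, so a plain Monte Carlo average does converge. The following sketch (assuming NumPy; seed and sample size are arbitrary) gives a numerical check of the formula, quite apart from the contour-integral proof.

```python
# Monte Carlo check of the standard Cauchy characteristic function chi_0(t) = exp(-|t|).
import numpy as np

rng = np.random.default_rng(seed=7)
x = rng.standard_cauchy(size=1_000_000)

for t in (-2.0, -0.5, 0.5, 2.0):
    empirical = np.mean(np.exp(1j * t * x))
    print(t, empirical.real, np.exp(-abs(t)))
# Real parts should be close to exp(-|t|); imaginary parts are near 0 by symmetry.
```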

Related Distributions

The standard Cauchy distribution is a member of the Student $t$ family of distributions.

The standard Cauchy distribution is the Student t distribution with one degree of freedom.

Details:

The Student $t$ distribution with one degree of freedom has PDF $g$ given by $$g(t) = \frac{\Gamma(1)}{\sqrt{\pi}\, \Gamma(1/2)} \left(1 + t^2\right)^{-1} = \frac{1}{\pi(1 + t^2)}, \quad t \in \mathbb{R}$$ which is the standard Cauchy PDF.

The standard Cauchy distribution also arises naturally as the ratio of independent standard normal variables.

Suppose that Z and W are independent random variables, each with the standard normal distribution. Then X=Z/W has the standard Cauchy distribution.

Details:

By definition, $W^2$ has the chi-square distribution with 1 degree of freedom, and is independent of $Z$. Hence, also by definition, $Z / \sqrt{W^2} = Z / |W|$ has the Student $t$ distribution with 1 degree of freedom. Since $Z$ is symmetric about 0 and independent of $W$, the ratio $X = Z / W$ has the same distribution as $Z / |W|$, so the result follows from [9].
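
As a quick empirical illustration (assuming NumPy; seed and sample size are arbitrary), the quartiles of a large sample of ratios $Z / W$ should be close to the standard Cauchy quartiles $-1$, $0$, $1$ from [4]:

```python
# Ratio of independent standard normals should be standard Cauchy.
import numpy as np

rng = np.random.default_rng(seed=11)
z = rng.standard_normal(1_000_000)
w = rng.standard_normal(1_000_000)
x = z / w

print(np.quantile(x, [0.25, 0.5, 0.75]))  # approximately [-1, 0, 1]
```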

If $X$ has the standard Cauchy distribution, then so does $Y = 1 / X$.

Details:

This is a corollary of [10]. Suppose that Z and W are independent variables, each with the standard normal distribution. Then X=Z/W has the standard Cauchy distribution. But then 1/X=W/Z also has the standard Cauchy distribution.

The standard Cauchy distribution has the usual connections to the standard uniform distribution via the distribution function in [3] and the quantile function in [4].

The standard Cauchy distribution and the standard uniform distribution are related as follows:

  1. If $U$ has the standard uniform distribution then $X = G^{-1}(U) = \tan\left[\pi\left(U - \frac{1}{2}\right)\right]$ has the standard Cauchy distribution.
  2. If $X$ has the standard Cauchy distribution then $U = G(X) = \frac{1}{2} + \frac{1}{\pi} \arctan(X)$ has the standard uniform distribution.
Details:

Recall that if U has the standard uniform distribution, then G1(U) has distribution function G. Conversely, if X has distribution function G, then since G is strictly increasing, G(X) has the standard uniform distribution.

Since the quantile function has a simple, closed form (at least in terms of a standard special function), it's easy to simulate the standard Cauchy distribution using the random quantile method.
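
Here is a minimal sketch of the random quantile method in Python (NumPy assumed; the seed is arbitrary):

```python
# Random quantile method for the standard Cauchy distribution:
# if U is uniform on (0, 1), then X = tan(pi * (U - 1/2)) is standard Cauchy.
import numpy as np

rng = np.random.default_rng(seed=5)
u = rng.uniform(size=10)
x = np.tan(np.pi * (u - 0.5))
print(x)
```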

Open the random quantile experiment and select the Cauchy distribution. Keep the default parameter values and note again the shape and location of the distribution and probability density functions. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function. Note the behavior of the empirical mean and standard deviation.

For the Cauchy distribution, the random quantile method has a nice physical interpretation. Suppose that a light source is 1 unit away from position 0 of an infinite, straight wall. We shine the light at the wall at an angle $\Theta$ (to the perpendicular) that is uniformly distributed on the interval $\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$. Then the position $X = \tan \Theta$ of the light beam on the wall has the standard Cauchy distribution. Note that this follows since $\Theta$ has the same distribution as $\pi\left(U - \frac{1}{2}\right)$ where $U$ has the standard uniform distribution.

Open the Cauchy app and keep the default parameter values.

  1. Run the experiment in single-step mode a few times, to make sure that you understand the experiment.
  2. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function. Note the behavior of the empirical mean and standard deviation.

The General Cauchy Distribution

Like so many other standard distributions, the Cauchy distribution is generalized by adding location and scale parameters. Most of the results in this subsection follow immediately from results for the standard Cauchy distribution above and general results for location-scale families.

Suppose that $Z$ has the standard Cauchy distribution and that $a \in \mathbb{R}$ and $b \in (0, \infty)$. Then $X = a + b Z$ has the Cauchy distribution with location parameter $a$ and scale parameter $b$.

Distribution Functions

Suppose that $X$ has the Cauchy distribution with location parameter $a \in \mathbb{R}$ and scale parameter $b \in (0, \infty)$.

$X$ has probability density function $f$ given by $$f(x) = \frac{b}{\pi\left[b^2 + (x - a)^2\right]}, \quad x \in \mathbb{R}$$

  1. $f$ is symmetric about $x = a$.
  2. $f$ increases and then decreases, with mode $x = a$.
  3. $f$ is concave upward, then downward, then upward again, with inflection points at $x = a \pm \frac{1}{\sqrt{3}} b$.
  4. $f(x) \to 0$ as $x \to \infty$ and as $x \to -\infty$.
Details:

Recall that $f(x) = \frac{1}{b} g\left(\frac{x - a}{b}\right)$ where $g$ is the standard Cauchy PDF in [1].
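
The location-scale relation translates directly into code. The sketch below (NumPy/SciPy assumed; the helper name `f` and the parameter values are ours) evaluates the closed-form density and cross-checks it against scipy.stats.cauchy with `loc=a`, `scale=b`:

```python
# General Cauchy PDF via the closed form f(x) = b / (pi * (b^2 + (x - a)^2)),
# cross-checked against scipy.stats.cauchy.
import numpy as np
from scipy.stats import cauchy

def f(x, a, b):
    return b / (np.pi * (b ** 2 + (x - a) ** 2))

a, b = 2.0, 3.0
x = np.linspace(-10.0, 10.0, 5)
print(f(x, a, b))
print(cauchy.pdf(x, loc=a, scale=b))  # same values
```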

Open the special distribution simulator and select the Cauchy distribution. Vary the parameters and note the location and shape of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.

$X$ has distribution function $F$ given by $$F(x) = \frac{1}{2} + \frac{1}{\pi} \arctan\left(\frac{x - a}{b}\right), \quad x \in \mathbb{R}$$

Details:

Recall that $F(x) = G\left(\frac{x - a}{b}\right)$ where $G$ is the standard Cauchy CDF in [3].

$X$ has quantile function $F^{-1}$ given by $$F^{-1}(p) = a + b \tan\left[\pi\left(p - \frac{1}{2}\right)\right], \quad p \in (0, 1)$$ In particular,

  1. The first quartile is $F^{-1}\left(\frac{1}{4}\right) = a - b$.
  2. The median is $F^{-1}\left(\frac{1}{2}\right) = a$.
  3. The third quartile is $F^{-1}\left(\frac{3}{4}\right) = a + b$.
Details:

Recall that $F^{-1}(p) = a + b G^{-1}(p)$ where $G^{-1}$ is the standard Cauchy quantile function in [4].

Open the quantile app and select the Cauchy distribution. Vary the parameters and note the shape and location of the distribution and probability density functions. For selected values of the parameters, compute the quantiles of order 0.1 and 0.9.

Moments

Suppose again that $X$ has the Cauchy distribution with location parameter $a \in \mathbb{R}$ and scale parameter $b \in (0, \infty)$. Since the mean and other moments of the standard Cauchy distribution do not exist, they don't exist for the general Cauchy distribution either.

Open the special distribution simulator and select the Cauchy distribution. For selected values of the parameters, run the simulation 1000 times and note the behavior of the sample mean.

But of course the characteristic function of the Cauchy distribution exists and is easy to obtain from the characteristic function of the standard distribution.

$X$ has characteristic function $\chi$ given by $\chi(t) = \exp\left(a i t - b |t|\right)$ for $t \in \mathbb{R}$.

Details:

Recall that $\chi(t) = e^{i t a} \chi_0(b t)$ where $\chi_0$ is the standard Cauchy characteristic function in [8].

Related Distributions

Like all location-scale families, the general Cauchy distribution is closed under location-scale transformations.

Suppose that $X$ has the Cauchy distribution with location parameter $a \in \mathbb{R}$ and scale parameter $b \in (0, \infty)$, and that $c \in \mathbb{R}$ and $d \in (0, \infty)$. Then $Y = c + d X$ has the Cauchy distribution with location parameter $c + d a$ and scale parameter $b d$.

Details:

Once again, we give the standard proof. By definition [15] we can take $X = a + b Z$ where $Z$ has the standard Cauchy distribution. But then $Y = c + d X = (c + a d) + (b d) Z$.

Much more interesting is the fact that the Cauchy family is closed under sums of independent variables. In fact, this is the main reason that the generalization to a location-scale family is justified.

Suppose that $X_i$ has the Cauchy distribution with location parameter $a_i \in \mathbb{R}$ and scale parameter $b_i \in (0, \infty)$ for $i \in \{1, 2\}$, and that $X_1$ and $X_2$ are independent. Then $Y = X_1 + X_2$ has the Cauchy distribution with location parameter $a_1 + a_2$ and scale parameter $b_1 + b_2$.

Details:

This follows easily from the characteristic function in [22]. Let $\chi_i$ denote the characteristic function of $X_i$ for $i = 1, 2$ and $\chi$ the characteristic function of $Y$. Then $$\chi(t) = \chi_1(t) \chi_2(t) = \exp\left(a_1 i t - b_1 |t|\right) \exp\left(a_2 i t - b_2 |t|\right) = \exp\left[(a_1 + a_2) i t - (b_1 + b_2) |t|\right]$$
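
For an empirical illustration (assuming NumPy; parameters, seed, and sample size are arbitrary), the quartiles of simulated sums should be close to $a_1 + a_2$ and $(a_1 + a_2) \pm (b_1 + b_2)$:

```python
# Sum of independent Cauchy variables: check the quartiles of X1 + X2.
import numpy as np

rng = np.random.default_rng(seed=13)
a1, b1, a2, b2 = 1.0, 2.0, -3.0, 0.5
x1 = a1 + b1 * rng.standard_cauchy(1_000_000)
x2 = a2 + b2 * rng.standard_cauchy(1_000_000)
y = x1 + x2

print(np.quantile(y, [0.25, 0.5, 0.75]))
print([a1 + a2 - (b1 + b2), a1 + a2, a1 + a2 + (b1 + b2)])  # theoretical quartiles
```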

As a corollary, the Cauchy distribution is stable, with index α=1:

If $\boldsymbol{X} = (X_1, X_2, \ldots, X_n)$ is a sequence of independent variables, each with the Cauchy distribution with location parameter $a \in \mathbb{R}$ and scale parameter $b \in (0, \infty)$, then $X_1 + X_2 + \cdots + X_n$ has the Cauchy distribution with location parameter $n a$ and scale parameter $n b$.

Another corollary is the strange property that the sample mean of a random sample from a Cauchy distribution has that same Cauchy distribution. No wonder the expected value does not exist!

Suppose that $\boldsymbol{X} = (X_1, X_2, \ldots, X_n)$ is a sequence of independent random variables, each with the Cauchy distribution with location parameter $a \in \mathbb{R}$ and scale parameter $b \in (0, \infty)$. (That is, $\boldsymbol{X}$ is a random sample of size $n$ from the Cauchy distribution.) Then the sample mean $M = \frac{1}{n} \sum_{i=1}^{n} X_i$ also has the Cauchy distribution with location parameter $a$ and scale parameter $b$.

Details:

From the stability result [25], $Y = \sum_{i=1}^{n} X_i$ has the Cauchy distribution with location parameter $n a$ and scale parameter $n b$. But then by [23], $M = Y / n$ has the Cauchy distribution with location parameter $a$ and scale parameter $b$.
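
A short simulation (NumPy assumed; parameters, seed, and sample sizes are arbitrary) makes the point vividly: the quartiles of many sample means are close to $a - b$, $a$, $a + b$, the quartiles of a single observation.

```python
# The sample mean of a Cauchy random sample has the same Cauchy distribution as one observation.
import numpy as np

rng = np.random.default_rng(seed=17)
a, b, n, replications = 1.0, 2.0, 50, 100_000
samples = a + b * rng.standard_cauchy((replications, n))
means = samples.mean(axis=1)

print(np.quantile(means, [0.25, 0.5, 0.75]))  # approximately [a - b, a, a + b]
```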

The next result shows explicitly that the Cauchy distribution is infinitely divisible. But of course, infinite divisibility is also a consequence of stability.

Suppose that $a \in \mathbb{R}$ and $b \in (0, \infty)$. For every $n \in \mathbb{N}_+$ the Cauchy distribution with location parameter $a$ and scale parameter $b$ is the distribution of the sum of $n$ independent variables, each of which has the Cauchy distribution with location parameter $a / n$ and scale parameter $b / n$.

Our next result is a very slight generalization of the reciprocal result in [11] above for the standard Cauchy distribution.

Suppose that $X$ has the Cauchy distribution with location parameter 0 and scale parameter $b \in (0, \infty)$. Then $Y = 1 / X$ has the Cauchy distribution with location parameter 0 and scale parameter $1 / b$.

Details:

$X$ has the same distribution as $b Z$ where $Z$ has the standard Cauchy distribution. Hence $\frac{1}{X}$ has the same distribution as $\frac{1}{b} \frac{1}{Z}$. But by [11], $\frac{1}{Z}$ also has the standard Cauchy distribution, so $\frac{1}{b} \frac{1}{Z}$ has the Cauchy distribution with location parameter 0 and scale parameter $1 / b$.

As with its standard cousin, the general Cauchy distribution has simple connections with the standard uniform distribution via the distribution function in [18] and quantile function in [19], and in particular, can be simulated via the random quantile method.

Suppose that $a \in \mathbb{R}$ and $b \in (0, \infty)$.

  1. If $U$ has the standard uniform distribution, then $X = F^{-1}(U) = a + b \tan\left[\pi\left(U - \frac{1}{2}\right)\right]$ has the Cauchy distribution with location parameter $a$ and scale parameter $b$.
  2. If $X$ has the Cauchy distribution with location parameter $a$ and scale parameter $b$, then $U = F(X) = \frac{1}{2} + \frac{1}{\pi} \arctan\left(\frac{X - a}{b}\right)$ has the standard uniform distribution.
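
Both directions can be sketched in a few lines (NumPy assumed; parameters and seed are arbitrary): simulate $X$ from a uniform variable via $F^{-1}$, then transform back with $F$ and check that the result looks uniform.

```python
# Random quantile method for the general Cauchy distribution, and the reverse transformation.
import numpy as np

rng = np.random.default_rng(seed=19)
a, b = 2.0, 0.5

u = rng.uniform(size=1_000_000)
x = a + b * np.tan(np.pi * (u - 0.5))          # Cauchy with location a, scale b
u_back = 0.5 + np.arctan((x - a) / b) / np.pi  # should be standard uniform again

print(np.quantile(u_back, [0.1, 0.5, 0.9]))    # approximately [0.1, 0.5, 0.9]
```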

Open the random quantile experiment and select the Cauchy distribution. Vary the parameters and note again the shape and location of the distribution and probability density functions. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function. Note the behavior of the empirical mean and standard deviation.

As before, the random quantile method has a nice physical interpretation. Suppose that a light source is $b$ units away from position $a$ of an infinite, straight wall. We shine the light at the wall at an angle $\Theta$ (to the perpendicular) that is uniformly distributed on the interval $\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$. Then the position $X = a + b \tan \Theta$ of the light beam on the wall has the Cauchy distribution with location parameter $a$ and scale parameter $b$.

Open the Cauchy distribution app. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function. Note the behavior of the empirical mean and standard deviation.