As we have noted before, the normal distribution is perhaps the most important distribution in the study of mathematical statistics, in part because of the central limit theorem. As a consequence of this theorem, measured quantities that are subject to numerous small, random errors will have, at least approximately, normal distributions. Such variables are ubiquitous in statistical experiments, in subjects varying from the physical and biological sciences to the social sciences.
In this section, we will study estimation problems in the two-sample normal model and in the bivariate normal model. This section parallels the section on tests in the two-sample normal model in the chapter on hypothesis testing.
Suppose that \(\bs{X} = (X_1, X_2, \ldots, X_m)\) is a random sample of size \(m\) from the normal distribution with mean \(\mu\) and standard deviation \(\sigma\), and that \(\bs{Y} = (Y_1, Y_2, \ldots, Y_n)\) is a random sample of size \(n\) from the normal distribution with mean \(\nu\) and standard deviation \(\tau\). Moreover, suppose that the samples \(\bs{X}\) and \(\bs{Y}\) are independent. Usually, the parameters are unknown, so the parameter space for the vector of parameters \((\mu, \nu, \sigma, \tau)\) is \(\R^2 \times (0, \infty)^2\).
This type of situation arises frequently when the random variables represent a measurement of interest for the objects of the population, and the samples correspond to two different treatments. For example, we might be interested in the blood pressure of a certain population of patients. The \(\bs{X}\) vector records the blood pressures of a control sample, while the \(\bs{Y}\) vector records the blood pressures of the sample receiving a new drug. Similarly, we might be interested in the yield of an acre of corn. The \(\bs{X}\) vector records the yields of a sample receiving one type of fertilizer, while the \(\bs{Y}\) vector records the yields of a sample receiving a different type of fertilizer.
Usually our interest is in a comparison of the parameters (either the means or standard deviations) for the two sampling distributions. In this section we will construct confidence intervals for the difference of the distribution means \( \nu - \mu \) and for the ratio of the distribution variances \( \tau^2 / \sigma^2 \). As with previous estimation problems, the construction depends on finding appropriate pivot variables.
For a generic sample \(\bs{U} = (U_1, U_2, \ldots, U_k)\) from a distribution with mean \(a\), we will use our standard notation for the sample mean and for the sample variance. \begin{align} M(\bs{U}) & = \frac{1}{k} \sum_{i=1}^k U_i \\ S^2(\bs{U}) & = \frac{1}{k - 1} \sum_{i=1}^k [U_i - M(\bs{U})]^2 \end{align}
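As a computational aside, these statistics are straightforward to compute; the following minimal Python sketch (the function names are ours) mirrors the definitions above.

```python
from typing import Sequence

def sample_mean(u: Sequence[float]) -> float:
    """M(U): the average of the sample values."""
    return sum(u) / len(u)

def sample_variance(u: Sequence[float]) -> float:
    """S^2(U): sum of squared deviations from M(U), divided by k - 1."""
    m = sample_mean(u)
    return sum((x - m) ** 2 for x in u) / (len(u) - 1)
```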
Let \( p \in (0, 1) \) and let \(j, \, k \in \N_+ \). As usual, let \( z(p) \) denote the quantile of order \( p \) for the standard normal distribution, \( t_k(p) \) the quantile of order \( p \) for the student \( t \) distribution with \( k \) degrees of freedom, and \( f_{j, k}(p) \) the quantile of order \( p \) for the \( F \) distribution with \( j \) degrees of freedom in the numerator and \( k \) degrees of freedom in the denominator.
Recall that by symmetry, \(z(p) = -z(1 - p)\) and \( t_k(p) = -t_k(1 - p) \) for \( p \in (0, 1) \) and \( k \in \N_+ \). On the other hand, there is no simple relationship between the left and right tail probabilities of the \( F \) distribution.
First we will construct confidence intervals for \( \nu - \mu \) under the assumption that the distribution variances \( \sigma^2 \) and \( \tau^2 \) are known. This is not always an artificial assumption. As in the one sample normal model, the variances are sometimes stable, and hence are at least approximately known, while the means change under different treatments. First recall the following basic facts:
The difference of the sample means \(M(\bs{Y}) - M(\bs{X})\) has the normal distribution with mean \(\nu - \mu\) and variance \(\sigma^2 / m + \tau^2 / n\). Hence the standard score of the difference of the sample means \[ Z = \frac{[M(\bs{Y}) - M(\bs{X})] - (\nu - \mu)}{\sqrt{\sigma^2 / m + \tau^2 / n}} \] has the standard normal distribution. Thus, this variable is a pivot variable for \( \nu - \mu \) when \( \sigma, \tau\) are known.
The basic confidence interval and upper and lower bound are now easy to construct.
For \( \alpha \in (0, 1) \),
(a) A \( 1 - \alpha \) confidence interval for \( \nu - \mu \) is \[ \left[ M(\bs{Y}) - M(\bs{X}) - z\left(1 - \frac{\alpha}{2}\right) \sqrt{\frac{\sigma^2}{m} + \frac{\tau^2}{n}}, \; M(\bs{Y}) - M(\bs{X}) + z\left(1 - \frac{\alpha}{2}\right) \sqrt{\frac{\sigma^2}{m} + \frac{\tau^2}{n}} \right] \]
(b) A \( 1 - \alpha \) confidence lower bound for \( \nu - \mu \) is \[ M(\bs{Y}) - M(\bs{X}) - z(1 - \alpha) \sqrt{\frac{\sigma^2}{m} + \frac{\tau^2}{n}} \]
(c) A \( 1 - \alpha \) confidence upper bound for \( \nu - \mu \) is \[ M(\bs{Y}) - M(\bs{X}) + z(1 - \alpha) \sqrt{\frac{\sigma^2}{m} + \frac{\tau^2}{n}} \]
The variable \( Z \) given above has the standard normal distribution. Hence each of the following events has probability \( 1 - \alpha \) by definition of the quantiles: (a) \( -z(1 - \alpha/2) \lt Z \lt z(1 - \alpha/2) \), (b) \( Z \lt z(1 - \alpha) \), (c) \( Z \gt -z(1 - \alpha) \).
In each case, solving the inequality for \( \nu - \mu \) gives the result.
The two-sided interval in part (a) is the symmetric interval corresponding to \( \alpha / 2 \) in both tails of the standard normal distribution. As usual, we can construct more general two-sided intervals by partitioning \( \alpha \) between the left and right tails in any way that we please.
For every \(\alpha, \, p \in (0, 1)\), a \(1 - \alpha\) confidence interval for \(\nu - \mu\) is \[ \left[M(\bs{Y}) - M(\bs{X}) - z(1 - \alpha p) \sqrt{\frac{\sigma^2}{m} + \frac{\tau^2}{n}}, M(\bs{Y}) - M(\bs{X}) - z(\alpha - p \alpha) \sqrt{\frac{\sigma^2}{m} + \frac{\tau^2}{n}} \right]\]
From the distribution of the pivot variable and the definition of the quantile function, \[ \P \left[ z(\alpha - p \alpha) \lt \frac{[M(\bs{Y}) - M(\bs{X})] - (\nu - \mu)}{\sqrt{\sigma^2 / m + \tau^2 / n}} \lt z(1 - p \alpha) \right] = 1 - \alpha \] Solving for \(\nu - \mu\) in the inequality gives the confidence interval.
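Numerically, the general interval is easy to compute from summary statistics. A minimal Python sketch (assuming scipy; the function name and arguments are ours):

```python
from math import sqrt
from scipy.stats import norm

def two_sample_z_interval(m_x, m_y, sigma, tau, m, n, alpha=0.05, p=0.5):
    """Two-sided 1 - alpha confidence interval for nu - mu with known
    standard deviations sigma and tau; p partitions alpha between the tails."""
    se = sqrt(sigma ** 2 / m + tau ** 2 / n)
    diff = m_y - m_x
    # nu - mu lies between diff - z(1 - p*alpha)*se and diff - z(alpha - p*alpha)*se
    return (diff - norm.ppf(1 - p * alpha) * se,
            diff - norm.ppf(alpha - p * alpha) * se)
```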
The following theorem gives some basic properties of the length of this interval.
The (deterministic) length of the general two-sided confidence interval is \[ L = [z(1 - \alpha p) - z(\alpha - \alpha p)] \sqrt{\frac{\sigma^2}{m} + \frac{\tau^2}{n}} \]
(a) \( L \) decreases as \( m \) and \( n \) increase.
(b) \( L \) increases as \( \sigma \) or \( \tau \) increases.
(c) With \( p \) fixed, \( L \) increases as the confidence level \( 1 - \alpha \) increases.
(d) For fixed \( \alpha \), \( L \) is minimized when \( p = \frac{1}{2} \).
Part (a) means that we can make the estimate more precise by increasing either or both sample sizes. Part (b) means that the estimate becomes less precise as the variance in either distribution increases. Part (c) we have seen before. All other things being equal, we can increase the confidence level only at the expense of making the estimate less precise. Part (d) means that the symmetric, equal-tail confidence interval is the best of the two-sided intervals.
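Part (d) can also be checked numerically: the following sketch (assuming scipy) evaluates the quantile difference \( z(1 - p \alpha) - z(\alpha - p \alpha) \) over a grid of values of \( p \) and finds the minimum at \( p = \frac{1}{2} \).

```python
from scipy.stats import norm

alpha = 0.1
ps = [i / 100 for i in range(1, 100)]
widths = [norm.ppf(1 - p * alpha) - norm.ppf(alpha - p * alpha) for p in ps]
width, p_opt = min(zip(widths, ps))
print(p_opt, width)  # 0.5, approximately 3.29 (that is, 2 * z(0.95))
```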
Our next method is a construction of confidence intervals for the difference of the means \(\nu - \mu\) without needing to know the standard deviations \(\sigma\) and \(\tau\). However, there is a cost; we will assume that the standard deviations are the same, \(\sigma = \tau\), but the common value is unknown. This assumption is reasonable if there is an inherent variability in the measurement variables that does not change even when different treatments are applied to the objects in the population. We need to recall some basic facts from our study of special properties of normal samples.
The pooled estimate of the common variance \(\sigma^2 = \tau^2\) is \[ S^2(\bs{X}, \bs{Y}) = \frac{(m - 1) S^2(\bs{X}) + (n - 1) S^2(\bs{Y})}{m + n - 2} \] The random variable \[ T = \frac{\left[M(\bs{Y}) - M(\bs{X})\right] - (\nu - \mu)}{S(\bs{X}, \bs{Y}) \sqrt{1 / m + 1 / n}} \] has the student \( t \) distribution with \( m + n - 2 \) degrees of freedom.
Note that \( S^2(\bs{X}, \bs{Y}) \) is a weighted average of the sample variances, with the degrees of freedom as the weight factors. Note also that \( T \) is a pivot variable for \( \nu - \mu \) and so we can construct confidence intervals for \( \nu - \mu \) in the usual way.
For \( \alpha \in (0, 1) \),
(a) A \( 1 - \alpha \) confidence interval for \( \nu - \mu \) is \[ \left[ M(\bs{Y}) - M(\bs{X}) - t_{m+n-2}\left(1 - \frac{\alpha}{2}\right) S(\bs{X}, \bs{Y}) \sqrt{\frac{1}{m} + \frac{1}{n}}, \; M(\bs{Y}) - M(\bs{X}) + t_{m+n-2}\left(1 - \frac{\alpha}{2}\right) S(\bs{X}, \bs{Y}) \sqrt{\frac{1}{m} + \frac{1}{n}} \right] \]
(b) A \( 1 - \alpha \) confidence lower bound for \( \nu - \mu \) is \[ M(\bs{Y}) - M(\bs{X}) - t_{m+n-2}(1 - \alpha) S(\bs{X}, \bs{Y}) \sqrt{\frac{1}{m} + \frac{1}{n}} \]
(c) A \( 1 - \alpha \) confidence upper bound for \( \nu - \mu \) is \[ M(\bs{Y}) - M(\bs{X}) + t_{m+n-2}(1 - \alpha) S(\bs{X}, \bs{Y}) \sqrt{\frac{1}{m} + \frac{1}{n}} \]
The variable \( T \) given above has the student \( t \) distribution with \( m + n - 2 \) degrees of freedom. Hence each of the following events has probability \( 1 - \alpha \) by definition of the quantiles: (a) \( -t_{m+n-2}(1 - \alpha/2) \lt T \lt t_{m+n-2}(1 - \alpha/2) \), (b) \( T \lt t_{m+n-2}(1 - \alpha) \), (c) \( T \gt -t_{m+n-2}(1 - \alpha) \).
In each case, solving the inequality for \( \nu - \mu \) gives the result.
The two-sided interval in part (a) is the symmetric interval corresponding to \( \alpha / 2 \) in both tails of the student \( t \) distribution. As usual, we can construct more general two-sided intervals by partitioning \( \alpha \) between the left and right tails in any way that we please.
For every \(\alpha, \, p \in (0, 1)\), a \(1 - \alpha\) confidence interval for \(\nu - \mu\) is \[ \left[M(\bs{Y}) - M(\bs{X}) - t_{m+n-2}(1 - \alpha p) S(\bs{X}, \bs{Y})\sqrt{\frac{1}{m} + \frac{1}{n}}, M(\bs{Y}) - M(\bs{X}) - t_{m+n-2}(\alpha - p \alpha) S(\bs{X}, \bs{Y}) \sqrt{\frac{1}{m} + \frac{1}{n}} \right]\]
From the distribution of the pivot variable and the definition of the quantile function, \[ \P \left[ t_{m+n-2}(\alpha - p \alpha) \lt \frac{[M(\bs{Y}) - M(\bs{X})] - (\nu - \mu)}{S(\bs{X}, \bs{Y})\sqrt{1 / m + 1 / n}} \lt t_{m+n-2}(1 - p \alpha) \right] = 1 - \alpha \] Solving for \(\nu - \mu\) in the inequality gives the confidence interval.
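Numerically, the interval is again easy to compute from summary statistics. A minimal Python sketch (assuming scipy; the function name and arguments are ours):

```python
from math import sqrt
from scipy.stats import t

def two_sample_t_interval(m_x, m_y, s2_x, s2_y, m, n, alpha=0.05, p=0.5):
    """Two-sided 1 - alpha confidence interval for nu - mu, assuming
    sigma = tau and using the pooled variance estimate."""
    s2_pooled = ((m - 1) * s2_x + (n - 1) * s2_y) / (m + n - 2)
    se = sqrt(s2_pooled) * sqrt(1 / m + 1 / n)
    diff = m_y - m_x
    df = m + n - 2
    # nu - mu lies between diff - t(1 - p*alpha)*se and diff - t(alpha - p*alpha)*se
    return (diff - t.ppf(1 - p * alpha, df) * se,
            diff - t.ppf(alpha - p * alpha, df) * se)
```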
The next result considers the length of the general two-sided interval.
The (random) length of the two-sided interval above is \[ L = [t_{m+n-2}(1 - p \alpha) - t_{m+n-2}(\alpha - p \alpha)] S(\bs{X}, \bs{Y}) \sqrt{\frac{1}{m} + \frac{1}{n}} \]
(a) \( \E(L) = [t_{m+n-2}(1 - p \alpha) - t_{m+n-2}(\alpha - p \alpha)] \, \E[S(\bs{X}, \bs{Y})] \sqrt{\frac{1}{m} + \frac{1}{n}} \)
(b) For fixed \( \alpha \), \( L \) is minimized when \( p = \frac{1}{2} \).
(c) With \( p \) fixed, \( L \) increases as the confidence level \( 1 - \alpha \) increases.
As in the case of known variances, part (c) means that all other things being equal, we can increase the confidence level only at the expense of making the estimate less precise. Part (b) means that the symmetric, equal-tail confidence interval is the best of the two-sided intervals.
Our next construction will produce interval estimates for the ratio of the variances \( \tau^2 / \sigma^2 \) (or by taking square roots, for the ratio of the standard deviations \( \tau / \sigma \)). Once again, we need to recall some basic facts from our study of special properties of random samples from the normal distribution.
The ratio \[ U = \frac{S^2(\bs{X}) \tau^2}{S^2(\bs{Y}) \sigma^2} \] has the \(F\) distribution with \(m - 1\) degrees of freedom in the numerator and \(n - 1\) degrees of freedom in the denominator, and hence this variable is a pivot variable for \(\tau^2 / \sigma^2\).
The pivot variable \( U \) can be used to construct confidence intervals for \( \tau^2 / \sigma^2 \) in the usual way.
For \( \alpha \in (0, 1) \),
(a) A \( 1 - \alpha \) confidence interval for \( \tau^2 / \sigma^2 \) is \[ \left[ f_{m-1, n-1}\left(\frac{\alpha}{2}\right) \frac{S^2(\bs{Y})}{S^2(\bs{X})}, \; f_{m-1, n-1}\left(1 - \frac{\alpha}{2}\right) \frac{S^2(\bs{Y})}{S^2(\bs{X})} \right] \]
(b) A \( 1 - \alpha \) confidence lower bound for \( \tau^2 / \sigma^2 \) is \[ f_{m-1, n-1}(\alpha) \frac{S^2(\bs{Y})}{S^2(\bs{X})} \]
(c) A \( 1 - \alpha \) confidence upper bound for \( \tau^2 / \sigma^2 \) is \[ f_{m-1, n-1}(1 - \alpha) \frac{S^2(\bs{Y})}{S^2(\bs{X})} \]
The variable \( U \) given above has the \( F \) distribution with \( m - 1 \) degrees of freedom in the numerator and \( n - 1 \) degrees of freedom in the denominator. Hence each of the following events has probability \( 1 - \alpha \) by definition of the quantiles: (a) \( f_{m-1,n-1}(\alpha/2) \lt U \lt f_{m-1,n-1}(1 - \alpha/2) \), (b) \( U \gt f_{m-1,n-1}(\alpha) \), (c) \( U \lt f_{m-1,n-1}(1 - \alpha) \).
In each case, solving the inequality for \( \tau^2 / \sigma^2 \) gives the result.
The two-sided confidence interval in part (a) is the equal-tail confidence interval, and is the one commonly used. But as usual, we can partition \( \alpha \) between the left and right tails of the distribution of the pivot variable in any way that we please.
For every \(\alpha, \, p \in (0, 1)\), a \(1 - \alpha\) confidence interval for \(\tau^2 / \sigma^2 \) is \[ \left[f_{m-1, n-1}(\alpha - p \alpha) \frac{S^2(\bs{Y})}{S^2(\bs{X})}, f_{m-1, n-1}(1 - p \alpha) \frac{S^2(\bs{Y})}{S^2(\bs{X})} \right] \]
From the \( F \) pivot variable and the definition of the quantile function, \[ \P \left[ f_{m-1,n-1}(\alpha - p \, \alpha) \lt \frac{S^2(\bs{X}) \tau^2}{S^2(\bs{Y}) \sigma^2} \lt f_{m-1,n-1}(1 - p \,\alpha) \right] = 1 - \alpha \] Solving for \(\tau^2 / \sigma^2\) in the inequality gives the confidence interval.
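As a computational aside, here is a minimal Python sketch of the general interval (assuming scipy; the function name and arguments are ours):

```python
from scipy.stats import f

def variance_ratio_interval(s2_x, s2_y, m, n, alpha=0.05, p=0.5):
    """Two-sided 1 - alpha confidence interval for tau^2 / sigma^2;
    p partitions alpha between the left and right tails."""
    ratio = s2_y / s2_x
    return (f.ppf(alpha - p * alpha, m - 1, n - 1) * ratio,
            f.ppf(1 - p * alpha, m - 1, n - 1) * ratio)
```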
The length of the general confidence interval is considered next.
The (random) length of the general two-sided confidence interval above is \[ L = \left[f_{m-1,n-1}(1 - p \alpha) - f_{m-1,n-1}(\alpha - p \alpha) \right] \frac{S^2(\bs{Y})}{S^2(\bs{X})}\] Assuming that \( m \gt 5 \) and \( n \gt 1 \),
(a) \( \E(L) = \left[f_{m-1,n-1}(1 - p \alpha) - f_{m-1,n-1}(\alpha - p \alpha) \right] \frac{m - 1}{m - 3} \frac{\tau^2}{\sigma^2} \)
(b) \( \var(L) = 2 \left[f_{m-1,n-1}(1 - p \alpha) - f_{m-1,n-1}(\alpha - p \alpha) \right]^2 \frac{(m - 1)^2 (m + n - 4)}{(n - 1) (m - 3)^2 (m - 5)} \frac{\tau^4}{\sigma^4} \)
Parts (a) and (b) follow since \( \frac{\sigma^2}{\tau^2} \frac{S^2(\bs{Y})}{S^2(\bs{X})} \) has the \( F \) distribution with \( n - 1 \) degrees of freedom in the numerator and \( m - 1 \) degrees of freedom in the denominator, and hence has mean \( \frac{m - 1}{m - 3} \) and variance \( \frac{2 (m - 1)^2 (m + n - 4)}{(n - 1)(m - 3)^2 (m - 5)} \).
Optimally, we might want to choose \( p \) so that \( \E(L) \) is minimized. However, this is difficult computationally, and fortunately the equal-tail interval with \( p = \frac{1}{2} \) is not too far from optimal when the sample sizes \( m \) and \( n \) are large.
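To see this numerically, note that \( \E(L) \) is proportional to the quantile difference \( f_{m-1,n-1}(1 - p \alpha) - f_{m-1,n-1}(\alpha - p \alpha) \), so the optimal \( p \) can be found by a simple grid search (a sketch assuming scipy; the sample sizes and level are our choices):

```python
from scipy.stats import f

m, n, alpha = 50, 50, 0.1
ps = [i / 1000 for i in range(1, 1000)]
widths = [f.ppf(1 - p * alpha, m - 1, n - 1) - f.ppf(alpha - p * alpha, m - 1, n - 1)
          for p in ps]
width, p_opt = min(zip(widths, ps))
print(p_opt, width)  # the minimizing p is close to, though not exactly, 1/2
```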
In this subsection, we consider a model that is superficially similar to the two-sample normal model, but is actually much simpler. Suppose that \[ \left((X_1, Y_1), (X_2, Y_2), \ldots, (X_n, Y_n)\right) \] is a random sample of size \(n\) from the bivariate normal distribution of a random vector \((X, Y)\), with \(\E(X) = \mu\), \(\E(Y) = \nu\), \(\var(X) = \sigma^2\), \(\var(Y) = \tau^2\), and \(\cov(X, Y) = \delta\).
Thus, instead of a pair of samples, we have a sample of pairs. This type of model frequently arises in before and after experiments, in which a measurement of interest is recorded for a sample of \(n\) objects from the population, both before and after a treatment. For example, we could record the blood pressure of a sample of \(n\) patients, before and after the administration of a certain drug. The critical point is that in this model, \( X_i \) and \( Y_i \) are measurements made on the same underlying object in the sample. As with the two-sample normal model, the interest is usually in estimating the difference of the means.
We will use our usual notation for the sample means and variances of \(\bs{X} = (X_1, X_2, \ldots, X_n)\) and \(\bs{Y} = (Y_1, Y_2, \ldots, Y_n)\). Recall also that the sample covariance of \((\bs{X}, \bs{Y})\) is \[ S(\bs{X}, \bs{Y}) = \frac{1}{n - 1} \sum_{i=1}^n [X_i - M(\bs{X})][Y_i - M(\bs{Y})] \] (not to be confused with the pooled estimate of the standard deviation in the two-sample model).
The vector of differences \(\bs{Y} - \bs{X} = (Y_1 - X_1, Y_2 - X_2, \ldots, Y_n - X_n)\) is a random sample of size \(n\) from the distribution of \(Y - X\), which is normal with mean \( \nu - \mu \) and variance \( \sigma^2 + \tau^2 - 2 \delta \).
The sample mean and variance of the sample of differences are given by \begin{align} M(\bs{Y} - \bs{X}) & = M(\bs{Y}) - M(\bs{X}) \\ S^2(\bs{Y} - \bs{X}) & = S^2(\bs{X}) + S^2(\bs{Y}) - 2 S(\bs{X}, \bs{Y}) \end{align}
Thus, the sample of differences \(\bs{Y} - \bs{X}\) fits the normal model for a single variable. The methods of the section on estimation in the normal model can be used to obtain confidence sets and intervals for the parameters \((\nu - \mu, \sigma^2 + \tau^2 - 2 \delta)\).
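As a computational illustration, here is a minimal Python sketch (assuming scipy; the function name is ours) of the equal-tail \( 1 - \alpha \) confidence interval for \( \nu - \mu \) built from the sample of differences:

```python
from math import sqrt
from scipy.stats import t

def paired_t_interval(x, y, alpha=0.05):
    """Equal-tail 1 - alpha confidence interval for nu - mu in the
    bivariate normal model, via the one-sample model for Y - X."""
    n = len(x)
    d = [yi - xi for xi, yi in zip(x, y)]
    m_d = sum(d) / n
    s_d = sqrt(sum((di - m_d) ** 2 for di in d) / (n - 1))
    half = t.ppf(1 - alpha / 2, n - 1) * s_d / sqrt(n)
    return (m_d - half, m_d + half)
```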
In the setting of this subsection, suppose that \(\bs{X} = (X_1, X_2, \ldots, X_n)\) and \(\bs{Y} = (Y_1, Y_2, \ldots, Y_n)\) are independent. Mathematically this fits both models—the two-sample normal model and the bivariate normal model. Which procedure would work better for estimating the difference of means \(\nu - \mu\)?
Although this setting fits both models mathematically, only one model would make sense in a real problem. Again, the critical point is whether \( (X_i, Y_i) \) makes sense as a pair of random variables (measurements) corresponding to a given object in the sample.
A new drug is being developed to reduce a certain blood chemical. A sample of 36 patients are given a placebo while a sample of 49 patients are given the drug. Let \(X\) denote the measurement for a patient given the placebo and \(Y\) the measurement for a patient given the drug (in mg). The statistics are \(m(\bs{x}) = 87\), \(s(\bs{x}) = 4\), \(m(\bs{y}) = 63\), \(s(\bs{y}) = 6\).
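One way to work this exercise numerically is with the pooled two-sample \( t \) interval (a sketch assuming scipy; the 95% level is our choice, and the pooled procedure assumes \( \sigma = \tau \), which is questionable here given the sample standard deviations 4 and 6, so the result is illustrative only):

```python
from math import sqrt
from scipy.stats import t

# summary statistics from the exercise
m, n = 36, 49
m_x, s_x, m_y, s_y = 87.0, 4.0, 63.0, 6.0

# pooled estimate of the common variance, and the equal-tail 95% interval
s2_pooled = ((m - 1) * s_x ** 2 + (n - 1) * s_y ** 2) / (m + n - 2)
se = sqrt(s2_pooled) * sqrt(1 / m + 1 / n)
half = t.ppf(0.975, m + n - 2) * se
print(m_y - m_x - half, m_y - m_x + half)  # approximately (-26.3, -21.7)
```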
A company claims that an herbal supplement improves intelligence. A sample of 25 persons are given a standard IQ test before and after taking the supplement. Let \(X\) denote the IQ of a subject before taking the supplement and \(Y\) the IQ of the subject after the supplement. The before and after statistics are \(m(\bs{x}) = 105\), \(s(\bs{x}) = 13\), \(m(\bs{y}) = 110\), \(s(\bs{y}) = 17\), \(s(\bs{x}, \bs{y}) = 190\). Do you believe the company's claim?
A 90% confidence lower bound for the difference in IQ is 2.675. There may be a very small increase.
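The bound can be checked numerically, using the identity \( S^2(\bs{Y} - \bs{X}) = S^2(\bs{X}) + S^2(\bs{Y}) - 2 S(\bs{X}, \bs{Y}) \) for the differences (a sketch assuming scipy):

```python
from math import sqrt
from scipy.stats import t

# summary statistics from the exercise
n = 25
m_x, s_x, m_y, s_y, s_xy = 105.0, 13.0, 110.0, 17.0, 190.0

m_d = m_y - m_x                         # mean of the differences: 5.0
s2_d = s_x ** 2 + s_y ** 2 - 2 * s_xy   # variance of the differences: 78.0
lower = m_d - t.ppf(0.90, n - 1) * sqrt(s2_d / n)
print(lower)  # approximately 2.67, matching the stated bound up to rounding
```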
In Fisher's iris data, let \(X\) denote the petal length of a Versicolor iris and \(Y\) the petal length of a Virginica iris.
A plant has two machines that produce a circular rod whose diameter (in cm) is critical. Let \(X\) denote the diameter of a rod from the first machine and \(Y\) the diameter of a rod from the second machine. A sample of 100 rods from the first machine has mean 10.3 and standard deviation 1.2. A sample of 100 rods from the second machine has mean 9.8 and standard deviation 1.6.
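Numerically, with samples of size 100 we might treat the sample standard deviations as approximately known, as in the known-variance construction above (a sketch assuming scipy; the 95% level is our choice for illustration):

```python
from math import sqrt
from scipy.stats import norm

# summary statistics from the exercise
m = n = 100
m_x, s_x, m_y, s_y = 10.3, 1.2, 9.8, 1.6

se = sqrt(s_x ** 2 / m + s_y ** 2 / n)     # sqrt(0.04) = 0.2
half = norm.ppf(0.975) * se                # about 1.96 * 0.2 = 0.392
print(m_y - m_x - half, m_y - m_x + half)  # approximately (-0.89, -0.11)
```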