A multinomial trials process is a sequence of independent, identically distributed random variables $\boldsymbol{X} = (X_1, X_2, \ldots)$, each taking values in a finite set of $k$ possible outcomes. For simplicity, we will denote the set of outcomes by $\{1, 2, \ldots, k\}$, and we will denote the common probability density function of the trial variables by
\[ p_i = \mathbb{P}(X_j = i), \quad i \in \{1, 2, \ldots, k\} \]
Of course $p_i > 0$ for each $i$ and $\sum_{i=1}^{k} p_i = 1$.
Thus, the multinomial trials process is a simple generalization of the Bernoulli trials process (which corresponds to $k = 2$).
As with our discussion of the binomial distribution, we are interested in the random variables that count the number of times each outcome occurred.
Let $Y_i$ denote the number of times that outcome $i$ occurs in the first $n$ trials:
\[ Y_i = \#\left\{j \in \{1, 2, \ldots, n\} : X_j = i\right\} = \sum_{j=1}^{n} \mathbf{1}(X_j = i), \quad i \in \{1, 2, \ldots, k\} \]
Of course, these random variables also depend on the parameter $n$ (the number of trials), but this parameter is fixed in our discussion, so we suppress it to keep the notation simple. Note that $\sum_{i=1}^{k} Y_i = n$, so if we know the values of $k - 1$ of the counting variables, we can find the value of the remaining one.
Basic arguments using independence and combinatorics can be used to derive the joint, marginal, and conditional densities of the counting variables. In particular, recall the definition of the multinomial coefficient: for nonnegative integers $(j_1, j_2, \ldots, j_k)$ with $\sum_{i=1}^{k} j_i = n$,
\[ \binom{n}{j_1, j_2, \ldots, j_k} = \frac{n!}{j_1! \, j_2! \cdots j_k!} \]
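For a concrete illustration (with numbers chosen just for this example), the number of ways to arrange 10 trials so that outcome 1 occurs 3 times, outcome 2 occurs 2 times, and outcome 3 occurs 5 times is
\[ \binom{10}{3, 2, 5} = \frac{10!}{3! \, 2! \, 5!} = \frac{3\,628\,800}{6 \cdot 2 \cdot 120} = 2520 \]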
For nonnegative integers $(j_1, j_2, \ldots, j_k)$ with $\sum_{i=1}^{k} j_i = n$,
\[ \mathbb{P}(Y_1 = j_1, Y_2 = j_2, \ldots, Y_k = j_k) = \binom{n}{j_1, j_2, \ldots, j_k} p_1^{j_1} p_2^{j_2} \cdots p_k^{j_k} \]
By independence, any sequence of trials in which outcome $i$ occurs exactly $j_i$ times for each $i$ has probability $p_1^{j_1} p_2^{j_2} \cdots p_k^{j_k}$. The number of such sequences is the multinomial coefficient $\binom{n}{j_1, j_2, \ldots, j_k}$, so the result follows.
The distribution of $\boldsymbol{Y} = (Y_1, Y_2, \ldots, Y_k)$ is called the multinomial distribution with parameters $n$ and $\boldsymbol{p} = (p_1, p_2, \ldots, p_k)$.
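The joint density above is straightforward to evaluate directly. Here is a minimal Python sketch (an illustration only; the function name `multinomial_pmf` is just a convenient choice) that computes the density for a given vector of counts and outcome probabilities.

```python
from math import factorial, prod

def multinomial_pmf(counts, probs):
    """Joint density P(Y_1 = j_1, ..., Y_k = j_k) for n = sum(counts)
    multinomial trials with outcome probabilities probs."""
    n = sum(counts)
    coeff = factorial(n) // prod(factorial(j) for j in counts)
    return coeff * prod(p ** j for j, p in zip(counts, probs))

# Example: 10 throws of a fair die, with face counts (2, 2, 2, 1, 2, 1)
print(multinomial_pmf([2, 2, 2, 1, 2, 1], [1 / 6] * 6))
```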
For each $i \in \{1, 2, \ldots, k\}$, $Y_i$ has the binomial distribution with parameters $n$ and $p_i$:
\[ \mathbb{P}(Y_i = j) = \binom{n}{j} p_i^j (1 - p_i)^{n - j}, \quad j \in \{0, 1, \ldots, n\} \]
There is a simple probabilistic proof. If we think of each trial as resulting in outcome $i$ or not, then clearly we have a sequence of $n$ independent Bernoulli trials with success parameter $p_i$, and $Y_i$ is the number of successes.
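For example, if we throw 10 standard, fair dice (as in the dice exercises below), then the number of aces has the binomial distribution with parameters $10$ and $\frac{1}{6}$, so the probability of exactly $j$ aces is
\[ \binom{10}{j} \left(\frac{1}{6}\right)^{j} \left(\frac{5}{6}\right)^{10 - j}, \quad j \in \{0, 1, \ldots, 10\} \]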
The multinomial distribution is preserved when the counting variables are combined.
Suppose that $(A_1, A_2, \ldots, A_m)$ is a partition of the outcome set $\{1, 2, \ldots, k\}$ into nonempty subsets. For $j \in \{1, 2, \ldots, m\}$, let
\[ Z_j = \sum_{i \in A_j} Y_i, \quad q_j = \sum_{i \in A_j} p_i \]
Then $\boldsymbol{Z} = (Z_1, Z_2, \ldots, Z_m)$ has the multinomial distribution with parameters $n$ and $\boldsymbol{q} = (q_1, q_2, \ldots, q_m)$.
Again, there is a simple probabilistic proof. Each trial, independently of the others, results in an outcome in $A_j$ with probability $q_j$ for each $j$, and $Z_j$ is the number of trials whose outcome falls in $A_j$.
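For a concrete illustration, group the scores of a fair die into low $\{1, 2, 3\}$ and high $\{4, 5, 6\}$. In $n$ throws, the pair (number of low scores, number of high scores) has the multinomial distribution with parameters $n$ and $\left(\frac{1}{2}, \frac{1}{2}\right)$; in particular, the number of low scores has the binomial distribution with parameters $n$ and $\frac{1}{2}$.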
The multinomial distribution is also preserved when some of the counting variables are observed.
Suppose that $A$ is a nonempty, proper subset of $\{1, 2, \ldots, k\}$, and that $(j_i : i \in A)$ are nonnegative integers with $m = \sum_{i \in A} j_i \le n$. Let $q = \sum_{i \in A} p_i$. Then the conditional distribution of $(Y_i : i \notin A)$ given $(Y_i = j_i : i \in A)$ is multinomial with parameters $n - m$ and $\left(\frac{p_i}{1 - q} : i \notin A\right)$.
Again, there is a simple probabilistic argument and a harder analytic argument. If we know that $Y_i = j_i$ for each $i \in A$, then $n - m$ trials remain, and each of these trials, independently of the others, must result in an outcome not in $A$; the conditional probability that such a trial results in outcome $i \notin A$ is $\frac{p_i}{1 - q}$.
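For example, suppose that a fair die is thrown $n$ times and that we observe exactly $j$ sixes. Given this information, the counts of the other five scores have the multinomial distribution with parameters $n - j$ and $\left(\frac{1}{5}, \frac{1}{5}, \frac{1}{5}, \frac{1}{5}, \frac{1}{5}\right)$, since $\frac{1/6}{1 - 1/6} = \frac{1}{5}$.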
Combinations of the basic results on grouping in [5] and conditioning in [6] can be used to compute any marginal or conditional distribution of the counting variables.
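For example, for distinct $i$ and $j$, grouping all of the other outcomes into a single class shows that $(Y_i, Y_j, n - Y_i - Y_j)$ has the multinomial distribution with parameters $n$ and $(p_i, p_j, 1 - p_i - p_j)$. Hence, for nonnegative integers $a$ and $b$ with $a + b \le n$,
\[ \mathbb{P}(Y_i = a, Y_j = b) = \binom{n}{a, \, b, \, n - a - b} p_i^{a} \, p_j^{b} \, (1 - p_i - p_j)^{n - a - b} \]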
We will compute the mean and variance of each counting variable, and the covariance and correlation of each pair of variables.
For $i \in \{1, 2, \ldots, k\}$,
\[ \mathbb{E}(Y_i) = n p_i, \quad \operatorname{var}(Y_i) = n p_i (1 - p_i) \]
Recall that $Y_i$ has the binomial distribution with parameters $n$ and $p_i$, so these results follow immediately from the mean and variance of the binomial distribution.
For distinct $i, j \in \{1, 2, \ldots, k\}$,
\[ \operatorname{cov}(Y_i, Y_j) = -n p_i p_j, \quad \operatorname{cor}(Y_i, Y_j) = -\sqrt{\frac{p_i p_j}{(1 - p_i)(1 - p_j)}} \]
From the bilinearity of the covariance operator, we have
\[ \operatorname{cov}(Y_i, Y_j) = \sum_{s=1}^{n} \sum_{t=1}^{n} \operatorname{cov}\left[\mathbf{1}(X_s = i), \mathbf{1}(X_t = j)\right] \]
The terms with $s \ne t$ vanish by independence. For $s = t$, note that $\mathbf{1}(X_s = i)\,\mathbf{1}(X_s = j) = 0$ since $i \ne j$, so $\operatorname{cov}\left[\mathbf{1}(X_s = i), \mathbf{1}(X_s = j)\right] = -p_i p_j$. Hence $\operatorname{cov}(Y_i, Y_j) = -n p_i p_j$. The formula for the correlation follows by dividing the covariance by $\sqrt{\operatorname{var}(Y_i) \operatorname{var}(Y_j)} = n \sqrt{p_i (1 - p_i) p_j (1 - p_j)}$.
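For example, in $n$ throws of a standard, fair die, the covariance of the number of 1's and the number of 2's is $-\frac{n}{36}$, while the correlation is
\[ -\sqrt{\frac{(1/6)(1/6)}{(5/6)(5/6)}} = -\frac{1}{5} \]
regardless of the number of throws.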
From [8], note that the number of times outcome $i$ occurs and the number of times outcome $j$ occurs are negatively correlated, as one might expect: since the counting variables sum to $n$, a large count for one outcome tends to force smaller counts for the others. Note also that the correlation does not depend on $n$.
If $k = 2$ then $\operatorname{cor}(Y_1, Y_2) = -1$.
This follows immediately from [8] since we must have $p_2 = 1 - p_1$ and $Y_2 = n - Y_1$: each counting variable is a perfectly linear, decreasing function of the other.
In the dice experiment, select the number of aces. For each die distribution, start with a single die and add dice one at a time, noting the shape of the probability density function and the size and location of the mean/standard deviation bar. When you get to 10 dice, run the simulation 1000 times and compare the relative frequency function to the probability density function, and the empirical moments to the distribution moments.
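For readers working outside the dice experiment applet, the following Python sketch performs a comparable simulation: 1000 runs of 10 fair dice, comparing the relative frequency function of the number of aces with the binomial density, and the empirical mean and standard deviation with the distribution values. The output format is purely illustrative.

```python
import random
from collections import Counter
from math import comb, sqrt

n, p, runs = 10, 1 / 6, 1000        # 10 fair dice; an ace has probability 1/6

# number of aces in each of the 1000 runs
counts = [sum(random.randint(1, 6) == 1 for _ in range(n)) for _ in range(runs)]
freq = Counter(counts)

print(" j   relative freq   binomial pdf")
for j in range(n + 1):
    pdf = comb(n, j) * p ** j * (1 - p) ** (n - j)
    print(f"{j:2d}   {freq[j] / runs:13.4f}   {pdf:12.4f}")

emp_mean = sum(counts) / runs
emp_sd = sqrt(sum((x - emp_mean) ** 2 for x in counts) / (runs - 1))
print(f"empirical mean {emp_mean:.3f}  vs  n p = {n * p:.3f}")
print(f"empirical sd   {emp_sd:.3f}  vs  sqrt(n p (1 - p)) = {sqrt(n * p * (1 - p)):.3f}")
```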
Suppose that we throw 10 standard, fair dice. Find the probability of each of the following events:
Suppose that we roll 4 ace-six flat dice (faces 1 and 6 have probability $\frac{1}{4}$ each, while faces 2, 3, 4, and 5 have probability $\frac{1}{8}$ each). Find the joint probability density function of the number of times each score occurs.
In the dice experiment, select 4 ace-six flats. Run the experiment 500 times and compute the joint relative frequency function of the number of times each score occurs. Compare the relative frequency function to the true probability density function.
Suppose that we roll 20 ace-six flat dice. Find the covariance and correlation of the number of 1's and the number of 2's.
covariance: $-\frac{5}{8} = -0.625$; correlation: $-\frac{1}{\sqrt{21}} \approx -0.2182$
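These values follow from [8] with $n = 20$, $p_1 = \frac{1}{4}$, and $p_2 = \frac{1}{8}$. The short Python check below (an illustration, assuming those ace-six flat probabilities) recomputes the theoretical values and also estimates them from 500 simulated runs.

```python
import random
from math import sqrt

n, p1, p2, runs = 20, 1 / 4, 1 / 8, 500
print("theoretical covariance: ", -n * p1 * p2)                             # -0.625
print("theoretical correlation:", -sqrt(p1 * p2 / ((1 - p1) * (1 - p2))))   # about -0.2182

# ace-six flat die: faces 1 and 6 have probability 1/4, faces 2-5 have probability 1/8
faces, weights = [1, 2, 3, 4, 5, 6], [2, 1, 1, 1, 1, 2]
ones, twos = [], []
for _ in range(runs):
    rolls = random.choices(faces, weights=weights, k=n)
    ones.append(rolls.count(1))
    twos.append(rolls.count(2))

m1, m2 = sum(ones) / runs, sum(twos) / runs
cov = sum((a - m1) * (b - m2) for a, b in zip(ones, twos)) / (runs - 1)
sd1 = sqrt(sum((a - m1) ** 2 for a in ones) / (runs - 1))
sd2 = sqrt(sum((b - m2) ** 2 for b in twos) / (runs - 1))
print("empirical covariance:  ", cov)
print("empirical correlation: ", cov / (sd1 * sd2))
```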
In the dice experiment, select 20 ace-six flat dice. Run the experiment 500 times, updating after each run. Compute the empirical covariance and correlation of the number of 1's and the number of 2's. Compare the results with the theoretical results in [14].