Discrete Random Vector

Definition 1.

Let $(\Omega, \mathcal{A}, P)$ be a probability space. $X : \Omega \to \mathbb{R}^n$ is called a discrete random vector if $X(\Omega)$ is countable and $\{\omega \in \Omega : X(\omega) = x\} \in \mathcal{A}$ for all $x \in \mathbb{R}^n$.

If $g : \mathbb{R}^n \to \mathbb{R}^m$ is any function and $X$ is an $n$-dimensional random vector, $g(X)$ is also a random vector. In particular, if $m = 1$, $g(X)$ is a random variable.

Theorem 2.

Let $X = (X_1, \ldots, X_n)$, where $X_i : \Omega \to \mathbb{R}$. $X$ is a random vector iff each $X_i$ is a random variable.

Proof of $(\Rightarrow)$.
Let $\pi_i : \mathbb{R}^n \to \mathbb{R}$ be the projection map $\pi_i(x_1, \ldots, x_n) = x_i$. Then, $X_i = \pi_i(X)$ is a random variable.

Proof of $(\Leftarrow)$.

$$\{X = (x_1, \ldots, x_n)\} = \bigcap_{i=1}^{n} \{X_i = x_i\},$$

which is in the $\sigma$-algebra since it is closed under countable intersections.

Definition 3.

Let $X = (X_1, \ldots, X_n)$ be a random vector. Then, $f : \mathbb{R}^n \to [0, 1]$ defined by $f(x_1, \ldots, x_n) = P(X_1 = x_1, \ldots, X_n = x_n)$ is called the joint probability mass function of the random variables $X_1, \ldots, X_n$, or just the probability mass function of $X$. The density function of $X_i$ is called the $i$th marginal density of $X$ or $f$.

As in the one-dimensional case, $f$ has the following properties:

  • $f(x) \ge 0$ for all $x \in \mathbb{R}^n$, and $f(x) \ne 0$ for at most countably many $x$, which may be denoted by $x_1, x_2, \ldots$.
  • $\sum_{x} f(x) = 1$.

Also, any real-valued function on $\mathbb{R}^n$ having these properties is the density function for some $n$-dimensional random vector, and is called a discrete $n$-dimensional density function.

The marginal densities of $X_1, \ldots, X_n$ can be obtained from the density $f$ of $X$:

Theorem 4.

Let $f$ be the density function of a random vector $X = (X_1, \ldots, X_n)$ defined on $(\Omega, \mathcal{A}, P)$. By Theorem 2, we know that each $X_i$ is a random variable. The probability mass functions $f_{X_i}$ can be expressed in terms of $f$ like so:

$$f_{X_i}(x_i) = \sum_{x_1} \cdots \sum_{x_{i-1}} \sum_{x_{i+1}} \cdots \sum_{x_n} f(x_1, \ldots, x_n).$$
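As a numerical sketch of the marginalization above, the following Python snippet sums a joint pmf over every coordinate except the $i$th. The joint pmf and its values are hypothetical, chosen only for illustration:

```python
from collections import defaultdict

# Hypothetical joint pmf of (X1, X2): keys are points (x1, x2), values are probabilities.
joint = {
    (0, 0): 0.1, (0, 1): 0.2,
    (1, 0): 0.3, (1, 1): 0.4,
}

def marginal(joint_pmf, i):
    """Sum the joint pmf over every coordinate except the i-th."""
    m = defaultdict(float)
    for x, p in joint_pmf.items():
        m[x[i]] += p
    return dict(m)

print(marginal(joint, 0))  # pmf of X1
print(marginal(joint, 1))  # pmf of X2
```

Each marginal inherits the pmf properties: its values are nonnegative and sum to 1.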
Independent Random Variables

The concept of independent random variables is very similar to that of independent events. Recall that two events $A$ and $B$ are independent if $P(A \cap B) = P(A)P(B)$. We say discrete random variables $X$ and $Y$ are independent if $P(X = x, Y = y) = P(X = x)P(Y = y)$ for all $x, y \in \mathbb{R}$. In other words, the joint density of $X$ and $Y$ should be given by $f(x, y) = f_X(x) f_Y(y)$.

Definition 5.

Let $X_1, \ldots, X_n$ be discrete random variables having densities $f_1, \ldots, f_n$ respectively. These random variables are said to be mutually independent if their joint density function is given by

$$f(x_1, \ldots, x_n) = f_1(x_1) \cdots f_n(x_n).$$

If $A_1, \ldots, A_n$ are any subsets of $\mathbb{R}$, then

$$P(X_1 \in A_1, \ldots, X_n \in A_n) = \prod_{i=1}^{n} P(X_i \in A_i).$$

To see this, note that

$$P(X_1 \in A_1, \ldots, X_n \in A_n) = \sum_{x_1 \in A_1} \cdots \sum_{x_n \in A_n} f_1(x_1) \cdots f_n(x_n) = \prod_{i=1}^{n} \sum_{x_i \in A_i} f_i(x_i) = \prod_{i=1}^{n} P(X_i \in A_i).$$
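A quick numerical check of this factorization, with hypothetical marginal pmfs chosen for illustration:

```python
# Two hypothetical marginal pmfs.
fX = {0: 0.5, 1: 0.5}
fY = {0: 0.25, 1: 0.75}

# Under independence, the joint pmf is the product of the marginals.
joint = {(x, y): fX[x] * fY[y] for x in fX for y in fY}

# P(X in A, Y in B) then factors as P(X in A) * P(Y in B).
A, B = {1}, {0, 1}
lhs = sum(p for (x, y), p in joint.items() if x in A and y in B)
rhs = sum(fX[x] for x in A) * sum(fY[y] for y in B)
print(lhs, rhs)
```

The double sum over $A \times B$ splits into a product of single sums precisely because each term of the joint pmf factors.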

Theorem 6 (Proposition).

Let $X$ and $Y$ be independent random variables, and let $\varphi, \psi : \mathbb{R} \to \mathbb{R}$. Then, $\varphi(X)$ and $\psi(Y)$ are independent random variables.

Theorem 7 (Proposition).

Let $f$ and $g$ be two probability mass functions. Then, there exists a probability space with random variables $X$ and $Y$ defined on it such that $f_X = f$, $f_Y = g$, and $X$ and $Y$ are independent.

Example 14 is pertinent here.
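The usual proof takes a product construction: let $\Omega$ be the product of the supports of $f$ and $g$, set $P(\{(x, y)\}) = f(x)g(y)$, and let $X$ and $Y$ be the coordinate projections. A minimal Python sketch, with hypothetical pmfs $f$ and $g$:

```python
f = {1: 0.2, 2: 0.8}   # hypothetical pmf f
g = {0: 0.6, 3: 0.4}   # hypothetical pmf g

# Product construction: Omega = support(f) x support(g),
# P({(x, y)}) = f(x) * g(y); X and Y are the coordinate projections.
P = {(x, y): f[x] * g[y] for x in f for y in g}

# Marginals of the coordinate projections recover f and g.
fX = {x: sum(p for (a, b), p in P.items() if a == x) for x in f}
fY = {y: sum(p for (a, b), p in P.items() if b == y) for y in g}
print(fX, fY)
```

By construction the joint pmf factors as $f(x)g(y)$, so $X$ and $Y$ are independent with the prescribed marginals.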

The multinomial distribution

We have previously seen that if $X_1, \ldots, X_n$ have a common Bernoulli density with parameter $p$ ($f(1) = p$, $f(0) = 1 - p$), then $S_n = X_1 + \cdots + X_n$ is binomially distributed with parameters $n$ and $p$. Trials that result in either success or failure are called Bernoulli trials, and the above situation is described by saying we perform $n$ Bernoulli trials with common probability $p$ for success.

More generally, consider an experiment, such as rolling a die, that can result in a finite number $r$ of distinct possible outcomes. Let $X$ be a random variable which assumes the values $1, 2, \ldots, r$, so that $\{X = i\}$ represents the fact that the experiment yielded the $i$th outcome. Let $p_i = P(X = i)$. An $n$-fold independent repetition of the experiment can be represented by the random vector $(X_1, \ldots, X_n)$, where $X_1, \ldots, X_n$ have the same distribution as $X$ and are independent. Now, let $Y_i$, $1 \le i \le r$, denote the number of trials that yield the $i$th outcome. The joint density of $Y_1, \ldots, Y_r$ follows the multinomial distribution with parameters $n$ and $p_1, \ldots, p_r$:

$$P(Y_1 = n_1, \ldots, Y_r = n_r) = \frac{n!}{n_1! \cdots n_r!} \, p_1^{n_1} \cdots p_r^{n_r}, \qquad n_1 + \cdots + n_r = n.$$

Note that the random variables $Y_1, \ldots, Y_r$ are not independent. In fact, knowing any $r - 1$ of them determines the $r$th, since $Y_1 + \cdots + Y_r = n$.
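The multinomial pmf is easy to evaluate directly; the following sketch (parameters chosen only for illustration) also checks that it sums to 1 over all outcome counts $(n_1, \ldots, n_r)$ with $n_1 + \cdots + n_r = n$:

```python
from math import factorial
from itertools import product

def multinomial_pmf(counts, probs):
    """P(Y1 = n1, ..., Yr = nr) = n! / (n1! ... nr!) * p1^n1 ... pr^nr."""
    n = sum(counts)
    coef = factorial(n)
    for c in counts:
        coef //= factorial(c)
    p = float(coef)
    for c, q in zip(counts, probs):
        p *= q ** c
    return p

# Hypothetical example: n = 5 repetitions, r = 3 outcomes.
probs = (0.5, 0.3, 0.2)
n = 5
total = sum(multinomial_pmf(c, probs)
            for c in product(range(n + 1), repeat=3) if sum(c) == n)
print(total)
```

That the total is 1 is just the multinomial theorem applied to $(p_1 + \cdots + p_r)^n = 1$.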

Poisson Approximation to the binomial distribution

Given $\lambda = np$ held fixed as $n \to \infty$ (so $p = \lambda/n$),

$$\lim_{n \to \infty} \binom{n}{k} \left(\frac{\lambda}{n}\right)^{k} \left(1 - \frac{\lambda}{n}\right)^{n-k} = \frac{\lambda^k}{k!} e^{-\lambda},$$

since $\binom{n}{k} n^{-k} \to 1/k!$ and $(1 - \lambda/n)^{n-k} \to e^{-\lambda}$. So, the limit of the binomial distribution with parameters $n$ and $p = \lambda/n$ as $n \to \infty$ is the Poisson distribution with parameter $\lambda$.
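The convergence can be observed numerically; this sketch compares the binomial pmf with $p = \lambda/n$ against the Poisson pmf for increasing $n$ (the choices $\lambda = 2$, $k = 3$ are arbitrary):

```python
from math import comb, exp, factorial

lam, k = 2.0, 3

def binom_pmf(n, k, p):
    """Binomial pmf: C(n, k) * p^k * (1 - p)^(n - k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Poisson pmf with parameter lam, evaluated at k.
poisson = lam**k * exp(-lam) / factorial(k)

# As n grows with p = lam/n, the binomial pmf approaches the Poisson pmf.
for n in (10, 100, 10000):
    print(n, binom_pmf(n, k, lam / n), poisson)
```

The discrepancy shrinks roughly like $1/n$, which is why the Poisson pmf is a good approximation for large $n$ and small $p$.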