# Comprehensive Guide on Random Variables

Oct 31, 2022
# Motivating example

Recall that an event in the context of a statistical experiment is a random binary outcome. For instance, we might be interested in the following events when rolling a die once:

- is the outcome odd?
- is the outcome greater than $4$?
- is the outcome a $3$?

All of these events can be answered as either yes or no - this is what makes them binary. In many cases, however, we are instead interested in numeric events.

For instance, suppose we roll a die twice. We might be interested in how many times we roll a $3$, and questions of the type "how many" cannot be answered with yes or no. Let's introduce a numeric variable $X$ that represents the number of times we roll a $3$. In this case, the possible values that $X$ may take are:

$$x=0,\ 1,\ 2$$

Because the value of $X$ depends on the outcome of the experiment, $X$ is known as a random variable. Notationally, we use an uppercase $X$ to refer to a random variable and a lowercase $x$ to denote a particular value that $X$ takes on. The main difference is that $X$ is random whereas $x$ is a specific observed value that is not random.

To be more mathematically precise, a random variable is defined as a function that associates a real value $x$ to every possible outcome in the experiment, that is, to every element of the sample space. For our dice-rolling experiment, let's define the events of success ($\mathrm{S}$) and failure ($\mathrm{F}$) as follows:

- success - the outcome of rolling a $3$.
- failure - the outcome of not rolling a $3$.

Therefore, the sample space $\Omega$ is:

$$\Omega=\{\mathrm{FF},\ \mathrm{SF},\ \mathrm{FS},\ \mathrm{SS}\}$$

Here, $\mathrm{SF}$, for instance, represents a success followed by a failure.

Remember, every element in the sample space is called a sample point. In this case, we have four sample points in our sample space. We defined the random variable $X$ as the number of times we roll a $3$, which is equivalent to the number of times we observe a success $\mathrm{S}$. The random variable $X$ maps each sample point to a specific value $x$ like so:

| Sample space | $x$ |
|---|---|
| $\mathrm{FF}$ | $0$ |
| $\mathrm{SF}$ | $1$ |
| $\mathrm{FS}$ | $1$ |
| $\mathrm{SS}$ | $2$ |

Note the following:

- $X$ maps $\mathrm{FF}$ to the value $0$ because this is the case when we roll no $3$s. Mathematically, $X(\mathrm{FF})=0$.
- $X$ maps $\mathrm{SF}$ and $\mathrm{FS}$ both to the value $1$ because these are the cases when we roll a single $3$. Mathematically, $X(\mathrm{SF})=X(\mathrm{FS})=1$. This also demonstrates how $X$ can map different sample points to the same value.
- $X$ maps $\mathrm{SS}$ to the value $2$ because this is the case when we roll two $3$s. Mathematically, $X(\mathrm{SS})=2$.
- the values that $X$ can take on are $0$, $1$ or $2$.
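This mapping can also be sketched in code. Below is a minimal Python sketch that enumerates the sample space and applies $X$ to each sample point (the names `sample_space` and `X` are illustrative, not part of any library):

```python
from itertools import product

# Enumerate the sample space of two rolls, recording each roll as
# S (rolled a 3) or F (did not roll a 3).
sample_space = ["".join(outcome) for outcome in product("SF", repeat=2)]
# sample_space is ['SS', 'SF', 'FS', 'FF']

def X(sample_point):
    # The random variable X counts the number of successes (S) in a sample point.
    return sample_point.count("S")

for omega in sample_space:
    print(omega, X(omega))  # FF maps to 0, SF and FS map to 1, SS maps to 2
```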

We now formally state the definition of a random variable.

# Random variable

A random variable $X$ is a function that assigns a real value to each element (sample point) of the sample space.

## Random variables of drawing balls from a bag

Suppose we draw two balls from a bag containing many red and green balls. We are interested in the number of green balls we draw. How should we define the random variable in this case?

Solution. We should define random variable $X$ as the number of green balls we draw. The sample space $\Omega$ in this case is:

$$\Omega=\{{\color{red}\mathrm{RR}},\ \mathrm{\color{red}R\color{green}G},\ \mathrm{\color{green}G\color{red}R},\ {\color{green}\mathrm{GG}}\}$$

Here, $\color{red}\mathrm{R}$ represents the event of drawing a red ball, and $\color{green}\mathrm{G}$ represents the event of drawing a green ball. The random variable $X$ maps each of the sample points to a real value $x$, which is the number of green balls we draw for each sample point:

| Sample space | $x$ |
|---|---|
| $\color{red}\mathrm{RR}$ | $0$ |
| $\mathrm{\color{red}R\color{green}G}$ | $1$ |
| $\mathrm{\color{green}G\color{red}R}$ | $1$ |
| $\color{green}\mathrm{GG}$ | $2$ |

Therefore, $X$ can take on the values $0$, $1$ or $2$.
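As a quick sketch, the same mapping can be written in Python by counting the green balls in each sample point (the helper `X` is illustrative):

```python
# The four sample points of drawing two balls, each either R (red) or G (green).
sample_space = ["RR", "RG", "GR", "GG"]

def X(sample_point):
    # X counts the number of green balls (G) in a sample point.
    return sample_point.count("G")

values = sorted({X(omega) for omega in sample_space})
print(values)  # [0, 1, 2]
```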

# Assigning probabilities to random variables

Just like how we assign probabilities to events, we can do the same to random variables. Consider the following example:

## Assigning probabilities to random variables of drawing balls from a bag

Suppose we draw two balls **with replacement** from a bag containing 2 red and 3 green balls. What is the probability of drawing:

- no red balls?
- one red ball?
- two red balls?

Solution. Let's define random variable $X$ as the number of red balls we draw. The probabilities of interest are:

- $\mathbb{P}(X=0)$ - the probability of drawing no red balls.
- $\mathbb{P}(X=1)$ - the probability of drawing one red ball.
- $\mathbb{P}(X=2)$ - the probability of drawing two red balls.

We can compute these probabilities by referring to the sample space:

| Sample space | $x$ |
|---|---|
| $\color{green}\mathrm{GG}$ | $0$ |
| $\mathrm{\color{red}R\color{green}G}$ | $1$ |
| $\mathrm{\color{green}G\color{red}R}$ | $1$ |
| $\color{red}\mathrm{RR}$ | $2$ |

Let's calculate $\mathbb{P}(X=0)$ and $\mathbb{P}(X=2)$ first. Because we draw with replacement, the two draws are independent, and each draw gives a red ball with probability $\frac{2}{5}$ and a green ball with probability $\frac{3}{5}$:

$$\begin{align*}
\mathbb{P}(X=0)&=\mathbb{P}(\mathrm{\color{green}GG})=\frac{3}{5}\cdot\frac{3}{5}=\frac{9}{25}\\
\mathbb{P}(X=2)&=\mathbb{P}(\mathrm{\color{red}RR})=\frac{2}{5}\cdot\frac{2}{5}=\frac{4}{25}
\end{align*}$$

Next, let's calculate $\mathbb{P}(X=1)$. $X=1$ is true when either outcome $\mathrm{\color{red}R\color{green}G}$ or $\mathrm{\color{green}G\color{red}R}$ occurs. Since $\mathrm{\color{red}R\color{green}G}$ and $\mathrm{\color{green}G\color{red}R}$ are disjoint, we can apply the third axiom of probability to calculate $\mathbb{P}(X=1)$ like so:

$$\mathbb{P}(X=1)=\mathbb{P}(\mathrm{\color{red}R\color{green}G})+\mathbb{P}(\mathrm{\color{green}G\color{red}R})=\frac{2}{5}\cdot\frac{3}{5}+\frac{3}{5}\cdot\frac{2}{5}=\frac{12}{25}$$

Notice the following:

$$\mathbb{P}(X=0)+\mathbb{P}(X=1)+\mathbb{P}(X=2)=\frac{9}{25}+\frac{12}{25}+\frac{4}{25}=1$$

This holds because the probability of the sample space must be equal to one:

$$\mathbb{P}(\Omega)=1$$
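These probabilities can be cross-checked with exact rational arithmetic. Below is a minimal Python sketch, assuming the bag's composition from the problem (2 red, 3 green, drawn with replacement); the name `prob_X` is illustrative:

```python
from fractions import Fraction
from itertools import product

# Bag contains 2 red (R) and 3 green (G) balls; draws are with replacement,
# so each draw independently gives R with probability 2/5 and G with 3/5.
p = {"R": Fraction(2, 5), "G": Fraction(3, 5)}

def prob_X(x):
    # P(X = x), where X counts the red balls over the two draws,
    # computed by summing over the matching sample points.
    return sum(p[a] * p[b]
               for a, b in product("RG", repeat=2)
               if (a + b).count("R") == x)

print(prob_X(0), prob_X(1), prob_X(2))    # 9/25 12/25 4/25
print(prob_X(0) + prob_X(1) + prob_X(2))  # 1
```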

# Types of random variables

There are two types of random variables - discrete and continuous.

## Discrete random variables

A discrete random variable only takes on a countable number of values. The examples we have looked at previously are all discrete random variables:

- the number of times we get a $3$ after rolling a die twice.
- the number of green balls we get after randomly drawing two balls from a bag.

These are discrete random variables because each can take on only a finite - and hence countable - number of values. A discrete random variable may also take on countably infinitely many values, such as $0,1,2,\ldots$

Other examples of discrete random variables are:

- the number of days it rains in some given month.
- the number of people waiting at a particular bus stop.

## Continuous random variables

A continuous random variable takes on an uncountably infinite number of possible values. For instance, let's consider the height of an adult as a random variable $X$. Unlike the discrete case, we cannot enumerate the possible values of $X$ because there are infinitely many of them within any interval - a person could be $170.5\mathrm{cm}$ tall or even $170.0005\mathrm{cm}$ tall.

Examples of continuous random variables include:

- the amount of rain in some given month.
- the length of time we wait for the bus.

# Independence of random variables

If $X$ and $Y$ are two independent random variables, then:

$$\mathbb{P}(X=x\text{ and }Y=y)=\mathbb{P}(X=x)\cdot\mathbb{P}(Y=y)$$

The independence of random variables is analogous to the case when we have two independent events $A$ and $B$ - the probability that both $A$ and $B$ occur is the product of the probability of $A$ and the probability of $B$, that is:

$$\mathbb{P}(A\cap B)=\mathbb{P}(A)\cdot\mathbb{P}(B)$$

## Tossing two coins

Suppose we have a fair coin, and perform the following:

- we toss the coin once and we define random variable $X$ as the number of heads - $X\in\{0,1\}$.
- we toss the coin twice and we define random variable $Y$ as the number of tails - $Y\in\{0,1,2\}$.

Compute $\mathbb{P}(X=1\text{ and }Y=2)$.

Solution. The outcome of a coin toss does not affect the outcome of subsequent tosses. This means that random variables $X$ and $Y$ are independent. Here, $\mathbb{P}(X=1)=\frac{1}{2}$ for a single fair toss, and $\mathbb{P}(Y=2)=\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}$ because both tosses must land tails. Therefore:

$$\mathbb{P}(X=1\text{ and }Y=2)=\mathbb{P}(X=1)\cdot\mathbb{P}(Y=2)=\frac{1}{2}\cdot\frac{1}{4}=\frac{1}{8}$$
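As a quick numeric check of this solution, we can carry out the same factorization with exact fractions (the variable names below are illustrative):

```python
from fractions import Fraction

# One toss: X counts heads, so for a fair coin P(X=1) = 1/2.
p_x1 = Fraction(1, 2)

# Two tosses: Y counts tails, so P(Y=2) = P(TT) = 1/2 * 1/2 = 1/4.
p_y2 = Fraction(1, 2) * Fraction(1, 2)

# Separate tosses are independent, so the joint probability factorizes:
# P(X=1 and Y=2) = P(X=1) * P(Y=2)
joint = p_x1 * p_y2
print(joint)  # 1/8
```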

# Final remarks

The concept of random variables provides the fundamental building block for other important statistical concepts such as probability distribution functions and linear regression. Random variables can be thought of as numeric events that can be either discrete or continuous. The main difference between the two types is that discrete random variables take on a countable number of values whereas continuous random variables take on an uncountable number of values.

We have also briefly explored how we can assign probabilities to random variables. In the next section, we will discuss this further by looking at:

- probability mass functions, which assign probabilities to each possible value of a discrete random variable.
- probability density functions, which assign probabilities to ranges of values of a continuous random variable.