Comprehensive Guide on Sample Covariance

Last updated: Aug 12, 2023
Tags: Probability and Statistics
Definition.

Sample covariance

If $\boldsymbol{x}=(x_1,x_2,\cdots,x_n)$ and $\boldsymbol{y}=(y_1,y_2,\cdots,y_n)$ are a pair of samples, then the sample covariance $s_{xy}$ between $\boldsymbol{x}$ and $\boldsymbol{y}$ is computed as:

$$s_{xy}=\frac{1}{n-1}\sum^n_{i=1} (x_i-\bar{x})(y_i-\bar{y})$$

Where:

  • $n$ is the sample size.

  • $\bar{x}$ is the sample mean of $\boldsymbol{x}$.

  • $\bar{y}$ is the sample mean of $\boldsymbol{y}$.
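
To make the definition concrete, here is a minimal sketch of the formula in Python (the sample vectors below are arbitrary illustrative values, not part of the original guide):

def sample_covariance(x, y):
    # s_xy = (1 / (n - 1)) * sum of (x_i - x_bar) * (y_i - y_bar)
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    return sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / (n - 1)

x = [1, 2, 4, 5]
y = [1, 3, 6, 7]
sample_covariance(x, y)
5.0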

Intuition behind sample covariance

Consider the following 11 data points:

Here, each data point corresponds to an observation $(x_i,y_i)$ in a sample. Let's draw the sample mean of $\boldsymbol{x}$ and the sample mean of $\boldsymbol{y}$ below:

The sample covariance formula is as follows:

$$s_{xy}=\frac{1}{n-1}\sum^n_{i=1} (x_i-\bar{x})(y_i-\bar{y})$$

Basically, the sample covariance involves taking the average of the products $(x_i-\bar{x})(y_i-\bar{y})$ for each point. We can visualize $(x_i-\bar{x})$ and $(y_i-\bar{y})$ like so:

Here, we've focused only on the 1st and 3rd quadrants:

  • for the points in the 1st quadrant (top-right), we can see that both $(x_i-\bar{x})$ and $(y_i-\bar{y})$ are positive, and so their product $(x_i-\bar{x})(y_i-\bar{y})$ will be positive.

  • for the points in the 3rd quadrant, we see that both $(x_i-\bar{x})$ and $(y_i-\bar{y})$ are negative, which means that their product will also be positive.

Let's now focus on the points in the 2nd and 4th quadrants:

Note the following:

  • for the points in the 2nd quadrant (top-left), we see that $(x_i-\bar{x})$ is negative while $(y_i-\bar{y})$ is positive. This means that their product is negative.

  • for the points in the 4th quadrant (bottom-right), we see that $(x_i-\bar{x})$ is positive while $(y_i-\bar{y})$ is negative, which means that their product is also negative.

To summarize, the sign of $(x_i-\bar{x})(y_i-\bar{y})$ will depend on where the point is located:

All the points in the green region contribute to making the sample covariance more positive, while all points in the red region contribute to making the sample covariance more negative.

So far, we've only looked at the sign of $(x_i-\bar{x})(y_i-\bar{y})$, so let's focus now on its magnitude. The product can be interpreted as the area of the rectangle where $(x_i-\bar{x})$ is the width and $(y_i-\bar{y})$ is the height. Below is an example:

Here, we've only drawn 4 rectangles because drawing all the rectangles will make the diagram cluttered. We can see that the rectangle formed by the particular point in the first quadrant is large, which means that this point greatly contributes to making the sample covariance positive.
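
To make the signed-area idea concrete, here is a small sketch (using arbitrary illustrative data, not from the guide) that computes each product $(x_i-\bar{x})(y_i-\bar{y})$ - that is, the signed area of each rectangle:

import numpy as np
x = np.array([1, 2, 4, 5])
y = np.array([1, 3, 6, 7])
dx = x - x.mean()   # widths (x_i - x_bar)
dy = y - y.mean()   # heights (y_i - y_bar)
dx * dy             # signed rectangle areas
array([6.5 , 1.25, 1.75, 5.5 ])

Every product here is positive, so every point pushes the sample covariance upward.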

As we can imagine, the sample covariance would be positive in this case: not only are there more points in the green region, but the products associated with those points are also generally larger in magnitude. If the sample covariance is positive, we say that there is a positive association between $x$ and $y$, which means that as $x$ increases, $y$ tends to increase as well. Again, this should be intuitive from the diagram below:

We end up with a positive association when there are more points in the green region that are far away from the mean origin $(\bar{x},\bar{y})$. Whenever there is a positive association, the line of best fit through the data points will have a positive slope:

If the line with a positive slope fits the data points well, then we say that $x$ and $y$ have a positive linear relationship. In contrast, a negative association may look like the following:

Here, there are more points in the red region that are generally far away from the mean origin $(\bar{x},\bar{y})$, and thus the covariance is negative. Whenever there is negative association, the line of best fit will have a negative slope.
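
Since the slope of the least squares line equals the sample covariance divided by the (always positive) sample variance of $x$, the slope and the covariance always share the same sign. Here's a quick sketch checking this with made-up negatively associated data and NumPy's polyfit(~) method:

import numpy as np
x = np.array([1, 2, 4, 5])
y = np.array([7, 6, 3, 1])               # y tends to decrease as x increases
slope, intercept = np.polyfit(x, y, 1)   # fit a degree-1 polynomial (line of best fit)
print(np.cov(x, y)[0][1])                # -5.0, a negative covariance
print(slope)                             # about -1.5, a negative slope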

Zero association may look like the following:

Here, there are roughly the same number of points in the green and red regions with comparable magnitudes, so the covariance is approximately zero. In these cases, the line of best fit will be approximately horizontal.

In some cases, our data points may not look linear at all:

The covariance in these cases is typically near zero as well.
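
For example, a perfectly symmetric quadratic pattern has a sample covariance of exactly zero even though $y$ depends strongly on $x$. Here's a small sketch (with illustrative data) demonstrating this:

import numpy as np
x = np.array([-2, -1, 0, 1, 2])
y = x ** 2                   # strong relationship, but not a linear one
print(np.cov(x, y)[0][1])    # 0.0 - the covariance misses the pattern entirely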

Example.

Computing sample covariance by hand

Consider the following dataset:

$x$: 1, 2, 4, 5
$y$: 1, 3, 6, 7

Compute the sample covariance of $\boldsymbol{x}$ and $\boldsymbol{y}$.

Solution. We have $n=4$ pairs of observations. Let's start by computing the sample means $\bar{x}$ and $\bar{y}$, which are required when computing the covariance:

$$\begin{align*} \bar{x}&=\frac{1}{4}(1+2+4+5)=3\\ \bar{y}&=\frac{1}{4}(1+3+6+7)=4.25\\ \end{align*}$$

The sample covariance is computed as:

$$\begin{align*} s_{xy}&=\frac{1}{n-1}\sum^n_{i=1} (x_i-\bar{x})(y_i-\bar{y})\\ &=\frac{1}{3} \Big[(1-3)(1-4.25)+(2-3)(3-4.25)+(4-3)(6-4.25)+(5-3)(7-4.25)\Big]\\ &=5 \end{align*}$$
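
Evaluating each of the four products explicitly:

$$\begin{align*} s_{xy}&=\frac{1}{3}\Big[(-2)(-3.25)+(-1)(-1.25)+(1)(1.75)+(2)(2.75)\Big]\\ &=\frac{1}{3}(6.5+1.25+1.75+5.5)\\ &=\frac{15}{3}\\ &=5 \end{align*}$$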

This means that $x$ and $y$ are positively associated as confirmed by the graph below:

We can see that as $x$ increases, $y$ increases as well - this is what a positive association is!

Why sample covariance is not usually computed

The sample covariance tells us the association between two variables. However, the value that the sample covariance takes is not bounded and is heavily affected by the scale of the sample. For instance, consider some data points about people's weight and height. Let's draw two plots with different units:

(Two scatter plots of the same data points, one labeled "Small covariance" and the other labeled "High covariance".)

The pattern of the data points is identical regardless of whether we choose kilograms/grams and meters/centimeters. However, notice how the covariance is much larger on the right because the scale is bigger. This is quite misleading because we would think that the higher covariance is caused by a stronger positive association, but this is clearly not the case - the culprit here is the scale of the variables. It is therefore meaningless to compare covariances across multiple pairs of variables because their scales might be different.

We typically normalize the covariance such that the measure is no longer dependent on the scale of the variables. This normalized version of the covariance is called the correlation, and unlike the covariance that is unbounded, the correlation is bounded between $-1$ and $1$. Please consult our comprehensive guide on correlation for the details!
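
Here's a quick sketch of the scale effect using made-up height/weight data - rescaling the units inflates the covariance by the product of the scaling factors, while the correlation (computed here with NumPy's corrcoef(~) method) is unaffected:

import numpy as np
height_m = np.array([1.60, 1.70, 1.75, 1.80, 1.90])   # heights in meters
weight_kg = np.array([55, 68, 70, 80, 90])            # weights in kilograms
print(np.cov(height_m, weight_kg)[0][1])                    # about 1.46 (meters/kilograms)
print(np.cov(height_m * 100, weight_kg * 1000)[0][1])       # about 146250 - same data in centimeters/grams
print(np.corrcoef(height_m, weight_kg)[0][1])               # about 0.99
print(np.corrcoef(height_m * 100, weight_kg * 1000)[0][1])  # about 0.99 - unchanged by the rescaling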

Why we divide by n-1 instead of n

Just like sample variance, we divide by $n-1$ instead of $n$ for the sample covariance. The reasoning is the same - dividing by $n-1$ leads to an unbiased estimator for the population covariance.
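
To see the bias empirically, here is a small simulation sketch (the population parameters below are chosen arbitrarily): averaging many sample covariances computed with the $n-1$ divisor recovers the true covariance, while the $n$ divisor systematically underestimates it.

import numpy as np
rng = np.random.default_rng(42)
true_cov = 2.0
pop_cov = [[4.0, true_cov], [true_cov, 3.0]]   # population covariance matrix of (X, Y)
n, trials = 5, 50_000
unbiased, biased = [], []
for _ in range(trials):
    x, y = rng.multivariate_normal([0, 0], pop_cov, size=n).T
    unbiased.append(np.cov(x, y)[0][1])            # divides by n - 1
    biased.append(np.cov(x, y, bias=True)[0][1])   # divides by n
print(np.mean(unbiased))   # close to 2.0
print(np.mean(biased))     # close to (n - 1)/n * 2.0 = 1.6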

Theorem.

Unbiased estimator for population covariance

The sample covariance $S_{XY}$ is an unbiased estimator for the population covariance $\sigma_{XY}=\text{cov}(X,Y)$, that is:

$$\mathbb{E}(S_{XY})=\sigma_{XY}$$

Proof. We begin with the definition of sample covariance:

$$\begin{align*} S_{XY}&=\frac{1}{n-1}\sum^n_{i=1} (X_i-\bar{X})(Y_i-\bar{Y})\\ &=\frac{1}{n-1}\sum^n_{i=1} (X_iY_i-X_i\bar{Y}-\bar{X}Y_i+\bar{X}\bar{Y})\\ &=\frac{1}{n-1}\Big[\sum^n_{i=1} (X_iY_i)-\bar{Y}\sum^n_{i=1}(X_i)-\bar{X}\sum^n_{i=1}(Y_i)+n\bar{X}\bar{Y}\Big]\\ &=\frac{1}{n-1}\Big[\sum^n_{i=1} (X_iY_i)-\frac{1}{n}\sum^n_{i=1}(Y_i)\sum^n_{i=1}(X_i)-\frac{1}{n}\sum^n_{i=1}(X_i)\sum^n_{i=1}(Y_i)+ \frac{1}{n}\sum^n_{i=1}(X_i)\sum^n_{i=1}(Y_i)\Big]\\ &=\frac{1}{n-1}\Big[\sum^n_{i=1} (X_iY_i)-\frac{1}{n}\sum^n_{i=1}(Y_i)\sum^n_{i=1}(X_i)\Big]\\ \end{align*}$$

Now, taking the expected value of both sides:

$$\begin{equation}\label{eq:LiGvwVUlwjBZZGOSvvu} \begin{aligned}[b] \mathbb{E}(S_{XY})&=\mathbb{E}\Big[\frac{1}{n-1}\Big(\sum^n_{i=1} (X_iY_i)-\frac{1}{n}\sum^n_{i=1}(Y_i)\sum^n_{i=1}(X_i)\Big)\Big]\\ &=\frac{1}{n-1}\Big[\Big(\sum^n_{i=1}{\color{green}\mathbb{E}(X_iY_i)}\Big) -\frac{1}{n}{\color{red}\mathbb{E}\Big(\sum^n_{i=1}(Y_i)\sum^n_{i=1}(X_i)\Big)}\Big]\\ \end{aligned} \end{equation}$$

Now, recall the computational form of covariance:

$$\begin{equation}\label{eq:fLUfGdvlpmKMguNWq9J} \mathrm{cov}(X,Y)=\mathbb{E}\left(XY\right)-\mathbb{E}(X)\cdot\mathbb{E}(Y) \end{equation}$$

This can be rewritten as:

$$\begin{equation}\label{eq:dXuLJecqNKkBZtHXzzH} \mathbb{E}\left(XY\right)=\mathrm{cov}(X,Y)+\mathbb{E}(X)\cdot\mathbb{E}(Y) \end{equation}$$

Let's apply \eqref{eq:dXuLJecqNKkBZtHXzzH} to the green term in \eqref{eq:LiGvwVUlwjBZZGOSvvu}:

$$\begin{equation}\label{eq:F1XIzvILfeqoGkzXYzI} \begin{aligned}[b] {\color{green}\mathbb{E}\left(X_iY_i\right)} &=\mathrm{cov}(X_i,Y_i)+\mathbb{E}(X_i)\cdot\mathbb{E}(Y_i)\\ &=\sigma_{XY}+\mu_X\mu_Y\\ \end{aligned} \end{equation}$$

Let's now apply \eqref{eq:dXuLJecqNKkBZtHXzzH} to the red term in \eqref{eq:LiGvwVUlwjBZZGOSvvu}:

$$\begin{equation}\label{eq:gtgRyN6yqUB2vYcn5SG} \begin{aligned}[b] {\color{red}\mathbb{E}\left(\sum^n_{i=1}X_i\sum^n_{i=1}Y_i\right)} &=\mathrm{cov}\Big(\sum^n_{i=1}X_i,\sum^n_{i=1}Y_i\Big) +\mathbb{E}\Big(\sum^n_{i=1}X_i\Big)\cdot\mathbb{E}\Big(\sum^n_{i=1}Y_i\Big)\\ &=\mathrm{cov}\Big(\sum^n_{i=1}X_i,\sum^n_{i=1}Y_i\Big) +\Big[\sum^n_{i=1}\mathbb{E}(X_i)\Big]\cdot\Big[\sum^n_{i=1}\mathbb{E}(Y_i)\Big]\\ &=\mathrm{cov}\Big(\sum^n_{i=1}X_i,\sum^n_{i=1}Y_i\Big) +\Big(\sum^n_{i=1}\mu_X\Big)\cdot\Big(\sum^n_{i=1}\mu_Y\Big)\\ &=\mathrm{cov}\Big(\sum^n_{i=1}X_i,\sum^n_{i=1}Y_i\Big) +(n\mu_X)\cdot(n\mu_Y)\\ &=\mathrm{cov}\Big(\sum^n_{i=1}X_i,\sum^n_{i=1}Y_i\Big) +n^2\mu_X\mu_Y\\ \end{aligned} \end{equation}$$

Now, using the bilinearity of covariance, we can take the summation signs outside:

$$\begin{equation}\label{eq:mWv1fj3A4F3Cz5AEoL6} \begin{aligned}[b] {\color{red}\mathbb{E}\left(\sum^n_{i=1}X_i\sum^n_{i=1}Y_i\right)} &= \Big[\sum^n_{i=1}\sum^n_{j=1}\mathrm{cov}\Big(X_i,Y_j\Big)\Big] +n^2\mu_X\mu_Y\\ \end{aligned} \end{equation}$$

Notice that $\text{cov}(X_i,Y_j)=0$ when $i\ne{j}$: since the pairs $(X_i,Y_i)$ are drawn independently of one another, $X_1$ and $Y_2$ are independent (and hence have zero covariance), whereas $X_1$ and $Y_1$ belong to the same pair and are generally dependent. Therefore, we have that:

$$\begin{equation}\label{eq:bJvCPML5bpLjKdNEnT1} \begin{aligned}[b] {\color{red}\mathbb{E}\left(\sum^n_{i=1}X_i\sum^n_{i=1}Y_i\right)} &=\Big[\sum^n_{i=1}\mathrm{cov}\Big(X_i,Y_i\Big)\Big] +n^2\mu_X\mu_Y\\ &=\Big(\sum^n_{i=1}\sigma_{XY}\Big) +n^2\mu_X\mu_Y\\ &=n\sigma_{XY} +n^2\mu_X\mu_Y\\ \end{aligned} \end{equation}$$

Substituting the green \eqref{eq:F1XIzvILfeqoGkzXYzI} and red \eqref{eq:bJvCPML5bpLjKdNEnT1} components back into \eqref{eq:LiGvwVUlwjBZZGOSvvu} gives:

$$\begin{align*} \mathbb{E}(S_{XY})&= \frac{1}{n-1}\Big[\Big(\sum^n_{i=1} (\sigma_{XY}+\mu_X\mu_Y)\Big)-\frac{1}{n}(n\sigma_{XY}+n^2\mu_X\mu_Y)\Big]\\ &=\frac{1}{n-1}\Big( n\sigma_{XY}+n\mu_X\mu_Y-(\sigma_{XY}+n\mu_X\mu_Y)\Big)\\ &=\frac{1}{n-1}\Big( n\sigma_{XY}-\sigma_{XY}\Big)\\ &=\frac{1}{n-1}\Big[\sigma_{XY}(n-1)\Big]\\ &=\sigma_{XY}\\ \end{align*}$$

This completes the proof that the sample covariance is an unbiased estimator for the population covariance of $X$ and $Y$.

Computing sample covariance using Python

We can easily compute the sample covariance using Python's numpy library. Suppose we have the same dataset as earlier:

$$\begin{align*} \boldsymbol{x}&=(1,2,4,5)\\ \boldsymbol{y}&=(1,3,6,7) \end{align*}$$

The sample covariance can be computed like so:

import numpy as np
x = [1,2,4,5]
y = [1,3,6,7]
cov_matrix = np.cov(x,y) # uses unbiased estimator (divide by n-1 instead of n)
cov_matrix
array([[3.33333333, 5.        ],
       [5.        , 7.58333333]])

Here, NumPy's cov(~) method returns a covariance matrix - a symmetric matrix whose diagonal entries are the sample variances of $\boldsymbol{x}$ and $\boldsymbol{y}$, and whose off-diagonal entries are the sample covariance. To extract the covariance value, use NumPy's [~] syntax:

cov_matrix[0][1]
5.0

This is exactly what we got earlier when we computed the sample covariance by hand!
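
As an aside, NumPy's cov(~) method also accepts a bias parameter - passing bias=True switches the divisor from $n-1$ to $n$:

np.cov(x, y, bias=True)[0][1]
3.75

Here we get $15/4=3.75$ instead of $15/3=5$.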

Published by Isshin Inada