Covariance Matrix

Last updated: Jul 1, 2022

Covariance matrix for two random variables

The covariance matrix for the two random variables $X_1$ and $X_2$ looks like the following:

$$\boldsymbol{\Sigma}=\begin{pmatrix} \mathbb{V}(X_1)&\mathrm{cov}(X_1,X_2)\\ \mathrm{cov}(X_2,X_1)&\mathbb{V}(X_2)\\ \end{pmatrix}$$

Note that $\mathrm{cov}(X_1,X_2)=\mathrm{cov}(X_2,X_1)$.

As we can see, the diagonal entries contain the variances of the random variables, while the off-diagonal entries contain the covariances between them. Since the covariance matrix carries information about both the covariances and the variances, it is sometimes called the variance-covariance matrix.
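
To make this concrete, here is a minimal NumPy sketch (the seed, sample size, and coefficients below are arbitrary choices of ours, not from the original) that estimates this $2\times2$ matrix from simulated data:

```python
import numpy as np

# Simulate two correlated random variables (seed and sizes are arbitrary)
rng = np.random.default_rng(0)
x1 = rng.normal(size=100_000)
x2 = 0.5 * x1 + rng.normal(size=100_000)

# np.cov stacks the two samples and returns their 2x2 covariance matrix
Sigma = np.cov(x1, x2)

# Diagonal entry estimates V(X1); np.cov uses ddof=1 by default,
# so we compare against the matching unbiased variance estimate
print(Sigma[0, 0], np.var(x1, ddof=1))

# Off-diagonal entries are equal: cov(X1, X2) = cov(X2, X1)
print(Sigma[0, 1], Sigma[1, 0])
```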

Extending to higher dimensions

To generalise this to more than two random variables:

$$\boldsymbol{\Sigma}=\mathrm{cov}(\boldsymbol{X})= \begin{pmatrix} \mathbb{V}(X_1)&\mathrm{cov}(X_1,X_2)&\dots&\mathrm{cov}(X_1,X_n)\\ \mathrm{cov}(X_2,X_1)&\mathbb{V}(X_2)&\dots&\mathrm{cov}(X_2,X_n)\\ \vdots&\vdots&\ddots&\vdots\\ \mathrm{cov}(X_n,X_1)&\mathrm{cov}(X_n,X_2)&\dots&\mathbb{V}(X_n)\\ \end{pmatrix}$$

To extend this concept to higher dimensions, we need some linear algebra. Let boldface $\boldsymbol{X}$ denote a random vector, and let subscripted $X_i$ denote a random scalar.

Suppose we have the following random vector and mean vector:

$$\boldsymbol{X}=\begin{pmatrix} X_1\\ \vdots\\ X_n\\ \end{pmatrix} \qquad \boldsymbol{\mu}=\begin{pmatrix} \mu_1\\ \vdots\\ \mu_n\\ \end{pmatrix}$$

where the entries $X_1,\dots,X_n$ are random variables, each with finite variance. The covariance matrix $\boldsymbol{\Sigma}$ is then the matrix whose $(i,j)$-th entry is the covariance:

$$\Sigma_{ij}=\mathrm{cov}(X_i,X_j )=\mathbb{E}[(X_i-\mu_i )(X_j-\mu_j )]=\mathbb{E}(X_i X_j )-\mu_i\mu_j$$

where $\mu_i=\mathbb{E}(X_i)$ and $\mu_j=\mathbb{E}(X_j)$. In matrix form, this translates to the following:

$$\Sigma=\mathrm{cov}(\boldsymbol{X})=\mathbb{E}\left(\boldsymbol{X}\boldsymbol{X}^T\right)-\boldsymbol{\mu}\boldsymbol{\mu}^T$$
NOTE

Trick when converting to matrix form

When we see $v_iv_j$ ranging over indices $i$ and $j$, we can often rewrite the resulting matrix as $\mathbf{v}\mathbf{v}^T$, since the $(i,j)$-th entry of $\mathbf{v}\mathbf{v}^T$ is exactly $v_iv_j$.
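
As a quick illustration of this trick (the vector below is just an example of ours), NumPy's np.outer computes exactly this matrix:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])

# The matrix whose (i, j) entry is v_i * v_j is the outer product v v^T
M = np.outer(v, v)

print(M[0, 2], v[0] * v[2])  # both print 3.0
print(np.allclose(M, M.T))   # v v^T is symmetric, so this prints True
```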

Note that $\boldsymbol{X}\boldsymbol{X}^T$ is a matrix even though $\boldsymbol{X}$ is a vector; recall that the outer product $\mathbf{a}\mathbf{b}^T$ of two column vectors is a matrix. An equivalent statement of the above would be:

$$\boldsymbol{\Sigma}=\mathrm{cov}(X)=\mathbb{E}[\mathbf{X}\mathbf{X}^T ]-\mathbb{E}(\mathbf{X})\mathbb{E}(\mathbf{X})^T$$
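
As a numerical sanity check of this identity (the mean and covariance parameters below are arbitrary choices of ours), we can estimate both sides from the same sample; np.cov is called with ddof=0 so that it uses the same $1/N$ normalization as the plain sample averages:

```python
import numpy as np

rng = np.random.default_rng(42)

# Draw samples from an arbitrary 3-dimensional normal distribution;
# after transposing, rows of X are variables and columns are observations
X = rng.multivariate_normal(
    mean=[0.0, 1.0, 2.0],
    cov=[[2.0, 0.5, 0.0],
         [0.5, 1.0, 0.3],
         [0.0, 0.3, 1.5]],
    size=100_000,
).T

N = X.shape[1]
mu = X.mean(axis=1, keepdims=True)  # sample mean vector, shape (3, 1)

# Right-hand side: E[X X^T] - mu mu^T, estimated with sample averages
Sigma_formula = (X @ X.T) / N - mu @ mu.T

# np.cov with ddof=0 uses the same 1/N normalization
Sigma_numpy = np.cov(X, ddof=0)

print(np.allclose(Sigma_formula, Sigma_numpy))  # True
```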
Published by Isshin Inada