# Comprehensive Guide on Diagonal Matrices and their Basic Properties

Last updated: Aug 11, 2023

Tags: Linear Algebra
Definition.

# Diagonal matrix

A diagonal matrix is a square matrix whose non-diagonal entries are all zero. For instance, here is a $3\times3$ diagonal matrix:

$$\boldsymbol{D}=\begin{pmatrix} 2&0&0\\ 0&6&0\\ 0&0&4 \end{pmatrix}$$

Diagonal matrices are usually denoted by a bold uppercase letter $\boldsymbol{D}$.
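If you would like to follow along in code, a diagonal matrix like the one above can be built directly from its diagonal entries. Here is a minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Construct the 3x3 diagonal matrix D from its diagonal entries
D = np.diag([2, 6, 4])

print(D)
# [[2 0 0]
#  [0 6 0]
#  [0 0 4]]
```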

Example.

## Identity matrix

The identity matrix $\boldsymbol{I}_n$ is a classic example of a diagonal matrix. Here's the $3\times3$ identity matrix:

$$\boldsymbol{I}_3=\begin{pmatrix} 1&0&0\\ 0&1&0\\ 0&0&1 \end{pmatrix}$$
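The identity matrix is also easy to generate programmatically, for instance with NumPy's `np.eye` (a quick sketch, assuming NumPy):

```python
import numpy as np

# The 3x3 identity matrix: ones on the diagonal, zeros elsewhere
I3 = np.eye(3, dtype=int)

print(I3)
# [[1 0 0]
#  [0 1 0]
#  [0 0 1]]
```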
Theorem.

# Transpose of a diagonal matrix equals itself

If $\boldsymbol{D}$ is a diagonal matrix, then:

$$\boldsymbol{D}^T=\boldsymbol{D}$$

Proof. Taking the transpose of a square matrix does not change its diagonal entries. Since a diagonal matrix is a square matrix whose non-diagonal entries are all zero, transposing it only swaps zero entries with zero entries, so the transpose of a diagonal matrix equals itself. This completes the proof.
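This property is easy to confirm numerically (a sketch, assuming NumPy):

```python
import numpy as np

D = np.diag([2, 6, 4])

# Transposing a diagonal matrix leaves it unchanged
assert (D.T == D).all()
```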

Theorem.

# Product of a matrix and a diagonal matrix

Consider an $m\times{n}$ matrix $\boldsymbol{A}$ and an $n\times{n}$ diagonal matrix $\boldsymbol{D}$ below:

$$\boldsymbol{A} = \begin{pmatrix} \vert&\vert&\cdots&\vert\\ \boldsymbol{a_1}&\boldsymbol{a_2}&\cdots&\boldsymbol{a_n}\\ \vert&\vert&\cdots&\vert\\ \end{pmatrix},\;\;\;\;\; \boldsymbol{D}= \begin{pmatrix} d_{11}&0&\cdots&0\\ 0&d_{22}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&d_{nn}\\ \end{pmatrix}$$

where the columns of matrix $\boldsymbol{A}$ are the vectors $\boldsymbol{a}_1$, $\boldsymbol{a}_2$, $\cdots$, $\boldsymbol{a}_n$.

The product $\boldsymbol{AD}$ is:

$$\boldsymbol{AD} = \begin{pmatrix} \vert&\vert&\cdots&\vert\\ d_{11}\boldsymbol{a_1}& d_{22}\boldsymbol{a_2} &\cdots& d_{nn}\boldsymbol{a_n}\\ \vert&\vert&\cdots&\vert\\ \end{pmatrix}$$

Proof. Let matrix $\boldsymbol{A}$ be represented as:

$$\boldsymbol{A}= \begin{pmatrix} \vert&\vert&\cdots&\vert\\ \boldsymbol{a_1}&\boldsymbol{a_2}&\cdots&\boldsymbol{a_n}\\ \vert&\vert&\cdots&\vert\\ \end{pmatrix}= \begin{pmatrix} a_{11}&a_{12}&\cdots&a_{1n}\\ a_{21}&a_{22}&\cdots&a_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ a_{m1}&a_{m2}&\cdots&a_{mn} \end{pmatrix}$$

The product $\boldsymbol{AD}$ is:

\begin{align*} \boldsymbol{AD}&=\begin{pmatrix} a_{11}&a_{12}&\cdots&a_{1n}\\ a_{21}&a_{22}&\cdots&a_{2n}\\ \vdots&\vdots&\smash\ddots&\vdots\\ a_{m1}&a_{m2}&\cdots&a_{mn} \end{pmatrix} \begin{pmatrix} d_{11}&0&\cdots&0\\ 0&d_{22}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&d_{nn}\\ \end{pmatrix}\\ &=\begin{pmatrix} a_{11}d_{11}&a_{12}d_{22}&\cdots&a_{1n}d_{nn}\\ a_{21}d_{11}&a_{22}d_{22}&\cdots&a_{2n}d_{nn}\\ \vdots&\vdots&\smash\ddots&\vdots\\ a_{m1}d_{11}&a_{m2}d_{22}&\cdots&a_{mn}d_{nn} \end{pmatrix}\\ &=\begin{pmatrix} \vert&\vert&\cdots&\vert\\ d_{11}\boldsymbol{a}_1& d_{22}\boldsymbol{a}_2 &\cdots& d_{nn}\boldsymbol{a}_n\\ \vert&\vert&\cdots&\vert\\ \end{pmatrix} \end{align*}

This completes the proof.

Example.

## Computing the product of a matrix and a diagonal matrix

Compute the following matrix product:

$$\begin{pmatrix} 1&4\\ 5&6\\ \end{pmatrix} \begin{pmatrix} 2&0\\ 0&3\\ \end{pmatrix}$$

Solution. Let $\boldsymbol{A}$ denote the left matrix. We multiply the first column of $\boldsymbol{A}$ by $2$ and the second column by $3$ to get:

$$\begin{pmatrix} 1&4\\ 5&6\\ \end{pmatrix} \begin{pmatrix} 2&0\\ 0&3\\ \end{pmatrix}= \begin{pmatrix} 2&12\\ 10&18\\ \end{pmatrix}$$
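The column-scaling rule for $\boldsymbol{AD}$ can be checked against a full matrix product. A minimal sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[1, 4],
              [5, 6]])
D = np.diag([2, 3])

# Full matrix product
AD = A @ D

# Column-scaling shortcut: column j of A times d_jj (broadcast over columns)
AD_scaled = A * np.array([2, 3])

print(AD)
# [[ 2 12]
#  [10 18]]
assert (AD == AD_scaled).all()
```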
Theorem.

# Product of a diagonal matrix and a matrix

Consider an $n\times{m}$ matrix $\boldsymbol{A}$ and an $n\times{n}$ diagonal matrix $\boldsymbol{D}$ below:

$$\boldsymbol{A}=\begin{pmatrix} a_{11}&a_{12}&\cdots&a_{1m}\\ a_{21}&a_{22}&\cdots&a_{2m}\\ \vdots&\vdots&\ddots&\vdots\\ a_{n1}&a_{n2}&\cdots&a_{nm} \end{pmatrix}= \begin{pmatrix} -&\boldsymbol{a}_1&-\\ -&\boldsymbol{a}_2&-\\ \vdots&\vdots&\vdots\\ -&\boldsymbol{a}_n&-\\ \end{pmatrix} ,\;\;\;\;\;\; \boldsymbol{D}=\begin{pmatrix} d_{11}&0&\cdots&0\\ 0&d_{22}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&d_{nn}\\ \end{pmatrix}$$

where the rows of matrix $\boldsymbol{A}$ are represented by vectors $\boldsymbol{a}_1$, $\boldsymbol{a}_2$, $\cdots$, $\boldsymbol{a}_n$.

The product $\boldsymbol{DA}$ is:

$$\boldsymbol{DA}= \begin{pmatrix} -&d_{11}\boldsymbol{a}_1&-\\ -&d_{22}\boldsymbol{a}_2&-\\ \vdots&\vdots&\vdots\\ -&d_{nn}\boldsymbol{a}_n&-\\ \end{pmatrix}$$

Proof. Let $\boldsymbol{A}$ be the $n\times{m}$ matrix above, represented as a collection of row vectors, and let $\boldsymbol{D}$ be the $n\times{n}$ diagonal matrix:

\begin{align*} \boldsymbol{DA}&= \begin{pmatrix} d_{11}&0&\cdots&0\\ 0&d_{22}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&d_{nn}\\ \end{pmatrix}\begin{pmatrix} a_{11}&a_{12}&\cdots&a_{1m}\\ a_{21}&a_{22}&\cdots&a_{2m}\\ \vdots&\vdots&\smash\ddots&\vdots\\ a_{n1}&a_{n2}&\cdots&a_{nm} \end{pmatrix}\\ &=\begin{pmatrix} d_{11}a_{11}&d_{11}a_{12}&\cdots&d_{11}a_{1m}\\ d_{22}a_{21}&d_{22}a_{22}&\cdots&d_{22}a_{2m}\\ \vdots&\vdots&\ddots&\vdots\\ d_{nn}a_{n1}&d_{nn}a_{n2}&\cdots&d_{nn}a_{nm} \end{pmatrix}\\ &=\begin{pmatrix} -&d_{11}\boldsymbol{a}_1&-\\ -&d_{22}\boldsymbol{a}_2&-\\ \vdots&\vdots&\vdots\\ -&d_{nn}\boldsymbol{a}_n&-\\ \end{pmatrix} \end{align*}

This completes the proof.

Example.

## Finding the product of a diagonal matrix and a matrix

Compute the following matrix product:

$$\begin{pmatrix} 3&0&0\\0&2&0\\0&0&1 \end{pmatrix} \begin{pmatrix} 5&2&4\\6&3&1\\1&0&2 \end{pmatrix}$$

Solution. We multiply the first row of the right matrix by $3$, the second row by $2$ and the third row by $1$:

\begin{align*} \begin{pmatrix} 3&0&0\\0&2&0\\0&0&1 \end{pmatrix} \begin{pmatrix} 5&2&4\\6&3&1\\1&0&2 \end{pmatrix}&= \begin{pmatrix} (3)5&(3)2&(3)4\\(2)6&(2)3&(2)1\\ (1)1&(1)0&(1)2 \end{pmatrix}\\ &= \begin{pmatrix} 15&6&12\\12&6&2\\ 1&0&2 \end{pmatrix} \end{align*}
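The row-scaling rule for $\boldsymbol{DA}$ can likewise be verified numerically (a sketch, assuming NumPy):

```python
import numpy as np

D = np.diag([3, 2, 1])
A = np.array([[5, 2, 4],
              [6, 3, 1],
              [1, 0, 2]])

# Full matrix product
DA = D @ A

# Row-scaling shortcut: row i of A times d_ii (broadcast over rows)
DA_scaled = np.array([3, 2, 1])[:, None] * A

assert (DA == DA_scaled).all()
print(DA)
# [[15  6 12]
#  [12  6  2]
#  [ 1  0  2]]
```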
Theorem.

# Taking the power of diagonal matrices

Consider the following $n\times{n}$ diagonal matrix:

$$\boldsymbol{D}=\begin{pmatrix} d_{11}&0&\cdots&0\\ 0&d_{22}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&d_{nn}\\ \end{pmatrix}$$

Raising $\boldsymbol{D}$ to the power of some positive integer $k$ involves raising the diagonal entries to the power of $k$, that is:

$$\boldsymbol{D}^k=\begin{pmatrix} d_{11}^k&0&\cdots&0\\ 0&d_{22}^k&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&d_{nn}^k\\ \end{pmatrix}$$

Proof. We prove this by induction. Consider the base case when $k=1$, which is trivially true:

$$\boldsymbol{D}^1= \boldsymbol{D}=\begin{pmatrix} d_{11}&0&\cdots&0\\ 0&d_{22}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&d_{nn}\\ \end{pmatrix}=\begin{pmatrix} d_{11}^1&0&\cdots&0\\ 0&d_{22}^1&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&d_{nn}^1\\ \end{pmatrix}$$

We now assume the theorem holds when the power is raised to $k-1$, that is:

$$\label{eq:xWOtz2qPTDfaXzfTIsH} \boldsymbol{D}^{k-1}=\begin{pmatrix} d_{11}^{k-1}&0&\cdots&0\\ 0&d_{22}^{k-1}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&d_{nn}^{k-1} \end{pmatrix}$$

Our goal is to show that the theorem holds when the power is raised to $k$. This is quite easy because:

\begin{align*} \boldsymbol{D}^k&= \boldsymbol{D}^{k-1}\boldsymbol{D} \end{align*}

We now use the inductive assumption \eqref{eq:xWOtz2qPTDfaXzfTIsH} to get:

\begin{align*} \boldsymbol{D}^k &=\boldsymbol{D}^{k-1}\boldsymbol{D}\\ &=\begin{pmatrix} d_{11}^{k-1}&0&\cdots&0\\ 0&d_{22}^{k-1}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&d_{nn}^{k-1} \end{pmatrix} \begin{pmatrix} d_{11}&0&\cdots&0\\ 0&d_{22}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&d_{nn} \end{pmatrix}\\ &= \begin{pmatrix} d_{11}^k&0&\cdots&0\\ 0&d_{22}^k&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&d_{nn}^k \end{pmatrix} \end{align*}

By the principle of mathematical induction, the theorem holds for the general case. This completes the proof.

Example.

## Computing the power of a 2×2 diagonal matrix

Consider the following diagonal matrix:

$$\boldsymbol{D}=\begin{pmatrix} 2&0\\ 0&1\\ \end{pmatrix}$$

Compute $\boldsymbol{D}^3$.

Solution. $\boldsymbol{D}^3$ can easily be computed by raising each diagonal entry to the power of $3$ like so:

\begin{align*} \boldsymbol{D}^3&= \begin{pmatrix} 2^3&0\\ 0&1^3\\ \end{pmatrix}\\ &= \begin{pmatrix} 8&0\\ 0&1\\ \end{pmatrix} \end{align*}

This is a very neat property of diagonal matrices because taking powers of numbers is computationally much cheaper than matrix multiplication!
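The saving is easy to see in code: raising the diagonal entries elementwise agrees with repeated matrix multiplication (a sketch, assuming NumPy):

```python
import numpy as np

D = np.diag([2, 1])

# Power via repeated matrix multiplication
D3_matmul = np.linalg.matrix_power(D, 3)

# Power via elementwise exponentiation of the diagonal entries
D3_diag = np.diag(np.diag(D) ** 3)

assert (D3_matmul == D3_diag).all()
print(D3_diag)
# [[8 0]
#  [0 1]]
```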

Theorem.

# Diagonal matrix is a triangular matrix

If $\boldsymbol{D}$ is a diagonal matrix, then $\boldsymbol{D}$ is both a lower and upper triangular matrix.

Proof. Diagonal matrix $\boldsymbol{D}$ is a lower triangular matrix because all the values above the diagonal entries are zero. Similarly, $\boldsymbol{D}$ is also an upper triangular matrix because all the values below the diagonal entries are zero. This completes the proof.

Theorem.

# Determinant of a diagonal matrix is equal to the product of its diagonal entries

If $\boldsymbol{D}$ is a diagonal matrix, then the determinant of $\boldsymbol{D}$ is equal to the product of its diagonal entries.

Proof. A diagonal matrix is triangular, and the determinant of a triangular matrix is the product of its diagonal entries. This completes the proof.
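A quick numerical check of this property (a sketch, assuming NumPy):

```python
import numpy as np

D = np.diag([2.0, 6.0, 4.0])

# The determinant equals the product of the diagonal entries: 2 * 6 * 4 = 48
assert np.isclose(np.linalg.det(D), 48.0)
```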

Theorem.

# Diagonal matrix is invertible if and only if every diagonal entry is non-zero

Let $\boldsymbol{D}$ be a diagonal matrix. $\boldsymbol{D}$ is invertible if and only if every diagonal entry of $\boldsymbol{D}$ is non-zero.

Proof. A diagonal matrix is triangular, and a triangular matrix is invertible if and only if all of its diagonal entries are non-zero. This completes the proof.

Theorem.

# Product of two diagonal matrices is also diagonal

If $\boldsymbol{D}_1$ and $\boldsymbol{D}_2$ are $n\times{n}$ diagonal matrices, then their product $\boldsymbol{D}_1\boldsymbol{D}_2$ is a diagonal matrix whose diagonal entries are the pairwise products of the diagonal entries of $\boldsymbol{D}_1$ and $\boldsymbol{D}_2$.

Proof. Let $\boldsymbol{D}_1$ and $\boldsymbol{D}_2$ be the following diagonal matrices:

$$\boldsymbol{D}_1= \begin{pmatrix} a_{11}&0&\cdots&0\\ 0&a_{22}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&a_{nn} \end{pmatrix},\;\;\;\;\;\; \boldsymbol{D}_2= \begin{pmatrix} b_{11}&0&\cdots&0\\ 0&b_{22}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&b_{nn} \end{pmatrix}$$

The product $\boldsymbol{D}_1\boldsymbol{D}_2$ is:

\begin{align*} \boldsymbol{D}_1\boldsymbol{D}_2&= \begin{pmatrix} a_{11}&0&\cdots&0\\ 0&a_{22}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&a_{nn} \end{pmatrix} \begin{pmatrix} b_{11}&0&\cdots&0\\0&b_{22}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&b_{nn} \end{pmatrix}\\ &= \begin{pmatrix} a_{11}b_{11}&0&\cdots&0\\0&a_{22}b_{22}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&a_{nn}b_{nn} \end{pmatrix} \end{align*}

This completes the proof.
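This pairwise-product rule can be confirmed numerically (a sketch, assuming NumPy):

```python
import numpy as np

D1 = np.diag([2, 5, 7])
D2 = np.diag([3, 4, 6])

# The product is diagonal, with pairwise products on the diagonal
product = D1 @ D2
expected = np.diag([2 * 3, 5 * 4, 7 * 6])

assert (product == expected).all()
```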

Theorem.

# Product of a triangular matrix and a diagonal matrix

Let $\boldsymbol{A}$ be a triangular matrix and $\boldsymbol{D}$ be a diagonal matrix:

• if $\boldsymbol{A}$ is upper triangular, then $\boldsymbol{AD}$ and $\boldsymbol{DA}$ are upper triangular matrices.

• if $\boldsymbol{A}$ is lower triangular, then $\boldsymbol{AD}$ and $\boldsymbol{DA}$ are lower triangular matrices.

Note that the diagonals of $\boldsymbol{AD}$ and $\boldsymbol{DA}$ are equal to the pairwise products of the diagonal entries of $\boldsymbol{A}$ and $\boldsymbol{D}$.

Proof. Let $\boldsymbol{A}$ be an upper triangular matrix and $\boldsymbol{D}$ be a diagonal matrix. Since $\boldsymbol{D}$ is also upper triangular, and the product of two upper triangular matrices is upper triangular, both $\boldsymbol{AD}$ and $\boldsymbol{DA}$ are upper triangular. Moreover, the diagonal entries of a product of two triangular matrices are the pairwise products of their diagonal entries, so the diagonals of $\boldsymbol{AD}$ and $\boldsymbol{DA}$ are as claimed. The proof for the lower triangular case is analogous. This completes the proof.

Theorem.

# Inverse of a diagonal matrix

Let $\boldsymbol{D}$ be a diagonal matrix:

$$\boldsymbol{D}= \begin{pmatrix} d_{11}&0&\cdots&0\\ 0&d_{22}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&d_{nn} \end{pmatrix}$$

If every diagonal entry of $\boldsymbol{D}$ is non-zero, then $\boldsymbol{D}^{-1}$ is computed by:

$$\boldsymbol{D}^{-1}= \begin{pmatrix} \frac{1}{d_{11}}&0&\cdots&0\\ 0&\frac{1}{d_{22}}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&\frac{1}{d_{nn}} \end{pmatrix}$$

This also means that the inverse of a diagonal matrix is itself diagonal.

Proof. Suppose we have an $n\times{n}$ diagonal matrix $\boldsymbol{D}$ and another matrix $\boldsymbol{A}$ below:

\begin{align*} \boldsymbol{D}= \begin{pmatrix} d_{11}&0&\cdots&0\\ 0&d_{22}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&d_{nn} \end{pmatrix},\;\;\;\;\;\;\; \boldsymbol{A}=\begin{pmatrix} \frac{1}{d_{11}}&0&\cdots&0\\ 0&\frac{1}{d_{22}}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&\frac{1}{d_{nn}} \end{pmatrix} \end{align*}

Our goal is to show that $\boldsymbol{A}=\boldsymbol{D}^{-1}$. The product $\boldsymbol{DA}$ is:

\begin{align*} \boldsymbol{D}\boldsymbol{A}= \begin{pmatrix} d_{11}&0&\cdots&0\\ 0&d_{22}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&d_{nn} \end{pmatrix}\begin{pmatrix} \frac{1}{d_{11}}&0&\cdots&0\\ 0&\frac{1}{d_{22}}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&\frac{1}{d_{nn}} \end{pmatrix} \end{align*}

By the earlier theorem on products of diagonal matrices, the product is obtained by multiplying the corresponding diagonal entries:

$$\boldsymbol{D}\boldsymbol{A}= \begin{pmatrix} \frac{d_{11}}{d_{11}}&0&\cdots&0\\ 0&\frac{d_{22}}{d_{22}}&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&\frac{d_{nn}}{d_{nn}} \end{pmatrix}= \begin{pmatrix} 1&0&\cdots&0\\ 0&1&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&1 \end{pmatrix}=\boldsymbol{I}_n$$

Because $\boldsymbol{DA}=\boldsymbol{I}_n$, we have that $\boldsymbol{D}^{-1}=\boldsymbol{A}$ by the definition of inverse matrices. This completes the proof.

Example.

## Finding the inverse of a diagonal matrix

Find the inverse of the following diagonal matrix:

$$\boldsymbol{D}= \begin{pmatrix} 3&0&0\\ 0&2&0\\ 0&0&1\\ \end{pmatrix}$$

Solution. The inverse of $\boldsymbol{D}$ is a diagonal matrix whose diagonal entries are the reciprocals of the diagonal entries of $\boldsymbol{D}$:

$$\boldsymbol{D}^{-1}= \begin{pmatrix} 1/3&0&0\\ 0&1/2&0\\ 0&0&1\\ \end{pmatrix}$$
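The reciprocal rule can be checked against a general-purpose inverse and against the defining property $\boldsymbol{D}\boldsymbol{D}^{-1}=\boldsymbol{I}$ (a sketch, assuming NumPy):

```python
import numpy as np

D = np.diag([3.0, 2.0, 1.0])

# Inverse via reciprocals of the diagonal entries
D_inv = np.diag(1.0 / np.diag(D))

# Check against a general-purpose inverse and the defining property D @ D_inv = I
assert np.allclose(D_inv, np.linalg.inv(D))
assert np.allclose(D @ D_inv, np.eye(3))
```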