Comprehensive Guide on Triangular Matrices

Last updated: Aug 10, 2023
Tags: Linear Algebra

Definition.

Upper triangular matrix

An upper triangular matrix is a square matrix whose elements below the diagonal are all zeros, that is:

$$\boldsymbol{A}=\begin{pmatrix} a_{11}&a_{12}&a_{13}&\cdots&a_{1n}\\ 0&a_{22}&a_{23}&\cdots&a_{2n}\\ 0&0&a_{33}&\cdots&a_{3n}\\ \vdots&\vdots&\vdots&\smash\ddots&\vdots\\ 0&0&0&\cdots&a_{nn} \end{pmatrix}$$

Note that any of the entries (e.g. $a_{11}$, $a_{23}$) can be zero as well.

Example.

Upper triangular matrices

The following matrices are all examples of upper triangular matrices:

$$\begin{pmatrix} 3&4\\0&2 \end{pmatrix},\;\;\;\;\; \begin{pmatrix} 3&4&0\\0&2&9\\0&0&8 \end{pmatrix},\;\;\;\;\; \begin{pmatrix} 3&4&2\\0&0&1\\0&0&8 \end{pmatrix},\;\;\;\;\; \begin{pmatrix} 1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1 \end{pmatrix}$$
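In code, a matrix is upper triangular exactly when it equals its own upper-triangular part. Below is a minimal sketch of this check, assuming NumPy is available (the guide itself does not prescribe any library):

```python
import numpy as np

def is_upper_triangular(A: np.ndarray) -> bool:
    """Return True if every entry below the main diagonal is zero."""
    # np.triu(A) keeps the diagonal and everything above it, zeroing the rest.
    return np.allclose(A, np.triu(A))

A = np.array([[3, 4, 0],
              [0, 2, 9],
              [0, 0, 8]])
print(is_upper_triangular(A))          # True
print(is_upper_triangular(np.eye(4)))  # True: zeros on or above the diagonal are allowed
```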
Definition.

Lower triangular matrix

A lower triangular matrix is a square matrix whose elements above the diagonal are all zeros, that is:

$$\boldsymbol{A}=\begin{pmatrix} a_{11}&0&0&\cdots&0\\ a_{21}&a_{22}&0&\cdots&0\\ a_{31}&a_{32}&a_{33}&\cdots&0\\ \vdots&\vdots&\vdots&\smash\ddots&\vdots\\ a_{n1}&a_{n2}&a_{n3}&\cdots&a_{nn} \end{pmatrix}$$

Again, any of the entries (e.g. $a_{21}$, $a_{33}$) can be zero as well.

Example.

Lower triangular matrices

Below are some examples of lower triangular matrices:

$$\begin{pmatrix} 2&0\\1&3 \end{pmatrix},\;\;\;\;\; \begin{pmatrix} 3&0&0\\1&2&0\\0&5&4 \end{pmatrix},\;\;\;\;\; \begin{pmatrix} 3&0&0\\1&0&0\\2&5&4 \end{pmatrix},\;\;\;\;\; \begin{pmatrix} 1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1 \end{pmatrix}$$
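The analogous check for lower triangular matrices compares the matrix with its lower-triangular part (again a sketch, assuming NumPy):

```python
import numpy as np

def is_lower_triangular(A: np.ndarray) -> bool:
    """Return True if every entry above the main diagonal is zero."""
    # np.tril(A) keeps the diagonal and everything below it, zeroing the rest.
    return np.allclose(A, np.tril(A))

B = np.array([[3, 0, 0],
              [1, 2, 0],
              [0, 5, 4]])
print(is_lower_triangular(B))    # True
print(is_lower_triangular(B.T))  # False: B.T is upper (not lower) triangular
```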
Definition.

Triangular matrices

A matrix that is either an upper or lower triangular matrix is called a triangular matrix.

Theorem.

Transpose of a triangular matrix

The following statements are true:

  • If $\boldsymbol{A}$ is an upper triangular matrix, then $\boldsymbol{A}^T$ is a lower triangular matrix.

  • If $\boldsymbol{A}$ is a lower triangular matrix, then $\boldsymbol{A}^T$ is an upper triangular matrix.

Moreover, the diagonal entries of a triangular matrix will remain the same after taking its transpose.

Proof. Let $\boldsymbol{A}$ be an upper triangular matrix:

$$\begin{equation}\label{eq:zu3HzlzY3yHGvuZYGyD} \boldsymbol{A}=\begin{pmatrix} a_{11}&a_{12}&a_{13}&\cdots&a_{1n}\\ 0&a_{22}&a_{23}&\cdots&a_{2n}\\ 0&0&a_{33}&\cdots&a_{3n}\\ \vdots&\vdots&\vdots&\smash\ddots&\vdots\\ 0&0&0&\cdots&a_{nn} \end{pmatrix} \end{equation}$$

We now take the transpose of $\boldsymbol{A}$ to get:

$$\begin{equation}\label{eq:urdwv17ZWDa7aeNDM1p} \boldsymbol{A}^T=\begin{pmatrix} a_{11}&0&0&\cdots&0\\ a_{12}&a_{22}&0&\cdots&0\\ a_{13}&a_{23}&a_{33}&\cdots&0\\ \vdots&\vdots&\vdots&\smash\ddots&\vdots\\ a_{1n}&a_{2n}&a_{3n}&\cdots&a_{nn} \end{pmatrix} \end{equation}$$

Notice that $\boldsymbol{A}^T$ is a lower triangular matrix. Now, consider the case when $\boldsymbol{A}$ is a lower triangular matrix of the form \eqref{eq:urdwv17ZWDa7aeNDM1p}. Taking the transpose of $\boldsymbol{A}$ gives us \eqref{eq:zu3HzlzY3yHGvuZYGyD}, which means that $\boldsymbol{A}^T$ is an upper triangular matrix.

Finally, since triangular matrices are square by definition, taking their transpose does not change their diagonal entries. This completes the proof.
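Here is a quick numerical illustration of this theorem, assuming NumPy:

```python
import numpy as np

A = np.array([[3, 4, 2],
              [0, 2, 9],
              [0, 0, 8]])  # upper triangular

# The transpose is lower triangular (it equals its own lower-triangular part)...
print(np.allclose(A.T, np.tril(A.T)))            # True
# ...and the diagonal entries are unchanged by the transpose.
print(np.array_equal(np.diag(A), np.diag(A.T)))  # True: both are [3 2 8]
```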

Theorem.

Determinant of an upper triangular matrix

The determinant of an upper triangular matrix is equal to the product of its diagonal entries, that is:

$$\boldsymbol{A}=\begin{pmatrix} a_{11}&a_{12}&a_{13}&\cdots&a_{1n}\\ 0&a_{22}&a_{23}&\cdots&a_{2n}\\ 0&0&a_{33}&\cdots&a_{3n}\\ \vdots&\vdots&\vdots&\smash\ddots&\vdots\\ 0&0&0&\cdots&a_{nn} \end{pmatrix}\;\;\;\;\;{\color{blue}\implies}\;\;\;\;\; \det(\boldsymbol{A})= a_{11}\cdot{a_{22}}\cdot{a_{33}}\cdot\;\cdots\; \cdot{a_{nn}}$$

Proof. We will prove this by induction. Consider the base case in which our upper triangular matrix is $2\times2$ like so:

$$\boldsymbol{A}=\begin{pmatrix} a_{11}&a_{12}\\ 0&a_{22} \end{pmatrix}$$

The determinant of $\boldsymbol{A}$ is:

$$\begin{align*} \det(\boldsymbol{A}) &=a_{11}\cdot\det(a_{22})\\ &=a_{11}\cdot{a_{22}} \end{align*}$$

Therefore, the theorem holds for the base case. Now, we assume that the theorem holds for the $(n-1)\times(n-1)$ case. Our goal is to show that the theorem holds for the $n\times{n}$ upper triangular matrix:

$$\boldsymbol{A}=\begin{pmatrix} a_{11}&a_{12}&a_{13}&\cdots&a_{1n}\\ 0&a_{22}&a_{23}&\cdots&a_{2n}\\ 0&0&a_{33}&\cdots&a_{3n}\\ \vdots&\vdots&\vdots&\smash\ddots&\vdots\\ 0&0&0&\cdots&a_{nn} \end{pmatrix}$$

We compute the determinant of $\boldsymbol{A}$ by cofactor expansion along the first column:

$$\det(\boldsymbol{A})=a_{11}\begin{vmatrix} a_{22}&a_{23}&\cdots&a_{2n}\\ 0&a_{33}&\cdots&a_{3n}\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&a_{nn} \end{vmatrix}$$

Notice how we are finding the determinant of an $(n-1)\times(n-1)$ matrix on the right-hand side. We now use the inductive assumption that the theorem holds for the $(n-1)\times(n-1)$ case:

$$\begin{align*} \det(\boldsymbol{A})&= a_{11}\cdot({a_{22}}\cdot{a_{33}}\cdot\;\cdots\;\cdot{a_{nn}})\\ &=a_{11}\cdot{a_{22}}\cdot{a_{33}}\cdot\;\cdots\;\cdot{a_{nn}} \end{align*}$$

By the principle of mathematical induction, the theorem therefore holds for the general case. This completes the proof.
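This result is easy to spot-check numerically; a minimal sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[3.0, 4.0, 2.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 8.0]])  # upper triangular

# np.linalg.det computes the determinant numerically (via LU factorization),
# while np.prod(np.diag(A)) multiplies the diagonal entries directly.
print(np.linalg.det(A))                                   # ~48.0
print(np.prod(np.diag(A)))                                # 48.0 = 3 * 2 * 8
print(np.isclose(np.linalg.det(A), np.prod(np.diag(A))))  # True
```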

Theorem.

Determinant of a lower triangular matrix

The determinant of a lower triangular matrix is equal to the product of its diagonal entries, that is:

$$\boldsymbol{A}=\begin{pmatrix} a_{11}&0&0&\cdots&0\\ a_{21}&a_{22}&0&\cdots&0\\ a_{31}&a_{32}&a_{33}&\cdots&0\\ \vdots&\vdots&\vdots&\smash\ddots&\vdots\\ a_{n1}&a_{n2}&a_{n3}&\cdots&a_{nn} \end{pmatrix}\;\;\;\;\;{\color{blue}\implies}\;\;\;\;\; \det(\boldsymbol{A})= a_{11}\cdot{a_{22}}\cdot{a_{33}}\cdot\;\cdots\; \cdot{a_{nn}}$$

Proof. Let $\boldsymbol{A}$ be a lower triangular matrix with diagonal entries $a_{11}$, $a_{22}$, $\cdots$, $a_{nn}$. By the transpose theorem above, $\boldsymbol{A}^T$ is an upper triangular matrix. By the previous theorem, the determinant of an upper triangular matrix is equal to the product of its diagonal entries. Because $\boldsymbol{A}$ is a square matrix, the diagonal entries of $\boldsymbol{A}$ and $\boldsymbol{A}^T$ are the same. This means that $\det(\boldsymbol{A}^T)$ is:

$$\det(\boldsymbol{A}^T)= a_{11}\cdot{a_{22}}\cdot\;\cdots\;\cdot {a_{nn}}$$

Next, since the determinant of a square matrix equals the determinant of its transpose, we have that $\det(\boldsymbol{A}^T)=\det(\boldsymbol{A})$ and so:

$$\det(\boldsymbol{A})= a_{11}\cdot{a_{22}}\cdot\;\cdots\;\cdot {a_{nn}}$$

This means that the determinant of a lower triangular matrix is also equal to the product of its diagonal entries! This completes the proof.
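The same numerical check works for a lower triangular matrix (again assuming NumPy):

```python
import numpy as np

L = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [4.0, 5.0, 6.0]])  # lower triangular

print(np.isclose(np.linalg.det(L), np.prod(np.diag(L))))  # True: both equal 2 * 3 * 6 = 36
```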

Theorem.

Determinant of a triangular matrix is equal to the determinant of its transpose

If $\boldsymbol{A}$ is a triangular matrix, then:

$$\det(\boldsymbol{A}^T)= \det(\boldsymbol{A})$$

Proof. If $\boldsymbol{A}$ is a triangular matrix, then its determinant is equal to the product of its diagonal entries by the two theorems above. Taking the transpose of a square matrix does not change its diagonal entries, so the diagonal entries of $\boldsymbol{A}^T$ are the same as those of $\boldsymbol{A}$. Moreover, $\boldsymbol{A}^T$ is itself a triangular matrix, so its determinant is also the product of those same diagonal entries. Therefore, $\det(\boldsymbol{A}^T)=\det(\boldsymbol{A})$. This completes the proof.
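A one-line numerical confirmation for a triangular matrix, assuming NumPy:

```python
import numpy as np

A = np.array([[3.0, 4.0, 2.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 8.0]])  # triangular, so det(A) and det(A.T) both equal 3 * 2 * 8

print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))  # True
```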

Theorem.

Product of two triangular matrices is also a triangular matrix

The following statements are true:

  • the product of two upper triangular matrices is an upper triangular matrix.

  • the product of two lower triangular matrices is a lower triangular matrix.

Proof. We will prove the first statement and then use it to prove the second statement. Consider the following two upper triangular matrices:

$$\boldsymbol{A}=\begin{pmatrix} a_{11}&a_{12}&a_{13}&\cdots&a_{1n}\\ 0&a_{22}&a_{23}&\cdots&a_{2n}\\ 0&0&a_{33}&\cdots&a_{3n}\\ \vdots&\vdots&\vdots&\smash\ddots&\vdots\\ 0&0&0&\cdots&a_{nn} \end{pmatrix},\;\;\;\;\;\;\; \boldsymbol{B}=\begin{pmatrix} b_{11}&b_{12}&b_{13}&\cdots&b_{1n}\\ 0&b_{22}&b_{23}&\cdots&b_{2n}\\ 0&0&b_{33}&\cdots&b_{3n}\\ \vdots&\vdots&\vdots&\smash\ddots&\vdots\\ 0&0&0&\cdots&b_{nn} \end{pmatrix}$$

The product $\boldsymbol{AB}$ is:

$$\begin{equation}\label{eq:b220MZwh2cqYZgtbWYE} \boldsymbol{AB}=\begin{pmatrix} a_{11}&a_{12}&a_{13}&\cdots&a_{1n}\\ 0&a_{22}&a_{23}&\cdots&a_{2n}\\ 0&0&a_{33}&\cdots&a_{3n}\\ \vdots&\vdots&\vdots&\smash\ddots&\vdots\\ 0&0&0&\cdots&a_{nn} \end{pmatrix}\begin{pmatrix} b_{11}&b_{12}&b_{13}&\cdots&b_{1n}\\ 0&b_{22}&b_{23}&\cdots&b_{2n}\\ 0&0&b_{33}&\cdots&b_{3n}\\ \vdots&\vdots&\vdots&\smash\ddots&\vdots\\ 0&0&0&\cdots&b_{nn} \end{pmatrix} \end{equation}$$

To show that $\boldsymbol{AB}$ is an upper triangular matrix, we must show that the $k$-th entry of the $i$-th row of $\boldsymbol{AB}$, denoted by $(\boldsymbol{AB})_{ik}$, is equal to zero when $k\lt{i}$. From basic matrix multiplication, $(\boldsymbol{AB})_{ik}$ is:

$$(\boldsymbol{AB})_{ik}=\sum_{j=1}^n a_{ij}\cdot b_{jk} $$

Now, consider the case when $k\lt{i}$. By definition, we have that:

  • since $\boldsymbol{A}$ is an upper triangular matrix, $a_{ij}=0$ for $i\gt{j}$.

  • since $\boldsymbol{B}$ is an upper triangular matrix, $b_{ij}=0$ for $i\gt{j}$.

Let's now compute the summation:

$$\begin{align*} (\boldsymbol{AB})_{ik}&=\sum_{j=1}^n a_{ij}\cdot{b_{jk}}\\ &=(a_{i1}\cdot{b_{1k}})+ (a_{i2}\cdot{b_{2k}})+ \cdots+(a_{ik}\cdot{b_{kk}})+ (a_{i(k+1)}\cdot{b_{(k+1)k}})+ \cdots+(a_{in}\cdot{b_{nk}})\\ &=(0\cdot{b_{1k}})+ (0\cdot{b_{2k}})+ \cdots+(0\cdot{b_{kk}})+ (a_{i(k+1)}\cdot0)+ \cdots+(a_{in}\cdot0)\\ &=0 \end{align*}$$

Therefore, $\boldsymbol{AB}$ is an upper triangular matrix.

Now, let's prove the second statement that the product of two lower triangular matrices is also a lower triangular matrix. Suppose matrices $\boldsymbol{A}$ and $\boldsymbol{B}$ are lower triangular matrices. By the transpose-of-a-product property, we have that:

$$\begin{equation}\label{eq:w7nmV7TiN2hyimd0AB2} (\boldsymbol{AB})^T=\boldsymbol{B}^T\boldsymbol{A}^T \end{equation}$$

By the transpose theorem above, the transpose of a lower triangular matrix is an upper triangular matrix. Therefore, $\boldsymbol{B}^T$ and $\boldsymbol{A}^T$ are both upper triangular matrices. We have just proven that the product of two upper triangular matrices is an upper triangular matrix, and thus $\boldsymbol{B}^T\boldsymbol{A}^T$ is an upper triangular matrix.

We now take the transpose of both sides of \eqref{eq:w7nmV7TiN2hyimd0AB2} to get:

$$\boldsymbol{AB}= \big(\boldsymbol{B}^T\boldsymbol{A}^T\big)^T$$

Since $\boldsymbol{B}^T\boldsymbol{A}^T$ is an upper triangular matrix, its transpose is a lower triangular matrix by the transpose theorem above. This proves that $\boldsymbol{AB}$ is a lower triangular matrix. This completes the proof.

NOTE

We could have explicitly computed $\boldsymbol{AB}$ in \eqref{eq:b220MZwh2cqYZgtbWYE} to get:

$$\begin{align*} \begin{pmatrix} a_{11}b_{11}&a_{11}b_{12}+a_{12}b_{22}&a_{11}b_{13}+a_{12}b_{23}+a_{13}b_{33}& \cdots&a_{11}b_{1n}+a_{12}b_{2n}+a_{13}b_{3n}+\cdots+a_{1n}b_{nn}\\ 0&a_{22}b_{22}&a_{22}b_{23}+a_{23}b_{33}&\cdots&a_{22}b_{2n}+a_{23}b_{3n}+\cdots+a_{2n}b_{nn}\\ 0&0&a_{33}b_{33}&\cdots&a_{33}b_{3n}+\cdots+a_{3n}b_{nn}\\ \vdots&\vdots&\vdots&\smash\ddots&\vdots\\ 0&0&0&\cdots&a_{nn}b_{nn} \end{pmatrix} \end{align*}$$

Clearly, we can observe that $\boldsymbol{AB}$ is an upper triangular matrix. However, this proof is not rigorous because it is based merely on observation.
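A short numerical illustration of this closure property, assuming NumPy:

```python
import numpy as np

A = np.array([[3, 4, 2],
              [0, 2, 1],
              [0, 0, 8]])
B = np.array([[1, 5, 7],
              [0, 6, 2],
              [0, 0, 4]])

AB = A @ B
print(np.allclose(AB, np.triu(AB)))  # True: the product of two upper triangular matrices is upper triangular

# The lower triangular case: A.T and B.T are lower triangular, and so is their product.
print(np.allclose(B.T @ A.T, np.tril(B.T @ A.T)))  # True
```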

Theorem.

Triangular matrix is invertible if and only if every diagonal entry is non-zero

Let $\boldsymbol{A}$ be a triangular matrix. $\boldsymbol{A}$ is invertible if and only if every diagonal entry of $\boldsymbol{A}$ is non-zero.

Proof. We first prove the forward proposition. Assume $\boldsymbol{A}$ is an invertible $n\times{n}$ upper triangular matrix. We will prove the case for a lower triangular matrix later.

Consider the following homogeneous system:

$$\boldsymbol{Ax}=\boldsymbol{0}$$

We can express this in matrix form like so:

$$\begin{equation}\label{eq:JeNwZXCx8kptEPgAQI7} \begin{pmatrix} a_{11}&a_{12}&a_{13}&\cdots&a_{1n}\\ 0&a_{22}&a_{23}&\cdots&a_{2n}\\ 0&0&a_{33}&\cdots&a_{3n}\\ \vdots&\vdots&\vdots&\smash\ddots&\vdots\\ 0&0&0&\cdots&a_{nn} \end{pmatrix}\begin{pmatrix} x_{1}\\x_{2}\\x_{3}\\\vdots\\x_{n} \end{pmatrix}= \begin{pmatrix} 0\\0\\0\\\vdots\\0 \end{pmatrix} \end{equation}$$

Since $\boldsymbol{A}$ is invertible, $\boldsymbol{Ax}=\boldsymbol{0}$ has only the trivial solution $\boldsymbol{x}=\boldsymbol{0}$.

Now, suppose for contradiction that one of the diagonal entries $a_{ii}$ is $0$, and take $i$ to be the largest such index, so that $a_{jj}\ne0$ for all $j\gt{i}$. The $i$-th row of the system \eqref{eq:JeNwZXCx8kptEPgAQI7} can be written as the following linear equation:

$$\begin{equation}\label{eq:nzzwEiA9LwHSgN9L0b8} a_{ii}x_i+ a_{i(i+1)}x_{i+1}+ a_{i(i+2)}x_{i+2}+ \cdots+ a_{in}x_{n}=0 \end{equation}$$

We can solve for $x_{i+1}$, $x_{i+2}$, $\cdots$, $x_{n}$ using the linear equations below row $i$. For instance, to solve for $x_n$, we focus on the last linear equation. Since $a_{nn}\ne0$, the following holds:

$$a_{nn}x_n=0 \;\;\;\;{\color{blue}\implies}\;\;\;\;{x_n=0}$$

Substituting $x_n=0$ into the second-last linear equation of \eqref{eq:JeNwZXCx8kptEPgAQI7} gives:

$$a_{(n-1)(n-1)}x_{n-1}+a_{(n-1)n}x_n=0 \;\;\;\;{\color{blue}\implies}\;\;\;\;a_{(n-1)(n-1)}x_{n-1}=0 \;\;\;\;{\color{blue}\implies}\;\;\;\;{x_{n-1}=0}$$

We repeat this process until we get $x_{i+1}=x_{i+2}=\cdots=x_n=0$. Therefore, \eqref{eq:nzzwEiA9LwHSgN9L0b8} becomes:

$$a_{ii}x_i=0$$

However, because we have assumed that $a_{ii}$ is zero, $x_i$ can take any value and is no longer restricted to $x_i=0$. This means that $\boldsymbol{Ax}=\boldsymbol{0}$ has non-trivial solutions, which contradicts the fact that $\boldsymbol{x}$ must be the zero vector. Therefore, we conclude that if $\boldsymbol{A}$ is an invertible upper triangular matrix, then every diagonal entry of $\boldsymbol{A}$ must be non-zero.

* * *

Let's now prove the converse. Assume that $\boldsymbol{A}$ is an upper triangular matrix and that every diagonal entry of $\boldsymbol{A}$ is non-zero. By the determinant theorem above, the determinant of an upper triangular matrix is equal to the product of its diagonal entries:

$$\det(\boldsymbol{A})= a_{11}\cdot{a_{22}}\cdot \;\cdots\; \cdot{a_{nn}}$$

Since every diagonal entry of $\boldsymbol{A}$ is non-zero, we have that $\det(\boldsymbol{A})\ne0$. Because a square matrix is invertible if and only if its determinant is non-zero, we conclude that $\boldsymbol{A}$ is invertible. This proves the theorem for upper triangular matrices.

* * *

Let's now prove that the theorem also holds for lower triangular matrices. Let $\boldsymbol{A}$ be an $n\times{n}$ lower triangular matrix. Assume $\boldsymbol{A}$ is invertible. By the transpose theorem above, $\boldsymbol{A}^T$ is an upper triangular matrix. Because $\boldsymbol{A}$ is invertible, $\boldsymbol{A}^T$ is also invertible. We have just proven that if $\boldsymbol{A}^T$ is an invertible upper triangular matrix, then every diagonal entry of $\boldsymbol{A}^T$ is non-zero. Because taking the transpose of a square matrix does not change its diagonal entries, we conclude that every diagonal entry of $\boldsymbol{A}$ is non-zero.

We now prove the converse. Let $\boldsymbol{A}$ be an $n\times{n}$ lower triangular matrix. Assume every diagonal entry of $\boldsymbol{A}$ is non-zero. By the determinant theorem for lower triangular matrices above, the determinant of $\boldsymbol{A}$ is:

$$\det(\boldsymbol{A})= a_{11}\cdot{a_{22}}\cdot \;\cdots\; \cdot{a_{nn}}$$

Since every diagonal entry of $\boldsymbol{A}$ is non-zero, we have that $\det(\boldsymbol{A})\ne0$, and so $\boldsymbol{A}$ is invertible.

We have now shown that the theorem holds for both upper and lower triangular matrices. This completes the proof.
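To see the invertibility criterion in action, compare a triangular matrix with all non-zero diagonal entries against one with a zero on its diagonal (a sketch, assuming NumPy):

```python
import numpy as np

A = np.array([[3.0, 4.0, 2.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 8.0]])  # all diagonal entries non-zero -> invertible
B = np.array([[3.0, 4.0, 2.0],
              [0.0, 0.0, 1.0],   # a zero on the diagonal -> singular
              [0.0, 0.0, 8.0]])

print(np.linalg.det(A))          # 48.0, non-zero, so A is invertible
print(np.linalg.det(B))          # 0.0 (up to rounding), so B is not invertible
print(np.linalg.matrix_rank(B))  # 2 < 3, confirming that B is singular
```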

Theorem.

Inverse of a triangular matrix is also triangular with the diagonal entries becoming reciprocals

The following statements are true:

  • if $\boldsymbol{A}$ is an invertible upper triangular matrix, then $\boldsymbol{A}^{-1}$ is also an upper triangular matrix.

  • if $\boldsymbol{B}$ is an invertible lower triangular matrix, then $\boldsymbol{B}^{-1}$ is also a lower triangular matrix.

Specifically, the diagonal entries of the inverse matrix are the reciprocals of the diagonal entries of the original matrix.

Proof. Let's start by proving the first statement. We assume $\boldsymbol{A}$ is an invertible upper triangular matrix. Since $\boldsymbol{A}$ is invertible, $\boldsymbol{A}^{-1}$ exists. Let's express $\boldsymbol{A}^{-1}$ as a collection of column vectors:

$$\begin{equation}\label{eq:orAGpH3zG5jkR2B7uJH} \boldsymbol{A}^{-1}= \begin{pmatrix} \vert&\vert&\cdots&\vert\\ \boldsymbol{x}_1& \boldsymbol{x}_2&\cdots&\boldsymbol{x}_n\\ \vert&\vert&\cdots&\vert \end{pmatrix} \end{equation}$$

Now, the product $\boldsymbol{AA}^{-1}$ is:

$$\begin{align*} \boldsymbol{A}\boldsymbol{A}^{-1}&= \boldsymbol{A}\begin{pmatrix} \vert&\vert&\cdots&\vert\\ \boldsymbol{x}_1&\boldsymbol{x}_2&\cdots&\boldsymbol{x}_n\\ \vert&\vert&\cdots&\vert \end{pmatrix}\\ &=\begin{pmatrix} \vert&\vert&\cdots&\vert\\ \boldsymbol{A}\boldsymbol{x}_1& \boldsymbol{A}\boldsymbol{x}_2& \cdots&\boldsymbol{A}\boldsymbol{x}_n\\ \vert&\vert&\cdots&\vert \end{pmatrix}\\ \end{align*}$$

where the second equality holds because multiplying $\boldsymbol{A}$ by a matrix amounts to multiplying $\boldsymbol{A}$ by each of its columns. By the definition of invertibility, we have that $\boldsymbol{AA}^{-1}=\boldsymbol{I}_n$. This means that:

$$\begin{pmatrix} \vert&\vert&\cdots&\vert\\ \boldsymbol{A}\boldsymbol{x}_1& \boldsymbol{A}\boldsymbol{x}_2& \cdots&\boldsymbol{A}\boldsymbol{x}_n\\ \vert&\vert&\cdots&\vert \end{pmatrix}= \begin{pmatrix} \vert&\vert&\cdots&\vert\\ \boldsymbol{e}_1& \boldsymbol{e}_2& \cdots&\boldsymbol{e}_n\\ \vert&\vert&\cdots&\vert \end{pmatrix}$$

where $\boldsymbol{e}_1$, $\boldsymbol{e}_2$, $\cdots$, $\boldsymbol{e}_n$ are the standard basis vectors of $\mathbb{R}^n$. Equating the columns gives:

$$\begin{gather*} \boldsymbol{Ax}_1=\boldsymbol{e}_1\\ \boldsymbol{Ax}_2=\boldsymbol{e}_2\\ \vdots\\ \boldsymbol{Ax}_n=\boldsymbol{e}_n\\ \end{gather*}$$

Let's focus on the first equation $\boldsymbol{Ax}_1=\boldsymbol{e}_1$. In matrix form, this is:

$$\begin{pmatrix} a_{11}&a_{12}&a_{13}&\cdots&a_{1n}\\ 0&a_{22}&a_{23}&\cdots&a_{2n}\\ 0&0&a_{33}&\cdots&a_{3n}\\ \vdots&\vdots&\vdots&\smash\ddots&\vdots\\ 0&0&0&\cdots&a_{nn} \end{pmatrix}\begin{pmatrix} x_{11}\\x_{12}\\x_{13}\\\vdots\\x_{1n} \end{pmatrix}= \begin{pmatrix} 1\\0\\0\\\vdots\\0 \end{pmatrix}$$

The corresponding system of linear equations is:

$$\begin{equation}\label{eq:vkHtcb8floPmN2JzQGw} \begin{aligned} a_{11}x_{11}+a_{12}x_{12}+a_{13}x_{13}+\cdots+a_{1n}x_{1n}&=1\\ a_{22}x_{12}+a_{23}x_{13}+\cdots+a_{2n}x_{1n}&=0\\ a_{33}x_{13}+\cdots+a_{3n}x_{1n}&=0\\ \vdots\\ a_{(n-1)(n-1)}x_{1(n-1)}+a_{(n-1)n}x_{1n}&=0\\ a_{nn}x_{1n}&=0\\ \end{aligned} \end{equation}$$

Focus on the last equation. Since $\boldsymbol{A}$ is invertible, every diagonal entry is non-zero by the previous theorem; in particular $a_{nn}\ne0$, so the only way for the equation to hold is if $x_{1n}=0$. Next, we substitute $x_{1n}=0$ into the second-last equation and conclude that $x_{1(n-1)}=0$ using the same logic. We keep repeating this to conclude that:

$$\begin{equation}\label{eq:D1yHzEHi5XYByCThF4x} x_{12}=x_{13}=\cdots=x_{1n}=0 \end{equation}$$

Now, we substitute \eqref{eq:D1yHzEHi5XYByCThF4x} into the first equation of \eqref{eq:vkHtcb8floPmN2JzQGw} to get:

$$\begin{align*} a_{11}x_{11}&=1\\ x_{11}&=\frac{1}{a_{11}} \end{align*}$$

Next, we focus on $\boldsymbol{Ax}_2=\boldsymbol{e}_2$. The matrix form is:

$$\begin{pmatrix} a_{11}&a_{12}&a_{13}&\cdots&a_{1n}\\ 0&a_{22}&a_{23}&\cdots&a_{2n}\\ 0&0&a_{33}&\cdots&a_{3n}\\ \vdots&\vdots&\vdots&\smash\ddots&\vdots\\ 0&0&0&\cdots&a_{nn} \end{pmatrix}\begin{pmatrix} x_{21}\\x_{22}\\x_{23}\\\vdots\\x_{2n} \end{pmatrix}= \begin{pmatrix} 0\\1\\0\\\vdots\\0 \end{pmatrix}$$

The corresponding linear system is:

$$\begin{equation}\label{eq:uGONxUV3mp6hybu30s8} \begin{aligned} a_{11}x_{21}+a_{12}x_{22}+a_{13}x_{23}+\cdots+a_{1n}x_{2n}&=0\\ a_{22}x_{22}+a_{23}x_{23}+\cdots+a_{2n}x_{2n}&=1\\ a_{33}x_{23}+\cdots+a_{3n}x_{2n}&=0\\ \vdots\\ a_{(n-1)(n-1)}x_{2(n-1)}+a_{(n-1)n}x_{2n}&=0\\ a_{nn}x_{2n}&=0\\ \end{aligned} \end{equation}$$

Again, we perform the same steps to conclude:

$$x_{23}=\cdots=x_{2n}=0$$

Substitute this into the second equation of \eqref{eq:uGONxUV3mp6hybu30s8} to get:

$$\begin{align*} a_{22}x_{22}&=1\\ x_{22}&=\frac{1}{a_{22}} \end{align*}$$

In general, for the system $\boldsymbol{Ax}_k=\boldsymbol{e}_k$, we have:

$$x_{k(k+1)}= x_{k(k+2)}= \cdots=x_{kn}=0$$

The value of a diagonal entry is:

$$\begin{align*} a_{kk}x_{kk}&=1\\ x_{kk}&=\frac{1}{a_{kk}} \end{align*}$$

The inverse matrix $\boldsymbol{A}^{-1}$ defined in \eqref{eq:orAGpH3zG5jkR2B7uJH} is therefore:

$$\boldsymbol{A}^{-1}=\begin{pmatrix} x_{11}&x_{21}&x_{31}&\cdots&x_{n1}\\ 0&x_{22}&x_{32}&\cdots&x_{n2}\\ 0&0&x_{33}&\cdots&x_{n3}\\ \vdots&\vdots&\vdots&\smash\ddots&\vdots\\ 0&0&0&\cdots&x_{nn} \end{pmatrix}=\begin{pmatrix} 1/a_{11}&x_{21}&x_{31}&\cdots&x_{n1}\\ 0&1/a_{22}&x_{32}&\cdots&x_{n2}\\ 0&0&1/a_{33}&\cdots&x_{n3}\\ \vdots&\vdots&\vdots&\smash\ddots&\vdots\\ 0&0&0&\cdots&1/a_{nn} \end{pmatrix}$$

This means that $\boldsymbol{A}^{-1}$ is an upper triangular matrix.

* * *

Let's now prove the second statement. Let $\boldsymbol{A}$ be an invertible lower triangular matrix. By the transpose theorem above, $\boldsymbol{A}^T$ is an upper triangular matrix. We have just proven that the inverse of an upper triangular matrix is also an upper triangular matrix, that is, $(\boldsymbol{A}^T)^{-1}$ is an upper triangular matrix. Now, recall the following property of inverses and transposes:

$$(\boldsymbol{A}^T)^{-1}= (\boldsymbol{A}^{-1})^{T}$$

Therefore, $(\boldsymbol{A}^{-1})^T$ is an upper triangular matrix, which means that $\boldsymbol{A}^{-1}$ is a lower triangular matrix by the transpose theorem above. Moreover, the diagonal entries of $(\boldsymbol{A}^T)^{-1}$ are the reciprocals $1/a_{11}$, $1/a_{22}$, $\cdots$, $1/a_{nn}$, and transposing does not change diagonal entries, so the diagonal entries of $\boldsymbol{A}^{-1}$ are these same reciprocals. This completes the proof.
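A numerical check of this theorem, assuming NumPy:

```python
import numpy as np

A = np.array([[2.0, 4.0, 1.0],
              [0.0, 5.0, 3.0],
              [0.0, 0.0, 8.0]])  # invertible upper triangular matrix

A_inv = np.linalg.inv(A)
print(np.allclose(A_inv, np.triu(A_inv)))           # True: the inverse is upper triangular
print(np.allclose(np.diag(A_inv), 1 / np.diag(A)))  # True: the diagonal entries are 1/2, 1/5, 1/8
```

In practice, one rarely forms the inverse explicitly; systems with triangular coefficient matrices are usually solved by forward or back substitution (for example, with scipy.linalg.solve_triangular).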

Theorem.

Diagonal entries of the product of two triangular matrices

Let $\boldsymbol{A}$ and $\boldsymbol{B}$ both be $n\times{n}$ upper triangular matrices, or both be $n\times{n}$ lower triangular matrices, in which:

  • the diagonal entries of $\boldsymbol{A}$ are $a_{11}$, $a_{22}$, $\cdots$, $a_{nn}$.

  • the diagonal entries of $\boldsymbol{B}$ are $b_{11}$, $b_{22}$, $\cdots$, $b_{nn}$.

The diagonal entries of the product $\boldsymbol{AB}$ are $a_{11}b_{11}$, $a_{22}b_{22}$, $\cdots$, $a_{nn}b_{nn}$.

Proof. We will first prove the case for when matrices $\boldsymbol{A}$ and $\boldsymbol{B}$ are upper triangular matrices. For reference, the product $\boldsymbol{AB}$ is:

$$\begin{align*} \boldsymbol{AB}&= \begin{pmatrix} a_{11}&a_{12}&a_{13}&\cdots&a_{1n}\\ 0&a_{22}&a_{23}&\cdots&a_{2n}\\ 0&0&a_{33}&\cdots&a_{3n}\\ \vdots&\vdots&\vdots&\smash\ddots&\vdots\\ 0&0&0&\cdots&a_{nn} \end{pmatrix} \begin{pmatrix} b_{11}&b_{12}&b_{13}&\cdots&b_{1n}\\ 0&b_{22}&b_{23}&\cdots&b_{2n}\\ 0&0&b_{33}&\cdots&b_{3n}\\ \vdots&\vdots&\vdots&\smash\ddots&\vdots\\ 0&0&0&\cdots&b_{nn} \end{pmatrix} \end{align*}$$

By definition, we have that:

  • since $\boldsymbol{A}$ is an upper triangular matrix, $a_{ij}=0$ for $i\gt{j}$.

  • since $\boldsymbol{B}$ is an upper triangular matrix, $b_{ij}=0$ for $i\gt{j}$.

Let's now compute the diagonal entries of $\boldsymbol{AB}$ directly:

$$\begin{align*} (\boldsymbol{AB})_{ii}&= \sum^n_{j=1}a_{ij}b_{ji}\\ &=a_{i1}b_{1i}+a_{i2}b_{2i}+\cdots+ a_{ii}b_{ii}+ a_{i(i+1)}b_{(i+1)i}+\cdots+a_{in}b_{ni}\\ &=(0)b_{1i}+(0)b_{2i}+\cdots+ a_{ii}b_{ii}+ a_{i(i+1)}(0)+\cdots+a_{in}(0)\\ &=a_{ii}b_{ii} \end{align*}$$

This shows that the theorem holds for upper triangular matrices.

* * *

Let's now prove the case for when $\boldsymbol{A}$ and $\boldsymbol{B}$ are lower triangular matrices. By the transpose-of-a-product property, we have that:

$$\begin{equation}\label{eq:tQ4mPvxH1y4CA0ciqbX} (\boldsymbol{AB})^T= \boldsymbol{B}^T\boldsymbol{A}^T \end{equation}$$

By the transpose theorem above, $\boldsymbol{B}^T$ and $\boldsymbol{A}^T$ are upper triangular matrices. Since $\boldsymbol{B}$ and $\boldsymbol{A}$ are square matrices, $\boldsymbol{B}^T$ and $\boldsymbol{A}^T$ have the same diagonal entries as $\boldsymbol{B}$ and $\boldsymbol{A}$ respectively. We have just proven that the diagonal entries of the product of two upper triangular matrices, $\boldsymbol{B}^T\boldsymbol{A}^T$, are $b_{ii}a_{ii}=a_{ii}b_{ii}$ for $i=1,2,\cdots,n$. Taking the transpose of \eqref{eq:tQ4mPvxH1y4CA0ciqbX} leaves the diagonal entries unchanged, so the diagonal entries of $\boldsymbol{AB}$ are also $a_{ii}b_{ii}$ for $i=1,2,\cdots,n$.

This completes the proof.
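Finally, a short check of this last theorem, assuming NumPy:

```python
import numpy as np

A = np.array([[2, 7, 1],
              [0, 3, 5],
              [0, 0, 4]])
B = np.array([[6, 1, 8],
              [0, 5, 2],
              [0, 0, 9]])

print(np.diag(A @ B))           # [12 15 36]
print(np.diag(A) * np.diag(B))  # [12 15 36]: element-wise products of the diagonal entries
```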

Published by Isshin Inada