Comprehensive Guide on LU Factorization

Definition.

LU factorization

Let $\boldsymbol{A}$ be a square matrix. The $\boldsymbol{LU}$ factorization of $\boldsymbol{A}$ involves expressing $\boldsymbol{A}$ as a product of a lower triangular matrix $\boldsymbol{L}$ and an upper triangular matrix $\boldsymbol{U}$ like so:

$$\boldsymbol{A}=\boldsymbol{LU}$$

As we shall see later, not all square matrices have an $\boldsymbol{LU}$ factorization.

Example.

LU factorization of 2x2 and 3x3 matrices

Below are some examples of $\boldsymbol{LU}$ factorization:

$$\begin{pmatrix}1&2\\3&4\end{pmatrix}= \begin{pmatrix}1&0\\3&1\end{pmatrix} \begin{pmatrix}1&2\\0&-2\end{pmatrix},\;\;\;\;\;\;\; \begin{pmatrix}1&3&2\\2&1&1\\4&2&2\end{pmatrix}= \begin{pmatrix}1&0&0\\2&1&0\\4&2&1\end{pmatrix} \begin{pmatrix} 1&3&2\\0&-5&-3\\0&0&0 \end{pmatrix}$$
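These factorizations are easy to check numerically. Below is a quick NumPy sketch that multiplies each pair of factors back together (the matrices are copied from the examples above):

```python
import numpy as np

# 2x2 example
L2 = np.array([[1, 0], [3, 1]])
U2 = np.array([[1, 2], [0, -2]])
print(L2 @ U2)  # [[1 2], [3 4]]

# 3x3 example
L3 = np.array([[1, 0, 0], [2, 1, 0], [4, 2, 1]])
U3 = np.array([[1, 3, 2], [0, -5, -3], [0, 0, 0]])
print(L3 @ U3)  # [[1 3 2], [2 1 1], [4 2 2]]
```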
Theorem.

Performing LU factorization on a matrix

Let $\boldsymbol{A}$ be a square matrix. If $\boldsymbol{A}$ can be reduced to a row echelon form $\boldsymbol{U}$ by Gaussian elimination without performing any row swaps, then $\boldsymbol{A}$ can be factorized into $\boldsymbol{LU}$ form. Specifically, $\boldsymbol{L}$ is a lower triangular matrix equal to the product of the inverses of the elementary matrices required to reduce $\boldsymbol{A}$ into $\boldsymbol{U}$.

Proof. The row echelon form $\boldsymbol{U}$ of a square matrix $\boldsymbol{A}$ can be obtained by applying a series of elementary row operations to $\boldsymbol{A}$. Assume that none of these operations is a row interchange. Performing an elementary row operation on $\boldsymbol{A}$ is equivalent to left-multiplying $\boldsymbol{A}$ by the corresponding elementary matrix, like so:

$$\boldsymbol{E_k} \cdots \boldsymbol{E_2} \boldsymbol{E_1} \boldsymbol{A} =\boldsymbol{U}$$

Elementary matrices are invertible, so we can make $\boldsymbol{A}$ the subject:

$$\begin{equation}\label{eq:Ty2QBEaT0q9NbNmLu1S} \boldsymbol{A} = \boldsymbol{E_1}^{-1} \boldsymbol{E_2}^{-1} \cdots \boldsymbol{E_k}^{-1} \boldsymbol{U} \end{equation}$$

Recall that there are three types of elementary matrices:

  • elementary matrix corresponding to multiplying a row by some scalar $k$.

  • elementary matrix corresponding to interchanging two rows.

  • elementary matrix corresponding to adding a multiple of one row to another row.

We will focus on the first and third types of elementary matrices, and then later explain why $\boldsymbol{LU}$ factorization does not work when we allow the second type of elementary matrix.

Firstly, consider the elementary matrix corresponding to the elementary row operation of multiplying a row by some scalar $k$. This elementary matrix is the identity matrix with one of its diagonal $1$s replaced by $k$. An example of such an elementary matrix is:

$$\begin{pmatrix} 1&0&0\\ 0&1&0\\ 0&0&k\\ \end{pmatrix}$$

This type of elementary matrix is diagonal, so it can be treated as either a lower or an upper triangular matrix.

Secondly, consider the elementary matrix corresponding to the elementary row operation of multiplying one row by a scalar $k$ and then adding it to another row. This type of elementary matrix is constructed by replacing one of the $0$s below or above the diagonal of an identity matrix with $k$. Two examples of such elementary matrices are:

$$\begin{pmatrix} 1&0&k\\0&1&0\\0&0&1\\ \end{pmatrix},\;\;\;\;\;\; \begin{pmatrix} 1&0&0\\0&1&0\\0&k&1\\ \end{pmatrix}$$

Here, the left elementary matrix corresponds to multiplying the third row by $k$ and then adding it to the first row. The right elementary matrix corresponds to multiplying the second row by $k$ and then adding it to the third row. Therefore, depending on the operation we perform, this type of elementary matrix can be either upper or lower triangular. However, only the lower triangular kind is needed to obtain the row echelon form. For instance, consider the following matrix:

$$\begin{pmatrix} 1&3\\2&8 \end{pmatrix}$$

To eliminate an entry using this type of row operation, we can either:

  • multiply the bottom row by $-1/2$ and then add it to the top row, eliminating the top-left $1$. The associated elementary matrix is upper triangular.

  • multiply the top row by $-2$ and then add it to the bottom row, eliminating the bottom-left $2$. The associated elementary matrix is lower triangular.

Only the second operation brings the matrix closer to row echelon form, because row echelon form requires the entries below each pivot to be zero. In general, reducing a matrix to row echelon form only ever requires adding multiples of a row to rows below it, so we may take every elementary matrix of this type to be lower triangular.

Now, for your reference, here's \eqref{eq:Ty2QBEaT0q9NbNmLu1S} again:

$$\begin{equation}\label{eq:vNlDGOwhsFrqkw9fegY} \boldsymbol{A} = \boldsymbol{E_1}^{-1} \boldsymbol{E_2}^{-1} \cdots \boldsymbol{E_k}^{-1} \boldsymbol{U} \end{equation}$$

We know that the elementary matrices $\boldsymbol{E}_1$, $\boldsymbol{E}_2$, $\cdots$, $\boldsymbol{E}_k$ are all lower triangular. The inverse of a lower triangular matrix is also lower triangular, so $\boldsymbol{E}_1^{-1}$, $\boldsymbol{E}_2^{-1}$, $\cdots$, $\boldsymbol{E}_k^{-1}$ are all lower triangular matrices. Moreover, the product of lower triangular matrices is itself lower triangular. Let's express this product like so:

$$\begin{equation}\label{eq:nwLkhk5cMunkwsPtPTh} \boldsymbol{L} = \boldsymbol{E_1}^{-1} \boldsymbol{E_2}^{-1} \cdots \boldsymbol{E_k}^{-1} \end{equation}$$

where $\boldsymbol{L}$ is a lower triangular matrix. Therefore, \eqref{eq:vNlDGOwhsFrqkw9fegY} becomes:

$$\boldsymbol{A}= \boldsymbol{LU}$$

Finally, let's go over why the elementary row operation of row interchanges is not allowed. As an example, here's the elementary matrix corresponding to interchanging the first and second rows:

$$\boldsymbol{E}=\begin{pmatrix} 0&1&0\\ 1&0&0\\ 0&0&1\\ \end{pmatrix}$$

Clearly, this is not a triangular matrix. If we include a non-triangular matrix in \eqref{eq:nwLkhk5cMunkwsPtPTh}, then the product of the elementary matrices is no longer guaranteed to be a lower triangular matrix.

This completes the proof.

Example.

Performing LU factorization on a 2x2 matrix

Perform $\boldsymbol{LU}$ factorization on the following matrix:

$$\boldsymbol{A}=\begin{pmatrix}1&2\\3&4\end{pmatrix}$$

Solution. We first obtain the row echelon form of $\boldsymbol{A}$ without performing any row swaps:

$$\begin{pmatrix}1&2\\3&4\end{pmatrix} \sim \begin{pmatrix}1&2\\0&-2\end{pmatrix}$$

Here, we've performed the elementary row operation of multiplying the first row by $-3$ and then adding it to the bottom row. The corresponding elementary matrix is:

$$\boldsymbol{E}= \begin{pmatrix} 1&0\\ -3&1 \end{pmatrix}$$

The inverse of $\boldsymbol{E}$ is:

$$\boldsymbol{E}^{-1}= \begin{pmatrix} 1&0\\ 3&1 \end{pmatrix}$$

By the theorem above, we can express $\boldsymbol{A}$ as:

$$\begin{align*} \boldsymbol{A}&= \boldsymbol{E}^{-1}\boldsymbol{U}\\ &=\begin{pmatrix}1&0\\3&1\end{pmatrix} \begin{pmatrix}1&2\\0&-2\end{pmatrix} \end{align*}$$
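As a quick numerical check of this example, we can build the elementary matrix in NumPy, invert it, and confirm that the resulting product reproduces $\boldsymbol{A}$ (a sketch, not part of the derivation itself):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
E = np.array([[1., 0.], [-3., 1.]])  # adds -3 times row 1 to row 2

U = E @ A             # row echelon form [[1, 2], [0, -2]]
L = np.linalg.inv(E)  # [[1, 0], [3, 1]]
assert np.allclose(L @ U, A)
```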
Example.

Performing LU factorization on a 3x3 matrix

Perform $\boldsymbol{LU}$ factorization on the following matrix:

$$\boldsymbol{A}= \begin{pmatrix}1&3&2\\2&1&1\\4&2&2\end{pmatrix}$$

Solution. We first obtain the row echelon form of $\boldsymbol{A}$ without performing any row interchanges:

$$\begin{pmatrix}1&3&2\\2&1&1\\4&2&2\end{pmatrix}\sim \begin{pmatrix}1&3&2\\0&-5&-3\\4&2&2\end{pmatrix}\sim \begin{pmatrix}1&3&2\\0&-5&-3\\0&-10&-6\end{pmatrix}\sim \begin{pmatrix}1&3&2\\0&-5&-3\\0&0&0\end{pmatrix}$$

The corresponding elementary matrices are:

$$\boldsymbol{E}_1=\begin{pmatrix} 1&0&0\\-2&1&0\\0&0&1 \end{pmatrix},\;\;\;\;\; \boldsymbol{E}_2=\begin{pmatrix} 1&0&0\\0&1&0\\-4&0&1 \end{pmatrix},\;\;\;\;\; \boldsymbol{E}_3=\begin{pmatrix} 1&0&0\\0&1&0\\0&-2&1 \end{pmatrix}$$

The inverse matrices are:

$$\boldsymbol{E}_1^{-1}=\begin{pmatrix} 1&0&0\\2&1&0\\0&0&1 \end{pmatrix},\;\;\;\;\; \boldsymbol{E}_2^{-1}=\begin{pmatrix} 1&0&0\\0&1&0\\4&0&1 \end{pmatrix},\;\;\;\;\; \boldsymbol{E}_3^{-1}=\begin{pmatrix} 1&0&0\\0&1&0\\0&2&1 \end{pmatrix}$$

The product of these inverse matrices is:

$$\begin{align*}\boldsymbol{L}&=\boldsymbol{E}_1^{-1}\boldsymbol{E}_2^{-1}\boldsymbol{E}_3^{-1}\\ &=\begin{pmatrix}1&0&0\\2&1&0\\0&0&1\end{pmatrix} \begin{pmatrix}1&0&0\\0&1&0\\4&0&1\end{pmatrix} \begin{pmatrix}1&0&0\\0&1&0\\0&2&1\end{pmatrix}\\ &=\begin{pmatrix}1&0&0\\2&1&0\\4&0&1\end{pmatrix} \begin{pmatrix}1&0&0\\0&1&0\\0&2&1\end{pmatrix}\\ &=\begin{pmatrix}1&0&0\\2&1&0\\4&2&1\end{pmatrix} \end{align*}$$

By the theorem above, we have:

$$\boldsymbol{A}= \begin{pmatrix}1&0&0\\2&1&0\\4&2&1\end{pmatrix} \begin{pmatrix}1&3&2\\0&-5&-3\\0&0&0\end{pmatrix}$$
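The procedure in this example generalizes to any square matrix that can be reduced without row swaps: instead of inverting and multiplying elementary matrices explicitly, we can record each elimination multiplier directly in $\boldsymbol{L}$. Below is a minimal Python sketch of this idea; `lu_no_pivot` is a hypothetical helper name, and the function assumes no row swap is ever needed:

```python
import numpy as np

def lu_no_pivot(A):
    """LU factorization by Gaussian elimination without row swaps."""
    U = np.asarray(A, dtype=float).copy()
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            if U[i, k] == 0:
                continue  # nothing to eliminate in this entry
            if U[k, k] == 0:
                raise ValueError("zero pivot: cannot proceed without a row swap")
            L[i, k] = U[i, k] / U[k, k]     # record the multiplier in L
            U[i, k:] -= L[i, k] * U[k, k:]  # eliminate the entry below the pivot
    return L, U

A = np.array([[1, 3, 2], [2, 1, 1], [4, 2, 2]])
L, U = lu_no_pivot(A)
print(L)  # [[1. 0. 0.], [2. 1. 0.], [4. 2. 1.]]
print(U)  # [[ 1.  3.  2.], [ 0. -5. -3.], [ 0.  0.  0.]]
assert np.allclose(L @ U, A)
```

Note that the multipliers $2$, $4$, and $2$ recorded in $\boldsymbol{L}$ are exactly the negatives of the entries appearing in $\boldsymbol{E}_1$, $\boldsymbol{E}_2$, and $\boldsymbol{E}_3$ above.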
Theorem.

LU factorization of a matrix is not unique

In general, the $\boldsymbol{LU}$ factorization of a matrix is not unique.

Counterexample. Recall the earlier example in which we performed $\boldsymbol{LU}$ factorization on the following matrix:

$$\boldsymbol{A}=\begin{pmatrix}1&2\\3&4\end{pmatrix}$$

The row echelon form of $\boldsymbol{A}$ is:

$$\begin{equation}\label{eq:GiGJS7dCfh5KEbUDdiw} \begin{pmatrix}1&2\\3&4\end{pmatrix} \sim \begin{pmatrix}1&2\\0&-2\end{pmatrix} \end{equation}$$

The corresponding elementary matrix and its inverse are:

$$\boldsymbol{E}= \begin{pmatrix}1&0\\-3&1\end{pmatrix} ,\;\;\;\;\;\; \boldsymbol{E}^{-1}= \begin{pmatrix}1&0\\3&1\end{pmatrix}$$

Therefore, $\boldsymbol{A}$ has the following $\boldsymbol{LU}$ form:

$$\begin{align*} \boldsymbol{A}&= \boldsymbol{E}^{-1}\boldsymbol{U}\\ &=\begin{pmatrix}1&0\\3&1\end{pmatrix} \begin{pmatrix}1&2\\0&-2\end{pmatrix} \end{align*}$$

However, we can further reduce the row echelon form of $\boldsymbol{A}$ in \eqref{eq:GiGJS7dCfh5KEbUDdiw} to get:

$$\begin{equation}\label{eq:KNM0cce9DtV8B1NtAFq} \begin{pmatrix}1&2\\3&4\end{pmatrix} \sim \begin{pmatrix}1&2\\0&-2\end{pmatrix} \sim \begin{pmatrix}1&2\\0&1\end{pmatrix} \end{equation}$$

Here, we multiplied the second row by $-\frac{1}{2}$. The elementary matrices are:

$$\boldsymbol{E}_1= \begin{pmatrix}1&0\\-3&1\end{pmatrix} ,\;\;\;\;\;\; \boldsymbol{E}_2= \begin{pmatrix}1&0\\0&-\frac{1}{2}\end{pmatrix}$$

By theoremlink, the inverses of $\boldsymbol{E}_1$ and $\boldsymbol{E}_2$ are:

$$\boldsymbol{E}_1^{-1}= \begin{pmatrix}1&0\\3&1\end{pmatrix} ,\;\;\;\;\;\; \boldsymbol{E}_2^{-1}= \begin{pmatrix}1&0\\0&-2\end{pmatrix}$$

This means that $\boldsymbol{A}$ can also be factorized into $\boldsymbol{LU}$ form like so:

$$\begin{align*} \boldsymbol{A}&= (\boldsymbol{E}_1^{-1}\boldsymbol{E}_2^{-1})\boldsymbol{U}\\ &=\begin{pmatrix}1&0\\3&1\end{pmatrix} \begin{pmatrix}1&0\\0&-2\end{pmatrix} \begin{pmatrix}1&2\\0&1\end{pmatrix}\\ &= \begin{pmatrix}1&0\\3&-2\end{pmatrix} \begin{pmatrix}1&2\\0&1\end{pmatrix} \end{align*}$$

In fact, we could multiply the first or second row by any nonzero scalar, which gives yet another row echelon form $\boldsymbol{U}$. Therefore, there are infinitely many $\boldsymbol{LU}$ forms of $\boldsymbol{A}$. This completes the proof.
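Both factorizations can be verified numerically. A quick NumPy sketch using the two pairs of factors derived above:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])

# First factorization: L has a unit diagonal
L1, U1 = np.array([[1, 0], [3, 1]]), np.array([[1, 2], [0, -2]])
# Second factorization: obtained from the rescaled row echelon form
L2, U2 = np.array([[1, 0], [3, -2]]), np.array([[1, 2], [0, 1]])

assert np.allclose(L1 @ U1, A)
assert np.allclose(L2 @ U2, A)
```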

Theorem.

Not all square matrices can be LU factorized

The earlier theorem guarantees the existence of an $\boldsymbol{LU}$ form of a square matrix $\boldsymbol{A}$ if $\boldsymbol{A}$ can be reduced to a row echelon form without performing any row swaps. If this condition is not satisfied, then $\boldsymbol{A}$ generally cannot be $\boldsymbol{LU}$ factorized.

Counterexample. Consider the following matrix:

$$\boldsymbol{A}=\begin{pmatrix} 0&1\\1&0 \end{pmatrix}$$

Notice how we cannot eliminate the $1$ below the $0$, because we can only add multiples of the top row to the bottom row. Therefore, this matrix can only be reduced to a row echelon form by interchanging its rows. This means that the existence of an $\boldsymbol{LU}$ form of $\boldsymbol{A}$ is not guaranteed. Let's show that $\boldsymbol{A}$ in fact does not have an $\boldsymbol{LU}$ form. Let $\boldsymbol{L}$ and $\boldsymbol{U}$ be defined like so:

$$\boldsymbol{L}=\begin{pmatrix} l_{11}&0\\ l_{21}&l_{22}\\ \end{pmatrix},\;\;\;\;\;\; \boldsymbol{U}=\begin{pmatrix} u_{11}&u_{12}\\ 0&u_{22}\\ \end{pmatrix}$$

The product $\boldsymbol{LU}$ is:

$$\boldsymbol{LU}=\begin{pmatrix} l_{11}&0\\ l_{21}&l_{22}\\ \end{pmatrix}\begin{pmatrix} u_{11}&u_{12}\\0&u_{22}\\ \end{pmatrix}=\begin{pmatrix} l_{11}u_{11}&l_{11}u_{12}\\ l_{21}u_{11}&l_{21}u_{12}+l_{22}u_{22} \end{pmatrix}$$

For $\boldsymbol{LU}$ to equal $\boldsymbol{A}$, the top-left entry requires $l_{11}u_{11}=0$, so $l_{11}=0$ or $u_{11}=0$. If $l_{11}=0$, then the top-right entry $l_{11}u_{12}$ is $0$, which contradicts the required value of $1$. If $u_{11}=0$, then the bottom-left entry $l_{21}u_{11}$ is $0$, again a contradiction. Therefore $\boldsymbol{LU}$ can never equal $\boldsymbol{A}$, and so $\boldsymbol{A}$ does not have an $\boldsymbol{LU}$ form.

As another interesting example, consider the following matrix:

$$\boldsymbol{A}=\begin{pmatrix} 0&0\\1&0 \end{pmatrix}$$

Again, the earlier theorem cannot guarantee the existence of an $\boldsymbol{LU}$ form of $\boldsymbol{A}$. However, this does not mean that an $\boldsymbol{LU}$ form does not exist. In fact, one does exist in this case:

$$\begin{pmatrix} 0&0\\1&0 \end{pmatrix}= \begin{pmatrix} 0&0\\1&0 \end{pmatrix} \begin{pmatrix} 1&0\\0&1 \end{pmatrix}$$
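In practice, numerical libraries work around this limitation by allowing row swaps and computing a permuted factorization $\boldsymbol{A}=\boldsymbol{PLU}$, where $\boldsymbol{P}$ is a permutation matrix. For instance, SciPy's `scipy.linalg.lu` returns exactly this triple; a quick sketch on the matrix from the first counterexample:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[0., 1.], [1., 0.]])
P, L, U = lu(A)  # A = P @ L @ U, with the row swap captured by P
print(P)         # [[0. 1.], [1. 0.]] -- the row interchange
print(L)         # identity
print(U)         # identity
assert np.allclose(P @ L @ U, A)
```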
Theorem.

Solving linear systems using LU factorization

Consider the linear system $\boldsymbol{Ax}=\boldsymbol{b}$. Suppose $\boldsymbol{A}$ can be factorized as $\boldsymbol{A}=\boldsymbol{LU}$. Substituting $\boldsymbol{LU}$ for $\boldsymbol{A}$ in $\boldsymbol{Ax}=\boldsymbol{b}$ gives:

$$\begin{equation}\label{eq:r9NAn9g56oN1waXrjoV} \boldsymbol{LUx}=\boldsymbol{b} \end{equation}$$

Define a vector $\boldsymbol{y}$ such that:

$$\begin{equation}\label{eq:jVmN4hvWsxyeJFPABTe} \boldsymbol{y}=\boldsymbol{Ux} \end{equation}$$

Substituting $\boldsymbol{y}$ into \eqref{eq:r9NAn9g56oN1waXrjoV} gives:

$$\boldsymbol{Ly}=\boldsymbol{b}$$

We solve this system to obtain $\boldsymbol{y}$, which we then substitute into \eqref{eq:jVmN4hvWsxyeJFPABTe} to solve for $\boldsymbol{x}$. Since $\boldsymbol{L}$ is lower triangular and $\boldsymbol{U}$ is upper triangular, both systems can be solved cheaply by forward and backward substitution, as sketched below.
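Each triangular system can be solved in a single pass, with no further elimination. Below is a minimal sketch of the two substitution routines (assuming the triangular factors have nonzero diagonal entries; `forward_sub` and `back_sub` are hypothetical helper names):

```python
import numpy as np

def forward_sub(L, b):
    """Solve L y = b for y, where L is lower triangular."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):  # solve from the top row down
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve U x = y for x, where U is upper triangular."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):  # solve from the bottom row up
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x
```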

Example.

Solving a linear system using LU factorization

Solve the following system using $\boldsymbol{LU}$ factorization:

$$\begin{cases} 2x_1+4x_2=6\\ 2x_1+3x_2=2\\ \end{cases}$$

Solution. The system can be written as $\boldsymbol{Ax}=\boldsymbol{b}$ where:

$$\boldsymbol{A}=\begin{pmatrix} 2&4\\2&3 \end{pmatrix},\;\;\;\;\;\; \boldsymbol{x}=\begin{pmatrix} x_1\\x_2 \end{pmatrix},\;\;\;\;\;\; \boldsymbol{b}=\begin{pmatrix} 6\\2 \end{pmatrix}$$

The first step is to obtain an $\boldsymbol{LU}$ form of $\boldsymbol{A}$ like so:

$$\begin{pmatrix}2&4\\2&3 \end{pmatrix}\sim \begin{pmatrix}2&4\\0&-1 \end{pmatrix}$$

The corresponding elementary matrix and its inverse are:

$$\boldsymbol{E}= \begin{pmatrix} 1&0\\-1&1 \end{pmatrix},\;\;\;\;\;\;\; \boldsymbol{E}^{-1}= \begin{pmatrix} 1&0\\1&1 \end{pmatrix}$$

By the theorem above, $\boldsymbol{A}$ can be factorized into:

$$\begin{align*} \boldsymbol{A}&= \boldsymbol{LU}\\ &= \begin{pmatrix} 1&0\\1&1 \end{pmatrix} \begin{pmatrix} 2&4\\0&-1 \end{pmatrix} \end{align*}$$

We first solve the system $\boldsymbol{Ly}=\boldsymbol{b}$ for $\boldsymbol{y}$ like so:

$$\begin{pmatrix} 1&0\\1&1 \end{pmatrix} \begin{pmatrix} y_1\\y_2 \end{pmatrix}= \begin{pmatrix} 6\\2 \end{pmatrix}$$

From the first row, $y_1=6$; substituting this into the second row $y_1+y_2=2$ gives $y_2=-4$. We now solve $\boldsymbol{y}=\boldsymbol{Ux}$ for $\boldsymbol{x}$ like so:

$$\begin{pmatrix} 6\\-4 \end{pmatrix}= \begin{pmatrix} 2&4\\0&-1 \end{pmatrix} \begin{pmatrix} x_1\\x_2 \end{pmatrix}$$

From the second row, $-x_2=-4$ and so $x_2=4$; substituting this into the first row $2x_1+4x_2=6$ gives $x_1=-5$. Therefore, the solution to $\boldsymbol{Ax}=\boldsymbol{b}$ is:

$$\boldsymbol{x}= \begin{pmatrix} -5\\4 \end{pmatrix}$$
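We can verify this hand computation numerically. A quick NumPy sketch using the factors $\boldsymbol{L}$ and $\boldsymbol{U}$ from this example:

```python
import numpy as np

L = np.array([[1., 0.], [1., 1.]])
U = np.array([[2., 4.], [0., -1.]])
b = np.array([6., 2.])

y = np.linalg.solve(L, b)  # forward step:  Ly = b  gives y = [6, -4]
x = np.linalg.solve(U, y)  # backward step: Ux = y  gives x = [-5, 4]
print(x)
```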

Benefits of using LU factorization to solve linear systems

You may wonder why we go through the hassle of solving a system of linear equations using $\boldsymbol{LU}$ factorization instead of the more straightforward approach of Gaussian elimination. There are two main reasons:

  • although $\boldsymbol{LU}$ factorization is cumbersome by hand, a computer implementation of solving a linear system via $\boldsymbol{LU}$ factorization requires essentially the same number of operations as Gaussian elimination.

  • if we are given multiple linear systems $\boldsymbol{Ax}=\boldsymbol{b}_1$, $\boldsymbol{Ax}=\boldsymbol{b}_2$, $\cdots$, $\boldsymbol{Ax}=\boldsymbol{b}_k$, then solving them becomes much cheaper once we work out the $\boldsymbol{LU}$ factorization of $\boldsymbol{A}$ at the start: only the two triangular solves need to be repeated for each right-hand side, as demonstrated below. If we were to use Gaussian elimination, we would have to perform the elimination from scratch for each system.

For these reasons, $\boldsymbol{LU}$ factorization is commonly used in computer programs to solve linear systems.
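To illustrate the second point, SciPy exposes this factor-once, solve-many pattern through `lu_factor` and `lu_solve`. A sketch reusing the matrix from the example above (the extra right-hand sides are arbitrary):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[2., 4.], [2., 3.]])
lu_piv = lu_factor(A)  # the expensive factorization, performed once

# Each right-hand side now costs only two cheap triangular solves
for b in (np.array([6., 2.]), np.array([1., 0.]), np.array([0., 1.])):
    print(lu_solve(lu_piv, b))  # first prints [-5.  4.]
```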
