# Cramer's rule and finding the inverse matrix using determinants

Last updated: Aug 12, 2023

Tags: Linear Algebra
Theorem.

# Cramer's rule

Consider $\boldsymbol{Ax}=\boldsymbol{b}$ where $\boldsymbol{A}$ is an invertible $n\times{n}$ matrix and $\boldsymbol{x}$ and $\boldsymbol{b}$ are vectors in $\mathbb{R}^n$. The components of $\boldsymbol{x}$ are given by:

$$x_i=\frac{\det(\boldsymbol{A}_i(\boldsymbol{b}))} {\det(\boldsymbol{A})}$$

Where $\boldsymbol{A}_i(\boldsymbol{b})$ is a matrix that is identical to $\boldsymbol{A}$ except that the $i$-th column is replaced by $\boldsymbol{b}$.

Proof. Consider the following matrix:

$$\boldsymbol{A}= \begin{pmatrix} \vert&\vert&\cdots&\vert\\ \boldsymbol{a}_1&\boldsymbol{a}_2&\cdots&\boldsymbol{a}_n\\ \vert&\vert&\cdots&\vert \end{pmatrix}$$

Similarly, consider the identity matrix $\boldsymbol{I}$ below:

$$\boldsymbol{I}= \begin{pmatrix} \vert&\vert&\cdots&\vert\\ \boldsymbol{e}_1&\boldsymbol{e}_2&\cdots&\boldsymbol{e}_n\\ \vert&\vert&\cdots&\vert \end{pmatrix}$$

Here, the columns of $\boldsymbol{I}$ are the standard unit vectors.

Now, by definition, $\boldsymbol{A}_i(\boldsymbol{b})$ is the same as matrix $\boldsymbol{A}$ except that the $i$-th column is replaced by $\boldsymbol{b}$, that is:

$$\boldsymbol{A}_i(\boldsymbol{b})= \begin{pmatrix} \vert&\vert&\cdots&\vert& \vert&\vert&\cdots&\vert \\ \boldsymbol{a}_1&\boldsymbol{a}_2&\cdots&\boldsymbol{a}_{i-1} &\boldsymbol{b} &\boldsymbol{a}_{i+1}&\cdots&\boldsymbol{a}_{n} \\ \vert&\vert&\cdots&\vert& \vert&\vert&\cdots&\vert \end{pmatrix}$$

Also, by definition, $\boldsymbol{I}_i(\boldsymbol{x})$ is:

$$\boldsymbol{I}_i(\boldsymbol{x})= \begin{pmatrix} \vert&\vert&\cdots&\vert& \vert&\vert&\cdots&\vert \\ \boldsymbol{e}_1&\boldsymbol{e}_2&\cdots&\boldsymbol{e}_{i-1} &\boldsymbol{x} &\boldsymbol{e}_{i+1}&\cdots&\boldsymbol{e}_{n} \\ \vert&\vert&\cdots&\vert& \vert&\vert&\cdots&\vert \end{pmatrix}$$

Notice how the $i$-th row contains all zeros except for $x_i$. By the Laplace expansion theorem, we can compute the determinant of $\boldsymbol{I}_i(\boldsymbol{x})$ by cofactor expansion along the $i$-th row:

$$\label{eq:mc88MydaJ22kwUgV9Ox} \begin{aligned}[b] \det\big(\boldsymbol{I}_i(\boldsymbol{x})\big) &=x_i\cdot\det(\boldsymbol{I})\\ &=x_i\cdot(1)\\ &=x_i \end{aligned}$$

Here, we used the fact that the determinant of the identity matrix is $\det(\boldsymbol{I})=1$.

Now, the matrix product $\boldsymbol{A}\Big(\boldsymbol{I}_i(\boldsymbol{x})\Big)$ is:

$$\label{eq:KlruqkGMOuMi0fTgm1X} \begin{aligned}[b] \boldsymbol{A}\Big(\boldsymbol{I}_i(\boldsymbol{x})\Big)&= \boldsymbol{A}\begin{pmatrix} \vert&\vert&\cdots&\vert& \vert&\vert&\cdots&\vert \\ \boldsymbol{e}_1&\boldsymbol{e}_2&\cdots&\boldsymbol{e}_{i-1} &\boldsymbol{x} &\boldsymbol{e}_{i+1}&\cdots&\boldsymbol{e}_{n} \\ \vert&\vert&\cdots&\vert& \vert&\vert&\cdots&\vert \end{pmatrix}\\ &=\begin{pmatrix} \vert&\vert&\cdots&\vert&\vert&\vert&\cdots&\vert\\ \boldsymbol{A}\boldsymbol{e}_1&\boldsymbol{A}\boldsymbol{e}_2&\cdots&\boldsymbol{A}\boldsymbol{e}_{i-1} &\boldsymbol{A}\boldsymbol{x}&\boldsymbol{A}\boldsymbol{e}_{i+1}&\cdots&\boldsymbol{A}\boldsymbol{e}_{n}\\ \vert&\vert&\cdots&\vert& \vert&\vert&\cdots&\vert \end{pmatrix}\\ &=\begin{pmatrix} \vert&\vert&\cdots&\vert&\vert&\vert&\cdots&\vert\\ \boldsymbol{a}_1&\boldsymbol{a}_2&\cdots&\boldsymbol{a}_{i-1} &\boldsymbol{b}&\boldsymbol{a}_{i+1}&\cdots&\boldsymbol{a}_{n}\\ \vert&\vert&\cdots&\vert& \vert&\vert&\cdots&\vert \end{pmatrix}\\ &=\boldsymbol{A}_i(\boldsymbol{b}) \end{aligned}$$

Note that the second equality holds because a matrix product can be computed column by column, and the third because $\boldsymbol{A}\boldsymbol{e}_k=\boldsymbol{a}_k$ and $\boldsymbol{A}\boldsymbol{x}=\boldsymbol{b}$. Now, we take the determinant of both sides of \eqref{eq:KlruqkGMOuMi0fTgm1X} to get:

$$\det\Big(\boldsymbol{A}\big(\boldsymbol{I}_i(\boldsymbol{x})\big)\Big)= \det\big(\boldsymbol{A}_i(\boldsymbol{b})\big)$$

From the multiplicative property of determinants, we have that:

$$\det(\boldsymbol{A})\cdot\det\big(\boldsymbol{I}_i(\boldsymbol{x})\big)= \det\big(\boldsymbol{A}_i(\boldsymbol{b})\big)$$

Substituting \eqref{eq:mc88MydaJ22kwUgV9Ox} gives:

\begin{align*} \det(\boldsymbol{A})\cdot{x_i}&= \det\big(\boldsymbol{A}_i(\boldsymbol{b})\big)\\ x_i&=\frac{\det\big(\boldsymbol{A}_i(\boldsymbol{b})\big)}{\det(\boldsymbol{A})} \end{align*}

This completes the proof.

Example.

## Solving systems of linear equations using Cramer's rule

Solve the following system of linear equations using Cramer's rule:

$$\begin{cases} x_1+2x_2&=7\\ x_1+x_2&=3 \end{cases}$$

Solution. The system of linear equations can be expressed as:

$$\begin{pmatrix} 1&2\\1&1 \end{pmatrix} \begin{pmatrix} x_1\\x_2 \end{pmatrix}= \begin{pmatrix} 7\\3 \end{pmatrix}$$

Let $\boldsymbol{A}$ represent the matrix on the left and $\boldsymbol{b}$ represent the vector on the right. To use Cramer's rule, we first need to compute the determinant of $\boldsymbol{A}$ like so:

\begin{align*} \det(\boldsymbol{A})&= (1)(1)-(2)(1)\\ &=1-2\\ &=-1 \end{align*}

Next, $\boldsymbol{A}_1(\boldsymbol{b})$ and $\boldsymbol{A}_2(\boldsymbol{b})$ are:

$$\boldsymbol{A}_1(\boldsymbol{b})= \begin{pmatrix} 7&2\\3&1 \end{pmatrix},\;\;\;\;\;\; \boldsymbol{A}_2(\boldsymbol{b})= \begin{pmatrix} 1&7\\1&3 \end{pmatrix}$$

We now compute the determinant of each:

\begin{align*} \det\big(\boldsymbol{A}_1(\boldsymbol{b})\big) &=(7)(1)-(2)(3)\\ &=1\\\\ \det\big(\boldsymbol{A}_2(\boldsymbol{b})\big) &=(1)(3)-(7)(1)\\ &=-4 \end{align*}

Now, we use Cramer's rule to solve the system:

\begin{align*} x_1 &=\frac{\det\big(\boldsymbol{A}_1(\boldsymbol{b})\big)} {\det(\boldsymbol{A})}\\ &=\frac{1}{-1}\\ &=-1\\\\ x_2 &=\frac{\det\big(\boldsymbol{A}_2(\boldsymbol{b})\big)} {\det(\boldsymbol{A})}\\ &=\frac{-4}{-1}\\ &=4 \end{align*}

Therefore, the solution is:

$$\begin{pmatrix} x_1\\x_2 \end{pmatrix}= \begin{pmatrix} -1\\4\end{pmatrix}$$

Finally, just to confirm that this is indeed the solution to our system of linear equations, let's substitute $x_1$ and $x_2$ into the system:

$$\begin{cases} -1+2(4)&=7\\ (-1)+4&=3 \end{cases}$$

Both equations hold, so this is indeed the solution to the system 🎉!
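The column-replacement recipe above translates directly into code. Below is a minimal NumPy sketch (the helper name `cramer_solve` is illustrative, not a library function) that solves the same system:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule (A must be square and invertible)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b  # A_i(b): replace the i-th column of A with b
        x[i] = np.linalg.det(A_i) / det_A
    return x

x = cramer_solve([[1, 2], [1, 1]], [7, 3])
print(x)  # [-1.  4.]
```

Note that Cramer's rule computes $n+1$ determinants, so for anything beyond small systems, `np.linalg.solve` is the practical choice.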

Definition.

# Adjugate of a matrix

The adjugate (or classical adjoint) of a matrix $\boldsymbol{A}$ is defined as the transpose of the matrix of cofactors of $\boldsymbol{A}$, that is:

$$\mathrm{adj}(\boldsymbol{A})= \begin{pmatrix} C_{11}&C_{21}&\cdots&C_{n1}\\ C_{12}&C_{22}&\cdots&C_{n2}\\ \vdots&\vdots&\ddots&\vdots\\ C_{1n}&C_{2n}&\cdots&C_{nn}\\ \end{pmatrix}$$

Example.

## Finding the adjugate of a 2x2 matrix

Find the adjugate of the following matrix:

$$\boldsymbol{A}= \begin{pmatrix} 3&1\\2&4 \end{pmatrix}$$

Solution. The adjugate of $\boldsymbol{A}$ is:

$$\mathrm{adj}(\boldsymbol{A})= \begin{pmatrix} C_{11}&C_{21}\\C_{12}&C_{22} \end{pmatrix}$$

We now need to find the cofactors:

\begin{align*} C_{11}&=4\\ C_{21}&=-1\\ C_{12}&=-2\\ C_{22}&=3\\ \end{align*}

Therefore, the adjugate of $\boldsymbol{A}$ is:

$$\mathrm{adj}(\boldsymbol{A})= \begin{pmatrix} 4&-1\\-2&3 \end{pmatrix}$$
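The cofactor bookkeeping in this example can be sanity-checked in code. This is an illustrative NumPy sketch (the name `adjugate_2x2` is hypothetical) that places the four cofactors directly into their transposed positions:

```python
import numpy as np

def adjugate_2x2(A):
    """Adjugate of a 2x2 matrix: cofactors transposed into place."""
    (a, b), (c, d) = np.asarray(A, dtype=float)
    # C11 = d, C21 = -b, C12 = -c, C22 = a
    return np.array([[d, -b], [-c, a]])

print(adjugate_2x2([[3, 1], [2, 4]]))
# [[ 4. -1.]
#  [-2.  3.]]
```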

Theorem.

# Finding the inverse matrix using the adjugate of a matrix

If $\boldsymbol{A}$ is an invertible matrix, then its inverse can be computed by:

$$\boldsymbol{A}^{-1} = \frac{1}{\det(\boldsymbol{A})} \;\mathrm{adj}(\boldsymbol{A})$$

Where $\text{adj}(\boldsymbol{A})$ is the adjugate of $\boldsymbol{A}$.

Proof. By definition, a matrix $\boldsymbol{A}$ is invertible if and only if there exists another matrix $\boldsymbol{B}$ such that:

$$\boldsymbol{AB}=\boldsymbol{I}$$

Once again, we treat matrices as a collection of columns:

$$\boldsymbol{A}= \begin{pmatrix} \vert&\vert&\cdots&\vert\\ \boldsymbol{a}_1&\boldsymbol{a}_2&\cdots&\boldsymbol{a}_n\\ \vert&\vert&\cdots&\vert \end{pmatrix}, \;\;\;\;\;\;\boldsymbol{B}= \begin{pmatrix} \vert&\vert&\cdots&\vert\\ \boldsymbol{b}_1&\boldsymbol{b}_2&\cdots&\boldsymbol{b}_n\\ \vert&\vert&\cdots&\vert \end{pmatrix}, \;\;\;\;\;\; \boldsymbol{I}= \begin{pmatrix} \vert&\vert&\cdots&\vert\\ \boldsymbol{e}_1&\boldsymbol{e}_2&\cdots&\boldsymbol{e}_n\\ \vert&\vert&\cdots&\vert \end{pmatrix}$$

Where the column vectors in $\boldsymbol{I}$ represent the standard unit vectors. Since a matrix product can be computed column by column, the product $\boldsymbol{AB}$ can be expressed as:

$$\boldsymbol{AB}= \begin{pmatrix} \vert&\vert&\cdots&\vert\\ \boldsymbol{A}\boldsymbol{b}_1&\boldsymbol{A}\boldsymbol{b}_2&\cdots&\boldsymbol{A}\boldsymbol{b}_n\\ \vert&\vert&\cdots&\vert \end{pmatrix}$$

Aligning the columns of $\boldsymbol{AB}$ and $\boldsymbol{I}$ gives:

\begin{align*} \boldsymbol{A}\boldsymbol{b}_1&=\boldsymbol{e}_1\\ \boldsymbol{A}\boldsymbol{b}_2&=\boldsymbol{e}_2\\ &\vdots\\ \boldsymbol{A}\boldsymbol{b}_n&=\boldsymbol{e}_n \end{align*}

Let's consider the $j$-th equality:

$$\label{eq:NzWlAObw1vbirulI81f} \boldsymbol{A}\boldsymbol{b}_j= \boldsymbol{e}_j$$

This can be thought of as a system of linear equations. The components of $\boldsymbol{b}_j$ can be expressed as:

$$\label{eq:QHoHxlY9MaHnXYENUgz} \boldsymbol{b}_j= \begin{pmatrix} b_{1j}\\b_{2j}\\\vdots\\b_{nj} \end{pmatrix}$$

We can find each component using Cramer's rule like so:

$$\label{eq:VhAat4egrXF78WCsobx} b_{ij}= \frac{\det\big(\boldsymbol{A}_i(\boldsymbol{e}_j)\big)} {\det(\boldsymbol{A})}$$

The numerator of \eqref{eq:VhAat4egrXF78WCsobx} is:

$$\boldsymbol{A}_i(\boldsymbol{e}_j)= \begin{pmatrix} \vert&\vert&\cdots&\vert& \vert&\vert&\cdots&\vert \\ \boldsymbol{a}_1&\boldsymbol{a}_2&\cdots&\boldsymbol{a}_{i-1} &\boldsymbol{e}_j &\boldsymbol{a}_{i+1}&\cdots&\boldsymbol{a}_{n} \\ \vert&\vert&\cdots&\vert& \vert&\vert&\cdots&\vert \end{pmatrix}$$

By definition, column $\boldsymbol{e}_j$ has a $1$ in the $j$-th entry and zero in all other entries. We now perform cofactor expansion along the $i$-th column to obtain the determinant of $\boldsymbol{A}_i(\boldsymbol{e}_j)$ like so:

$$\label{eq:QwllPQsCa6VKNAJxQPS} \det\big(\boldsymbol{A}_i(\boldsymbol{e}_j)\big)= (-1)^{i+j}\cdot\det(\boldsymbol{A}_{ji})$$

Where $\boldsymbol{A}_{ji}$ represents the sub-matrix in which the $j$-th row and the $i$-th column of $\boldsymbol{A}$ are removed. For notational convenience, we write the right-hand side as the cofactor $C_{ji}$ of entry $a_{ji}$ to get:

$$\label{eq:DZT5wYfD8XE7rMz7g0f} \det\big(\boldsymbol{A}_i(\boldsymbol{e}_j)\big)= C_{ji}$$

Substituting \eqref{eq:DZT5wYfD8XE7rMz7g0f} into \eqref{eq:VhAat4egrXF78WCsobx} gives:

$$\label{eq:YSZHqoEIbE2fcJGaKue} b_{ij}= \frac{C_{ji}} {\det(\boldsymbol{A})}$$

Using \eqref{eq:YSZHqoEIbE2fcJGaKue}, we now express \eqref{eq:QHoHxlY9MaHnXYENUgz} as:

$$\boldsymbol{b}_j= \begin{pmatrix} C_{j1}/\det(\boldsymbol{A})\\ C_{j2}/\det(\boldsymbol{A})\\ \vdots\\ C_{jn}/\det(\boldsymbol{A})\\ \end{pmatrix}=\frac{1}{\det(\boldsymbol{A})} \begin{pmatrix} C_{j1}\\ C_{j2}\\ \vdots\\ C_{jn}\\ \end{pmatrix}$$

Remember, $\boldsymbol{b}_j$ represents the $j$-th column of $\boldsymbol{B}$. We can now express $\boldsymbol{B}$ in its entirety:

$$\label{eq:vrmyga8REQfbqxH3cqg} \boldsymbol{B}= \frac{1}{\det(\boldsymbol{A})} \begin{pmatrix} C_{11}&C_{21}&\cdots&C_{n1}\\ C_{12}&C_{22}&\cdots&C_{n2}\\ \vdots&\vdots&\ddots&\vdots\\ C_{1n}&C_{2n}&\cdots&C_{nn}\\ \end{pmatrix}$$

The matrix in \eqref{eq:vrmyga8REQfbqxH3cqg} is called the adjugate of $\boldsymbol{A}$, that is:

$$\mathrm{adj}(\boldsymbol{A})= \begin{pmatrix} C_{11}&C_{21}&\cdots&C_{n1}\\ C_{12}&C_{22}&\cdots&C_{n2}\\ \vdots&\vdots&\ddots&\vdots\\ C_{1n}&C_{2n}&\cdots&C_{nn}\\ \end{pmatrix}$$

Finally, $\boldsymbol{B}$ is the inverse of $\boldsymbol{A}$, so let's write it as $\boldsymbol{A}^{-1}$ instead. Therefore, we end up with the following result:

$$\boldsymbol{A}^{-1} = \frac{1}{\det(\boldsymbol{A})} \;\mathrm{adj}(\boldsymbol{A})$$

This completes the proof.
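The proof's construction can be turned into a direct, if inefficient, implementation for a general $n\times n$ matrix. The following NumPy sketch (helper names `adjugate` and `inverse_via_adjugate` are illustrative) builds the cofactor matrix entry by entry; in practice `np.linalg.inv` is faster and more numerically stable:

```python
import numpy as np

def adjugate(A):
    """Adjugate of a square matrix: transpose of its cofactor matrix."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # Minor: delete row i and column j, then take its determinant
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T  # adj(A) is the transpose of the cofactor matrix

def inverse_via_adjugate(A):
    """Inverse computed as adj(A) / det(A)."""
    A = np.asarray(A, dtype=float)
    return adjugate(A) / np.linalg.det(A)

A = np.array([[2, 1], [4, 3]])
print(inverse_via_adjugate(A))  # matches np.linalg.inv(A)
```

Computing each cofactor takes a determinant of an $(n-1)\times(n-1)$ minor, so this approach is mainly of theoretical interest for large $n$.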

Theorem.

# Finding the inverse of a 2x2 matrix using its determinant

Consider the following $2\times2$ matrix:

$$\boldsymbol{A}= \begin{pmatrix} a&b\\ c&d \end{pmatrix}$$

If $\mathrm{det}(\boldsymbol{A})\ne0$, that is, if $\boldsymbol{A}$ is invertible, then the inverse of $\boldsymbol{A}$ is computed as:

$$\boldsymbol{A}^{-1}= \frac{1}{ad-bc} \begin{pmatrix} d&-b\\ -c&a \end{pmatrix}$$

Note that if $\mathrm{det}{(\boldsymbol{A})}=0$, then the inverse of $\boldsymbol{A}$ does not exist.

Proof. By the previous theorem, we have that:

$$\label{eq:gZWXPHh86uoVaBPL25D} \boldsymbol{A}^{-1} = \frac{1}{\det(\boldsymbol{A})} \;\mathrm{adj}(\boldsymbol{A})$$

We also know that the determinant of a $2\times2$ matrix is:

$$\label{eq:S6Nx8o9E4W7RCn1bWUG} \det(\boldsymbol{A})=ad-bc$$

The adjugate of $\boldsymbol{A}$ is:

$$\mathrm{adj}(\boldsymbol{A})= \begin{pmatrix} C_{11}&C_{21}\\C_{12}&C_{22} \end{pmatrix}$$

The cofactors are:

\begin{align*} C_{11}&=d\\ C_{21}&=-b\\ C_{12}&=-c\\ C_{22}&=a\\ \end{align*}

The adjugate of $\boldsymbol{A}$ is therefore:

$$\label{eq:IprqJxAw30U05ioX3DD} \mathrm{adj}(\boldsymbol{A})= \begin{pmatrix} d&-b\\-c&a \end{pmatrix}$$

Substituting \eqref{eq:S6Nx8o9E4W7RCn1bWUG} and \eqref{eq:IprqJxAw30U05ioX3DD} into \eqref{eq:gZWXPHh86uoVaBPL25D} gives:

$$\boldsymbol{A}^{-1} = \frac{1}{ad-bc} \begin{pmatrix} d&-b\\-c&a \end{pmatrix}$$

This completes the proof.
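The closed-form $2\times2$ inverse can be written down directly. Here is an illustrative Python sketch (the name `inv_2x2` is hypothetical), including the singularity check from the theorem:

```python
import numpy as np

def inv_2x2(A):
    """Inverse of a 2x2 matrix via the closed-form adjugate formula."""
    (a, b), (c, d) = np.asarray(A, dtype=float)
    det = a * d - b * c
    if det == 0:
        raise ValueError("determinant is zero; the matrix is not invertible")
    return np.array([[d, -b], [-c, a]]) / det

print(inv_2x2([[2, 1], [4, 3]]))  # inverse: [[1.5, -0.5], [-2, 1]]
```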

Example.

## Computing the inverse of a 2x2 matrix

Compute the inverse of the following matrix:

$$\boldsymbol{A}= \begin{pmatrix} 2&1\\ 4&3\\ \end{pmatrix}$$

Solution. The inverse of matrix $\boldsymbol{A}$ is:

\begin{align*} \boldsymbol{A}^{-1} &=\frac{1}{(2)(3)-(1)(4)} \begin{pmatrix} 3&-1\\ -4&2\\ \end{pmatrix}\\ &=\frac{1}{2} \begin{pmatrix} 3&-1\\ -4&2\\ \end{pmatrix}\\ &= \begin{pmatrix} 1.5&-0.5\\ -2&1\\ \end{pmatrix} \end{align*}

Let's confirm that this is actually the inverse of $\boldsymbol{A}$ by computing $\boldsymbol{AA}^{-1}$ like so:

\begin{align*} \boldsymbol{A}\boldsymbol{A}^{-1} &=\begin{pmatrix}2&1\\4&3\\\end{pmatrix} \begin{pmatrix}1.5&-0.5\\-2&1\\\end{pmatrix}\\ &=\begin{pmatrix}(2)(1.5)+(1)(-2)&2(-0.5)+(1)(1)\\ (4)(1.5)+(3)(-2)&(4)(-0.5)+(3)(1)\\\end{pmatrix}\\ &=\begin{pmatrix}1&0\\0&1\\\end{pmatrix} \end{align*}

Indeed, the $\boldsymbol{A}^{-1}$ that we found is the inverse of $\boldsymbol{A}$.
