Comprehensive Guide on Determinant of Elementary Matrices

Last updated: Aug 12, 2023

Tags: Linear Algebra
Theorem.

Determinant of matrices that differ by one row

Let square matrices $\boldsymbol{A}$, $\boldsymbol{B}$ and $\boldsymbol{C}$ have the same entries except for the $r$-th row. If the $r$-th row of $\boldsymbol{C}$ is equal to the sum of the $r$-th rows of $\boldsymbol{A}$ and $\boldsymbol{B}$, then the determinant of $\boldsymbol{C}$ is:

$$\det(\boldsymbol{C})= \det(\boldsymbol{A})+ \det(\boldsymbol{B})$$

Proof. We can prove this using induction. We first show that the proposition holds for the $2\times2$ case. Consider the following matrices:

$$\begin{align*} \boldsymbol{A}&=\begin{pmatrix} a_{11}&a_{12}\\ a_{21}&a_{22}\\ \end{pmatrix},\;\;\;\;\;\; \boldsymbol{B}=\begin{pmatrix} a_{11}&a_{12}\\ b_{21}&b_{22}\\ \end{pmatrix},\;\;\;\;\;\; \boldsymbol{C}=\begin{pmatrix} a_{11}&a_{12}\\ a_{21}+b_{21}&a_{22}+b_{22}\\ \end{pmatrix} \end{align*}$$

Here, the second row of $\boldsymbol{C}$ is the sum of the second rows of $\boldsymbol{A}$ and $\boldsymbol{B}$. The determinants of $\boldsymbol{A}$ and $\boldsymbol{B}$ are:

$$\begin{align*} \det(\boldsymbol{A})&= a_{11}a_{22}-a_{12}a_{21}\\ \det(\boldsymbol{B})&= a_{11}b_{22}-a_{12}b_{21} \end{align*}$$

The determinant of $\boldsymbol{C}$ is:

$$\begin{align*} \det(\boldsymbol{C})&= a_{11}(a_{22}+b_{22})-a_{12}(a_{21}+b_{21})\\ &=a_{11}a_{22}+a_{11}b_{22}-a_{12}a_{21}-a_{12}b_{21}\\ &=(a_{11}a_{22}-a_{12}a_{21})+(a_{11}b_{22}-a_{12}b_{21})\\ &=\det(\boldsymbol{A})+\det(\boldsymbol{B})\\ \end{align*}$$

Similarly, we can show that the proposition holds if the first row of $\boldsymbol{C}$ is the sum of the first rows of $\boldsymbol{A}$ and $\boldsymbol{B}$. Therefore, the proposition holds for the $2\times2$ case.

Next, the formal proof requires assuming that the proposition holds for the $(n-1)\times(n-1)$ case and showing that it also holds for the $n\times{n}$ case. To keep things simple, however, we will show the $3\times3$ case; the same idea generalizes to the $n\times{n}$ case.

Consider the following matrices:

$$\begin{align*} \boldsymbol{A}&=\begin{pmatrix} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{pmatrix},\;\;\;\;\;\; \boldsymbol{B}=\begin{pmatrix} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ b_{31}&b_{32}&b_{33} \end{pmatrix},\;\;\;\;\;\; \boldsymbol{C}=\begin{pmatrix} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}+b_{31}&a_{32}+b_{32}&a_{33}+b_{33} \end{pmatrix} \end{align*}$$

Our goal is to show that:

$$\det(\boldsymbol{C})= \det(\boldsymbol{A})+ \det(\boldsymbol{B})$$

By the cofactor expansion theorem, we can expand along the first column to find the determinants of $\boldsymbol{A}$ and $\boldsymbol{B}$ like so:

$$\begin{align*} \det(\boldsymbol{A}) &=a_{11}\begin{vmatrix} a_{22}&a_{23}\\ a_{32}&a_{33}\\ \end{vmatrix}- a_{21}\begin{vmatrix} a_{12}&a_{13}\\ a_{32}&a_{33}\\ \end{vmatrix}+ a_{31}\begin{vmatrix} a_{12}&a_{13}\\ a_{22}&a_{23}\\ \end{vmatrix}\\ \det(\boldsymbol{B}) &=a_{11}\begin{vmatrix} a_{22}&a_{23}\\ b_{32}&b_{33}\\ \end{vmatrix}- a_{21}\begin{vmatrix} a_{12}&a_{13}\\ b_{32}&b_{33}\\ \end{vmatrix}+ b_{31}\begin{vmatrix} a_{12}&a_{13}\\ a_{22}&a_{23}\\ \end{vmatrix} \end{align*}$$

Compare the pairs of $2\times2$ determinants in $\det(\boldsymbol{A})$ and $\det(\boldsymbol{B})$ and notice how they differ by a single row. For instance, the first minors of $\det(\boldsymbol{A})$ and $\det(\boldsymbol{B})$ differ only in the second row. Using the inductive assumption that the proposition holds for the $2\times2$ case, we have that $\mathrm{det}(\boldsymbol{A})+ \mathrm{det}(\boldsymbol{B})$ is:

$$\begin{equation}\label{eq:wNDCJhMsM0q0JdH8WeV} a_{11}\begin{vmatrix} a_{22}&a_{23}\\ a_{32}+b_{32}&a_{33}+b_{33}\\ \end{vmatrix}- a_{21}\begin{vmatrix} a_{12}&a_{13}\\ a_{32}+b_{32}&a_{33}+b_{33}\\ \end{vmatrix}+ (a_{31}+b_{31})\begin{vmatrix} a_{12}&a_{13}\\ a_{22}&a_{23}\\ \end{vmatrix} \end{equation}$$

Now, let's perform cofactor expansion along the first column to find the determinant of $\boldsymbol{C}$ like so:

$$\begin{equation}\label{eq:GzFMBuCCHkIQD8aH0OZ} \det(\boldsymbol{C}) =a_{11}\begin{vmatrix} a_{22}&a_{23}\\ a_{32}+b_{32}&a_{33}+b_{33}\\ \end{vmatrix}- a_{21}\begin{vmatrix} a_{12}&a_{13}\\ a_{32}+b_{32}&a_{33}+b_{33}\\ \end{vmatrix}+ (a_{31}+b_{31})\begin{vmatrix} a_{12}&a_{13}\\ a_{22}&a_{23}\\ \end{vmatrix} \end{equation}$$

This is the same expression as that of $\mathrm{det}(\boldsymbol{A})+ \mathrm{det}(\boldsymbol{B})$ in \eqref{eq:wNDCJhMsM0q0JdH8WeV}. Therefore, we conclude that:

$$\det(\boldsymbol{C})= \det(\boldsymbol{A})+ \det(\boldsymbol{B})$$

This completes the proof.
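
Although this is not part of the proof, we can sanity-check the result numerically. Below is a minimal sketch assuming NumPy is available; `np.linalg.det` works in floating-point arithmetic, so we compare up to rounding error:

```python
import numpy as np

# A, B, C agree except in the last row, and the last row of C
# is the sum of the last rows of A and B.
rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 3)).astype(float)
B = A.copy()
B[2] = rng.integers(-5, 5, size=3)   # B differs from A only in row 3
C = A.copy()
C[2] = A[2] + B[2]                   # row 3 of C = row 3 of A + row 3 of B

lhs = np.linalg.det(C)
rhs = np.linalg.det(A) + np.linalg.det(B)
print(np.isclose(lhs, rhs))          # True, up to floating-point rounding
```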

Theorem.

Effect of multiplying a row by a scalar multiple on the determinant

Let $\boldsymbol{A}$ be a square matrix. If we multiply a row by a non-zero constant $k$ to produce matrix $\boldsymbol{B}$, then:

$$\mathrm{det}(\boldsymbol{B})=k\cdot\mathrm{det}(\boldsymbol{A})$$

Proof. Consider the following matrices:

$$\boldsymbol{A}=\begin{pmatrix} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{pmatrix},\;\;\;\;\;\; \boldsymbol{B}_1=\begin{pmatrix} ka_{11}&ka_{12}&ka_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{pmatrix},\;\;\;\;\;\; \boldsymbol{B}_2=\begin{pmatrix} a_{11}&a_{12}&a_{13}\\ ka_{21}&ka_{22}&ka_{23}\\ a_{31}&a_{32}&a_{33} \end{pmatrix}$$
  • matrix $\boldsymbol{B}_1$ is obtained by multiplying the first row of $\boldsymbol{A}$ by some scalar $k$.

  • matrix $\boldsymbol{B}_2$ is obtained by multiplying the second row of $\boldsymbol{A}$ by some scalar $k$.

Our goal is to show the following:

$$\begin{align*} \mathrm{det}(\boldsymbol{B}_1)&=k\cdot\mathrm{det}(\boldsymbol{A})\\ \mathrm{det}(\boldsymbol{B}_2)&=k\cdot\mathrm{det}(\boldsymbol{A}) \end{align*}$$

The reason why we consider the two cases above is that we are going to compute the determinant by performing cofactor expansion along the first row. Therefore, we must consider the case when we modify the first row as well as the case when we modify any other row.

Firstly, the determinant of $\boldsymbol{B}_1$ is:

$$\begin{align*} \det(\boldsymbol{B}_1)&= ka_{11}\begin{vmatrix} a_{22}&a_{23}\\a_{32}&a_{33}\\ \end{vmatrix}- ka_{12}\begin{vmatrix} a_{21}&a_{23}\\a_{31}&a_{33}\\ \end{vmatrix}+ ka_{13}\begin{vmatrix} a_{21}&a_{22}\\a_{31}&a_{32}\\ \end{vmatrix}\\ &=k\left(a_{11}\begin{vmatrix} a_{22}&a_{23}\\a_{32}&a_{33}\\ \end{vmatrix}- a_{12}\begin{vmatrix} a_{21}&a_{23}\\a_{31}&a_{33}\\ \end{vmatrix}+ a_{13}\begin{vmatrix} a_{21}&a_{22}\\a_{31}&a_{32}\\ \end{vmatrix}\right)\\ &=k\cdot\det(\boldsymbol{A}) \end{align*}$$

Next, the determinant of $\boldsymbol{B}_2$ is:

$$\det(\boldsymbol{B}_2)= a_{11}\begin{vmatrix} ka_{22}&ka_{23}\\a_{32}&a_{33}\\ \end{vmatrix}- a_{12}\begin{vmatrix} ka_{21}&ka_{23}\\a_{31}&a_{33}\\ \end{vmatrix}+ a_{13}\begin{vmatrix} ka_{21}&ka_{22}\\a_{31}&a_{32}\\ \end{vmatrix}$$

We then use our inductive assumption that $\mathrm{det}(\boldsymbol{B})=k\cdot\mathrm{det}(\boldsymbol{A})$ holds for the $2\times2$ case to pull $k$ out of each minor:

$$\begin{align*} \det(\boldsymbol{B}_2) &=ka_{11}\begin{vmatrix} a_{22}&a_{23}\\a_{32}&a_{33}\\ \end{vmatrix}- ka_{12}\begin{vmatrix} a_{21}&a_{23}\\a_{31}&a_{33}\\ \end{vmatrix}+ ka_{13}\begin{vmatrix} a_{21}&a_{22}\\a_{31}&a_{32}\\ \end{vmatrix}\\ &=k\left(a_{11}\begin{vmatrix} a_{22}&a_{23}\\a_{32}&a_{33}\\ \end{vmatrix}- a_{12}\begin{vmatrix} a_{21}&a_{23}\\a_{31}&a_{33}\\ \end{vmatrix}+ a_{13}\begin{vmatrix} a_{21}&a_{22}\\a_{31}&a_{32}\\ \end{vmatrix}\right)\\ &=k\cdot{\det(\boldsymbol{A})} \end{align*}$$

This completes the proof.
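
A quick numerical illustration (again assuming NumPy, with an arbitrary choice of $k$):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
k = 2.5
B = A.copy()
B[1] *= k                            # multiply the second row of A by k

print(np.isclose(np.linalg.det(B), k * np.linalg.det(A)))  # True
```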

Theorem.

Effect of interchanging two rows on the determinant

Let $\boldsymbol{A}$ be a square matrix. If we swap one row with another row to produce matrix $\boldsymbol{B}$, then:

$$\mathrm{det}(\boldsymbol{B}) =-\mathrm{det}(\boldsymbol{A})$$

Proof. We prove this by induction. We will consider the two cases below:

  • case when we interchange the first and second rows.

  • case when we interchange rows that are not the first row.

Consider the following two matrices:

$$\boldsymbol{A}=\begin{pmatrix} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{pmatrix},\;\;\;\;\;\;\boldsymbol{B}=\begin{pmatrix} a_{21}&a_{22}&a_{23}\\ a_{11}&a_{12}&a_{13}\\ a_{31}&a_{32}&a_{33} \end{pmatrix}$$

Here, matrix $\boldsymbol{B}$ is obtained by interchanging the first two rows of $\boldsymbol{A}$.

Our goal is to show the following:

$$\det(\boldsymbol{B})=-\det(\boldsymbol{A})$$

We know from the theorem that cofactor expansion along any row or column yields the same value, so the cofactor expansion along the $1$st column equals the cofactor expansion along the $1$st row, which is the determinant by definition:

$$C_{\text{col=1}}= C_{\text{row=1}}= \det(\boldsymbol{A})$$

The cofactor expansion along the $1$st column of $\boldsymbol{A}$ is:

$$\begin{equation}\label{eq:qt87x7fPDHW8HgOrlWj} \begin{aligned}[b] \det(\boldsymbol{A}) &=a_{11}\begin{vmatrix} a_{22}&a_{23}\\ a_{32}&a_{33}\\ \end{vmatrix}- a_{21}\begin{vmatrix} a_{12}&a_{13}\\ a_{32}&a_{33}\\ \end{vmatrix}+ a_{31}\begin{vmatrix} a_{12}&a_{13}\\ a_{22}&a_{23}\\ \end{vmatrix} \end{aligned} \end{equation}$$

Let's find the determinant of $\boldsymbol{B}$ by cofactor expansion along the $1$st column:

$$\begin{equation}\label{eq:mzYnKejhz7N7eJUpnwd} \begin{aligned}[b] \det(\boldsymbol{B}) &=a_{21}\begin{vmatrix} a_{12}&a_{13}\\a_{32}&a_{33}\\ \end{vmatrix}- a_{11}\begin{vmatrix} a_{22}&a_{23}\\a_{32}&a_{33}\\ \end{vmatrix}+ a_{31}\begin{vmatrix} a_{22}&a_{23}\\a_{12}&a_{13}\\ \end{vmatrix}\\ \end{aligned} \end{equation}$$

Now, we use the inductive assumption that interchanging the two rows of a $2\times2$ matrix flips the sign of its determinant. Applying this assumption to the third minor of \eqref{eq:mzYnKejhz7N7eJUpnwd} gives:

$$\begin{equation}\label{eq:g7HNKKkvzSNRMcssofA} \begin{aligned}[b] \det(\boldsymbol{B}) &=a_{21}\begin{vmatrix} a_{12}&a_{13}\\a_{32}&a_{33}\\ \end{vmatrix}- a_{11}\begin{vmatrix} a_{22}&a_{23}\\a_{32}&a_{33}\\ \end{vmatrix}- a_{31}\begin{vmatrix} a_{12}&a_{13}\\a_{22}&a_{23} \end{vmatrix}\\ \end{aligned} \end{equation}$$

We now add \eqref{eq:qt87x7fPDHW8HgOrlWj} and \eqref{eq:g7HNKKkvzSNRMcssofA} to get:

$$\det(\boldsymbol{A})+\det(\boldsymbol{B})=0$$

Therefore, we have that:

$$\det(\boldsymbol{B})=-\det(\boldsymbol{A})$$

Great, we have managed to show that the sign of the determinant flips when we interchange the first row and the second row.

* * *

Let's now consider the case when we interchange rows that are not the first row:

$$\boldsymbol{B}=\begin{pmatrix} a_{11}&a_{12}&a_{13}\\ a_{31}&a_{32}&a_{33}\\ a_{21}&a_{22}&a_{23} \end{pmatrix}$$

Here, matrix $\boldsymbol{B}$ is formed by interchanging the second and third rows of $\boldsymbol{A}$.

Using the cofactor expansion along the first row, we compute the determinant of $\boldsymbol{B}$ like so:

$$\begin{align*} \det(\boldsymbol{B}) &=a_{11}\begin{vmatrix} a_{32}&a_{33}\\ a_{22}&a_{23}\\ \end{vmatrix}- a_{12}\begin{vmatrix} a_{31}&a_{33}\\ a_{21}&a_{23}\\ \end{vmatrix}+ a_{13}\begin{vmatrix} a_{31}&a_{32}\\ a_{21}&a_{22}\\ \end{vmatrix} \end{align*}$$

We now use the inductive assumption that interchanging a pair of rows flips the sign of the determinant of $2\times2$ matrices:

$$\begin{align*} \det(\boldsymbol{B}) &=-a_{11}\begin{vmatrix} a_{22}&a_{23}\\ a_{32}&a_{33}\\ \end{vmatrix}+ a_{12}\begin{vmatrix} a_{21}&a_{23}\\ a_{31}&a_{33}\\ \end{vmatrix}- a_{13}\begin{vmatrix} a_{21}&a_{22}\\ a_{31}&a_{32}\\ \end{vmatrix}\\ &=-\left(a_{11}\begin{vmatrix} a_{22}&a_{23}\\ a_{32}&a_{33}\\ \end{vmatrix}- a_{12}\begin{vmatrix} a_{21}&a_{23}\\ a_{31}&a_{33}\\ \end{vmatrix}+ a_{13}\begin{vmatrix} a_{21}&a_{22}\\ a_{31}&a_{32}\\ \end{vmatrix}\right)\\ &=-\det(\boldsymbol{A}) \end{align*}$$

This completes the proof.
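
Note that swapping any two rows, adjacent or not, flips the sign, since a non-adjacent swap decomposes into an odd number of adjacent swaps. A minimal NumPy check:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = A.copy()
B[[0, 2]] = B[[2, 0]]                # swap rows 1 and 3 (non-adjacent)

print(np.isclose(np.linalg.det(B), -np.linalg.det(A)))  # True
```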

Theorem.

Determinant of matrix with a row of zeros

If matrix $\boldsymbol{A}$ has a row containing all zeros, then $\det(\boldsymbol{A})=0$.

Proof. Consider the following matrix with all zeros as the first row:

$$\boldsymbol{A}=\begin{pmatrix} 0&0&0\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{pmatrix}$$

We can easily see that the determinant computed using the cofactor expansion along the first row is equal to zero:

$$\det(\boldsymbol{A})=0$$

Next, we consider the case when the matrix contains a different row with all zeros:

$$\boldsymbol{A}=\begin{pmatrix} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ 0&0&0 \end{pmatrix}$$

We know from the row-interchange theorem above that interchanging rows only flips the sign of the determinant of $\boldsymbol{A}$. This means that we can keep interchanging rows until the row of all zeros is at the top:

$$\det(\boldsymbol{A})=\begin{vmatrix} 0&0&0\\ a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ \end{vmatrix}$$

In this case, since we performed the row-swapping operation twice, the sign of the determinant remains unchanged. Either way, cofactor expansion along the first row of zeros shows that this determinant evaluates to zero:

$$\det(\boldsymbol{A})=0$$

This completes the proof.
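
A quick numerical check with an arbitrary matrix containing a row of zeros:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 0.0, 0.0],      # row of zeros
              [4.0, 5.0, 6.0]])

print(np.isclose(np.linalg.det(A), 0.0))  # True
```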

Theorem.

Determinant of matrix with a pair of identical rows

If two rows of matrix $\boldsymbol{A}$ are the same, then $\det(\boldsymbol{A})=0$.

Proof. For simplicity, consider the following $3\times3$ matrix:

$$\boldsymbol{A}=\begin{pmatrix} a_{21}&a_{22}&a_{23}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{pmatrix}$$

Here, the first two rows are the same. Consider another matrix $\boldsymbol{B}$ that is equivalent to $\boldsymbol{A}$ except that the first two rows are swapped:

$$\boldsymbol{B}=\begin{pmatrix} a_{21}&a_{22}&a_{23}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{pmatrix}$$

From the row-interchange theorem above, since $\boldsymbol{B}$ can be obtained by a single row-swapping operation on $\boldsymbol{A}$, the determinant of $\boldsymbol{B}$ is:

$$\begin{equation}\label{eq:Lo6kw9nI8tHbHbaNrCp} \det(\boldsymbol{B})=- \det(\boldsymbol{A}) \end{equation}$$

However, $\boldsymbol{A}$ and $\boldsymbol{B}$ are identical, so their determinants must be equal:

$$\begin{equation}\label{eq:DtUzrzbz1t5MDsk3ILK} \det(\boldsymbol{A})= \det(\boldsymbol{B}) \end{equation}$$

The only way for \eqref{eq:Lo6kw9nI8tHbHbaNrCp} and \eqref{eq:DtUzrzbz1t5MDsk3ILK} to both be true is if:

$$\det(\boldsymbol{A})=\det(\boldsymbol{B})=0$$

This completes the proof.
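
As an illustration (not part of the proof), duplicating a row of a random matrix drives the determinant to zero:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
A[1] = A[0]                          # make rows 1 and 2 identical

print(np.isclose(np.linalg.det(A), 0.0))  # True
```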

Theorem.

Determinant of matrix where one row is a multiple of another

If one row of $\boldsymbol{A}$ is a scalar multiple of another row, then $\mathrm{det}(\boldsymbol{A})=0$.

Proof. For simplicity, consider the following $3\times3$ matrix:

$$\boldsymbol{A}=\begin{pmatrix} ka_{21}&ka_{22}&ka_{23}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{pmatrix}$$

Here, the first row is $k$ times the second row. If $k=0$, then the first row consists of all zeros and $\det(\boldsymbol{A})=0$ by the zero-row theorem above, so we may assume $k\ne0$.

Now, consider the matrix that is identical to $\boldsymbol{A}$ except that we divide the first row by $k$ like so:

$$\boldsymbol{A}'=\begin{pmatrix} a_{21}&a_{22}&a_{23}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{pmatrix}$$

We know from the row-scaling theorem above that the determinant of a matrix $\boldsymbol{A}'$ formed by multiplying a single row of a matrix $\boldsymbol{A}$ by a non-zero scalar, in this case $1/k$, is:

$$\begin{equation}\label{eq:TtajWUOX6gq69hxvjGk} \det(\boldsymbol{A}') = (1/k)\cdot\det(\boldsymbol{A}) \end{equation}$$

We know from the previous theorem that if two rows of a matrix are the same, then its determinant is zero. Since $\boldsymbol{A}'$ has two identical rows, we have that $\mathrm{det}(\boldsymbol{A}')=0$. Therefore, \eqref{eq:TtajWUOX6gq69hxvjGk} becomes:

$$0= (1/k)\cdot\det(\boldsymbol{A})$$

This means that $\mathrm{det}(\boldsymbol{A})=0$. This completes the proof.
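
A similar NumPy sanity check, where the first row is $k$ times the second (the identical-rows theorem is the special case $k=1$):

```python
import numpy as np

k = 3.0
A = np.array([[2.0, 1.0, 4.0],
              [2.0, 1.0, 4.0],
              [7.0, 5.0, 6.0]])
A[0] *= k                            # first row becomes k times the second

print(np.isclose(np.linalg.det(A), 0.0))  # True
```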

Theorem.

Effect of adding a multiple of a row to another row on the determinant

Let $\boldsymbol{A}$ be a square matrix. If we add a multiple of a row to another row to produce matrix $\boldsymbol{B}$, then:

$$\det(\boldsymbol{B})= \det(\boldsymbol{A})$$

Proof. Consider the following matrices:

$$\begin{align*} \boldsymbol{A}&=\begin{pmatrix} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{pmatrix},\;\;\;\;\;\; \boldsymbol{B}= \begin{pmatrix} a_{11}+ka_{21}&a_{12}+ka_{22}&a_{13}+ka_{23}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{pmatrix} \end{align*}$$

Here, $\boldsymbol{B}$ is obtained by multiplying the second row by $k$ and then adding it to the first row. Our goal is to show that:

$$\det(\boldsymbol{B})=\det(\boldsymbol{A})$$

Consider another matrix $\boldsymbol{A}'$ where we replace the first row of $\boldsymbol{A}$ with $k$ times the second row of $\boldsymbol{A}$ like so:

$$\begin{align*} \boldsymbol{A}'&=\begin{pmatrix} ka_{21}&ka_{22}&ka_{23}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{pmatrix} \end{align*}$$

For our reference, we show $\boldsymbol{A}$, $\boldsymbol{A}'$ and $\boldsymbol{B}$ below:

$$\boldsymbol{A}=\begin{pmatrix} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{pmatrix},\;\; \boldsymbol{A}'=\begin{pmatrix} ka_{21}&ka_{22}&ka_{23}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{pmatrix},\;\; \boldsymbol{B}=\begin{pmatrix} a_{11}+ka_{21}&a_{12}+ka_{22}&a_{13}+ka_{23}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{pmatrix}$$

Notice how all the rows except the first are the same across the three matrices. Moreover, the first row of $\boldsymbol{B}$ is equal to the sum of the first rows of $\boldsymbol{A}$ and $\boldsymbol{A}'$. By the theorem on determinants of matrices that differ by one row, we have that:

$$\begin{equation}\label{eq:sHejmwTQJ4JJCK6Ccph} \det(\boldsymbol{A})+\det(\boldsymbol{A}') = \det(\boldsymbol{B}) \end{equation}$$

We know from the previous theorem that because $\boldsymbol{A}'$ has a row that is a scalar multiple of another row, $\mathrm{det}(\boldsymbol{A}')=0$. Therefore, \eqref{eq:sHejmwTQJ4JJCK6Ccph} becomes:

$$\det(\boldsymbol{A}) = \det(\boldsymbol{B})$$

This completes the proof.
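
Here is a minimal NumPy sketch confirming that this row operation, the workhorse of Gaussian elimination, leaves the determinant unchanged (the multiplier $1.7$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
B = A.copy()
B[0] += 1.7 * B[1]                   # add 1.7 times row 2 to row 1

print(np.isclose(np.linalg.det(B), np.linalg.det(A)))  # True
```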

We now summarize the effects of the elementary row operations on the determinant of a matrix.

Theorem.

Effect of elementary row operations on determinant

Let $\boldsymbol{A}$ be a square matrix. The determinant changes in the following ways after performing each type of elementary row operation:

  • if we multiply a row by a non-zero constant $k$ to produce matrix $\boldsymbol{B}$, then $\mathrm{det}(\boldsymbol{B})=k\cdot\mathrm{det}(\boldsymbol{A})$.

  • if we interchange two rows to produce matrix $\boldsymbol{B}$, then $\mathrm{det}(\boldsymbol{B}) =-\mathrm{det}(\boldsymbol{A})$.

  • if we add a multiple of a row to another row to produce matrix $\boldsymbol{B}$, then $\mathrm{det}(\boldsymbol{B}) =\mathrm{det}(\boldsymbol{A})$.

Theorem.

Determinant of the identity matrix

If $\boldsymbol{I}$ is an identity matrix, then:

$$\det(\boldsymbol{I})=1$$

Proof. We will prove this by induction on the size of the identity matrix. Firstly, we show that the proposition holds for a $1\times1$ identity matrix. The determinant of a matrix with a single entry is the entry itself, and so $\det(\boldsymbol{I}_1)=1$.

Next, we assume that the proposition holds for an identity matrix of size $(n-1)\times(n-1)$, that is:

$$\begin{equation}\label{eq:xXfsQg85DgvCFmR2Ivz} \det(\boldsymbol{I}_{n-1})= 1 \end{equation}$$

Our goal now is to show that the proposition holds for an identity matrix of size $n\times{n}$ shown below:

$$\boldsymbol{I}_n= \begin{pmatrix} 1&0&\cdots&0\\ 0&1&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&1 \end{pmatrix}$$

Let's perform cofactor expansion along the first row to obtain the determinant of $\boldsymbol{I}_n$ like so:

$$\begin{align*} \det(\boldsymbol{I}_n)&= (1)\begin{vmatrix} 1&0&\cdots&0\\ 0&1&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&1\\ \end{vmatrix}+ (0)(\cdots)+\cdots+ (0)(\cdots)\\ &=\det(\boldsymbol{I}_{n-1}) \end{align*}$$

Because every entry in the first row other than the leading $1$ is zero, all the remaining terms vanish and only the first term, $\det(\boldsymbol{I}_{n-1})$, survives.

We now use our inductive assumption \eqref{eq:xXfsQg85DgvCFmR2Ivz} to conclude $\det(\boldsymbol{I}_n)=1$. By the principle of mathematical induction, the theorem holds for the general case. This completes the proof.

Theorem.

Determinant of k times the identity matrix

If $\boldsymbol{I}_n$ is an $n\times{n}$ identity matrix and $k$ is a scalar, then:

$$\det(k\boldsymbol{I}_n)= k^n$$

Proof. By the previous theorem, the determinant of an identity matrix is one:

$$\det(\boldsymbol{I}_n)=1$$

The matrix $k\boldsymbol{I}_n$ is:

$$k\boldsymbol{I}_n=\begin{pmatrix} k&0&\cdots&0\\ 0&k&\cdots&0\\ \vdots&\vdots&\smash\ddots&\vdots\\ 0&0&\cdots&k\\ \end{pmatrix}$$

This matrix can be obtained from the identity matrix $\boldsymbol{I}_n$ by performing $n$ elementary row operations, each of which multiplies a single row by $k$. By the row-scaling theorem above, each of these elementary row operations multiplies the determinant by $k$. Therefore, the determinant of $k\boldsymbol{I}_n$ is:

$$\begin{align*} \det(k\boldsymbol{I}_n)&=k^n \cdot{\det(\boldsymbol{I}_n)}\\ &=k^n\cdot(1)\\ &=k^n \end{align*}$$

This completes the proof.
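
A one-line numerical check with the assumed values $n=4$ and $k=3$:

```python
import numpy as np

n, k = 4, 3.0
print(np.isclose(np.linalg.det(k * np.eye(n)), k**n))  # True: det = 3^4 = 81
```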

Theorem.

Determinant of an elementary matrix

The determinant of an elementary matrix corresponding to multiplying a row by a non-zero scalar $k$ is:

$$\det(\boldsymbol{E})=k$$

The determinant of an elementary matrix corresponding to interchanging two rows is:

$$\det(\boldsymbol{E})=-1$$

The determinant of an elementary matrix corresponding to multiplying a row by a constant and then adding it to another row is:

$$\det(\boldsymbol{E})=1$$

Proof. Let's prove this case by case. From the previous theorem, we know that the determinant of an identity matrix is:

$$\det(\boldsymbol{I})=1$$

The first type of elementary matrix $\boldsymbol{E}_1$ is obtained by multiplying a row of an identity matrix by a non-zero scalar $k$. By the row-scaling theorem above, the determinant of this elementary matrix is:

$$\begin{align*} \det(\boldsymbol{E}_1) &=\det(\boldsymbol{I})\times{k}\\ &=1\times{k}\\ &=k \end{align*}$$

The second type of elementary matrix $\boldsymbol{E}_2$ is obtained by interchanging two rows of the identity matrix. By the row-interchange theorem above, the determinant of this elementary matrix is:

$$\begin{align*} \det(\boldsymbol{E}_2) &=\det(\boldsymbol{I})\times{-1}\\ &=1\times{-1}\\ &=-1 \end{align*}$$

The third type of elementary matrix $\boldsymbol{E}_3$ is obtained by multiplying a row of an identity matrix by a constant and then adding it to another row. By the row-addition theorem above, the determinant of this elementary matrix is:

$$\begin{align*} \det(\boldsymbol{E}_3) &=\det(\boldsymbol{I})\times{1}\\ &=1\times{1}\\ &=1 \end{align*}$$

This completes the proof.
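
To illustrate, we can build one elementary matrix of each type from the identity and inspect its determinant. This is a sketch assuming NumPy; the printed values may carry tiny floating-point error:

```python
import numpy as np

I = np.eye(3)

E1 = I.copy(); E1[0] *= 5.0             # type 1: scale row 1 by k = 5
E2 = I.copy(); E2[[0, 1]] = E2[[1, 0]]  # type 2: swap rows 1 and 2
E3 = I.copy(); E3[2] += 4.0 * E3[0]     # type 3: add 4 times row 1 to row 3

print(np.linalg.det(E1))  # 5.0  (= k)
print(np.linalg.det(E2))  # -1.0
print(np.linalg.det(E3))  # 1.0
```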

Theorem.

Determinant of a product of an elementary matrix and any matrix

If $\boldsymbol{A}$ is a square matrix and $\boldsymbol{E}$ is an elementary matrix, then:

$$\det(\boldsymbol{E}\boldsymbol{A})= \det(\boldsymbol{E})\cdot\det(\boldsymbol{A})$$

Proof. Because there are $3$ types of elementary row operations, there are also $3$ types of elementary matrices. We must show that the proposition holds for all $3$ types of elementary matrices. We know from the identity-matrix theorem above that the determinant of an identity matrix is:

$$\det(\boldsymbol{I})=1$$

From the previous theorem, we know that the determinant of an elementary matrix $\boldsymbol{E}_1$ corresponding to the elementary row operation of multiplying a single row by a non-zero constant $k$ is:

$$\begin{equation}\label{eq:TYFcvS732mjpILuP6gM} \det(\boldsymbol{E}_1)=k \end{equation}$$

Multiplying some matrix $\boldsymbol{A}$ on the left by $\boldsymbol{E}_1$ multiplies a row of $\boldsymbol{A}$ by $k$. From the row-scaling theorem, this means that:

$$\begin{equation}\label{eq:h9KdqEkG6TUx3kUFxHb} \det(\boldsymbol{E}_1\boldsymbol{A})=k\cdot\det(\boldsymbol{A}) \end{equation}$$

Substituting \eqref{eq:TYFcvS732mjpILuP6gM} into \eqref{eq:h9KdqEkG6TUx3kUFxHb} gives:

$$\det(\boldsymbol{E}_1\boldsymbol{A}) =\det(\boldsymbol{E}_1)\cdot\det(\boldsymbol{A})$$

Next, from the previous theorem again, we know that the determinant of an elementary matrix $\boldsymbol{E}_2$ corresponding to the elementary row operation of interchanging two rows is:

$$\begin{equation}\label{eq:jEQZgj5fvdcSZEAFtJS} \det(\boldsymbol{E}_2)=-1 \end{equation}$$

Multiplying some matrix $\boldsymbol{A}$ on the left by $\boldsymbol{E}_2$ interchanges two rows of $\boldsymbol{A}$. From the row-interchange theorem, this means that:

$$\begin{equation}\label{eq:xTVYEqErl7Zk1Gr3cah} \det(\boldsymbol{E}_2\boldsymbol{A}) =-\det(\boldsymbol{A}) \end{equation}$$

Combining \eqref{eq:jEQZgj5fvdcSZEAFtJS} and \eqref{eq:xTVYEqErl7Zk1Gr3cah} gives:

$$\det(\boldsymbol{E}_2\boldsymbol{A}) =\det(\boldsymbol{E}_2)\cdot\det(\boldsymbol{A})$$

Finally, from the previous theorem again, we know that the determinant of an elementary matrix $\boldsymbol{E}_3$ corresponding to the elementary row operation of adding a multiple of one row to another is:

$$\begin{equation}\label{eq:qkCvklB8gev6dGd5DjF} \det(\boldsymbol{E}_3)=1 \end{equation}$$

Multiplying some matrix $\boldsymbol{A}$ on the left by $\boldsymbol{E}_3$ performs the same elementary row operation on $\boldsymbol{A}$. From the row-addition theorem, this means that:

$$\begin{equation}\label{eq:vuu8aKetWMSt1RBfxyU} \det(\boldsymbol{E}_3\boldsymbol{A}) =\det(\boldsymbol{A}) \end{equation}$$

Combining \eqref{eq:qkCvklB8gev6dGd5DjF} and \eqref{eq:vuu8aKetWMSt1RBfxyU} gives:

$$\det(\boldsymbol{E}_3\boldsymbol{A}) =\det(\boldsymbol{E}_3)\cdot\det(\boldsymbol{A})$$

We have now shown the following result:

$$\begin{align*} \det(\boldsymbol{E}_1\boldsymbol{A}) &=\det(\boldsymbol{E}_1)\cdot\det(\boldsymbol{A})\\ \det(\boldsymbol{E}_2\boldsymbol{A}) &=\det(\boldsymbol{E}_2)\cdot\det(\boldsymbol{A})\\ \det(\boldsymbol{E}_3\boldsymbol{A}) &=\det(\boldsymbol{E}_3)\cdot\det(\boldsymbol{A})\\ \end{align*}$$

This means that for any type of elementary matrix $\boldsymbol{E}$, we have:

$$\det(\boldsymbol{E}\boldsymbol{A}) =\det(\boldsymbol{E})\cdot\det(\boldsymbol{A})$$

This completes the proof.
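
A quick numerical check of this product rule, using a type-3 elementary matrix as an example (any of the three types would do):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 3))

E = np.eye(3)
E[1] += 2.0 * E[0]                   # elementary matrix: add 2 x row 1 to row 2

print(np.isclose(np.linalg.det(E @ A),
                 np.linalg.det(E) * np.linalg.det(A)))  # True
```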

Theorem.

Determinant of a transpose of an elementary matrix

If $\boldsymbol{E}$ is an elementary matrix, then:

$$\det(\boldsymbol{E}^T)=\det(\boldsymbol{E})$$

Proof. We know from the theorem on transposes of elementary matrices that the transpose of an elementary matrix is also an elementary matrix. More specifically, we found the following:

  • for elementary matrix $\boldsymbol{E}_1$ corresponding to multiplying a row by a non-zero scalar, we have that $\boldsymbol{E}^T_1 =\boldsymbol{E}_1$. Taking the determinant of both sides gives us $\mathrm{det}(\boldsymbol{E}^T_1)=\det(\boldsymbol{E}_1)$.

  • for elementary matrix $\boldsymbol{E}_2$ corresponding to interchanging two rows, we also have that $\boldsymbol{E}^T_2=\boldsymbol{E}_2$. Taking the determinant of both sides gives $\det(\boldsymbol{E}^T_2)=\det(\boldsymbol{E}_2)$.

  • for elementary matrix $\boldsymbol{E}_3$ corresponding to multiplying row $i$ by $k$ and then adding it to row $j$, the transpose $\boldsymbol{E}^T_3$ corresponds to multiplying row $j$ by $k$ and then adding it to row $i$. We know from the elementary-matrix determinant theorem above that the determinant of elementary matrices of this type equals one. Therefore, we conclude that $\det(\boldsymbol{E}_3^T)=\det(\boldsymbol{E}_3)=1$.

This means that for any elementary matrix $\boldsymbol{E}$, we have that:

$$\det(\boldsymbol{E}^T)=\det(\boldsymbol{E})$$

This completes the proof.
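
Finally, a minimal check with a type-3 elementary matrix, whose transpose adds a multiple of row 3 to row 1 instead:

```python
import numpy as np

E = np.eye(3)
E[2] += 7.0 * E[0]                   # add 7 times row 1 to row 3

print(np.isclose(np.linalg.det(E.T), np.linalg.det(E)))  # True (both equal 1)
```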
