Comprehensive Guide on Rank and Nullity of Matrices

Last updated: Aug 12, 2023
Tags: Linear Algebra
Definition.

Rank of a matrix

The rank of a matrix $\boldsymbol{A}$ is the dimension of the column space of $\boldsymbol{A}$, that is:

$$\mathrm{rank}(\boldsymbol{A})= \mathrm{dim}\big(\mathrm{col}(\boldsymbol{A})\big)$$
Example.

Finding the rank of a matrix (1)

Consider the following matrix:

$$\boldsymbol{A}= \begin{pmatrix}1&2\\2&4\end{pmatrix}$$

Find the rank of $\boldsymbol{A}$.

Solution. The rank of $\boldsymbol{A}$ is defined as the dimension of the column space of $\boldsymbol{A}$, denoted $\mathrm{col}(\boldsymbol{A})$. The dimension is the number of basis vectors for $\mathrm{col}(\boldsymbol{A})$. Therefore, to find the rank of $\boldsymbol{A}$, we need to find a basis for $\mathrm{col}(\boldsymbol{A})$.

The column space of $\boldsymbol{A}$ is defined as the span of its column vectors:

$$\mathrm{col}(\boldsymbol{A})= \mathrm{span}\left( \begin{pmatrix}1\\2\end{pmatrix},\; \begin{pmatrix}2\\4\end{pmatrix} \right)$$

The two vectors are linearly dependent (the second is twice the first), so they do not form a basis for $\mathrm{col}(\boldsymbol{A})$. Removing the redundant second vector gives us the following basis for $\mathrm{col}(\boldsymbol{A})$:

$$\left\{ \begin{pmatrix}1\\2\end{pmatrix} \right\}$$

Since this single vector spans the column space of $\boldsymbol{A}$ and is linearly independent, it forms a basis for $\mathrm{col}(\boldsymbol{A})$. Therefore, the rank of $\boldsymbol{A}$ is $1$.
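The guide itself is purely mathematical, but if you want to sanity-check this result by hand, the following SymPy sketch (SymPy is my choice here, not something the guide uses) computes a basis of the column space and the rank of the matrix above:

```python
import sympy as sp

# Matrix A from the example above.
A = sp.Matrix([[1, 2],
               [2, 4]])

# A basis of col(A): SymPy returns the single vector (1, 2).
print(A.columnspace())   # [Matrix([[1], [2]])]

# The rank is the dimension of the column space.
print(A.rank())          # 1
```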

Example.

Finding the rank of a matrix (2)

Consider the following matrix:

$$\boldsymbol{A}= \begin{pmatrix} 1&2&2\\ 2&4&4\\ 3&1&6\\ \end{pmatrix}$$

Find the rank of $\boldsymbol{A}$.

Solution. To find the rank of $\boldsymbol{A}$, we must find the basis for the column space of $\boldsymbol{A}$. The column space of $\boldsymbol{A}$ is defined as:

$$\mathrm{col}(\boldsymbol{A})= \mathrm{span}\left( \begin{pmatrix}1\\2\\3\end{pmatrix},\; \begin{pmatrix}2\\4\\1\end{pmatrix},\; \begin{pmatrix}2\\4\\6\end{pmatrix} \right)$$

We must now check whether the three vectors form a basis for $\mathrm{col}(\boldsymbol{A})$. The third vector is twice the first vector, so the third vector is redundant. The remaining two vectors are linearly independent, and thus a basis for $\mathrm{col}(\boldsymbol{A})$ is:

$$\left\{ \begin{pmatrix}1\\2\\3\end{pmatrix},\; \begin{pmatrix}2\\4\\1\end{pmatrix} \right\}$$

Since two vectors form a basis for the column space of $\boldsymbol{A}$, the dimension of the column space is $2$. Therefore, we conclude that $\mathrm{rank}(\boldsymbol{A})=2$.
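The same kind of SymPy check (again, an illustrative sketch rather than part of the guide) reproduces the basis and rank found above:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2],
               [2, 4, 4],
               [3, 1, 6]])

# A basis of col(A): the first and second columns of A.
for v in A.columnspace():
    print(v.T)           # [1, 2, 3] and [2, 4, 1] (printed as rows)

print(A.rank())          # 2
```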

Definition.

Nullity of a matrix

The nullity of a matrix $\boldsymbol{A}$ is the dimension of the null space of $\boldsymbol{A}$, that is:

$$\mathrm{nullity}(\boldsymbol{A})= \mathrm{dim}\big(\mathrm{nullspace}(\boldsymbol{A})\big)$$
Example.

Finding the nullity of a matrix (1)

Consider the following matrix:

$$\boldsymbol{A}= \begin{pmatrix}1&2\\2&4\end{pmatrix}$$

Find the nullity of $\boldsymbol{A}$.

Solution. We first need to find the basis for the null space of $\boldsymbol{A}$. The null space of $\boldsymbol{A}$ is the set of all vectors $\boldsymbol{x}$ such that:

$$\boldsymbol{Ax}=\boldsymbol{0}$$

Writing this in matrix notation:

$$\begin{pmatrix}1&2\\2&4\end{pmatrix} \begin{pmatrix}x_1\\x_2\end{pmatrix} = \begin{pmatrix}0\\0\end{pmatrix}$$

Let's solve this linear system. We row-reduce the coefficient matrix:

$$\begin{pmatrix}1&2\\2&4\end{pmatrix}\sim \begin{pmatrix}1&2\\0&0\end{pmatrix}$$

We have that $x_2$ is a free variable, so let's set $x_2=t$ where $t$ is some scalar. Substituting this into the first row gives:

$$x_1=-2t$$

Therefore, the null space of $\boldsymbol{A}$ is:

$$\mathrm{nullspace}(\boldsymbol{A})= \left\{\begin{pmatrix} -2t\\t \end{pmatrix}\;|\;t\in\mathbb{R}\right\}$$

This means that the basis for the null space of $\boldsymbol{A}$ is:

$$\left\{\begin{pmatrix} -2\\1 \end{pmatrix}\right\}$$

Since a single vector forms a basis for the null space of $\boldsymbol{A}$, the dimension of the null space is $1$. Therefore, we conclude that $\mathrm{nullity}(\boldsymbol{A})=1$.
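If you would like to verify this computationally, SymPy's nullspace() method (not part of the original guide, just a quick sketch) returns the same basis vector:

```python
import sympy as sp

A = sp.Matrix([[1, 2],
               [2, 4]])

# A basis of the null space: the single vector (-2, 1).
print(A.nullspace())       # [Matrix([[-2], [1]])]

# The nullity is the number of basis vectors of the null space.
print(len(A.nullspace()))  # 1
```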

Example.

Finding the nullity of a matrix (2)

Consider the following matrix:

$$\boldsymbol{A}= \begin{pmatrix} 1&3&2\\ 2&6&4\\ 3&9&6\\ \end{pmatrix}$$

Find the nullity of $\boldsymbol{A}$.

Solution. Let's first find the null space of $\boldsymbol{A}$. By definition, the null space of $\boldsymbol{A}$ is the solution set of the following homogeneous linear system:

$$\begin{pmatrix} 1&3&2\\ 2&6&4\\ 3&9&6\\ \end{pmatrix} \begin{pmatrix} x_1\\x_2\\x_3\\ \end{pmatrix}= \begin{pmatrix} 0\\0\\0\\ \end{pmatrix}$$

Let's row-reduce $\boldsymbol{A}$ to get:

$$\begin{pmatrix} 1&3&2\\ 2&6&4\\ 3&9&6\\ \end{pmatrix}\sim \begin{pmatrix} 1&3&2\\ 0&0&0\\ 0&0&0\\ \end{pmatrix}$$

We have that $x_2$ and $x_3$ are free variables. We set $x_2=r$ and $x_3=t$ where $r$ and $t$ are some scalars. The solution to the homogeneous linear system is therefore:

$$\begin{pmatrix} x_1\\x_2\\x_3 \end{pmatrix}= \begin{pmatrix} -3r-2t\\r\\t \end{pmatrix} = \begin{pmatrix} -3\\1\\0 \end{pmatrix}r+ \begin{pmatrix} -2\\0\\1 \end{pmatrix}t$$

This means that the null space of $\boldsymbol{A}$ is spanned by the following linearly independent vectors:

$$\left\{\begin{pmatrix} -3\\1\\0 \end{pmatrix},\;\; \begin{pmatrix} -2\\0\\1 \end{pmatrix}\right\}$$

These two vectors form a basis for the null space of $\boldsymbol{A}$, which means that the dimension of the null space is $2$. We conclude that $\mathrm{nullity}(\boldsymbol{A})=2$.
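As a quick check (a sketch using SymPy, which the guide itself does not rely on), the two basis vectors above are exactly what nullspace() returns:

```python
import sympy as sp

A = sp.Matrix([[1, 3, 2],
               [2, 6, 4],
               [3, 9, 6]])

# A basis of nullspace(A): the two vectors found above.
for v in A.nullspace():
    print(v.T)             # [-3, 1, 0] and [-2, 0, 1] (printed as rows)

print(len(A.nullspace()))  # nullity = 2
```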

Theorem.

Nullity is equal to the number of non-pivot columns

The nullity or the dimension of the null space of a matrix $\boldsymbol{A}$ is equal to the number of non-pivot columns of the reduced row echelon form of $\boldsymbol{A}$, that is:

$$\begin{align*} \mathrm{nullity}(\boldsymbol{A})&=\dim(\mathrm{nullspace}(\boldsymbol{A}))\\ &=\text{number of non-pivot columns of }\mathrm{rref}(\boldsymbol{A}) \end{align*}$$

Proof. Consider the homogeneous linear system $\boldsymbol{Ax}=\boldsymbol{0}$. Suppose the reduced row echelon form of $\boldsymbol{A}$ has $2$ non-pivot columns, say the 3rd and 5th columns. This means that the system has $2$ free variables, so the general solution can be written as:

$$\begin{equation}\label{eq:FkyBOPfLTItKai4Efbh} \boldsymbol{x}=\begin{pmatrix} x_1\\x_2\\x_3\\x_4\\x_5\\ \end{pmatrix}= \begin{pmatrix} *\\ *\\r\\ *\\t \end{pmatrix} = \begin{pmatrix} *\\ *\\1\\ *\\0 \end{pmatrix}r+ \begin{pmatrix} *\\ *\\0\\ *\\1 \end{pmatrix}t \end{equation}$$

Where $x_3=r$ and $x_5=t$ for some $r,t\in\mathbb{R}$. Clearly, the two vectors in \eqref{eq:FkyBOPfLTItKai4Efbh} are linearly independent because:

  • the first vector has a $1$ in slot $3$ whereas the second vector has a $0$ in slot $3$.

  • the first vector has a $0$ in slot $5$ whereas the second vector has a $1$ in slot $5$.

Next, from \eqref{eq:FkyBOPfLTItKai4Efbh}, we also know that the general solution $\boldsymbol{x}$ can be expressed as a linear combination of these vectors. In other words, these vectors span the null space of $\boldsymbol{A}$.

Because these vectors are linearly independent and span the null space of $\boldsymbol{A}$, we conclude that they form a basis for the null space of $\boldsymbol{A}$ by definition. In general, if there are $k$ non-pivot columns in $\mathrm{rref}(\boldsymbol{A})$, then there will be $k$ free variables, which means that $k$ vectors form a basis for the null space of $\boldsymbol{A}$. The dimension of the null space is $k$ and thus the nullity of $\boldsymbol{A}$ is also $k$.

This completes the proof.
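To see this theorem in action, here is a short SymPy sketch (my own illustration, not from the guide): rref() returns both the reduced row echelon form and the indices of the pivot columns, so the nullity can be read off as the number of non-pivot columns.

```python
import sympy as sp

A = sp.Matrix([[1, 3, 2],
               [2, 6, 4],
               [3, 9, 6]])

rref_A, pivot_cols = A.rref()          # rref(A) and its pivot column indices
num_cols = A.shape[1]

nullity = num_cols - len(pivot_cols)   # number of non-pivot columns of rref(A)
print(nullity)                         # 2
print(len(A.nullspace()))              # 2, agreeing with the theorem
```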

Theorem.

Rank of matrix A is equal to the number of pivot columns of rref(A)

The rank or the dimension of the column space of a matrix $\boldsymbol{A}$ is equal to the number of pivot columns of the reduced row echelon form of $\boldsymbol{A}$, that is:

$$\begin{align*} \mathrm{rank}(\boldsymbol{A})&= \dim(\mathrm{col}(\boldsymbol{A}))\\ &=\text{number of pivot columns of }\mathrm{rref}(\boldsymbol{A})\\ \end{align*}$$

Proof. By definition, the dimension of the column space of a matrix $\boldsymbol{A}$ is equal to the number of basis vectors for the column space. From an earlier theorem, we know that the columns of $\boldsymbol{A}$ corresponding to the pivot columns in $\mathrm{rref}(\boldsymbol{A})$ form a basis for the column space of $\boldsymbol{A}$. If there are $k$ pivot columns in $\mathrm{rref}(\boldsymbol{A})$, then there will be $k$ basis vectors for $\mathrm{col}(\boldsymbol{A})$, which means that the dimension of $\mathrm{col}(\boldsymbol{A})$ will be $k$. This completes the proof.
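Similarly, a small SymPy sketch (again, an illustration rather than part of the guide) confirms that the rank equals the number of pivot columns of the reduced row echelon form:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2],
               [2, 4, 4],
               [3, 1, 6]])

rref_A, pivot_cols = A.rref()
print(pivot_cols)         # (0, 1) -> two pivot columns
print(A.rank())           # 2, agreeing with the theorem
```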

Theorem.

Rank of matrix is equal to its number of linearly independent columns

The rank of a matrix $\boldsymbol{A}$ is equal to the number of linearly independent columns of $\boldsymbol{A}$, that is:

$$\mathrm{rank}(\boldsymbol{A})= \text{number of linearly independent columns of }\boldsymbol{A}$$

Proof. From the theorem above, we know that the rank of a matrix $\boldsymbol{A}$ is equal to the number of pivot columns of $\mathrm{rref}(\boldsymbol{A})$. From an earlier theorem, we also know that the number of pivot columns in $\mathrm{rref}(\boldsymbol{A})$ equals the number of linearly independent columns of $\boldsymbol{A}$. Therefore, the rank of $\boldsymbol{A}$ is equal to the number of linearly independent columns of $\boldsymbol{A}$. This completes the proof.

Theorem.

Elementary row operations preserve linear independence

Suppose we have some matrix $\boldsymbol{A}$ and we perform an elementary row operation on $\boldsymbol{A}$ to get matrix $\boldsymbol{B}$. A given set of column vectors of $\boldsymbol{A}$ is linearly independent if and only if the corresponding column vectors of $\boldsymbol{B}$ are linearly independent.

Proof. Let $\mathrm{rref}(\boldsymbol{A})$ be the reduced row echelon form of $\boldsymbol{A}$ and let matrix $\boldsymbol{B}$ be the result of applying an elementary row operation on $\boldsymbol{A}$.

By definition, the reduced row echelon form of $\boldsymbol{A}$ is obtained by performing a series of elementary row operations on $\boldsymbol{A}$.

We are given that $\boldsymbol{B}$ is obtained after the first elementary row operation, so if we apply all the subsequent elementary row operations to $\boldsymbol{B}$, we end up with the reduced row echelon form of $\boldsymbol{A}$. In other words, we have $\mathrm{rref}(\boldsymbol{A})=\mathrm{rref}(\boldsymbol{B})$.

This means that $\mathrm{rref}(\boldsymbol{A})$ and $\mathrm{rref}(\boldsymbol{B})$ have the same pivot columns. By an earlier theorem, the columns of $\boldsymbol{A}$ corresponding to the pivot columns of $\mathrm{rref}(\boldsymbol{A})$ are linearly independent, and the same can be said for $\boldsymbol{B}$. This means that, for instance, if $\boldsymbol{a}_1$, $\boldsymbol{a}_3$ and $\boldsymbol{a}_4$ are linearly independent columns of $\boldsymbol{A}$, then $\boldsymbol{b}_1$, $\boldsymbol{b}_3$ and $\boldsymbol{b}_4$ must also be linearly independent columns of $\boldsymbol{B}$.

The converse is true as well, that is, if $\boldsymbol{b}_1$, $\boldsymbol{b}_3$ and $\boldsymbol{b}_4$ are linearly independent columns of $\boldsymbol{B}$, then $\boldsymbol{a}_1$, $\boldsymbol{a}_3$ and $\boldsymbol{a}_4$ must also be linearly independent columns. This completes the proof.

Theorem.

Elementary row operations do not affect the matrix rank

Performing elementary row operations on a matrix $\boldsymbol{A}$ preserves the rank of $\boldsymbol{A}$.

Proof. From the theorem above, we know that elementary row operations preserve the linear independence of columns. This means that elementary row operations preserve the number of linearly independent columns. By an earlier theorem, the rank of a matrix $\boldsymbol{A}$ is equal to the number of linearly independent columns of $\boldsymbol{A}$. Therefore, we conclude that performing an elementary row operation on $\boldsymbol{A}$ preserves the rank of $\boldsymbol{A}$.

This completes the proof.
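A quick numerical illustration (a SymPy sketch, not part of the original guide): applying the row operation $R_2 \to R_2 - 2R_1$ to the earlier example matrix leaves the rank unchanged.

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2],
               [2, 4, 4],
               [3, 1, 6]])

# B is A after the elementary row operation R2 -> R2 - 2*R1,
# written out explicitly here.
B = sp.Matrix([[1, 2, 2],
               [0, 0, 0],
               [3, 1, 6]])

print(A.rank(), B.rank())   # 2 2 -> the rank is preserved
```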

Theorem.

Multiplying a vector of a linearly independent set by a nonzero scalar preserves linear independence

If $S$ is a linearly independent set of vectors, then multiplying any vector in $S$ by a nonzero scalar $k$ will preserve the linear independence of $S$.

Proof. Suppose $S=\{\boldsymbol{v}_1,\boldsymbol{v}_2,\boldsymbol{v}_3\}$ is a linearly independent set of vectors. By the definition of linear independence, this means that:

$$\begin{equation}\label{eq:AW6mOv2M1B1oKxihH5e} c_1\boldsymbol{v}_1+c_2\boldsymbol{v}_2+c_3\boldsymbol{v}_3 =\boldsymbol{0} \end{equation}$$

This equation holds only when the coefficients $c_1$, $c_2$ and $c_3$ are all equal to zero. Now, let's multiply one of the vectors in $S$, say $\boldsymbol{v}_2$, by a nonzero scalar $k$ to get $k\boldsymbol{v}_2$. We now use the definition of linear independence to show that $\{\boldsymbol{v}_1,k\boldsymbol{v}_2,\boldsymbol{v}_3\}$ is linearly independent:

$$d_1\boldsymbol{v}_1+d_2(k\boldsymbol{v}_2)+d_3\boldsymbol{v}_3 =\boldsymbol{0}$$

Where $d_1$, $d_2$ and $d_3$ are some scalars. Since $d_2k$ is just another scalar, let's rewrite it as $d_4$ like so:

$$\begin{equation}\label{eq:i1dQ8X9HQnIWTCRofta} d_1\boldsymbol{v}_1+d_4\boldsymbol{v}_2+d_3\boldsymbol{v}_3 =\boldsymbol{0} \end{equation}$$

We know from \eqref{eq:AW6mOv2M1B1oKxihH5e} that this equation holds only when all the coefficients are zero. This means that the only way for \eqref{eq:i1dQ8X9HQnIWTCRofta} to hold is if $d_1=d_4=d_3=0$. Since $d_4=d_2k$ and $k$ is nonzero, $d_4=0$ implies $d_2=0$. Because $d_1=d_2=d_3=0$ is the only solution, we conclude that $\boldsymbol{v}_1$, $k\boldsymbol{v}_2$ and $\boldsymbol{v}_3$ are linearly independent.

This completes the proof.
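For a concrete check (a SymPy sketch, purely illustrative): stacking three linearly independent vectors as the columns of a matrix gives a matrix of full column rank, and scaling one column by a nonzero scalar does not change that rank.

```python
import sympy as sp

# Three linearly independent vectors as the columns of a matrix.
S = sp.Matrix([[1, 0, 1],
               [0, 1, 1],
               [0, 0, 1]])

# The same columns, but with the second column scaled by k = 5.
S_scaled = sp.Matrix([[1, 0, 1],
                      [0, 5, 1],
                      [0, 0, 1]])

# Rank 3 (full column rank) means the three columns are linearly independent.
print(S.rank(), S_scaled.rank())   # 3 3
```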

Theorem.

Elementary columns operations do not affect the matrix rank

Performing elementary column operations on a matrix $\boldsymbol{A}$ preserves the rank of $\boldsymbol{A}$.

Proof. There are three types of elementary column operations, so let's show that the matrix rank is preserved after each type. By a previous theorem, the matrix rank is equal to the number of linearly independent columns of the matrix. Therefore, we need to show that each elementary column operation preserves the number of linearly independent columns.

The first type of elementary column operation is a column swap. Clearly, swapping two columns does not affect the linear dependence between them. For instance, if column vectors $\boldsymbol{a}_1$ and $\boldsymbol{a}_2$ are linearly independent, then $\boldsymbol{a}_2$ and $\boldsymbol{a}_1$ are also linearly independent; the ordering does not matter.

The second type of elementary column operation is multiplication of a column by a nonzero scalar. By the theorem above, given a linearly independent set of vectors $S$, multiplying any vector in $S$ by a nonzero scalar preserves the linear independence of $S$.

The third type of elementary column operation is multiplying a vector by a scalar $k$ and then adding it to another column vector.

Let $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$ be linearly independent columns. By the definition of linear independence, we have that:

$$\begin{equation}\label{eq:fDWRXmiM3xwcK95NA6l} c_1\boldsymbol{v}_1+c_2\boldsymbol{v}_2=\boldsymbol{0} \end{equation}$$

Where the equation holds only when $c_1$ and $c_2$ are both zero.

Now, our goal is to show that $\boldsymbol{v}_1$ and $k\boldsymbol{v}_1+\boldsymbol{v}_2$, which are the columns after this operation, are also linearly independent. We once again use the definition of linear independence to check whether the two vectors are independent:

$$\begin{equation}\label{eq:mwy9PyejhBAYxoQ84Pr} d_1\boldsymbol{v}_1+d_2(k\boldsymbol{v}_1+\boldsymbol{v}_2)=\boldsymbol{0} \end{equation}$$

Where $d_1$ and $d_2$ are both scalars. Simplifying this gives:

$$\begin{equation}\label{eq:cPUQtzZQGPHIZNcAaiH} (d_1+kd_2)\boldsymbol{v}_1+d_2\boldsymbol{v}_2=\boldsymbol{0} \end{equation}$$

Here, $d_1+kd_2$ and $d_2$ are just scalars, so let's relabel them as $e_1$ and $e_2$ like so:

$$\begin{equation}\label{eq:WK9M3CBHt5bKvfJoRrB} \begin{aligned} e_1&=d_1+kd_2\\ e_2&=d_2\\ \end{aligned} \end{equation}$$

Equation \eqref{eq:cPUQtzZQGPHIZNcAaiH} becomes:

$$\begin{equation}\label{eq:rCbL4BoUSVsNz9yT4RQ} e_1\boldsymbol{v}_1+e_2\boldsymbol{v}_2=\boldsymbol{0}\\ \end{equation}$$

From \eqref{eq:fDWRXmiM3xwcK95NA6l}, we know that the scalar coefficients of $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$ must be zero. Therefore, we must have $e_1=e_2=0$ for \eqref{eq:rCbL4BoUSVsNz9yT4RQ} to hold as well. From \eqref{eq:WK9M3CBHt5bKvfJoRrB}, $e_2=0$ gives $d_2=0$, and substituting this into $e_1=d_1+kd_2=0$ gives $d_1=0$. Because $d_1=d_2=0$ is the only way for the equality \eqref{eq:mwy9PyejhBAYxoQ84Pr} to hold, we conclude that $\boldsymbol{v}_1$ and $k\boldsymbol{v}_1+\boldsymbol{v}_2$ are also linearly independent.

We have now shown that all three types of elementary column operations preserve the linear independence of columns. This means that the rank of a matrix is preserved after an elementary column operation. This completes the proof.
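As with row operations, a short SymPy sketch (illustrative only, not from the guide) shows that an elementary column operation such as $C_2 \to C_2 + 3C_1$ leaves the rank unchanged:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2],
               [2, 4, 4],
               [3, 1, 6]])

# C is A after the elementary column operation C2 -> C2 + 3*C1,
# written out explicitly here.
C = sp.Matrix([[1, 5, 2],
               [2, 10, 4],
               [3, 10, 6]])

print(A.rank(), C.rank())   # 2 2 -> the rank is preserved
```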

Theorem.

Rank of a matrix transpose

If $\boldsymbol{A}$ is a matrix, then the rank of $\boldsymbol{A}^T$ is equal to the rank of $\boldsymbol{A}$, that is:

$$\mathrm{rank}(\boldsymbol{A})= \mathrm{rank}(\boldsymbol{A}^T)$$

Proof. Consider the matrix $\boldsymbol{A}$ and its transpose $\boldsymbol{A}^T$ below:

$$\boldsymbol{A}=\begin{pmatrix} a_{11}&a_{12}&a_{13}&a_{14}\\ a_{21}&a_{22}&a_{23}&a_{24}\\ a_{31}&a_{32}&a_{33}&a_{34} \end{pmatrix},\;\;\;\;\;\; \boldsymbol{A}^T=\begin{pmatrix} a_{11}&a_{21}&a_{31}\\ a_{12}&a_{22}&a_{32}\\ a_{13}&a_{23}&a_{33}\\ a_{14}&a_{24}&a_{34}\\ \end{pmatrix}$$

Let $\mathrm{rank}(\boldsymbol{A})=r$ and $\mathrm{rank}(\boldsymbol{A}^T)=t$. Our goal is to show that $r=t$.

Notice that performing an elementary row operation on $\boldsymbol{A}$ is equivalent to performing an elementary column operation on $\boldsymbol{A}^T$. For instance, suppose we perform an elementary row operation of multiplying the second row of $\boldsymbol{A}$ by some scalar $k$. This is equivalent to performing an elementary column operation of multiplying the second column of $\boldsymbol{A}^T$ by $k$ like so:

$$\begin{pmatrix} a_{11}&a_{12}&a_{13}&a_{14}\\ \color{blue}ka_{21}&\color{blue}ka_{22}&\color{blue}ka_{23}&\color{blue}ka_{24}\\ a_{31}&a_{32}&a_{33}&a_{34} \end{pmatrix},\;\;\;\;\;\; \begin{pmatrix} a_{11}&\color{blue}ka_{21}&a_{31}\\ a_{12}&\color{blue}ka_{22}&a_{32}\\ a_{13}&\color{blue}ka_{23}&a_{33}\\ a_{14}&\color{blue}ka_{24}&a_{34}\\ \end{pmatrix}$$

Now, suppose we reduced $\boldsymbol{A}$ to its reduced row echelon form $\mathrm{rref}(\boldsymbol{A})$. If we perform the equivalent column operations on $\boldsymbol{A}^T$, we may end up with:

$$\begin{pmatrix} 1&*&0&0\\ 0&0&1&0\\ 0&0&0&0 \end{pmatrix},\;\;\;\;\;\; \begin{pmatrix} 1&0&0\\ *&0&0\\ 0&1&0\\ 0&0&0\\ \end{pmatrix}$$

Now, let's perform a series of column operations to the left matrix as well as equivalent row operations on the right matrix to get:

$$\begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&0&0 \end{pmatrix},\;\;\;\;\;\; \begin{pmatrix} 1&0&0\\ 0&1&0\\ 0&0&0\\ 0&0&0\\ \end{pmatrix}$$

We can clearly see that the number of linearly independent columns in the reduced forms of $\boldsymbol{A}$ and $\boldsymbol{A}^T$ is the same. By a previous theorem, the matrix rank is equal to the number of linearly independent columns, and so we have that:

$$\begin{equation}\label{eq:gRA0TYBQOoKkHqDgLwx} \mathrm{rank}\big(\mathrm{reduced}(\boldsymbol{A})\big)= \mathrm{rank}\big(\mathrm{reduced}(\boldsymbol{A}^T)\big) \end{equation}$$

By the two theorems above, we know that elementary row and column operations do not affect the rank of a matrix. Since the reduced forms of $\boldsymbol{A}$ and $\boldsymbol{A}^T$ are obtained by performing a series of elementary row and column operations, we have that:

$$\begin{align*} \mathrm{rank}\big(\mathrm{reduced}(\boldsymbol{A})\big)&= \mathrm{rank}(\boldsymbol{A})\\ \mathrm{rank}\big(\mathrm{reduced}(\boldsymbol{A}^T)\big)&= \mathrm{rank}(\boldsymbol{A}^T)\\ \end{align*}$$

This means that \eqref{eq:gRA0TYBQOoKkHqDgLwx} becomes:

$$\mathrm{rank}(\boldsymbol{A})= \mathrm{rank}(\boldsymbol{A}^T)$$

This completes the proof.
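A quick check of this theorem on a non-square matrix (a SymPy sketch, not from the guide):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2, 0],
               [2, 4, 4, 1],
               [3, 1, 6, 2]])

print(A.rank())      # 3
print(A.T.rank())    # 3 -> the rank is preserved under transposition
```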

Theorem.

Rank-nullity theorem

The sum of the rank and nullity of an $m\times{n}$ matrix $\boldsymbol{A}$ is:

$$\mathrm{rank}(\boldsymbol{A}) +\mathrm{nullity}(\boldsymbol{A}) =n $$

Proof. Let $\boldsymbol{A}$ be an $m\times{n}$ matrix. Every column of $\boldsymbol{A}$ corresponds to either a pivot column or a non-pivot column of the reduced row echelon form of $\boldsymbol{A}$, which means that:

$${\color{blue}\text{# pivot columns in rref}(\boldsymbol{A})}+ {\color{green}\text{# non-pivot columns in rref}(\boldsymbol{A})}= {\color{red}\text{# columns in rref}(\boldsymbol{A})}$$

From the theorem above, the number of pivot columns in $\mathrm{rref}(\boldsymbol{A})$ is equal to the rank of $\boldsymbol{A}$, that is:

$$\mathrm{rank}(\boldsymbol{A})= {\color{blue}\text{number of pivot columns of }\mathrm{rref}(\boldsymbol{A})}$$

From the earlier theorem, the number of non-pivot columns in $\mathrm{rref}(\boldsymbol{A})$ is equal to the nullity of $\boldsymbol{A}$, that is:

$$\mathrm{nullity}(\boldsymbol{A})= {\color{green}\text{number of non-pivot columns of }\mathrm{rref}(\boldsymbol{A})}$$

Finally, the number of columns in $\mathrm{rref}(\boldsymbol{A})$ is the same as the number of columns in $\boldsymbol{A}$. Therefore, we conclude that:

$$\mathrm{rank}(\boldsymbol{A}) +\mathrm{nullity}(\boldsymbol{A}) =n $$

This completes the proof.
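The rank-nullity theorem is also easy to verify numerically; here is a SymPy sketch (illustrative, not part of the guide) using the matrix from the earlier nullity example:

```python
import sympy as sp

A = sp.Matrix([[1, 3, 2],
               [2, 6, 4],
               [3, 9, 6]])

n = A.shape[1]                 # number of columns
rank = A.rank()                # 1
nullity = len(A.nullspace())   # 2

print(rank + nullity == n)     # True
```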

Theorem.

Rank of PA and AP where P is invertible

Let $\boldsymbol{A}$ be a square matrix. If $\boldsymbol{P}$ is an invertible matrix, then:

$$\mathrm{rank}(\boldsymbol{A})= \mathrm{rank}(\boldsymbol{PA})= \mathrm{rank}(\boldsymbol{AP})$$

Proof. We will first prove that $\mathrm{rank}(\boldsymbol{A}) =\mathrm{rank}(\boldsymbol{PA})$. Let $\boldsymbol{U}$ be some invertible matrix. Note that we use the notation $\boldsymbol{U}$ instead of $\boldsymbol{P}$ to avoid confusion later. By a previous theorem, because $\boldsymbol{U}$ is invertible, we can express $\boldsymbol{U}$ as a product of elementary matrices $\boldsymbol{E}_k$, $\cdots$, $\boldsymbol{E}_2$, $\boldsymbol{E}_1$ like so:

$$\boldsymbol{U}= \boldsymbol{E}_k \cdots \boldsymbol{E}_2 \boldsymbol{E}_1 \boldsymbol{I}_n$$

Now, substituting this expression for $\boldsymbol{U}$ into $\mathrm{rank}(\boldsymbol{UA})$ gives:

$$\begin{align*} \mathrm{rank}(\boldsymbol{UA})&= \mathrm{rank}(\boldsymbol{E}_k\cdots \boldsymbol{E}_2\boldsymbol{E}_1\boldsymbol{I}_n \boldsymbol{A})\\ &=\mathrm{rank}(\boldsymbol{E}_k\cdots \boldsymbol{E}_2\boldsymbol{E}_1 \boldsymbol{A})\\ \end{align*}$$

By the theorem above, we know that elementary row operations do not affect the matrix rank. Since multiplying $\boldsymbol{A}$ on the left by an elementary matrix is equivalent to performing an elementary row operation on $\boldsymbol{A}$, we have that:

$$\begin{align*} \mathrm{rank}(\boldsymbol{UA}) &=\mathrm{rank}(\boldsymbol{E}_k\cdots \boldsymbol{E}_2\boldsymbol{E}_1 \boldsymbol{A})\\ &=\mathrm{rank}(\boldsymbol{A})\\ \end{align*}$$

Next, let's show that $\mathrm{rank}(\boldsymbol{AP})=\mathrm{rank}(\boldsymbol{A})$. By the theorem on the rank of a matrix transpose and the property $(\boldsymbol{AP})^T=\boldsymbol{P}^T\boldsymbol{A}^T$, we have that:

$$\begin{equation}\label{eq:cEZXdQF3V7ZIIu92tIr} \begin{aligned}[b] \mathrm{rank}(\boldsymbol{AP}) &=\mathrm{rank}\big((\boldsymbol{AP})^T\big)\\ &=\mathrm{rank} \big(\boldsymbol{P}^T\boldsymbol{A}^T\big) \end{aligned} \end{equation}$$

However, we have just proven that $\mathrm{rank}(\boldsymbol{UA}) = \mathrm{rank}(\boldsymbol{A})$ for any invertible matrix $\boldsymbol{U}$. Since $\boldsymbol{P}$ is invertible, so is $\boldsymbol{P}^T$, and thus we can reduce \eqref{eq:cEZXdQF3V7ZIIu92tIr} to:

$$\begin{align*} \mathrm{rank}(\boldsymbol{AP}) &=\mathrm{rank} \big(\boldsymbol{P}^T\boldsymbol{A}^T\big)\\ &=\mathrm{rank} \big(\boldsymbol{A}^T\big)\\ &=\mathrm{rank}(\boldsymbol{A}) \end{align*}$$

Where the last step follows from the theorem on the rank of a matrix transpose. This completes the proof.
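To close, here is a small SymPy sketch (my own illustration; the matrix $\boldsymbol{P}$ below is just an arbitrary invertible example) confirming that multiplying by an invertible matrix on either side preserves the rank:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2],
               [2, 4, 4],
               [3, 1, 6]])

# An arbitrary invertible 3x3 matrix (nonzero determinant).
P = sp.Matrix([[1, 1, 0],
               [0, 1, 0],
               [0, 0, 2]])
assert P.det() != 0

print(A.rank(), (P * A).rank(), (A * P).rank())   # 2 2 2
```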

Published by Isshin Inada