Definition.
Rank of a matrix
The rank of a matrix $A$, denoted $\mathrm{rank}(A)$, is defined as the dimension of the column space of $A$.
Example.
Finding the rank of a matrix (1)
Consider the following matrix:
Find the rank of $A$.
Solution. The rank of $A$ is defined as the dimension of the column space of $A$, denoted as $\mathrm{col}(A)$. The dimension is defined as the number of basis vectors for $\mathrm{col}(A)$. Therefore, to find the rank of $A$, we need to find a basis for $\mathrm{col}(A)$.
The column space of $A$ is defined as the span of its column vectors:
By definition, the two vectors do not form a basis for $\mathrm{col}(A)$ because they are linearly dependent. Therefore, we can remove the second vector, which gives us a basis for $\mathrm{col}(A)$:
Since this single vector spans the column space of $A$ and is linearly independent, it forms a basis for $\mathrm{col}(A)$. Therefore, the rank of $A$ is $1$.
■
Example.
Finding the rank of a matrix (2)
Consider the following matrix:
Find the rank of $A$.
Solution. To find the rank of $A$, we must find a basis for the column space of $A$. The column space of $A$ is defined as:
We must now check whether the three vectors form a basis for $\mathrm{col}(A)$. The third vector is double the first, so the third vector is redundant. The remaining two vectors are linearly independent, and thus a basis for $\mathrm{col}(A)$ is:
Since two vectors form a basis for the column space of $A$, the dimension of the column space is $2$. Therefore, we conclude that $\mathrm{rank}(A)=2$.
■
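As a quick numerical check, the rank and a basis for the column space can be computed with SymPy. The matrices below are hypothetical stand-ins chosen to mirror the two examples, not the matrices from the examples themselves: the first has a second column that is a multiple of the first, and the second has a third column that is double its first.

```python
from sympy import Matrix

# Hypothetical rank-1 matrix: the second column is 2 times the first,
# so the two columns are linearly dependent.
A1 = Matrix([[1, 2],
             [3, 6]])
print(A1.rank())         # 1
print(A1.columnspace())  # a single basis vector for col(A1)

# Hypothetical rank-2 matrix: the third column is double the first,
# while the first two columns are linearly independent.
A2 = Matrix([[1, 0, 2],
             [0, 1, 0],
             [2, 3, 4]])
print(A2.rank())         # 2
print(A2.columnspace())  # two basis vectors for col(A2)
```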
Definition.
Nullity of a matrix
The nullity of a matrix $A$, denoted $\mathrm{nullity}(A)$, is defined as the dimension of the null space of $A$.
Example.
Finding the nullity of a matrix (1)
Consider the following matrix:
Find the nullity of $A$.
Solution. We first need to find a basis for the null space of $A$. The null space of $A$ is the set of all vectors $\mathbf{x}$ such that:
$$A\mathbf{x}=\mathbf{0}$$
Writing this in matrix notation:
Let's solve this linear system. We row-reduce the coefficient matrix:
We have one free variable, so let's set it equal to $t$ where $t$ is some scalar. Substituting this into the first row gives:
Therefore, the null space of $A$ is:
This means that a basis for the null space of $A$ is:
Since a single vector forms a basis for the null space of $A$, the dimension of the null space is $1$. Therefore, we conclude that $\mathrm{nullity}(A)=1$.
■
Example.
Finding the nullity of a matrix (2)
Consider the following matrix:
Find the nullity of $A$.
Solution. Let's first find the null space of $A$. By definition, the null space of $A$ is the solution set of the following homogeneous linear system:
Let's row-reduce $A$ to get:
We have two free variables; let's set them equal to $s$ and $t$ where $s$ and $t$ are some scalars. The solution to the homogeneous linear system is therefore:
This means that the null space of $A$ is spanned by the following linearly independent vectors:
These two vectors form a basis for the null space of $A$, which means that the dimension of the null space is $2$. We conclude that $\mathrm{nullity}(A)=2$.
■
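The null space computations can be checked the same way. Here is a minimal sketch using SymPy's nullspace(), again with hypothetical matrices: one with a single free variable (nullity 1) and one with two free variables (nullity 2).

```python
from sympy import Matrix

# Hypothetical matrix whose null space is one-dimensional.
A1 = Matrix([[1, 2],
             [2, 4]])
basis1 = A1.nullspace()
print(len(basis1))  # 1, so nullity(A1) = 1

# Hypothetical matrix whose null space is two-dimensional.
A2 = Matrix([[1, 2, 0, 1],
             [0, 0, 1, 2]])
basis2 = A2.nullspace()
print(len(basis2))  # 2, so nullity(A2) = 2
```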
Theorem.
Nullity is equal to the number of non-pivot columns
The nullity, or the dimension of the null space, of a matrix $A$ is equal to the number of non-pivot columns of the reduced row echelon form of $A$, that is:
$$\mathrm{nullity}(A)=\text{number of non-pivot columns of }\mathrm{rref}(A)$$
Proof. Consider the homogeneous linear system $A\mathbf{x}=\mathbf{0}$. Suppose the reduced row echelon form of $A$ has two non-pivot columns, say the 3rd and 5th columns. This means that the system has two free variables, and so the general solution can be written as:
$$\mathbf{x}=s\mathbf{v}_1+t\mathbf{v}_2$$
Where $x_3=s$ and $x_5=t$ for some scalars $s$ and $t$. Clearly, the two vectors $\mathbf{v}_1$ and $\mathbf{v}_2$ are linearly independent because the 3rd entry of $\mathbf{v}_1$ is $1$ while the 3rd entry of $\mathbf{v}_2$ is $0$, and the 5th entry of $\mathbf{v}_2$ is $1$ while the 5th entry of $\mathbf{v}_1$ is $0$, so neither vector can be a scalar multiple of the other.
Next, from the general solution above, we also know that every solution can be expressed as a linear combination of these vectors. In other words, these vectors span the null space of $A$.
Because these vectors are linearly independent and span the null space of $A$, we conclude that they form a basis for the null space of $A$ by definition. In general, if there are $k$ non-pivot columns in $\mathrm{rref}(A)$, then there will be $k$ free variables, which means that $k$ vectors form a basis for the null space of $A$. The dimension of the null space is $k$, and thus the nullity of $A$ is also $k$.
This completes the proof.
■
Theorem.
Rank of matrix A is equal to the number of pivot columns of rref(A)
The rank, or the dimension of the column space, of a matrix $A$ is equal to the number of pivot columns of the reduced row echelon form of $A$, that is:
$$\mathrm{rank}(A)=\text{number of pivot columns of }\mathrm{rref}(A)$$
Proof. By definition, the dimension of the column space of a matrix $A$ is equal to the number of basis vectors for the column space. From an earlier theorem, we know that the columns of $A$ corresponding to the pivot columns in $\mathrm{rref}(A)$ form a basis for the column space of $A$. If there are $k$ pivot columns in $\mathrm{rref}(A)$, then there will be $k$ basis vectors for $\mathrm{col}(A)$, which means that the dimension of $\mathrm{col}(A)$ will be $k$. This completes the proof.
■
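Both pivot-counting theorems are easy to verify computationally. The sketch below, using a hypothetical matrix, compares the number of pivot and non-pivot columns of $\mathrm{rref}(A)$ against the rank and nullity returned by SymPy.

```python
from sympy import Matrix

# Hypothetical 3x4 matrix for illustration.
A = Matrix([[1, 2, 1, 0],
            [2, 4, 0, 2],
            [3, 6, 1, 2]])

# rref() returns the reduced row echelon form and the pivot column indices.
rref_A, pivot_cols = A.rref()
num_pivot = len(pivot_cols)
num_non_pivot = A.cols - num_pivot

assert A.rank() == num_pivot                 # rank = number of pivot columns
assert len(A.nullspace()) == num_non_pivot   # nullity = number of non-pivot columns
print(num_pivot, num_non_pivot)              # 2 2 for this matrix
```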
Theorem.
Rank of a matrix is equal to its number of linearly independent columns
The rank of a matrix $A$ is equal to the maximum number of linearly independent columns of $A$, that is:
$$\mathrm{rank}(A)=\text{maximum number of linearly independent columns of }A$$
Proof. From the theorem above, we know that the rank of a matrix $A$ is equal to the number of pivot columns of $\mathrm{rref}(A)$. From an earlier theorem, we also know that the number of pivot columns in $\mathrm{rref}(A)$ equals the number of linearly independent columns of $A$. Therefore, the rank of $A$ is equal to the number of linearly independent columns of $A$. This completes the proof.
■
Theorem.
Elementary row operations preserve linear independence
Suppose we have some matrix $A$ and we perform an elementary row operation on $A$ to get matrix $B$. A given set of column vectors of $A$ is linearly independent if and only if the corresponding column vectors of $B$ are linearly independent.
Proof. Let $R$ be the reduced row echelon form of $A$, and let matrix $B$ be the result of applying an elementary row operation on $A$.
By definition, the reduced row echelon form $R$ is obtained by performing a series of elementary row operations on $A$.
We are given that $B$ is obtained after the first elementary row operation, so if we apply all the subsequent elementary row operations on $B$, then we end up with the reduced row echelon form of $A$. In other words, we have $\mathrm{rref}(B)=\mathrm{rref}(A)=R$.
This means that $A$ and $B$ have the same pivot columns. By an earlier theorem, the columns of $A$ corresponding to the pivot columns of $R$ are linearly independent, and the same can be said for $B$. This means that, for instance, if the 1st, 2nd, and 4th columns of $A$ are linearly independent, then the 1st, 2nd, and 4th columns of $B$ must also be linearly independent.
The converse is true as well: if the 1st, 2nd, and 4th columns of $B$ are linearly independent, then the corresponding columns of $A$ must also be linearly independent, since $A$ can be recovered from $B$ by applying the inverse elementary row operation. This completes the proof.
■
Theorem.
Elementary row operations do not affect the matrix rank
Performing elementary row operations on a matrix $A$ preserves the rank of $A$.
Proof. From the theorem above, we know that elementary row operations preserve the linear independence of columns. This means that elementary row operations preserve the number of linearly independent columns. By an earlier theorem, the rank of a matrix $A$ is equal to the number of linearly independent columns of $A$. Therefore, we conclude that performing an elementary row operation on $A$ preserves the rank of $A$.
This completes the proof.
■
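As a sanity check of this theorem, the sketch below applies one row operation of each type to a hypothetical rank-2 matrix and confirms that the rank never changes.

```python
from sympy import Matrix

# Hypothetical rank-2 matrix (the second row is twice the first).
A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])
assert A.rank() == 2

B = A.copy()
B.row_swap(0, 2)                   # type 1: swap two rows
B[1, :] = 5 * B[1, :]              # type 2: scale a row by a nonzero scalar
B[2, :] = B[2, :] - 3 * B[0, :]    # type 3: add a multiple of one row to another

assert B.rank() == A.rank()        # still 2
```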
Theorem.
Multiplying a vector of a linearly independent set by a nonzero scalar preserves linear independence
If $S$ is a linearly independent set of vectors, then multiplying any vector in $S$ by a nonzero scalar will preserve the linear independence of $S$.
Proof. Suppose $S=\{\mathbf{v}_1,\mathbf{v}_2,\mathbf{v}_3\}$ is a linearly independent set of vectors. By the definition of linear independence, this means that:
$$c_1\mathbf{v}_1+c_2\mathbf{v}_2+c_3\mathbf{v}_3=\mathbf{0}$$
This equation only holds when the coefficients $c_1$, $c_2$, and $c_3$ are all equal to zero. Now, let's multiply one of the vectors in $S$, say $\mathbf{v}_3$, by a nonzero scalar $k$ to get $k\mathbf{v}_3$. We now use the definition of linear independence to check whether $\{\mathbf{v}_1,\mathbf{v}_2,k\mathbf{v}_3\}$ is linearly independent:
$$c_1\mathbf{v}_1+c_2\mathbf{v}_2+c_3(k\mathbf{v}_3)=\mathbf{0}$$
Where $c_1$, $c_2$, and $c_3$ are some scalars. Since $c_3k$ is just another scalar, let's rewrite it as $c_3'=c_3k$ like so:
$$c_1\mathbf{v}_1+c_2\mathbf{v}_2+c_3'\mathbf{v}_3=\mathbf{0}$$
We know from the linear independence of $S$ that the only coefficients making this equation hold are all zeros, that is, $c_1=c_2=c_3'=0$. Since $k$ is nonzero, $c_3'=c_3k=0$ implies $c_3=0$. Therefore, the only way for the equation to hold is if $c_1=c_2=c_3=0$, and we conclude that $\mathbf{v}_1$, $\mathbf{v}_2$, and $k\mathbf{v}_3$ are linearly independent.
This completes the proof.
■
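The nonzero requirement on the scalar matters, as the following sketch illustrates: scaling a vector of an independent set by a nonzero scalar keeps the set independent, while scaling by zero does not. Stacking the vectors as columns lets us read off independence from the rank.

```python
from sympy import Matrix

# Three linearly independent vectors.
v1 = Matrix([1, 0, 0])
v2 = Matrix([1, 1, 0])
v3 = Matrix([1, 1, 1])
assert Matrix.hstack(v1, v2, v3).rank() == 3       # independent

# Scaling v3 by a nonzero scalar preserves independence.
assert Matrix.hstack(v1, v2, 7 * v3).rank() == 3

# Scaling by zero destroys it, which is why k must be nonzero.
assert Matrix.hstack(v1, v2, 0 * v3).rank() == 2
```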
Theorem.
Elementary column operations do not affect the matrix rank
Performing elementary column operations on a matrix $A$ preserves the rank of $A$.
Proof. There are three types of elementary column operations, so let's show that the matrix rank is preserved after each type of elementary column operation. By an earlier theorem, the matrix rank is equal to the number of linearly independent columns of the matrix. Therefore, we need to show that an elementary column operation preserves the number of linearly independent columns.
The first type of elementary column operation is a column swap. Clearly, swapping two columns does not affect the linear dependence between them. For instance, if column vectors $\mathbf{v}_1$ and $\mathbf{v}_2$ are linearly independent, then $\mathbf{v}_2$ and $\mathbf{v}_1$ will also be linearly independent; the ordering does not matter.
The second type of elementary column operation is multiplying a column by a nonzero scalar. By the theorem above, given a linearly independent set of vectors, multiplying any vector in the set by a nonzero scalar will preserve the linear independence of the set.
The third type of elementary column operation is multiplying a column by a scalar $k$ and then adding it to another column.
Let $\mathbf{v}_1$ and $\mathbf{v}_2$ be linearly independent columns. By definition, we have that:
$$c_1\mathbf{v}_1+c_2\mathbf{v}_2=\mathbf{0}$$
Where $c_1$ and $c_2$ must both be zero.
Now, our goal is to show that $\mathbf{v}_1$ and $\mathbf{v}_2+k\mathbf{v}_1$ are also linearly independent. We once again use the definition of linear independence to check whether the two vectors are independent:
$$d_1\mathbf{v}_1+d_2(\mathbf{v}_2+k\mathbf{v}_1)=\mathbf{0}$$
Where $d_1$ and $d_2$ are both scalars. Simplifying this gives:
$$(d_1+d_2k)\mathbf{v}_1+d_2\mathbf{v}_2=\mathbf{0}$$
Here, $d_1+d_2k$ and $d_2$ are some constants, so let's replace them with constants $e_1=d_1+d_2k$ and $e_2=d_2$ like so:
$$e_1\mathbf{v}_1+e_2\mathbf{v}_2=\mathbf{0}$$
From the linear independence of $\mathbf{v}_1$ and $\mathbf{v}_2$, we know that the scalar coefficients $e_1$ and $e_2$ must be zero. From $e_2=d_2=0$ and $e_1=d_1+d_2k=0$, we get $d_1=0$ as well. Because $d_1=d_2=0$ is the only way for the equality to hold, we conclude that $\mathbf{v}_1$ and $\mathbf{v}_2+k\mathbf{v}_1$ are also linearly independent.
We have now shown that all three types of elementary column operations preserve the linear independence of columns. This means that the rank of a matrix is preserved after an elementary column operation. This completes the proof.
■
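Mirroring the earlier row-operation check, the sketch below applies one column operation of each type to a hypothetical rank-2 matrix and confirms the rank is preserved.

```python
from sympy import Matrix

# Hypothetical rank-2 matrix (the third column is double the first).
A = Matrix([[1, 0, 2],
            [0, 1, 0],
            [2, 3, 4]])
assert A.rank() == 2

B = A.copy()
B.col_swap(0, 1)                   # type 1: swap two columns
B[:, 2] = -4 * B[:, 2]             # type 2: scale a column by a nonzero scalar
B[:, 0] = B[:, 0] + 2 * B[:, 1]    # type 3: add a multiple of one column to another

assert B.rank() == A.rank()        # still 2
```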
Theorem.
Rank of a matrix transpose
If $A$ is a matrix, then the rank of $A^T$ equals the rank of $A$, that is:
$$\mathrm{rank}(A^T)=\mathrm{rank}(A)$$
Proof. Consider the matrix $A$ and its transpose $A^T$ below:
Our goal is to show that $\mathrm{rank}(A^T)=\mathrm{rank}(A)$.
Notice that performing an elementary row operation on $A$ is equivalent to performing an elementary column operation on $A^T$. For instance, suppose we perform an elementary row operation of multiplying the second row of $A$ by some scalar $k$. This is equivalent to performing an elementary column operation of multiplying the second column of $A^T$ by $k$ like so:
Now, suppose we reduce $A$ to its reduced row echelon form $\mathrm{rref}(A)$. If we perform the equivalent column operations on $A^T$, we may end up with:
Now, let's perform a series of column operations on the left matrix, as well as the equivalent row operations on the right matrix, to get:
We can clearly see that the number of linearly independent columns in the reduced forms of $A$ and $A^T$ is the same. By an earlier theorem, the matrix rank is equal to the number of linearly independent columns, and so we have that:
$$\mathrm{rank}\big(\text{reduced form of }A\big)=\mathrm{rank}\big(\text{reduced form of }A^T\big)$$
By earlier theorems, we know that elementary row and column operations do not affect the rank of a matrix. Since the reduced forms of $A$ and $A^T$ are obtained by performing a series of elementary row and column operations, we have that:
$$\mathrm{rank}\big(\text{reduced form of }A\big)=\mathrm{rank}(A),\qquad\mathrm{rank}\big(\text{reduced form of }A^T\big)=\mathrm{rank}(A^T)$$
This means that the equality above becomes:
$$\mathrm{rank}(A)=\mathrm{rank}(A^T)$$
This completes the proof.
■
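This result is easy to spot-check: a matrix and its transpose always report the same rank. A minimal sketch with a hypothetical non-square matrix:

```python
from sympy import Matrix

# Hypothetical 3x4 matrix (the second row is twice the first).
A = Matrix([[1, 2, 3, 4],
            [2, 4, 6, 8],
            [0, 1, 0, 1]])

assert A.rank() == A.T.rank()  # rank(A^T) = rank(A)
print(A.rank())                # 2
```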
Theorem.
Rank-nullity theorem
The sum of the rank and nullity of an $m\times n$ matrix $A$ is equal to the number of columns $n$, that is:
$$\mathrm{rank}(A)+\mathrm{nullity}(A)=n$$
Proof. Let $A$ be an $m\times n$ matrix. We know that every column in $A$ must correspond to either a pivot column or a non-pivot column in the reduced row echelon form of $A$, which means that:
$$(\text{number of pivot columns of }\mathrm{rref}(A))+(\text{number of non-pivot columns of }\mathrm{rref}(A))=n$$
From the theorem above, the number of pivot columns in $\mathrm{rref}(A)$ is equal to the rank of $A$, that is:
$$\mathrm{rank}(A)=\text{number of pivot columns of }\mathrm{rref}(A)$$
From an earlier theorem, the number of non-pivot columns in $\mathrm{rref}(A)$ is equal to the nullity of $A$, that is:
$$\mathrm{nullity}(A)=\text{number of non-pivot columns of }\mathrm{rref}(A)$$
Finally, the number of columns in $\mathrm{rref}(A)$ is the same as the number of columns in $A$, namely $n$. Therefore, we conclude that:
$$\mathrm{rank}(A)+\mathrm{nullity}(A)=n$$
This completes the proof.
■
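The rank-nullity theorem can likewise be verified numerically. In the sketch below, a hypothetical 3×5 matrix has rank 2 and nullity 3, and the two add up to the number of columns $n=5$.

```python
from sympy import Matrix

# Hypothetical 3x5 matrix (the third row is the sum of the first two).
A = Matrix([[1, 2, 0, 1, 3],
            [0, 0, 1, 1, 1],
            [1, 2, 1, 2, 4]])

rank_A = A.rank()
nullity_A = len(A.nullspace())
assert rank_A + nullity_A == A.cols  # 2 + 3 == 5
```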
Theorem.
Rank of PA and AP where P is invertible
Let $A$ be a square matrix. If $P$ is an invertible matrix of the same size, then:
$$\mathrm{rank}(PA)=\mathrm{rank}(AP)=\mathrm{rank}(A)$$
Proof. We will first prove that $\mathrm{rank}(EA)=\mathrm{rank}(A)$. Let $E$ be some invertible matrix. Note that we use the notation $E$ instead of $P$ to avoid confusion with the subscripted elementary matrices below. By an earlier theorem, because $E$ is invertible, we can express $E$ as a product of elementary matrices $E_1$, $E_2$, $\dots$, $E_k$ like so:
$$E=E_1E_2\cdots E_k$$
Now, substituting this into $EA$ gives:
$$EA=E_1E_2\cdots E_kA$$
By an earlier theorem, we know that elementary row operations do not affect the matrix rank. Since multiplying $A$ on the left by an elementary matrix is equivalent to performing an elementary row operation on $A$, we have that:
$$\mathrm{rank}(EA)=\mathrm{rank}(E_1E_2\cdots E_kA)=\mathrm{rank}(A)$$
Next, let's show that $\mathrm{rank}(AE)=\mathrm{rank}(A)$. By the theorem on the rank of a matrix transpose and the fact that $(AE)^T=E^TA^T$, we have that:
$$\mathrm{rank}(AE)=\mathrm{rank}\big((AE)^T\big)=\mathrm{rank}(E^TA^T)$$
However, we have just proven that left-multiplication by an invertible matrix preserves rank, and $E^T$ is invertible. Therefore, we can reduce $\mathrm{rank}(E^TA^T)$ like so:
$$\mathrm{rank}(AE)=\mathrm{rank}(E^TA^T)=\mathrm{rank}(A^T)=\mathrm{rank}(A)$$
This completes the proof.
■
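Finally, a quick check of this theorem with a hypothetical singular matrix $A$ and an invertible matrix $P$: multiplying by $P$ on either side leaves the rank untouched.

```python
from sympy import Matrix

# Hypothetical rank-1 matrix A and invertible matrix P.
A = Matrix([[1, 2],
            [2, 4]])
P = Matrix([[1, 2],
            [3, 5]])
assert P.det() != 0                  # P is invertible (det = -1)

assert (P * A).rank() == A.rank()    # rank(PA) = rank(A) = 1
assert (A * P).rank() == A.rank()    # rank(AP) = rank(A) = 1
```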