Guide to eigenspace and eigenbasis
Eigenspace
Let $\lambda$ be an eigenvalue of a square matrix $\boldsymbol{A}$. The eigenspace of $\lambda$ is defined as the null space of $\boldsymbol{A}-\lambda\boldsymbol{I}$, that is:

$$\mathcal{E}_\lambda=\mathrm{Null}(\boldsymbol{A}-\lambda\boldsymbol{I})$$
Discussion. Recall that once we have obtained the eigenvalues $\lambda$ of a square matrix $\boldsymbol{A}$, we solve the equation below for $\boldsymbol{x}$ to obtain the corresponding eigenvectors:

$$\begin{equation}\label{eq:l3BEQJa2h6ZO7W293FK}
(\boldsymbol{A}-\lambda\boldsymbol{I})\boldsymbol{x}=\boldsymbol{0}
\end{equation}$$
By definition, the set of vectors $\boldsymbol{x}$ that satisfy this system is the null space of $\boldsymbol{A}-\lambda\boldsymbol{I}$. The eigenspace associated with $\lambda$ is defined to be this null space.
Note that an eigenspace always contains the zero vector because $\boldsymbol{x}=\boldsymbol{0}$ is a solution to \eqref{eq:l3BEQJa2h6ZO7W293FK}. However, an eigenvector cannot be the zero vector by definition. In other words, every vector in the eigenspace except the zero vector is an eigenvector of $\lambda$.
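To make this concrete, here is a short SymPy sketch. The matrix below is a hypothetical example (it has $\lambda=3$ as an eigenvalue); we obtain the eigenspace of $\lambda$ as the null space of $\boldsymbol{A}-\lambda\boldsymbol{I}$ and confirm that a nonzero vector in it is indeed an eigenvector:

```python
from sympy import Matrix, eye

# Hypothetical matrix used only for illustration; its characteristic
# polynomial is (x - 3)(x - 7), so lambda = 3 is an eigenvalue.
A = Matrix([[4, 3],
            [1, 6]])
lam = 3

# The eigenspace of lambda is the null space of A - lambda*I.
basis = (A - lam * eye(2)).nullspace()
print(basis)  # [Matrix([[-3], [1]])], i.e. the span of (-3, 1)

# Any nonzero vector in this null space satisfies A x = lambda x.
x = 5 * basis[0]
assert A * x == lam * x
```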
Finding the eigenspace of a matrix
Consider the following matrix:
Find the eigenspaces of $\boldsymbol{A}$.
Solution. The characteristic polynomial of $\boldsymbol{A}$ is:

$$\det(\boldsymbol{A}-\lambda\boldsymbol{I})=\lambda^2-10\lambda+21=(\lambda-3)(\lambda-7)$$
Therefore, the eigenvalues of $\boldsymbol{A}$ are $\lambda_1=3$ and $\lambda_2=7$. Next, let's find eigenvectors $\boldsymbol{x}_1$ and $\boldsymbol{x}_2$ corresponding to $\lambda_1$ and $\lambda_2$ respectively.
Taking the first row of $(\boldsymbol{A}-\lambda_1\boldsymbol{I})\boldsymbol{x}_1=\boldsymbol{0}$ gives:

$$x_{11}+3x_{12}=0$$
Let $x_{12}=t$ where $t$ is some scalar. This means that $x_{11}=-3t$. Therefore, an eigenvector corresponding to eigenvalue $\lambda_1$ can be expressed as:

$$\boldsymbol{x}_1=\begin{pmatrix}x_{11}\\x_{12}\end{pmatrix}=t\begin{pmatrix}-3\\1\end{pmatrix}$$
The eigenspace $\mathcal{E}_1$ associated with $\lambda_1$ is the vector space below:

$$\mathcal{E}_1=\mathrm{span}\left\{\begin{pmatrix}-3\\1\end{pmatrix}\right\}$$
We can think of this eigenspace as the collection of eigenvectors corresponding to $\lambda_1$. Again, keep in mind that every vector in this eigenspace except the zero vector is an eigenvector associated with $\lambda_1$.
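We can verify this computationally. Since the example's matrix is not reproduced here, the SymPy sketch below assumes a hypothetical matrix with the same eigenvalues $3$ and $7$ and the same eigenspace $\mathcal{E}_1$ for $\lambda_1$:

```python
from sympy import Matrix

# Hypothetical stand-in matrix: eigenvalues 3 and 7, with eigenspace
# span{(-3, 1)} for lambda_1 = 3.
A = Matrix([[4, 3],
            [1, 6]])

# eigenvects() returns (eigenvalue, algebraic multiplicity, eigenspace basis).
for lam, mult, basis in A.eigenvects():
    print(lam, mult, basis)
# 3 1 [Matrix([[-3], [1]])]  ->  E_1 = span{(-3, 1)}
# 7 1 [Matrix([[1], [1]])]   ->  E_2 = span{(1, 1)} for this stand-in
```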
Next, let's find the eigenvectors corresponding to the eigenvalue $\lambda_2=7$. We proceed like so:
Let $x_{22}=t$ where $t\in\mathbb{R}$. This means $x_{21}$ is:
The eigenspace $\mathcal{E}_2$ corresponding to eigenvalue $\lambda_2$ is:
Eigenspace is a subspace
Let $\boldsymbol{A}$ be an $n\times{n}$ matrix and let $\lambda$ be an eigenvalue of $\boldsymbol{A}$. The eigenspace associated with $\lambda$ is a subspace of $\mathbb{R}^n$.
Proof. By definition, the eigenspace of an eigenvalue $\lambda$ is:

$$\mathcal{E}_\lambda=\mathrm{Null}(\boldsymbol{A}-\lambda\boldsymbol{I})$$
By theorem, the null space of any $m\times{n}$ matrix is a subspace of $\mathbb{R}^n$. This completes the proof.
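A subspace must contain the zero vector and be closed under vector addition and scalar multiplication. The SymPy sketch below spot-checks these three properties for the eigenspace $\mathrm{span}\{(-3,1)\}$ of the hypothetical matrix used earlier:

```python
from sympy import Matrix, eye, zeros

# Hypothetical matrix from before, with eigenvalue lambda = 3.
A = Matrix([[4, 3],
            [1, 6]])
lam = 3

# Two vectors in the eigenspace E = Null(A - 3I) = span{(-3, 1)}.
u = Matrix([-3, 1])
v = Matrix([-6, 2])

# The zero vector, a sum, and a scalar multiple all remain in E,
# i.e. each still satisfies (A - lam*I) x = 0.
for x in (zeros(2, 1), u + v, 7 * u):
    assert (A - lam * eye(2)) * x == zeros(2, 1)
```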
Eigenbasis
Let $\boldsymbol{A}$ be an $n\times{n}$ matrix. An eigenbasis of $\boldsymbol{A}$ is a basis of $\mathbb{R}^n$ consisting of $n$ eigenvectors of $\boldsymbol{A}$. As we shall see later, not all square matrices have an eigenbasis.
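As a sketch of this definition, the Python function below collects one basis vector per dimension of each eigenspace and reports an eigenbasis only when $n$ vectors are collected in total. Both test matrices are hypothetical examples:

```python
from sympy import Matrix

def eigenbasis(A):
    """Return a list of n eigenvectors forming an eigenbasis of A,
    or None if the eigenvectors of A do not span R^n."""
    vectors = []
    for lam, mult, basis in A.eigenvects():
        vectors.extend(basis)  # one vector per dimension of E_lambda
    # Basis vectors from distinct eigenvalues are linearly independent
    # (proven at the end of this guide), so an eigenbasis exists exactly
    # when n vectors were collected.
    return vectors if len(vectors) == A.rows else None

print(eigenbasis(Matrix([[4, 3], [1, 6]])))  # two vectors -> eigenbasis
print(eigenbasis(Matrix([[2, 1], [0, 2]])))  # None -> no eigenbasis
```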
Finding the eigenbasis of a matrix
Find an eigenbasis of the following matrix:
Solution. The characteristic polynomial of $\boldsymbol{A}$ is:

$$\det(\boldsymbol{A}-\lambda\boldsymbol{I})=\lambda^2-5\lambda-6=(\lambda-6)(\lambda+1)$$
Therefore, the eigenvalues of $\boldsymbol{A}$ are $\lambda_1=6$ and $\lambda_2=-1$.
Let's find an eigenvector associated with $\lambda_1=6$ like so:
Here, $x_{11}$ is a basic variable while $x_{12}$ is a free variable. Let $x_{12}=t$ where $t$ is some scalar. The solution can now be expressed as:
Therefore, the eigenspace $\mathcal{E}_1$ associated with $\lambda_1$ is:
The basis for $\mathcal{E}_1$ is:
Next, let's find the eigenvectors corresponding to $\lambda_2=-1$. We proceed like so:
Let $x_{22}=t$ where $t\in\mathbb{R}$. The solution can be expressed as:
The eigenspace $\mathcal{E}_2$ is:
The basis for $\mathcal{E}_2$ is:
The eigenbasis for $\mathbb{R}^2$ is the union of $\mathcal{B}_1$ and $\mathcal{B}_2$, that is:
Note that the combined basis $\mathcal{B}_1\cup\mathcal{B}_2$ is guaranteed to be linearly independent because $\mathcal{B}_1$ and $\mathcal{B}_2$ correspond to distinct eigenvalues. We will prove this later in this guide.
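The same construction can be carried out computationally. Because the example's matrix is not reproduced here, the SymPy sketch below uses a hypothetical matrix with the same eigenvalues $6$ and $-1$; it gathers the union $\mathcal{B}_1\cup\mathcal{B}_2$ and checks that the two vectors are linearly independent:

```python
from sympy import Matrix

# Hypothetical stand-in with eigenvalues 6 and -1.
A = Matrix([[2, 4],
            [3, 3]])

# Gather the union of the eigenspace bases.
union = []
for lam, mult, basis in A.eigenvects():
    union.extend(basis)

# Stack the vectors as columns; full rank means the union is linearly
# independent, so it forms an eigenbasis of R^2.
P = Matrix.hstack(*union)
assert P.rank() == len(union) == 2
```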
Showing that a matrix does not have an eigenbasis
Consider the following matrix:
Show that an eigenbasis does not exist for $\boldsymbol{A}$.
Solution. The characteristic polynomial of $\boldsymbol{A}$ is:

$$\det(\boldsymbol{A}-\lambda\boldsymbol{I})=\lambda^2-4\lambda+4=(\lambda-2)^2$$
The eigenvalue of $\boldsymbol{A}$ is thus $\lambda=2$. The corresponding eigenvectors are obtained by solving:

$$(\boldsymbol{A}-2\boldsymbol{I})\boldsymbol{x}=\boldsymbol{0}$$
Here, $x_1$ is a free variable and $x_2$ is a basic variable. Let $x_1=t$ where $t$ is some scalar. Since the system forces $x_2=0$, the solution set can now be expressed as:

$$\begin{pmatrix}x_1\\x_2\end{pmatrix}=t\begin{pmatrix}1\\0\end{pmatrix}$$
Therefore, the eigenspace $\mathcal{E}$ associated with $\lambda=2$ is:

$$\mathcal{E}=\mathrm{span}\left\{\begin{pmatrix}1\\0\end{pmatrix}\right\}$$
The basis for $\mathcal{E}$ is:

$$\mathcal{B}=\left\{\begin{pmatrix}1\\0\end{pmatrix}\right\}$$
Since the eigenspace $\mathcal{E}$ is spanned by one basis vector, the dimension of $\mathcal{E}$ is $1$. By definition, an eigenbasis in this case must be a basis for $\mathbb{R}^2$. However, one basis vector cannot span $\mathbb{R}^2$, which means there is no eigenbasis for $\boldsymbol{A}$.
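We can confirm this computationally. The original matrix is not reproduced here, so the SymPy sketch below uses a hypothetical matrix whose sole eigenvalue is $\lambda=2$ with a one-dimensional eigenspace:

```python
from sympy import Matrix

# Hypothetical stand-in: sole eigenvalue lambda = 2, eigenspace span{(1, 0)}.
A = Matrix([[2, 1],
            [0, 2]])

print(A.eigenvects())
# [(2, 2, [Matrix([[1], [0]])])]
# Algebraic multiplicity 2, but dim(E) = 1: only one linearly independent
# eigenvector exists, so no eigenbasis of R^2 can be formed.
print(A.is_diagonalizable())  # False, for the same reason
```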
Union of the bases of eigenspaces corresponding to distinct eigenvalues is linearly independent
If $\lambda_1$, $\lambda_2$, $\cdots$, $\lambda_k$ are distinct eigenvalues of a matrix with corresponding eigenspaces $\mathcal{E}_1$, $\mathcal{E}_2$, $\cdots$, $\mathcal{E}_k$ spanned by bases $\mathcal{B}_1$, $\mathcal{B}_2$, $\cdots$, $\mathcal{B}_k$ respectively, then $\mathcal{B}_1\cup \mathcal{B}_2\cup\cdots \cup\mathcal{B}_k$ is linearly independent.
Proof. Let $\lambda_1$, $\lambda_2$, $\cdots$, $\lambda_k$ be distinct eigenvalues with corresponding eigenspaces $\mathcal{E}_1$, $\mathcal{E}_2$, $\cdots$, $\mathcal{E}_k$ spanned by bases $\mathcal{B}_1$, $\mathcal{B}_2$, $\cdots$, $\mathcal{B}_k$ respectively. Suppose the union of the basis vectors is:

$$\mathcal{B}_1\cup\mathcal{B}_2\cup\cdots\cup\mathcal{B}_k=\{\boldsymbol{v}_1,\boldsymbol{v}_2,\cdots,\boldsymbol{v}_n\}$$
Where $n\ge{k}$. Note that $n$ may be greater than $k$ because the dimension of the eigenspace corresponding to an eigenvalue may be greater than $1$.
Now, consider the criterion for linear independence:

$$\begin{equation}\label{eq:S8hmL8bic98jR6Mwu9A}
c_1\boldsymbol{v}_1+c_2\boldsymbol{v}_2+\cdots+c_n\boldsymbol{v}_n=\boldsymbol{0}
\end{equation}$$
Where $c_1$, $c_2$, $\cdots$, $c_n$ are some scalar coefficients. Our goal is to show that all of these coefficients must equal zero for \eqref{eq:S8hmL8bic98jR6Mwu9A} to hold, as this will imply that $\{\boldsymbol{v}_1,\boldsymbol{v}_2,\cdots,\boldsymbol{v}_n\}$ is linearly independent by definition. Let's group the terms whose eigenbasis vectors correspond to the same eigenvalue:

$$\begin{equation}\label{eq:qmIrjxX8JAhopmIRuIR}
\underbrace{(c_1\boldsymbol{v}_1+c_2\boldsymbol{v}_2)}_{\text{vectors of }\mathcal{B}_1}+\underbrace{(c_3\boldsymbol{v}_3+\cdots)}_{\text{vectors of }\mathcal{B}_2}+\cdots+\underbrace{(\cdots+c_n\boldsymbol{v}_n)}_{\text{vectors of }\mathcal{B}_k}=\boldsymbol{0}
\end{equation}$$
Here, we've grouped the eigenbasis vectors that correspond to the same eigenvalue. This grouping is only an example to demonstrate what we mean: in this case, $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$ are the eigenbasis vectors of $\mathcal{B}_1$ corresponding to the first eigenvalue. Let's define the following vectors:

$$\begin{equation}\label{eq:RP57MID6AK8Y0KxDkzh}
\begin{aligned}
\boldsymbol{w}_1&=c_1\boldsymbol{v}_1+c_2\boldsymbol{v}_2\\
\boldsymbol{w}_2&=c_3\boldsymbol{v}_3+\cdots\\
&\;\,\vdots\\
\boldsymbol{w}_k&=\cdots+c_n\boldsymbol{v}_n
\end{aligned}
\end{equation}$$

In other words, $\boldsymbol{w}_i$ is the sum of the terms in \eqref{eq:qmIrjxX8JAhopmIRuIR} whose eigenbasis vectors belong to $\mathcal{B}_i$.
We can rewrite \eqref{eq:qmIrjxX8JAhopmIRuIR} as:

$$\begin{equation}\label{eq:OCeQpwpa6e9mJEXYLcg}
\boldsymbol{w}_1+\boldsymbol{w}_2+\cdots+\boldsymbol{w}_k=\boldsymbol{0}
\end{equation}$$
For \eqref{eq:OCeQpwpa6e9mJEXYLcg} to hold, the following must be true:

$$\begin{equation}\label{eq:ZwFNg4Pk0JaWJNtRaGK}
\boldsymbol{w}_1=\boldsymbol{w}_2=\cdots=\boldsymbol{w}_k=\boldsymbol{0}
\end{equation}$$
To understand why, suppose these vectors are not equal to the zero vector. Each nonzero $\boldsymbol{w}_i$ is a linear combination of the basis vectors of $\mathcal{E}_i$, so it resides in $\mathcal{E}_i$ and is therefore an eigenvector corresponding to $\lambda_i$.
We can make $\boldsymbol{w}_1$ in \eqref{eq:OCeQpwpa6e9mJEXYLcg} the subject to get:

$$\boldsymbol{w}_1=-\boldsymbol{w}_2-\boldsymbol{w}_3-\cdots-\boldsymbol{w}_k$$
This means that we can express $\boldsymbol{w}_1$ as a linear combination of the other vectors $\boldsymbol{w}_2$, $\boldsymbol{w}_3$, $\cdots$, $\boldsymbol{w}_k$. In other words, $\boldsymbol{w}_1$ is linearly dependent on the other vectors. Now, recall that eigenvectors corresponding to distinct eigenvalues are linearly independent by theorem. This means that $\boldsymbol{w}_1$, $\boldsymbol{w}_2$, $\cdots$, $\boldsymbol{w}_k$ cannot all be eigenvectors, which implies they must be zero vectors. This is why \eqref{eq:ZwFNg4Pk0JaWJNtRaGK} holds.
Going back to \eqref{eq:RP57MID6AK8Y0KxDkzh}, we have that:

$$\begin{aligned}
c_1\boldsymbol{v}_1+c_2\boldsymbol{v}_2&=\boldsymbol{0}\\
c_3\boldsymbol{v}_3+\cdots&=\boldsymbol{0}\\
&\;\,\vdots\\
\cdots+c_n\boldsymbol{v}_n&=\boldsymbol{0}
\end{aligned}$$
Because basis vectors are linearly independent by definition, the following is true:

$$c_1=c_2=\cdots=c_n=0$$
Since all the coefficients must be zero for \eqref{eq:qmIrjxX8JAhopmIRuIR} to hold, the union of the bases $\{\boldsymbol{v}_1, \boldsymbol{v}_2, \cdots, \boldsymbol{v}_n\}$ is linearly independent by definition. This completes the proof.
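As a numerical check of this theorem, the SymPy sketch below uses a hypothetical $3\times{3}$ matrix with a repeated eigenvalue: $\lambda=2$ has a two-dimensional eigenspace and $\lambda=5$ a one-dimensional one. The union of the eigenspace bases therefore contains three vectors, and we verify that they are linearly independent:

```python
from sympy import Matrix

# Hypothetical matrix: eigenvalue 2 (eigenspace of dimension 2) and
# eigenvalue 5 (eigenspace of dimension 1).
A = Matrix([[2, 0, 1],
            [0, 2, 0],
            [0, 0, 5]])

# Collect the union B_1 ∪ ... ∪ B_k of the eigenspace bases.
union = []
for lam, mult, basis in A.eigenvects():
    union.extend(basis)

# Full column rank means the union is linearly independent, as the
# theorem asserts; here it even forms an eigenbasis of R^3.
M = Matrix.hstack(*union)
assert M.rank() == len(union) == 3
```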