Constructing a Basis for a Vector Space

Last updated: Mar 5, 2023
Tags: Linear Algebra
Theorem.

A set in an $n$-dimensional vector space containing more or fewer than $n$ vectors

Let $V$ be a vector space and let $\{\boldsymbol{v}_1,\boldsymbol{v}_2,\cdots,\boldsymbol{v}_n\}$ be a basis for $V$.

• If a set has more than $n$ vectors, then the set is linearly dependent.

• If a set has fewer than $n$ vectors, then the set does not span $V$.

Proof. We start by proving the first case. Suppose a set $W$ contains $m$ vectors in $V$ defined below:

$$W = \{\boldsymbol{w}_1,\boldsymbol{w}_2,\cdots,\boldsymbol{w}_m\}$$

where $m\gt{n}$. To show that $W$ is linearly dependent, we must show that there exists a non-zero solution to:

$$\label{eq:YrWzjiXdbNK669rfWH4} x_1\boldsymbol{w}_1+ x_2\boldsymbol{w}_2+\cdots+ x_m\boldsymbol{w}_m=\boldsymbol{0}$$

Since $\{\boldsymbol{v}_1,\boldsymbol{v}_2,\cdots,\boldsymbol{v}_n\}$ is a basis for $V$, every vector in $V$ can be expressed as a linear combination of these basis vectors. Since the vectors in $W$ are in $V$, there must exist coefficients $a_{i,j}$ such that:

\begin{align*} \boldsymbol{w}_1&=a_{1,1}\boldsymbol{v}_1\;+\;a_{1,2}\boldsymbol{v}_2+\dots+a_{1,n}\boldsymbol{v}_n\\ \boldsymbol{w}_2&=a_{2,1}\boldsymbol{v}_1\;+\;a_{2,2}\boldsymbol{v}_2+\dots+a_{2,n}\boldsymbol{v}_n\\ &\;\;\;\;\;\;\vdots\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\vdots\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\vdots\\ \boldsymbol{w}_m&=a_{m,1}\boldsymbol{v}_1+a_{m,2}\boldsymbol{v}_2+\dots+a_{m,n}\boldsymbol{v}_n \end{align*}

Consider the homogeneous system of linear equations:

$$\begin{aligned} a_{1,1}x_1 +\;a_{2,1}x_2 \;+\cdots+a_{m,1}x_m&=0\\ a_{1,2}x_1 +\;a_{2,2}x_2 \;+\cdots+a_{m,2}x_m&=0\\ \vdots\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\vdots\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;&\vdots\\ a_{1,n}x_1 +\;a_{2,n}x_2 +\cdots+a_{m,n}x_m&=0 \end{aligned}$$

We know from an earlier theorem that because there are more unknowns than equations ($m\gt{n}$), this linear system has infinitely many solutions, and in particular at least one non-zero solution $x_1$, $x_2$, $\cdots$, $x_m$. Substituting the expansions of $\boldsymbol{w}_1,\cdots,\boldsymbol{w}_m$ into \eqref{eq:YrWzjiXdbNK669rfWH4} and grouping terms by $\boldsymbol{v}_1,\cdots,\boldsymbol{v}_n$ shows that any solution of this system also satisfies \eqref{eq:YrWzjiXdbNK669rfWH4}. Since there exists at least one non-zero solution to \eqref{eq:YrWzjiXdbNK669rfWH4}, we conclude that $W$ is linearly dependent.
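This argument can be verified numerically. The sketch below is a hypothetical example (assuming NumPy) with $m=3$ coordinate vectors in a space spanned by $n=2$ basis vectors; it extracts a non-zero solution of the homogeneous system from the SVD:

```python
import numpy as np

# Columns are the coordinate vectors of w_1, w_2, w_3 (m = 3 > n = 2).
A = np.array([[3.0, 2.0, 4.0],
              [5.0, 1.0, 3.0]])

# With more unknowns than equations, A x = 0 has a non-zero solution.
# The last right-singular vector of A lies in the null space of A.
_, _, Vt = np.linalg.svd(A)
x = Vt[-1]

assert not np.allclose(x, 0)   # x is non-zero...
assert np.allclose(A @ x, 0)   # ...yet A x = 0, so the w's are linearly dependent
```

The singular vector is a unit vector, so it is guaranteed non-zero; any vector in the null space of $A$ gives a dependence relation among the columns.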

* * *

Let us now prove the second case. Suppose $W$ is a set of $m$ vectors in $V$ defined below:

$$W = \{\boldsymbol{w}_1,\boldsymbol{w}_2,\cdots,\boldsymbol{w}_m\}$$

where $m\lt{n}$. Let's prove that $W$ does not span $V$ by contradiction. Consider the linear combination of the $n$ basis vectors $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\cdots$, $\boldsymbol{v}_n$ of $V$ below:

$$\label{eq:L3YPAqf2SZDnsdBkiDR} x_1\boldsymbol{v}_1+x_2\boldsymbol{v}_2+\cdots+x_n\boldsymbol{v}_n$$

Note that we did not specify that $V$ is $n$-dimensional here; we are simply stating that $V$ has $n$ basis vectors. We will later prove that if $V$ has $n$ basis vectors, then $V$ must be $n$-dimensional. For now, though, $V$ could be of any dimension.

Now, assume that $W$ spans $V$, which means that every vector in $V$ can be expressed as a linear combination of vectors in $W$, that is:

$$\label{eq:mw2Y6ey9JpHDhideRi4} \begin{aligned} \boldsymbol{v}_1&= a_{1,1}\boldsymbol{w}_1+a_{2,1}\boldsymbol{w}_2+\dots+a_{m,1}\boldsymbol{w}_m\\ \boldsymbol{v}_2&=a_{1,2}\boldsymbol{w}_1+a_{2,2}\boldsymbol{w}_2+\dots+a_{m,2}\boldsymbol{w}_m\\ &\;\;\;\;\;\;\vdots\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\vdots\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\vdots\\ \boldsymbol{v}_n&=a_{1,n}\boldsymbol{w}_1+a_{2,n}\boldsymbol{w}_2+\dots+a_{m,n}\boldsymbol{w}_m \end{aligned}$$

Now, suppose we are solving the following homogeneous system of linear equations:

$$\label{eq:uZdCZy35fVid8nJn2xh} \begin{aligned} a_{1,1}x_1 \;+\;a_{1,2}x_2 \;+\cdots+a_{1,n}x_n&=0\\ a_{2,1}x_1 \;+\;a_{2,2}x_2 \;+\cdots+a_{2,n}x_n&=0\\ \vdots\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\vdots\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;&\vdots\\ a_{m,1}x_1 +\;a_{m,2}x_2 +\cdots+a_{m,n}x_n&=0 \end{aligned}$$

Since $m\lt{n}$ in \eqref{eq:uZdCZy35fVid8nJn2xh}, there are more columns than rows (more unknowns than equations), so by an earlier theorem the system \eqref{eq:uZdCZy35fVid8nJn2xh} has infinitely many solutions, and in particular a non-zero solution $x_1$, $x_2$, $\cdots$, $x_n$.

Now, let's substitute the vectors $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\cdots$, $\boldsymbol{v}_n$ from \eqref{eq:mw2Y6ey9JpHDhideRi4} into \eqref{eq:L3YPAqf2SZDnsdBkiDR} to get:

$$\label{eq:L0nggT3E0tYyhqM4HMW} \begin{aligned} &\;\;\;\;\;x_1\boldsymbol{v}_1+x_2\boldsymbol{v}_2+\cdots+x_n\boldsymbol{v}_n\\ &=x_1(a_{1,1}\boldsymbol{w}_1+a_{2,1}\boldsymbol{w}_2+\dots+a_{m,1}\boldsymbol{w}_m)\\ &+x_2(a_{1,2}\boldsymbol{w}_1+a_{2,2}\boldsymbol{w}_2+\dots+a_{m,2}\boldsymbol{w}_m)\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\vdots\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\vdots\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\vdots\\ &+x_n(a_{1,n}\boldsymbol{w}_1+a_{2,n}\boldsymbol{w}_2+\dots+a_{m,n}\boldsymbol{w}_m) \end{aligned}$$

We can reformulate \eqref{eq:L0nggT3E0tYyhqM4HMW} as:

$$\label{eq:KIQKJlzx5XlMM9MP2LK} \begin{aligned} &\;\;\;\;\;x_1\boldsymbol{v}_1+x_2\boldsymbol{v}_2+\cdots+x_n\boldsymbol{v}_n\\ &=\boldsymbol{w}_1(a_{1,1}x_1+a_{1,2}x_2+\cdots+a_{1,n}x_n)\\ &+\boldsymbol{w}_2(a_{2,1}x_1+a_{2,2}x_2+\cdots+a_{2,n}x_n)\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\vdots\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\vdots\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\vdots\\ &+\boldsymbol{w}_m(a_{m,1}x_1+a_{m,2}x_2+\dots+a_{m,n}x_n) \end{aligned}$$

The sums in the brackets of \eqref{eq:KIQKJlzx5XlMM9MP2LK} are precisely the left-hand sides of the equations in \eqref{eq:uZdCZy35fVid8nJn2xh}, all of which equal zero. This means:

\begin{align*} x_1\boldsymbol{v}_1+x_2\boldsymbol{v}_2+\cdots+x_n\boldsymbol{v}_n &=\boldsymbol{w}_1(0)+\boldsymbol{w}_2(0)+\cdots+\boldsymbol{w}_m(0)\\ &=\boldsymbol{0} \end{align*}

Let's summarize what we have derived:

$$x_1\boldsymbol{v}_1+x_2\boldsymbol{v}_2+\cdots+x_n\boldsymbol{v}_n=\boldsymbol{0}$$

We have already shown that there exists at least one non-zero solution $x_1$, $x_2$, $\cdots$, $x_n$. Therefore, by the definition of linear dependence, the vectors $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\cdots$, $\boldsymbol{v}_n$ are linearly dependent. This contradicts the fact that $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\cdots$, $\boldsymbol{v}_n$ are linearly independent, as they are basis vectors. Since we have a contradiction, our assumption that $W$ spans $V$ must be false. This proves that $W$ does not span $V$.

Theorem.

Number of bases for a finite-dimensional vector space

All bases for a finite-dimensional vector space have the same number of vectors.

Proof. Consider an $n$-dimensional vector space $V$. In the previous theorem, we showed that:

• if a set has more than $n$ vectors, then the set is linearly dependent.

• if a set has fewer than $n$ vectors, then the set does not span $V$.

By definition, basis vectors must:

• be linearly independent.

• span the vector space $V$.

For both properties to be satisfied, a basis can contain neither more nor fewer than $n$ vectors; there must be exactly $n$ basis vectors. This completes the proof.

Theorem.

Number of basis vectors of an n-dimensional vector space

The number of basis vectors in an $n$-dimensional vector space is exactly $n$. In other words, every basis of $\mathbb{R}^n$ contains exactly $n$ vectors.

Proof. Recall from an earlier section that the standard basis for $\mathbb{R}^n$ is:

$$\left\{ \begin{pmatrix} 1\\0\\\vdots\\0 \end{pmatrix}, \begin{pmatrix} 0\\1\\\vdots\\0 \end{pmatrix},\;\cdots\;, \begin{pmatrix} 0\\0\\\vdots\\1 \end{pmatrix} \right\}$$

There are $n$ vectors in this basis. From the previous theorem, we know that all bases for a finite-dimensional vector space have the same number of vectors. Since the standard basis for $\mathbb{R}^n$ has $n$ vectors, we conclude that every basis of $\mathbb{R}^n$ must also have exactly $n$ vectors.
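As a quick sanity check, here is a minimal sketch (assuming NumPy) that builds the standard basis of $\mathbb{R}^n$ and confirms that its $n$ vectors are linearly independent:

```python
import numpy as np

n = 4  # an arbitrary choice of dimension

# The columns of the n x n identity matrix are the standard basis of R^n.
standard_basis = np.eye(n)

# Full rank: the n columns are linearly independent and span R^n.
assert np.linalg.matrix_rank(standard_basis) == n
```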

Using this theorem, we can now add one small detail to the earlier theorem: instead of an arbitrary vector space $V$, we now know that $V$ must be $n$-dimensional if $V$ has $n$ basis vectors.

Theorem.

Relationship between dimensionality of vector space and linear dependence and span

Let $V$ be an $n$-dimensional vector space.

• If a set has more than $n$ vectors, then the set is linearly dependent.

• If a set has fewer than $n$ vectors, then the set does not span $V$.

Example.

Showing that a set of vectors is linearly dependent by inspection

Consider a set $S$ containing the following vectors:

$$\boldsymbol{v}_1= \begin{pmatrix} 3\\5 \end{pmatrix},\;\;\;\;\; \boldsymbol{v}_2=\begin{pmatrix} 2\\1 \end{pmatrix},\;\;\;\;\; \boldsymbol{v}_3=\begin{pmatrix} 4\\3 \end{pmatrix}$$

Is $S$ linearly dependent or independent?

Solution. These vectors reside in $\mathbb{R}^2$. Because the set contains more than $2$ vectors, we know by the theorem above that the set is linearly dependent. This also means that the set is not a basis for $\mathbb{R}^2$.
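We can confirm this by inspection of matrix rank; a minimal sketch (assuming NumPy):

```python
import numpy as np

# The three vectors from the example, stacked as columns of a matrix.
A = np.array([[3.0, 2.0, 4.0],
              [5.0, 1.0, 3.0]])

# The rank of a 2 x 3 matrix is at most 2, so its 3 columns
# cannot be linearly independent.
rank = np.linalg.matrix_rank(A)
assert rank < A.shape[1]  # fewer independent directions than vectors
```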

Example.

Showing that a set of vectors does not span a vector space by inspection

Consider a set $S$ containing the following vectors:

$$\boldsymbol{v}_1= \begin{pmatrix} 3\\5\\1 \end{pmatrix},\;\;\;\;\; \boldsymbol{v}_2=\begin{pmatrix} 2\\1\\2 \end{pmatrix}$$

Give a reason why $S$ does not span $\mathbb{R}^3$.

Solution. By the theorem above, $2$ vectors cannot span the $3$-dimensional space $\mathbb{R}^3$. This also means that $S$ is not a basis for $\mathbb{R}^3$.
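Numerically, the same conclusion falls out of a rank computation; a minimal sketch (assuming NumPy):

```python
import numpy as np

# The two vectors from the example, as columns of a 3 x 2 matrix.
A = np.array([[3.0, 2.0],
              [5.0, 1.0],
              [1.0, 2.0]])

# The column space of a 3 x 2 matrix has dimension at most 2,
# so it cannot be all of R^3.
rank = np.linalg.matrix_rank(A)
assert rank < 3  # span(S) is at most a plane, not R^3
```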

Definition.

Dimension of a vector space

The dimension of a finite-dimensional vector space $V$, denoted by $\mathrm{dim}(V)$, is the number of basis vectors for $V$.

Example.

Finding the dimension of Rn

Find the dimension of $\mathbb{R}^2$ and $\mathbb{R}^n$.

Solution. Because $\mathbb{R}^2$ has $2$ basis vectors, its dimension is $2$, that is, $\dim(\mathbb{R}^2)=2$. Similarly, $\dim(\mathbb{R}^n)=n$.

Example.

Finding the dimension of a spanning set

Consider the following spanning set:

$$H=\text{span}(\boldsymbol{v}_1,\boldsymbol{v}_2)$$

Where vectors $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$ are:

$$\boldsymbol{v}_1= \begin{pmatrix} 3\\5\\1 \end{pmatrix},\;\;\;\;\; \boldsymbol{v}_2=\begin{pmatrix} 2\\1\\4 \end{pmatrix}$$

Find the dimension of $H$.

Solution. To find the dimension of $H$, we must first find a basis for $H$. The two vectors $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$ are linearly independent (neither is a scalar multiple of the other), and $H$ is defined to be their span. By definition, then, the vectors form a basis for $H$. The basis is therefore:

$$\mathrm{Basis}=\{\boldsymbol{v}_1,\boldsymbol{v}_2\}$$

The dimension is simply the number of vectors in our basis, which means $\text{dim}(H)=2$. This should make sense because all possible linear combinations of $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$ will form a plane, which is a two-dimensional entity.
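In code, the dimension of a span equals the rank of the matrix whose columns are the spanning vectors; a minimal sketch (assuming NumPy):

```python
import numpy as np

v1 = np.array([3.0, 5.0, 1.0])
v2 = np.array([2.0, 1.0, 4.0])

# dim(span(v1, v2)) is the rank of the matrix with v1, v2 as columns.
dim_H = int(np.linalg.matrix_rank(np.column_stack([v1, v2])))
assert dim_H == 2  # the span is a plane through the origin in R^3
```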

Theorem.

Plus/Minus theorem

Let $S$ be a non-empty set of vectors in vector space $V$.

• Let $S$ span $V$. If we remove a vector from $S$ that is linearly dependent on the other vectors of $S$, then $S$ still spans $V$.

• Let $S$ be linearly independent. If we insert a new vector that is outside the span of $S$ into $S$, then $S$ will still remain linearly independent.

Proof. We start by proving the first case. Let $S$ span $V$. Suppose there exists a vector $\boldsymbol{v}$ in $S$ that is linearly dependent on the other vectors of $S$, that is, $\boldsymbol{v}$ can be expressed as a linear combination of vectors in $S$. Removing $\boldsymbol{v}$ does not affect the span of $S$ because $\boldsymbol{v}$ can still be constructed using a linear combination of the remaining vectors in $S$.

Next, we prove the second case. Let $S$ be linearly independent. Suppose there exists a vector $\boldsymbol{v}$ in $V$ such that $\boldsymbol{v}$ is outside the span of $S$. By the definition of span, this means that $\boldsymbol{v}$ cannot be constructed as a linear combination of the vectors in $S$. Therefore, inserting $\boldsymbol{v}$ into $S$ keeps $S$ linearly independent. Indeed, if the enlarged set were linearly dependent, then since $S$ itself is linearly independent, $\boldsymbol{v}$ would have to be expressible as a linear combination of the vectors in $S$, contradicting the fact that $\boldsymbol{v}$ lies outside the span of $S$.

This completes the proof.

Theorem.

Constructing a basis for a vector space

Let $S$ be a finite set of vectors in a finite dimensional vector space $V$.

• If $S$ spans $V$ but is not a basis for $V$, then $S$ can be reduced to a basis for $V$.

• If $S$ is a linearly independent set but is not a basis for $V$, then $S$ can be expanded to a basis for $V$.

Proof. Let's start by proving the first case. If $S$ spans $V$ but is not a basis for $V$, then by the definition of a basis, $S$ is not linearly independent, so at least one vector in $S$ can be expressed as a linear combination of the other vectors in $S$. We remove this vector from $S$. From the plus/minus theorem, we know that the new set $S$ still spans $V$, because removing a linearly dependent vector from a spanning set does not affect the span. We repeatedly remove such vectors from $S$ until we are left with a linearly independent set that spans $V$; at this point, $S$ is a basis for $V$ by definition.

Next, let's prove the second case. If $S$ is a linearly independent set with $k$ vectors $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\cdots$, $\boldsymbol{v}_k$ but is not a basis for $V$, then by the definition of a basis, $S$ does not span $V$. This means that there exists some $\boldsymbol{v}_{k+1}$ in $V$ that is not in $\mathrm{span}(S)$, that is, $\boldsymbol{v}_{k+1}$ cannot be constructed from the vectors in $S$. From the plus/minus theorem, we know that if we insert this vector $\boldsymbol{v}_{k+1}$ into $S$, then $\{\boldsymbol{v}_1,\boldsymbol{v}_2,\cdots,\boldsymbol{v}_{k},\boldsymbol{v}_{k+1}\}$ is still linearly independent. We continue expanding $S$ in this way until $S$ spans $V$, thereby making $S$ a basis for $V$. Since $V$ is finite-dimensional, this process must terminate.

This completes the proof.
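The first reduction procedure can be sketched in code. The helper below is hypothetical (not from the article) and assumes NumPy: it keeps each vector that raises the rank and discards vectors that are linear combinations of those already kept:

```python
import numpy as np

def reduce_to_basis(vectors, tol=1e-10):
    """Keep each vector that increases the rank; discard vectors
    that are linear combinations of those already kept."""
    basis = []
    for v in vectors:
        candidate = np.column_stack(basis + [v])
        if np.linalg.matrix_rank(candidate, tol=tol) == len(basis) + 1:
            basis.append(v)
    return basis

# A hypothetical spanning set for R^2 with one redundant vector:
# any third vector in R^2 is a combination of two independent ones.
vectors = [np.array([3.0, 5.0]),
           np.array([2.0, 1.0]),
           np.array([4.0, 3.0])]

basis = reduce_to_basis(vectors)
assert len(basis) == 2  # reduced to a basis for R^2
```

The greedy order matters only for which basis you get, not its size: by the theorem above, any maximal independent subset of a spanning set is a basis.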

Theorem.

Triangular relationship between basis vectors, span and linear dependence

Let $V$ be an $n$-dimensional vector space and let $S$ be a set of $n$ vectors in $V$. Then $S$ is a basis for $V$ if and only if either:

• $S$ spans $V$.

• or $S$ is linearly independent.

Proof. Let's prove the forward proposition that if $S$ is a basis consisting of $n$ vectors in an $n$-dimensional vector space, then $S$ spans $V$ and $S$ is linearly independent. There is actually nothing to prove here because by definition of basis, $S$ is a linearly independent set that also spans $V$.

Let's now prove the backward proposition, starting with the first case: if $S$ is a set of $n$ vectors in an $n$-dimensional vector space $V$ and $S$ spans $V$, then $S$ is a basis for $V$. Since we are given that $S$ spans $V$, we only need to show that $S$ is linearly independent. Assume, on the contrary, that $S$ is linearly dependent. This means that at least one vector in $S$ can be expressed as a linear combination of the other vectors in $S$. By the plus/minus theorem, we can remove this vector and the remaining $n-1$ vectors still span $V$. However, this cannot be true, because by the earlier theorem, a set with fewer than $n$ vectors does not span $V$. We have a contradiction, so our assumption that $S$ is linearly dependent is false; $S$ must be linearly independent. Because $S$ spans $V$ and is also linearly independent, $S$ must be a basis for $V$ by definition.

Next, let's prove the second case: if $S$ is a set of $n$ linearly independent vectors in an $n$-dimensional vector space $V$, then $S$ is a basis for $V$. Since we are given that $S$ is linearly independent, we only need to show that $S$ spans $V$. Assume, on the contrary, that $S$ does not span $V$. This means that there exists at least one vector in $V$ outside the span of $S$, that is, a vector that cannot be expressed as a linear combination of the vectors in $S$. By the plus/minus theorem, we can add this vector to $S$ while keeping $S$ linearly independent. We now have $n+1$ linearly independent vectors in an $n$-dimensional vector space $V$, which is impossible, because by the earlier theorem, a set with more than $n$ vectors must be linearly dependent. We have a contradiction, so our assumption that $S$ does not span $V$ is false. Because $S$ is a linearly independent set that spans $V$, we conclude that $S$ must be a basis for $V$ by definition.

This completes the proof.
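For $n$ vectors in $\mathbb{R}^n$, this theorem means a single check suffices to certify a basis. For instance, a non-zero determinant shows the columns are linearly independent, which by the theorem above already makes them a basis; a minimal sketch (assuming NumPy):

```python
import numpy as np

# Two vectors in R^2, as columns. By the theorem, verifying
# linear independence alone is enough to certify a basis.
S = np.array([[3.0, 2.0],
              [5.0, 1.0]])

# Non-zero determinant <=> the columns are linearly independent.
assert abs(np.linalg.det(S)) > 1e-10  # so the columns form a basis for R^2
```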
