
# Comprehensive Guide on Orthogonal Complement

Last updated: Aug 12, 2023

Tags: Linear Algebra
Definition.

# Orthogonal complement

If $W$ is a subspace of $\mathbb{R}^n$, then the set of vectors in $\mathbb{R}^n$ that are orthogonal to every vector in $W$ is called the orthogonal complement, often denoted as $W^\perp$.
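As a numerical illustration (a NumPy sketch of our own, not part of the original guide; the helper name `orthogonal_complement` is made up), we can compute a basis for $W^\perp$ as the null space of the matrix whose rows span $W$:

```python
import numpy as np

def orthogonal_complement(vectors, tol=1e-10):
    """Return a basis (as columns) for the orthogonal complement in R^n of
    span(vectors), computed as the null space of the matrix whose rows are
    the given vectors."""
    A = np.atleast_2d(np.asarray(vectors, dtype=float))
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T  # rows of Vt beyond the rank span null(A)

# W = span{(1, 1, 0)} in R^3; its complement should be a plane (dimension 2)
B = orthogonal_complement([[1, 1, 0]])
print(B.shape[1])                                      # 2
print(bool(np.allclose(np.array([1, 1, 0]) @ B, 0)))   # True
```

Every column of `B` has zero dot product with the spanning vector of $W$, as the definition requires.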

Example.

## Orthogonal complement of a line in $\mathbb{R}^2$

We know from this example in our guide on subspaces that the set of vectors on a line $W$ passing through the origin is a subspace of $\mathbb{R}^2$. The orthogonal complement $W^\perp$ is shown below:

Notice how every vector in $W^\perp$, like the one drawn, is perpendicular to every vector in $W$.

Theorem.

# Orthogonal complement is a subspace

If $W$ is a subspace of $\mathbb{R}^n$, then the orthogonal complement $W^\perp$ is a subspace of $\mathbb{R}^n$ as well.

Proof. To prove that $W^\perp$ is also a subspace, we must show that $W^\perp$ is closed under addition and scalar multiplication. Suppose vectors $\boldsymbol{w}_1$ and $\boldsymbol{w}_2$ are in $W^\perp$. By definition of orthogonal complements, any vector $\boldsymbol{v}$ in $W$ is perpendicular to $\boldsymbol{w}_1$ and $\boldsymbol{w}_2$. This means that:

\begin{align*} \boldsymbol{v}\cdot\boldsymbol{w}_1&=0\\ \boldsymbol{v}\cdot\boldsymbol{w}_2&=0\\ \end{align*}

Adding the two equations gives:

\begin{align*} \boldsymbol{v}\cdot\boldsymbol{w}_1+ \boldsymbol{v}\cdot\boldsymbol{w}_2 &=0\\ \boldsymbol{v}\cdot(\boldsymbol{w}_1+ \boldsymbol{w}_2) &=0\\ \end{align*}

This means that the vector $\boldsymbol{w}_1+\boldsymbol{w}_2$ is also orthogonal to $\boldsymbol{v}$. In other words, $\boldsymbol{w}_1+\boldsymbol{w}_2$ also resides in $W^\perp$, which means that $W^\perp$ is closed under addition.

Next, let's check if $W^\perp$ is closed under scalar multiplication. Let $\boldsymbol{w}$ be a vector in $W^\perp$ and $k$ be some scalar. Again, any vector $\boldsymbol{v}$ in $W$ must be perpendicular to $\boldsymbol{w}$ by definition of orthogonal complements. Therefore, we have that:

$$\boldsymbol{v}\cdot\boldsymbol{w}=0$$

Multiplying both sides by $k$ gives:

\begin{align*} k(\boldsymbol{v}\cdot\boldsymbol{w})&=0\\ \boldsymbol{v}\cdot{k\boldsymbol{w}}&=0\\ \end{align*}

This means that $k\boldsymbol{w}$ must be orthogonal to $\boldsymbol{v}$, and so $k\boldsymbol{w}$ must be in $W^\perp$. Therefore, $W^\perp$ is closed under scalar multiplication.

Because $W^\perp$ is closed under addition and scalar multiplication, we have that $W^\perp$ is a subspace of $\mathbb{R}^n$. This completes the proof.
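The two closure properties are easy to check numerically. Below is a sketch of our own (the helper `project_out` is a made-up name): we build two vectors in $W^\perp$ by stripping away their component along $W$'s spanning vector, then confirm their sum and a scalar multiple stay orthogonal to $W$:

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.array([1.0, 2.0, 2.0])   # W = span{v} in R^3

def project_out(x, v):
    """Strip the component of x along v, leaving a vector orthogonal to v."""
    return x - (x @ v) / (v @ v) * v

# w1 and w2 lie in W_perp by construction
w1 = project_out(rng.standard_normal(3), v)
w2 = project_out(rng.standard_normal(3), v)
k = 3.5

print(bool(np.isclose(v @ (w1 + w2), 0)))  # True: closed under addition
print(bool(np.isclose(v @ (k * w1), 0)))   # True: closed under scalar multiplication
```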

Theorem.

# Intersection of a subspace and its orthogonal complement is the zero vector

If $W$ is a subspace of $\mathbb{R}^n$ and $W^\perp$ is its orthogonal complement, then the only vector contained in both $W$ and $W^\perp$ is the zero vector $\boldsymbol{0}$. This is often written as $W\cap{W^\perp}=\{\boldsymbol{0}\}$.

Proof. Let $\boldsymbol{v}$ be a vector contained in both $W$ and $W^\perp$. By definition, this means that $\boldsymbol{v}$ must be orthogonal to itself, that is:

$$\label{eq:Y4hb3BFWL0DvjL29INI} \boldsymbol{v}\cdot\boldsymbol{v}=0$$

Since $\boldsymbol{v}\cdot\boldsymbol{v}= \Vert\boldsymbol{v}\Vert^2$ by a previous theorem, \eqref{eq:Y4hb3BFWL0DvjL29INI} becomes:

$$\Vert\boldsymbol{v}\Vert^2= 0$$

The only way for this to be true is if $\boldsymbol{v}$ is the zero vector. This completes the proof.
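The key step, that a vector orthogonal to itself must be zero, rests on the identity $\boldsymbol{v}\cdot\boldsymbol{v}=\Vert\boldsymbol{v}\Vert^2$. A one-line NumPy check of this identity (our own illustration):

```python
import numpy as np

v = np.array([3.0, -4.0])
# v . v equals the squared norm (25 here), so v . v = 0 forces ||v|| = 0,
# which only the zero vector satisfies
print(bool(np.isclose(v @ v, np.linalg.norm(v) ** 2)))  # True
```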

Theorem.

# Orthogonality of null space and row space (1)

If $\boldsymbol{A}$ is an $m\times{n}$ matrix, then every vector in the null space of $\boldsymbol{A}$ is orthogonal to every vector in the column space of $\boldsymbol{A}^T$, that is:

$$\big(\mathrm{nullspace}(\boldsymbol{A})\big)^\perp = \mathrm{col}(\boldsymbol{A}^T)$$

Proof. Let $\boldsymbol{v}$ be a vector in the null space of $\boldsymbol{A}$. This means that:

$$\label{eq:aSlJ4aYXtf1iXjCL4tO} \boldsymbol{Av}=\boldsymbol{0}$$

Let $\boldsymbol{w}$ be a vector in the column space of $\boldsymbol{A}^T$. Let the column vectors of $\boldsymbol{A}^T$ be:

$$\boldsymbol{A}^T= \begin{pmatrix} \vert&\vert&\cdots&\vert\\ \boldsymbol{w}_1&\boldsymbol{w}_2&\cdots&\boldsymbol{w}_m\\ \vert&\vert&\cdots&\vert \end{pmatrix}$$

Because $\boldsymbol{w}$ is in the column space of $\boldsymbol{A}^T$, we know that $\boldsymbol{w}$ can be expressed as a linear combination of the column vectors of $\boldsymbol{A}^T$, that is:

$$\label{eq:ozC6kWTfD1iVWt2QRuD} \boldsymbol{w}= c_1\boldsymbol{w}_1+ c_2\boldsymbol{w}_2+ \cdots+ c_m\boldsymbol{w}_m$$

where $c_1$, $c_2$, $\cdots$, $c_m$ are real numbers. Using a previous theorem, we can rewrite \eqref{eq:ozC6kWTfD1iVWt2QRuD} as:

$$\label{eq:IUwDETw6WjU6WmU1TKa} \boldsymbol{w}= \boldsymbol{A}^T\boldsymbol{c}$$

where $\boldsymbol{c}$ is the vector whose components are $c_1$, $c_2$, $\cdots$, $c_m$.

Now, let's take the dot product of $\boldsymbol{v}$ and $\boldsymbol{w}$ to get:

\begin{align*} \boldsymbol{v}\cdot\boldsymbol{w}&= \boldsymbol{v}^T\boldsymbol{w}\\ &=\boldsymbol{v}^T\boldsymbol{A}^T\boldsymbol{c}\\ &=(\boldsymbol{A}\boldsymbol{v})^T\boldsymbol{c}\\ &=(\boldsymbol{0})^T\boldsymbol{c}\\ &=\boldsymbol{0}\cdot\boldsymbol{c}\\ &=0 \end{align*}


Because the dot product of $\boldsymbol{v}$ and $\boldsymbol{w}$ is zero, we know that $\boldsymbol{v}$ and $\boldsymbol{w}$ are perpendicular by a previous theorem. This completes the proof.
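We can verify this orthogonality numerically (a NumPy sketch of our own, using a random wide matrix so that the null space is nontrivial). Since the columns of $\boldsymbol{A}^T$ are the rows of $\boldsymbol{A}$, checking $\boldsymbol{A}\boldsymbol{N}=\boldsymbol{0}$ for a null-space basis $\boldsymbol{N}$ checks exactly that every null-space vector is orthogonal to every column of $\boldsymbol{A}^T$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))   # a wide matrix, so nullspace(A) is nontrivial

# basis for nullspace(A) read off the SVD of A
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
N = Vt[rank:].T                   # columns span nullspace(A)

print(N.shape[1])                 # 2  (nullity = 5 - 3 for a generic 3x5 matrix)
print(bool(np.allclose(A @ N, 0)))  # True: rows of A (columns of A^T) ⟂ null space
```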

Theorem.

# Orthogonality of null space and row space (2)

If $\boldsymbol{A}$ is an $m\times{n}$ matrix, then every vector in the null space of $\boldsymbol{A}^T$ is orthogonal to every vector in the column space of $\boldsymbol{A}$, that is:

$$\mathrm{nullspace}(\boldsymbol{A}^T) = \big(\mathrm{col}(\boldsymbol{A})\big)^\perp$$

Proof. The flow of the proof is very similar to that of the previous theorem. Let $\boldsymbol{v}$ be a vector in the null space of $\boldsymbol{A}^T$. By definition of null space, we have that:

$$\label{eq:KXnAn1zBNEY5lnLeoDg} \boldsymbol{A}^T\boldsymbol{v}=\boldsymbol{0}$$

Let $\boldsymbol{w}$ be a vector in the column space of $\boldsymbol{A}$. This means that $\boldsymbol{w}$ can be obtained like so:

$$\label{eq:aK9ar7a9TAo79VTnpME} \boldsymbol{Ac}=\boldsymbol{w}$$

where $\boldsymbol{c}$ is some vector of real coefficients. Now, we take the dot product of $\boldsymbol{v}$ and $\boldsymbol{w}$ to get:

\begin{align*} \boldsymbol{v}\cdot\boldsymbol{w}&= \boldsymbol{v}^T\boldsymbol{w}\\ &=\boldsymbol{v}^T\boldsymbol{Ac}\\ &=(\boldsymbol{A}^T\boldsymbol{v})^T\boldsymbol{c}\\ &=\boldsymbol{0}^T\boldsymbol{c}\\ &=\boldsymbol{0}\cdot\boldsymbol{c}\\ &=0 \end{align*}

Because the dot product of $\boldsymbol{v}$ and $\boldsymbol{w}$ is zero, we have that $\boldsymbol{v}$ and $\boldsymbol{w}$ are perpendicular. This completes the proof.

Theorem.

# Sum of the dimensions of a subspace and its orthogonal complement

If $W$ is a subspace of $\mathbb{R}^n$ and $W^\perp$ is the orthogonal complement of $W$, then:

$$\dim(W)+\dim(W^\perp)=n$$

Proof. Let $W$ be a subspace of $\mathbb{R}^n$ with basis $\{\boldsymbol{v}_1,\boldsymbol{v}_2,\cdots,\boldsymbol{v}_k\}$. By definition, the dimension of a vector space is equal to the number of basis vectors of the vector space. Because there are $k$ basis vectors, the dimension of $W$ is $k$, that is:

$$\dim(W)=k$$

Let's now find the dimension of the orthogonal complement $W^\perp$. Let $\boldsymbol{A}$ be a matrix whose columns are the basis vectors $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\cdots$, $\boldsymbol{v}_k$. Since these basis vectors are in $\mathbb{R}^n$, the shape of $\boldsymbol{A}$ is $n\times{k}$.

The column space of $\boldsymbol{A}$ is defined as the span of its column vectors, which in this case form a basis for $W$. Therefore, we have that:

$$\label{eq:CxMxt9DV4W7UnrqyqhF} \mathrm{col}(\boldsymbol{A})=W$$

Now, consider $\boldsymbol{A}^T$, which has the shape $k\times{n}$. We know from the rank-nullity theorem that:

$$\label{eq:ixU9gDtwWTO8kvYKsKm} \mathrm{rank}(\boldsymbol{A}^T)+ \mathrm{nullity}(\boldsymbol{A}^T)=n$$

By a previous theorem, $\mathrm{rank}(\boldsymbol{A}^T)=\mathrm{rank}(\boldsymbol{A})$. Therefore, \eqref{eq:ixU9gDtwWTO8kvYKsKm} becomes:

$$\label{eq:Hj97Hppg8vjxSxMMBV1} \mathrm{rank}(\boldsymbol{A})+ \mathrm{nullity}(\boldsymbol{A}^T)=n$$

By definition, the rank of $\boldsymbol{A}$ is equal to the dimension of the column space of $\boldsymbol{A}$, that is:

$$\label{eq:M2NyVgEP75ctpiIfOlb} \mathrm{rank}(\boldsymbol{A})= \dim(\mathrm{col}(\boldsymbol{A}))$$

Substituting \eqref{eq:CxMxt9DV4W7UnrqyqhF} into \eqref{eq:M2NyVgEP75ctpiIfOlb} gives:

$$\label{eq:Yew07DZOZcuyPGGvpHK} \mathrm{rank}(\boldsymbol{A})= \dim(W)$$

Next, the nullity of $\boldsymbol{A}^T$ is defined as the dimension of the null space of $\boldsymbol{A}^T$, that is:

$$\label{eq:D1CHaLGWUeqs0YusZQN} \mathrm{nullity}(\boldsymbol{A}^T) =\mathrm{dim}\big(\mathrm{nullspace}(\boldsymbol{A}^T)\big)$$

By the earlier theorem on the orthogonality of the null space and row space, we have that $\mathrm{nullspace}(\boldsymbol{A}^T) = \big(\mathrm{col}(\boldsymbol{A})\big)^\perp$. Therefore, \eqref{eq:D1CHaLGWUeqs0YusZQN} becomes:

$$\label{eq:VQN8hBfs8CMcu5grmae} \mathrm{nullity}(\boldsymbol{A}^T) =\mathrm{dim}\Big(\big(\mathrm{col}(\boldsymbol{A})\big)^\perp\Big)$$

Substituting \eqref{eq:CxMxt9DV4W7UnrqyqhF} into \eqref{eq:VQN8hBfs8CMcu5grmae} gives:

$$\label{eq:OnopLjC6zKOlzmHRX7R} \mathrm{nullity}(\boldsymbol{A}^T) =\mathrm{dim}(W^\perp)$$

Finally, substituting \eqref{eq:Yew07DZOZcuyPGGvpHK} and \eqref{eq:OnopLjC6zKOlzmHRX7R} into \eqref{eq:Hj97Hppg8vjxSxMMBV1} gives:

$$\dim(W)+\dim(W^\perp)=n$$

This completes the proof.
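The dimension count is easy to see numerically (our own NumPy sketch, mirroring the proof: $W^\perp=\mathrm{nullspace}(\boldsymbol{B}^T)$ for a matrix $\boldsymbol{B}$ whose columns form a basis of $W$):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 6, 2
B = rng.standard_normal((n, k))      # columns: a basis for W (generically independent)

# W_perp = nullspace(B^T); read its basis off the SVD of B^T
_, s, Vt = np.linalg.svd(B.T)
dim_W = int(np.sum(s > 1e-10))       # rank(B^T) = rank(B) = dim(W)
perp_basis = Vt[dim_W:].T            # columns: a basis for W_perp

print(dim_W, perp_basis.shape[1])         # 2 4
print(dim_W + perp_basis.shape[1] == n)   # True: dim(W) + dim(W_perp) = n
```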

Definition.

# Direct sum

Let $W$ and $W'$ be subspaces of a vector space $V$ whose intersection contains only the zero vector, that is, $W\cap{W'}=\{\boldsymbol{0}\}$. The direct sum of $W$ and $W'$, denoted as $W\oplus{W'}$, is defined as:

$$W\oplus{W'}= \{\boldsymbol{w}+\boldsymbol{w}' \;|\;\boldsymbol{w}\in{W}\text{ and }\boldsymbol{w}'\in{W'} \}$$
Theorem.

# Expressing a vector using a subspace and its orthogonal complement

If $W$ is a finite-dimensional subspace of the vector space $V$, then:

$$V=W\oplus{W^\perp}$$

This means that any vector in $V$ can be expressed as the sum of a vector in $W$ and a vector in $W^\perp$.

Proof. Let $W$ be a subspace of $\mathbb{R}^n$ and let $W^\perp$ be its orthogonal complement. By the previous theorem, we know the following:

$$\dim(W)+\dim(W^\perp)=n$$

Suppose we have the following:

• let $\{\boldsymbol{v}_1,\boldsymbol{v}_2,\cdots,\boldsymbol{v}_k\}$ be the basis for the subspace $W$.

• let $\{\boldsymbol{w}_1,\boldsymbol{w}_2,\cdots,\boldsymbol{w}_{n-k}\}$ be the basis for the subspace $W^\perp$.

Let $\boldsymbol{v}\in{W}$ and $\boldsymbol{w}\in{W^\perp}$. By definition, the basis vectors span their respective vector space, so we can express $\boldsymbol{v}$ and $\boldsymbol{w}$ as linear combinations of basis vectors:

\begin{align*} \boldsymbol{v}&=c_1\boldsymbol{v}_1+ c_2\boldsymbol{v}_2+\cdots+ c_k\boldsymbol{v}_k \\ \boldsymbol{w}&= d_1\boldsymbol{w}_1+ d_2\boldsymbol{w}_2+\cdots+ d_{n-k}\boldsymbol{w}_{n-k} \end{align*}

The vector $\boldsymbol{v}+\boldsymbol{w}$ is:

\begin{align*} \boldsymbol{v}+\boldsymbol{w}&=c_1\boldsymbol{v}_1+ c_2\boldsymbol{v}_2+\cdots+ c_k\boldsymbol{v}_k+ d_1\boldsymbol{w}_1+ d_2\boldsymbol{w}_2+\cdots+ d_{n-k}\boldsymbol{w}_{n-k} \end{align*}

Our goal now is to show that $\{\boldsymbol{v}_1,\boldsymbol{v}_2,\cdots, \boldsymbol{v}_k,\boldsymbol{w}_1,\boldsymbol{w}_2,\cdots ,\boldsymbol{w}_{n-k}\}$ is a basis for $V$. Let's start by checking that this is a linearly independent set using the definition of linear independence:

$$\label{eq:klHFt5KfS93i8SeHCAS} c_1\boldsymbol{v}_1+ c_2\boldsymbol{v}_2+\cdots+ c_k\boldsymbol{v}_k+ d_1\boldsymbol{w}_1+ d_2\boldsymbol{w}_2+\cdots+ d_{n-k}\boldsymbol{w}_{n-k}=\boldsymbol{0}$$

Let's move all the vectors in $W^\perp$ to the right-hand side:

$$\label{eq:UNgVP8tCvZUoKIyOKlW} c_1\boldsymbol{v}_1+ c_2\boldsymbol{v}_2+\cdots+ c_k\boldsymbol{v}_k= -d_1\boldsymbol{w}_1- d_2\boldsymbol{w}_2-\cdots- d_{n-k}\boldsymbol{w}_{n-k}$$

Here:

• the left-hand side is $\boldsymbol{v}$, which is a vector in $W$.

• the right-hand side is some vector in $W^\perp$.

We know from theoremlink that the only vector that resides in both $W$ and $W^\perp$ is the zero vector. Therefore, the left-hand side and the right-hand side of \eqref{eq:UNgVP8tCvZUoKIyOKlW} must be the zero vector, that is:

\begin{align*} c_1\boldsymbol{v}_1+ c_2\boldsymbol{v}_2+\cdots+ c_k\boldsymbol{v}_k&=\boldsymbol{0}\\ -d_1\boldsymbol{w}_1- d_2\boldsymbol{w}_2-\cdots- d_{n-k}\boldsymbol{w}_{n-k}&=\boldsymbol{0} \end{align*}

Multiplying both sides of the bottom equation by $-1$ gives:

\begin{align*} c_1\boldsymbol{v}_1+ c_2\boldsymbol{v}_2+\cdots+ c_k\boldsymbol{v}_k&=\boldsymbol{0}\\ d_1\boldsymbol{w}_1+ d_2\boldsymbol{w}_2+\cdots+ d_{n-k}\boldsymbol{w}_{n-k}&=\boldsymbol{0} \end{align*}

Now, $\{\boldsymbol{v}_1,\boldsymbol{v}_2,\cdots,\boldsymbol{v}_k\}$ is a basis for $W$, which means that the vectors in this set are linearly independent. By the definition of linear independence, $c_1$, $c_2$, $\cdots$, $c_k$ must be zero. Similarly, $\{\boldsymbol{w}_1,\boldsymbol{w}_2,\cdots, \boldsymbol{w}_{n-k}\}$ is a basis for $W^\perp$, which again means that the vectors in this set are linearly independent. Therefore, $d_1$, $d_2$, $\cdots$, $d_{n-k}$ must also be zero.

This means that the only way for the equality \eqref{eq:klHFt5KfS93i8SeHCAS} to hold is if all the coefficients on the left are zero. The set $S=\{\boldsymbol{v}_1,\boldsymbol{v}_2,\cdots,\boldsymbol{v}_k, \boldsymbol{w}_1,\boldsymbol{w}_2,\cdots,\boldsymbol{w}_{n-k}\}$ is therefore a linearly independent set. Since $S$ contains $n$ linearly independent vectors in $\mathbb{R}^n$, it follows that $S$ is a basis for $\mathbb{R}^n$.

Because $S$ is a basis for $\mathbb{R}^n$, any vector in $\mathbb{R}^n$ can be expressed as some linear combination of the vectors in $S$. Remember that the vectors in $S$ are simply the basis vectors of $W$ and $W^\perp$. Therefore, any vector $\boldsymbol{x}$ in $\mathbb{R}^n$ can be written as a sum of vectors in $W$ and $W^\perp$ like so:

$$\boldsymbol{x}=\boldsymbol{v}+\boldsymbol{w}$$

This completes the proof.
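The decomposition $\boldsymbol{x}=\boldsymbol{v}+\boldsymbol{w}$ can be computed explicitly: project $\boldsymbol{x}$ onto $W$ to get $\boldsymbol{v}$, and the leftover is $\boldsymbol{w}\in W^\perp$. A NumPy sketch of our own (the projection via the normal equations is a standard technique, not something stated in this guide):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 4, 2
A = rng.standard_normal((n, k))        # columns: a basis for W (generically independent)
x = rng.standard_normal(n)             # an arbitrary vector in R^4

# v = orthogonal projection of x onto W = col(A), via the normal equations
v = A @ np.linalg.solve(A.T @ A, A.T @ x)
w = x - v                              # the leftover lies in W_perp

print(bool(np.allclose(v + w, x)))     # True: x = v + w
print(bool(np.allclose(A.T @ w, 0)))   # True: w ⟂ every basis vector of W
```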

Theorem.

# Unique vector representation using subspace and its orthogonal complement

For any vector $\boldsymbol{x}\in{V}$, the representation $\boldsymbol{x}=\boldsymbol{v}+\boldsymbol{w}$ with $\boldsymbol{v}\in{W}$ and $\boldsymbol{w}\in{W}^\perp$ guaranteed by $V=W\oplus{W}^\perp$ is unique.

Proof. Let vector $\boldsymbol{x}\in{V}$, $\boldsymbol{v}_1,\boldsymbol{v}_2\in{W}$ and $\boldsymbol{w}_1,\boldsymbol{w}_2\in{W}^\perp$. Suppose that the representation is not unique, that is, there exist two distinct representations of $\boldsymbol{x}$ like so:

$$\label{eq:cf13KJaJeZ81VeQvrmM} \begin{aligned} \boldsymbol{x}&=\boldsymbol{v}_1+\boldsymbol{w}_1\\ \boldsymbol{x}&=\boldsymbol{v}_2+\boldsymbol{w}_2 \end{aligned}$$

Equating the two equations and rearranging gives:

$$\label{eq:iiHHgNssxJcb31Yzyx3} \boldsymbol{v}_2+\boldsymbol{w}_2=\boldsymbol{v}_1+\boldsymbol{w}_1 \;\;\;\;\;\;\;\;\Longleftrightarrow\;\;\;\;\;\;\;\; \boldsymbol{v}_1-\boldsymbol{v}_2=\boldsymbol{w}_2-\boldsymbol{w}_1$$

Since $W$ is a subspace and thus closed under addition and scalar multiplication, the vector $\boldsymbol{v}_1-\boldsymbol{v}_2\in{W}$. Similarly, $\boldsymbol{w}_2-\boldsymbol{w}_1\in{W}^\perp$.

By the earlier theorem, we know that the only vector that resides in both $W$ and $W^\perp$ is the zero vector. For the equality in \eqref{eq:iiHHgNssxJcb31Yzyx3} to hold, the left-hand side and the right-hand side must be the zero vector:

\begin{align*} \boldsymbol{v}_1-\boldsymbol{v}_2=\boldsymbol{0} \;\;\;\;\;\;\;&\Longleftrightarrow\;\;\;\;\;\;\; \boldsymbol{v}_1=\boldsymbol{v}_2\\ \boldsymbol{w}_2-\boldsymbol{w}_1=\boldsymbol{0} \;\;\;\;\;\;\;&\Longleftrightarrow\;\;\;\;\;\;\; \boldsymbol{w}_2=\boldsymbol{w}_1\\ \end{align*}

From \eqref{eq:cf13KJaJeZ81VeQvrmM}, we conclude that the two representations must be the same. This completes the proof.

Theorem.

# Orthogonal complement of the orthogonal complement

If $W$ is a subspace of $\mathbb{R}^n$ and $W^\perp$ is its orthogonal complement, then the orthogonal complement of $W^\perp$ is $W$. Mathematically, this translates to:

$$(W^\perp)^\perp=W$$

Proof. Let $\boldsymbol{x}\in(W^\perp)^\perp$. By the previous theorem, any vector in $\mathbb{R}^n$ can be expressed as the sum of a vector in $W$ and a vector in $W^\perp$. Choose $\boldsymbol{v}\in{W}$ and $\boldsymbol{w}\in{W}^\perp$ such that:

$$\label{eq:Sg6OytqGqOnnw6BEoyh} \boldsymbol{x}=\boldsymbol{v}+\boldsymbol{w}$$

Let's take the dot product with $\boldsymbol{w}$ on both sides to get:

$$\boldsymbol{x}\cdot\boldsymbol{w} =(\boldsymbol{v}+\boldsymbol{w})\cdot\boldsymbol{w}$$

We know that $\boldsymbol{x}\cdot\boldsymbol{w}=0$ because $\boldsymbol{x}\in(W^\perp)^\perp$ and $\boldsymbol{w}\in{W}^\perp$. Therefore, we get:

\begin{align*} 0&=(\boldsymbol{v}+\boldsymbol{w})\cdot\boldsymbol{w}\\ 0&=\boldsymbol{v}\cdot\boldsymbol{w}+\boldsymbol{w}\cdot\boldsymbol{w}\\ \end{align*}

We know that $\boldsymbol{v}\cdot\boldsymbol{w}=0$ because $\boldsymbol{v}\in{W}$ and $\boldsymbol{w}\in{W}^\perp$. Next, by a previous theorem, $\boldsymbol{w}\cdot\boldsymbol{w}= \Vert\boldsymbol{w}\Vert^2$. Therefore, we end up with:

$$0=\Vert\boldsymbol{w}\Vert^2$$

This implies that $\boldsymbol{w}$ must be equal to the zero vector. We now go back to \eqref{eq:Sg6OytqGqOnnw6BEoyh} to get:

$$\boldsymbol{x}=\boldsymbol{v}$$

Since $\boldsymbol{x}\in(W^\perp)^\perp$ was arbitrary and $\boldsymbol{x}=\boldsymbol{v}\in{W}$, we have $(W^\perp)^\perp\subseteq{W}$. Conversely, every vector in $W$ is orthogonal to every vector in $W^\perp$ by definition, so $W\subseteq(W^\perp)^\perp$. We conclude that $(W^\perp)^\perp=W$. This completes the proof.
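We can confirm this numerically by taking the orthogonal complement twice and checking that we recover the original subspace. The sketch below is our own (helper names `complement_basis` and `projector` are made up); two subspaces are equal exactly when their orthogonal projection matrices are equal:

```python
import numpy as np

def complement_basis(B, tol=1e-10):
    """Columns form a basis for the orthogonal complement of col(B),
    i.e. for nullspace(B^T)."""
    _, s, Vt = np.linalg.svd(B.T)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

def projector(B):
    """Orthogonal projection matrix onto col(B)."""
    Q, _ = np.linalg.qr(B)      # reduced QR: col(Q) = col(B)
    return Q @ Q.T

rng = np.random.default_rng(4)
W = rng.standard_normal((5, 2))                  # columns span a 2-dimensional W in R^5
W_perp_perp = complement_basis(complement_basis(W))

print(bool(np.allclose(projector(W), projector(W_perp_perp))))  # True: (W_perp)_perp = W
```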
