**Linear Algebra**

# Gaussian Elimination

Mar 5, 2023


# Algebraic approach to solving simultaneous equations

Consider the following system of linear equations:

One way of solving this is to algebraically manipulate the given equations such that we are left with an equation with one unknown variable. Once we solve for one variable, the other variable can easily be found through substitution. Let's demonstrate this.

We multiply the top equation by $2$ and the bottom equation by $3$ to get:

Even though we have modified the original set of linear equations, multiplying an equation by a nonzero constant like this does not change the solution of the system.

Now, subtract the bottom equation from the top equation:

Substituting $x=1$ into the top equation in \eqref{eq:kJIkIimOl75yQKMMHL4} gives:

The solution to our linear system is therefore $x=1$ and $y=2$.
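Since the original pair of equations is not reproduced here, the same by-hand procedure can be sketched in code using a hypothetical stand-in system, $3x+2y=7$ and $2x+3y=8$, chosen so that it has the same solution $x=1$ and $y=2$ and follows the same steps:

```python
from fractions import Fraction as F

# Hypothetical system (the original equations are not shown above):
#   3x + 2y = 7
#   2x + 3y = 8
top = [F(3), F(2), F(7)]
bottom = [F(2), F(3), F(8)]

# Multiply the top equation by 2 and the bottom by 3 so that
# the x-coefficients match:
top2 = [2 * v for v in top]        # 6x + 4y = 14
bottom2 = [3 * v for v in bottom]  # 6x + 9y = 24

# Subtract the bottom equation from the top to eliminate x:
diff = [t - b for t, b in zip(top2, bottom2)]  # 0x - 5y = -10
y = diff[2] / diff[1]

# Back-substitute y into the original top equation, 3x + 2y = 7:
x = (top[2] - top[1] * y) / top[0]
print(x, y)  # 1 2
```

Exact fractions are used so that the intermediate divisions introduce no rounding error.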

# Systematic approach to solving simultaneous equations (Gaussian elimination)

Let's now take another approach to solve our linear system called Gaussian elimination. We will cover the details of this algorithm later, but for now, we will simply go over its steps. We first express the linear system in the following matrix form:

## Augmented matrices and elementary row operations

Next, we rewrite \eqref{eq:EAGQ6wW6J7xoNf93euG} as an augmented matrix, which is a matrix that combines the matrix on the left and the vector on the right-hand side:

As you can see, the augmented matrix leaves out the variables and captures the essence of our system of linear equations. Augmented matrices can therefore be translated back to the matrix form \eqref{eq:EAGQ6wW6J7xoNf93euG} or the original system of linear equations \eqref{eq:kJIkIimOl75yQKMMHL4}.

Recall from earlier that we multiplied the top row by $2$ and the bottom row by $3$ when solving for the variables. We can also perform the same operations on the augmented matrix:

This is equivalent to the simultaneous equation \eqref{eq:QvqY2GDyN0GsPJ24Hhz}. Such operations that do not alter the solutions of the system are called elementary row operations. Another example of an elementary row operation is interchanging the rows like so:

This clearly does not affect the solutions because the ordering of the linear equations does not matter.
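As a sketch, the three elementary row operations can be written as small helper functions acting on an augmented matrix stored as a list of rows. The matrix at the end is a hypothetical one (for the system $3x+2y=7$, $2x+3y=8$), since the equations above are not reproduced here:

```python
from fractions import Fraction

# The three elementary row operations, acting in place on an augmented
# matrix stored as a list of rows. None of them changes the solution set.

def swap(M, i, j):
    """Interchange rows i and j."""
    M[i], M[j] = M[j], M[i]

def scale(M, i, c):
    """Multiply row i by a nonzero scalar c."""
    assert c != 0  # scaling by 0 would destroy information
    M[i] = [c * v for v in M[i]]

def add_multiple(M, i, j, c):
    """Replace row i with row i + c * row j."""
    M[i] = [a + c * b for a, b in zip(M[i], M[j])]

# Hypothetical augmented matrix for 3x + 2y = 7, 2x + 3y = 8:
M = [[Fraction(3), Fraction(2), Fraction(7)],
     [Fraction(2), Fraction(3), Fraction(8)]]
scale(M, 0, 2)              # top row times 2
scale(M, 1, 3)              # bottom row times 3
add_multiple(M, 0, 1, -1)   # subtract bottom row from top row
print(M[0])                 # a row encoding -5y = -10
```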

Let's now manipulate and simplify our augmented matrix:

Note the following:

for the second step, we subtracted the bottom row from the top row. Again, this does not alter the solution set and is also considered to be an elementary row operation.

for the third step, we performed yet another elementary row operation of dividing the top row by $6$ and the bottom row by $-6$.

Notice how we have a zero term in the middle. This means that we have managed to solve for $x$ because:

However, when simplifying an augmented matrix, we often don't stop here and we keep simplifying until the augmented matrix looks like the following:

The augmented matrix here is even simpler than \eqref{eq:cO6nps6cg0oOrhWvDSS} because it directly gives us the solution:

This form of the augmented matrix is called the reduced row echelon form, which we formally define below.

# Reduced row echelon form of a matrix

Let $\boldsymbol{A}$ be a matrix. The reduced row echelon form of $\boldsymbol{A}$, often denoted as $\mathrm{rref}(\boldsymbol{A})$, has the following properties:

unless the row consists entirely of $0$s, the row must begin with a leading value of $1$, called a leading $1$.

the leading $1$ in the $(i+1)$-th row must be on the right-hand side of the leading $1$ in the $i$-th row.

rows with all $0$s are grouped at the bottom.

every column that contains a leading $1$ has $0$s in all of its other entries.

If only the first three properties are met, then the form is called a row echelon form.

Some textbooks are more lenient with the structure of the row echelon form - they allow leading coefficients to be **any nonzero value** and not just $1$. We will stick with this lenient version of the row echelon form.
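The four properties can be checked mechanically. Below is a minimal sketch of such a check (the function `is_rref` is our own helper, not a library routine), using the strict definition where leading values must be $1$:

```python
def is_rref(M):
    """Check the four reduced row echelon form properties for a
    matrix given as a list of rows."""
    lead_cols = []
    seen_zero_row = False
    for row in M:
        nonzero = [j for j, v in enumerate(row) if v != 0]
        if not nonzero:               # all-zero rows must sit at the bottom
            seen_zero_row = True
            continue
        if seen_zero_row:             # a nonzero row below a zero row
            return False
        j = nonzero[0]
        if row[j] != 1:               # each nonzero row must start with 1
            return False
        if lead_cols and j <= lead_cols[-1]:  # leading 1s must move right
            return False
        lead_cols.append(j)
    # each column containing a leading 1 must be zero everywhere else
    for i, j in enumerate(lead_cols):
        for k, row in enumerate(M):
            if k != i and row[j] != 0:
                return False
    return True

print(is_rref([[1, 0, 3], [0, 1, -2]]))   # True
print(is_rref([[2, 0, 3], [0, 1, -2]]))   # False: leading value is 2, not 1
```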

## Non-reduced row echelon form (1)

Consider the following augmented matrix:

Why is this not in reduced row echelon form?

Solution. The augmented matrix is in row echelon form because the leading value of each nonzero row is strictly to the right of the leading value of the previous row. However, it is not in reduced row echelon form because the first row does not begin with a leading $1$. Dividing the top row by $2$ gives us:

This is still not in reduced row echelon form because the second column contains a leading $1$ as well as another non-zero value. We can multiply the second row by $2$ and then subtract the bottom row from the top row to get the reduced row echelon form:

## Non-reduced row echelon form (2)

Consider the following augmented matrix:

Why is this not in row echelon form?

Solution. This is not in row echelon form because the leading $1$ in the first row occurs on the right-hand side of the leading $1$ in the second row. We can swap the rows twice to get the (reduced) row echelon form:

Now, given any two successive rows, the leading $1$ of the latter row appears on the right-hand side of the leading $1$ of the row above it.

# Gaussian Elimination (1)

Consider the following system of linear equations in matrix form:

Find the reduced row echelon form and solve the system.

Solution. The augmented matrix is:

We now perform a set of elementary row operations to simplify this into the reduced row echelon form. We will denote the first, second and third rows as $\boldsymbol{r}_1$, $\boldsymbol{r}_2$ and $\boldsymbol{r}_3$ respectively. Note that these rows refer to the rows of the latest form of the augmented matrix.

We will now introduce the Gaussian elimination algorithm, which is a systematic procedure to find the reduced row echelon form of a given augmented matrix. The Gaussian elimination algorithm consists of two stages:

forward phase - make all the values below the leading $1$s zero.

backward phase - make all the values above the leading $1$s zero (apart from the last column).

Once we find the reduced row echelon form, we can then easily find the solutions.
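The two phases can be sketched as a short program. This is a minimal implementation of the general procedure, not the exact sequence of operations in the worked example, and it uses exact fractions to avoid rounding issues:

```python
from fractions import Fraction

def rref(M):
    """Reduce an augmented matrix (list of rows) to reduced row
    echelon form via the forward and backward phases."""
    M = [[Fraction(v) for v in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    pivots = []
    # Forward phase: create a leading 1 in each pivot column and
    # zero out every entry below it.
    for col in range(cols - 1):
        r = next((i for i in range(pivot_row, rows) if M[i][col] != 0), None)
        if r is None:
            continue                                      # no pivot here
        M[pivot_row], M[r] = M[r], M[pivot_row]           # swap rows
        lead = M[pivot_row][col]
        M[pivot_row] = [v / lead for v in M[pivot_row]]   # scale to leading 1
        for i in range(pivot_row + 1, rows):
            f = M[i][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[pivot_row])]
        pivots.append((pivot_row, col))
        pivot_row += 1
    # Backward phase: zero out every entry above each leading 1.
    for prow, pcol in reversed(pivots):
        for i in range(prow):
            f = M[i][pcol]
            M[i] = [a - f * b for a, b in zip(M[i], M[prow])]
    return M

# Hypothetical 3-variable system (the matrices above are not reproduced):
#   x + 2y + z = 8,   2x + y + z = 7,   x + y + 2z = 9
print(rref([[1, 2, 1, 8], [2, 1, 1, 7], [1, 1, 2, 9]]))
```

On this hypothetical system the result is the identity on the left with the solution column $x=1$, $y=2$, $z=3$ on the right.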

Let's first carry out the forward phase. We swap $\boldsymbol{r}_1$ and $\boldsymbol{r}_2$ so that we get a leading $1$ for the first row:

Because the goal of the forward phase is to make all the values below the leading $1$s zero, we must convert the following red terms to zero:

We can achieve this by $3\boldsymbol{r}_1-\boldsymbol{r}_2$, that is, multiplying the first row by $3$ and then subtracting the second row:

Again, $3\boldsymbol{r}_1-\boldsymbol{r}_2$ is an elementary row operation and does not change the solution of our system! In essence, we are simplifying the system step-by-step until we end up with a form that allows us to directly find the solution!

Next, to make the red term become $0$, we perform $2\boldsymbol{r}_1-\boldsymbol{r}_3$ to get:

Notice how we have $0$s under the leading $1$ in the first row - in a sense, we have **eliminated** those numbers. We are done with the first column, so let's focus on the following sub-matrix in green:

Typically, we would divide the second row by $5$ to get a leading $1$, but strictly speaking, we do not need to be too concerned about getting leading $1$s just yet. What's more important is to get a $0$ below the value $5$, which we achieve by performing $3\boldsymbol{r}_2-5\boldsymbol{r}_3$ to get:

We could divide the second row by $5$ to get a leading $1$, which would conclude the forward phase since we would have $0$s below all leading $1$s. However, we won't divide by $5$ just yet because that would introduce fractions into the second row. Don't worry too much about getting the leading $1$s for now - we can always divide a row by its first value to produce a leading $1$ at any time!

We now begin with the backward phase. In this phase, the objective is to convert the following red terms into zeros:

Remember, the forward phase guarantees that the last row has the greatest number of leading $0$s. Therefore, we can use this last row to convert the following red terms into zeros:

We perform $\boldsymbol{r}_2-2\boldsymbol{r}_3$ to get:

We perform $\boldsymbol{r}_1-2\boldsymbol{r}_3$ to get:

Great, what's left is to eliminate the $2$ on top of the $5$. Before we do so, let's first divide the second row by $5$ to get a leading $1$:

Finally, we perform $\boldsymbol{r}_1-2\boldsymbol{r}_2$ to get:

What we have now is the reduced row-echelon form of the augmented matrix!

Once we have the reduced row echelon form, solving the system is a piece of cake 🍰. Converting the augmented matrix back into a system of linear equations:

Great, we're done!
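In practice, we rarely perform these eliminations by hand: library routines such as NumPy's `numpy.linalg.solve` carry out Gaussian elimination (via an LU factorization) for us. The system below is a hypothetical one, since the matrices in this example are not reproduced here:

```python
import numpy as np

# Hypothetical 3-variable system with solution x=1, y=2, z=3:
#   x + 2y + z = 8,   2x + y + z = 7,   x + y + 2z = 9
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
b = np.array([8.0, 7.0, 9.0])

# solve() performs an LU factorization with partial pivoting,
# i.e. Gaussian elimination, under the hood.
x = np.linalg.solve(A, b)
print(x)  # approximately [1. 2. 3.]
```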

# Gaussian Elimination (2)

Consider the following system of linear equations:

Use Gaussian Elimination to solve the system.

Solution. The augmented matrix is:

Let's now perform a series of elementary row operations to obtain the reduced row echelon form:

The elementary row operation taken for each step is as follows:

$(1)$: $3\boldsymbol{r}_1-2\boldsymbol{r}_2$.

$(2)$: $\boldsymbol{r}_1-\boldsymbol{r}_2$.

$(3)$: $\boldsymbol{r}_2-5\boldsymbol{r}_3$.

This concludes the forward phase. Next, let's simplify the last row by dividing by $7$ to get:

We now begin with the backward phase:

The elementary row operations taken are:

$(5)$: $\boldsymbol{r}_1+3\boldsymbol{r}_3$.

$(6)$: $\boldsymbol{r}_1-\boldsymbol{r}_2$.

$(7)$: $\boldsymbol{r}_1/2$ and $\boldsymbol{r}_2/10$.

Finally, we perform $\boldsymbol{r}_1-2\boldsymbol{r}_2$ to get our reduced row echelon form:

Converting this back into a linear system gives us the solution:

We're done!

# Using row echelon form to check for the existence of a solution

The row echelon form, rather than the reduced version, is sufficient to check for the existence of solutions. In other words, the row echelon form determines whether the system is consistent or inconsistent.

The reduced row echelon form is better for computing the actual solutions but if all we want to do is to check that a solution exists, then the row echelon form is sufficient.

## Consistent systems

Consider the following row echelon form of some augmented matrix:

Note that we colored the last column to emphasize that these values are not coefficients. We can already tell that a solution exists, that is, the system is consistent, because:

the last row gives us $z$.

substituting $z$ into the second row gives us $y$.

substituting $y$ and $z$ into the first equation gives us $x$.

Similarly, consider the following row echelon form:

The last row is all zeros, which means that we are left with $2$ equations and $3$ unknowns. Since one variable is free to vary, the system is consistent and has infinitely many solutions!

## Inconsistent systems

Consider the following row echelon form of some augmented matrix:

In this case, a solution does not exist because the last row leads to a contradiction of $0=3$. In other words, this system of linear equations is inconsistent.
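The three cases above (unique solution, infinitely many solutions, no solution) can be read off a row echelon form mechanically. Here is a minimal sketch; `classify` is our own hypothetical helper, and the matrices passed to it are illustrative:

```python
def classify(M):
    """Given an augmented matrix in row echelon form, report whether
    the system is inconsistent, has a unique solution, or has
    infinitely many solutions. The last column holds the constants."""
    n_unknowns = len(M[0]) - 1
    nonzero_rows = 0
    for row in M:
        if any(v != 0 for v in row[:-1]):
            nonzero_rows += 1
        elif row[-1] != 0:
            # a row of zero coefficients with a nonzero constant,
            # e.g. 0 = 3: a contradiction
            return "inconsistent"
    if nonzero_rows == n_unknowns:
        return "unique solution"
    return "infinitely many solutions"  # at least one free variable

print(classify([[1, 2, 3], [0, 1, 2]]))  # unique solution
print(classify([[1, 2, 3], [0, 0, 0]]))  # infinitely many solutions
print(classify([[1, 2, 3], [0, 0, 3]]))  # inconsistent
```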

## Showing that a system is consistent (1)

Consider the following system of linear equations:

Show that the system is consistent.

Solution. The augmented matrix of the system is:

Let's obtain the row echelon form:

Because we can solve for all the unknowns, we conclude that the system is consistent with a unique solution.

## Showing that a system is consistent (2)

Consider the following simultaneous equations:

Show that the system is consistent.

Solution. The row echelon form is:

We get all zeros for the last row, which means that we are left with one equation with two unknowns. Therefore, the system is consistent with infinitely many solutions.

## Showing that a system is inconsistent

Consider the following system of linear equations:

Show that the system is inconsistent.

Solution. The row echelon form is:

Because we get a contradiction ($0=1$), we conclude that the system is inconsistent and has no solutions.

# Reduced row echelon form of homogeneous linear system

If $\boldsymbol{B}$ is the reduced row echelon form of some matrix $\boldsymbol{A}$, then solutions to the homogeneous linear system $\boldsymbol{Ax}=\boldsymbol{0}$ and $\boldsymbol{Bx}=\boldsymbol{0}$ are the same.

Proof. Suppose the augmented matrix of the homogeneous linear system $\boldsymbol{Ax}=\boldsymbol{0}$ is:

Performing elementary row operations on this augmented matrix:

does not affect the rightmost column, whose entries are all zeros. For instance, multiplying a row by a scalar will keep that column all zeros. This means that we can simply focus on row-reducing the coefficient matrix instead of the augmented matrix.

does not change the solution of the system.

Since the reduced row echelon form $\boldsymbol{B}$ of $\boldsymbol{A}$ is obtained by performing a series of elementary row operations on $\boldsymbol{A}$, we know that $\boldsymbol{Ax}=\boldsymbol{0}$ and $\boldsymbol{Bx}=\boldsymbol{0}$ share the same solutions. This completes the proof.
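A small sanity check of this fact: for a hypothetical rank-$1$ matrix $\boldsymbol{A}$ whose reduced row echelon form $\boldsymbol{B}$ can be computed by hand, any solution of $\boldsymbol{Bx}=\boldsymbol{0}$ also solves $\boldsymbol{Ax}=\boldsymbol{0}$:

```python
# Hypothetical example: A has rank 1 (its second row is twice its first),
# so its reduced row echelon form B can be read off by hand.
A = [[1, 2, 3],
     [2, 4, 6]]
B = [[1, 2, 3],   # rref(A): subtract 2 * row 1 from row 2
     [0, 0, 0]]

def matvec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

# Two independent solutions of Bx = 0, read off from the free variables
# y and z; each one also solves Ax = 0:
for x in ([-2, 1, 0], [-3, 0, 1]):
    assert matvec(B, x) == [0, 0]
    assert matvec(A, x) == [0, 0]
print("Ax = 0 and Bx = 0 share the same solutions")
```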