We have already seen several interesting things we can do with matrices and matrix multiplication. In this section, we will meet a very famous matrix. This matrix is usually introduced in the first class of linear algebra, but here we are going to take a rather special approach.

Assume we have a lovely matrix \(A\), where \(A\) has some interesting columns:

$$A = \begin{bmatrix} 2 & -1 & 1\\ 1 & 1 & 2\\ 1 & 2 & 3 \end{bmatrix}$$

The columns of matrix \(A\) have a relation: the first column plus the second column gives you the third column:

$$\begin{bmatrix} 2 \\ 1 \\ 1 \end{bmatrix}+\begin{bmatrix} -1\\ 1\\ 2 \end{bmatrix} = \begin{bmatrix} 1\\ 2\\ 3 \end{bmatrix}$$
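We can confirm this column relation numerically. A quick sketch with NumPy (the library choice is an assumption; any numeric tool works):

```python
import numpy as np

A = np.array([[2, -1, 1],
              [1,  1, 2],
              [1,  2, 3]])

# The third column is the sum of the first two columns.
print(np.array_equal(A[:, 0] + A[:, 1], A[:, 2]))  # → True
```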

What do we know about the columns of this matrix? We know that the real “meat” of the matrix is the first two columns, since the last column is a linear combination of the first two.

So, how about we deconstruct this matrix into two? The first matrix contains all the “meat” of the matrix (here “meat” just means a basis of the column space), and the second matrix records all the other information, such as the fact that the last column is the first column plus the second column.

Let’s do this!

The first matrix is easy! We just get rid of the last column and call it \(C\):

$$C = \begin{bmatrix} 2 & -1 \\ 1 & 1 \\ 1 & 2 \end{bmatrix}$$

Then we write the “bone” of this matrix using the basis \(C\).

How do we construct it? We can immediately tell that to get the first column:

$$\begin{bmatrix} 2 \\ 1 \\ 1 \end{bmatrix}$$

we need just one portion of

$$\begin{bmatrix} 2 \\ 1 \\ 1 \end{bmatrix}$$

and zero portion of

$$\begin{bmatrix} -1\\ 1\\ 2 \end{bmatrix}$$

In other words, one portion of the first basis vector and zero portions of the second:

$$R_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

To get the second column vector, we need zero portion of

$$\begin{bmatrix} 2 \\ 1 \\ 1 \end{bmatrix}$$

and one portion of

$$\begin{bmatrix} -1\\ 1\\ 2 \end{bmatrix}$$

In other words, zero portions of the first basis vector and one portion of the second:

$$R_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

Finally, to get the third column vector, we need one portion of

$$\begin{bmatrix} 2 \\ 1 \\ 1 \end{bmatrix}$$

and one portion of

$$\begin{bmatrix} -1\\ 1\\ 2 \end{bmatrix}$$

In other words, one portion of each basis vector:

$$R_3 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$

Now we put these three “bones” side by side as columns, and we call the combined matrix \(R\):

$$R = \begin{bmatrix} 1 & 0 & 1\\ 0 & 1 & 1 \end{bmatrix}$$

You can check that the matrix \(C\) multiplied by the matrix \(R\) is exactly the matrix \(A\)!

$$A = \begin{bmatrix} 2 & -1 & 1\\ 1 & 1 & 2\\ 1 & 2 & 3 \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ 1 & 1 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} 1 & 0 & 1\\ 0 & 1 & 1 \end{bmatrix} = CR$$
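It is worth verifying this product by machine as well. A minimal NumPy sketch (again, the library choice is an assumption):

```python
import numpy as np

C = np.array([[2, -1],
              [1,  1],
              [1,  2]])
R = np.array([[1, 0, 1],
              [0, 1, 1]])
A = np.array([[2, -1, 1],
              [1,  1, 2],
              [1,  2, 3]])

# C times R reproduces A exactly.
print(np.array_equal(C @ R, A))  # → True
```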

Here comes the cool part. Do you know why we name the “bone” matrix \(R\)? The \(R\) stands for row. If you have learned Gaussian elimination before, you will notice that this matrix looks like the result of Gaussian elimination, the row echelon form! Specifically, in this case, we get the reduced row echelon form of matrix \(A\) (with the all-zero rows dropped).
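You can check this claim symbolically. A sketch using SymPy (an assumed choice; its `rref` method returns the reduced row echelon form together with the pivot column indices):

```python
from sympy import Matrix

A = Matrix([[2, -1, 1],
            [1,  1, 2],
            [1,  2, 3]])

# rref is the reduced row echelon form; pivots are the pivot column indices.
rref, pivots = A.rref()
print(rref)    # Matrix([[1, 0, 1], [0, 1, 1], [0, 0, 0]])
print(pivots)  # (0, 1)
```

The first two rows of the result are exactly our matrix \(R\); the third row is all zeros because \(A\) has only two independent rows.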

We can do this for any matrix. When you do it on a square matrix with all independent columns (and rows), the reduced row echelon form (RREF) is just the identity matrix.
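A quick check of the full-rank case, again with SymPy (the example matrix below is my own invertible choice, not one from the text):

```python
from sympy import Matrix, eye

# A square matrix with independent columns (its determinant is -1, so it is invertible).
B = Matrix([[2, -1, 0],
            [1,  1, 2],
            [1,  2, 3]])

# The RREF of an invertible square matrix is the identity.
print(B.rref()[0] == eye(3))  # → True
```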

So, not only can we combine two matrices into one, we can also decompose one matrix into several. This specific example is called the CR factorization; we will meet more factorizations later, such as the LU decomposition and diagonalization. Each of them plays an important role in linear algebra.
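The whole procedure above can be sketched as one small routine: take the pivot columns of \(A\) as \(C\), and the nonzero rows of the RREF as \(R\). This is a minimal sketch with SymPy (the helper name `cr_factorization` is my own):

```python
from sympy import Matrix

def cr_factorization(A):
    """Split A into C (the pivot columns of A) and R (the nonzero rows of rref(A))."""
    rref, pivots = A.rref()
    C = A[:, list(pivots)]       # basis of the column space
    R = rref[:len(pivots), :]    # drop the all-zero rows
    return C, R

A = Matrix([[2, -1, 1],
            [1,  1, 2],
            [1,  2, 3]])
C, R = cr_factorization(A)
print(C * R == A)  # → True
```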

Chapter 6 Matrix and Matrix multiplication