In this chapter, we learned quite a lot of practical concepts, like solving linear systems and inverting matrices. Now we can finally learn something more abstract and beautiful (in my opinion): the fundamental theorem of linear algebra. Here, we are only going to cover the first part of it, which is also called the Rank–Nullity theorem.

The name of this theorem is actually not universal. It was popularized by one of my favorite professors, Gilbert Strang, and echoes the name of the famous theorem in calculus, the fundamental theorem of calculus. The theorem itself is not hard to prove, so if you are interested, you can try to do it yourself. But before we dig into the theorem, we will quickly introduce two more concepts.

The first one is the rank of a matrix (or transformation). The rank of a matrix is defined as the dimension of its column space. For example:

$$A = \begin{bmatrix} 1 & 2\\ 3 & 4 \end{bmatrix}$$

The column space is the span of its columns. In this case, we have \(span \left( \begin{bmatrix} 1\\ 3 \end{bmatrix}, \begin{bmatrix} 2\\ 4 \end{bmatrix} \right)\). The two column vectors are linearly independent, so they span the entire 2D plane, and the dimension of the column space is 2. So we can say:

$$Rank(A) = 2$$
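If you want to check this numerically, NumPy can compute the rank for you (a small sketch; `numpy.linalg.matrix_rank` counts the nonzero singular values of the matrix):

```python
import numpy as np

# The matrix from the example above
A = np.array([[1, 2],
              [3, 4]])

# matrix_rank returns the dimension of the column space
print(np.linalg.matrix_rank(A))  # 2
```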

The second one is the nullity of a matrix (or transformation). The nullity of a matrix is defined as the dimension of its null space, also called its kernel. For example:

$$A = \begin{bmatrix} 2 & 1\\ 2 & 1 \end{bmatrix}$$

The null space of this transformation can be found by solving the equation:

$$A\pmb{x} = \pmb{0}$$

If you solve this equation, you will find that the set of solutions for \(\pmb{x}\) is \(span \left( \begin{bmatrix} 1\\ -2 \end{bmatrix} \right)\), so the dimension of the null space is 1. Therefore we can say:

$$Nullity(A) = 1$$
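One way to find the nullity numerically is to count the (near-)zero singular values of the matrix. A minimal sketch using only NumPy:

```python
import numpy as np

# The matrix from the example above
A = np.array([[2, 1],
              [2, 1]])

# Singular values of A; the number of (near-)zero ones is the nullity
s = np.linalg.svd(A, compute_uv=False)
nullity = int(np.sum(s < 1e-10))
print(nullity)  # 1
```

The tolerance `1e-10` is a hand-picked cutoff for "numerically zero"; in floating point, singular values of a rank-deficient matrix come out tiny rather than exactly zero.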

Now that we have finished our appetizer, let's move on to the main course. To understand the theorem better, we have to put it in the frame of linear transformations.

Let’s say we have a transformation matrix \(T\). This matrix transforms every vector in a vector space \(V\) into another vector space \(W\). So if you pick any vector in \(W\), say \(\pmb{w} \in W\), then there must be at least one vector \(\pmb{v}\) in the vector space \(V\) such that:

$$\pmb{w} = T(\pmb{v})$$

What we are interested in is the dimension. So what is the dimension of \(V\)? This is easy. If your matrix \(T\) is an \(m \times n\) matrix, it takes a vector of dimension \(n\) and outputs a vector of dimension \(m\). So the input vector space must have dimension \(n\).

How about the dimension of \(W\)? The vector space \(W\) is in a more complicated situation. You might think that, just as we found the dimension of \(V\), the dimension of \(W\) is simply \(m\). That is not true. For example:

$$A = \begin{bmatrix} 0 & 0\\ 0 & 0 \end{bmatrix}$$

This zero matrix maps every vector to the zero vector, which means our space \(W\) has dimension zero, even though \(m\) in this case is 2.

So what is it? Well, I think I already spoiled the answer at the beginning. Yes! You guessed it! It is the rank of the transformation. Why? Remember how the transformation matrix is defined? The columns of the transformation matrix are the basis vectors after transformation, so the span of the “transformed” basis vectors has to be the vector space \(W\).

There is another name for \(W\): for a linear transformation, the vector space \(W\) is called the image of the transformation. It is the “output” of that transformation.

Alright! We now know the dimensions of the vector spaces before and after the transformation; only one part of the theorem remains.

When we looked at the dimensions of \(V\) and \(W\), you may have noticed that the dimension of \(V\) is always greater than or equal to the dimension of \(W\). This comes from the fact that a linear transformation cannot map one vector to two different ones; just like a function, where one \(x\) value always gives you one \(y\) value, the domain can never be smaller than the range.

This is not particularly interesting, but one might ask another question: where did the “missing” dimensions go? If your transformation takes a 3D space into a 1D space, the remaining 2 dimensions actually go to the null space of that transformation! This is the last piece of our Rank–Nullity theorem:

$$Rank(T) + Nullity(T) = dim(V) = n$$
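We can verify the theorem numerically for the matrices we met earlier; rank plus nullity always equals the number of columns \(n\). A small sketch:

```python
import numpy as np

def rank_and_nullity(A):
    """Return (rank, nullity) of A; by rank-nullity, nullity = n - rank."""
    rank = np.linalg.matrix_rank(A)
    n = A.shape[1]          # dimension of the input space V
    return rank, n - rank

for A in [np.array([[1, 2], [3, 4]]),   # rank 2, nullity 0
          np.array([[2, 1], [2, 1]]),   # rank 1, nullity 1
          np.zeros((2, 2))]:            # rank 0, nullity 2
    rank, nullity = rank_and_nullity(A)
    assert rank + nullity == A.shape[1]
    print(rank, nullity)
```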

How is this useful in practice? Well, once we know the rank of a transformation, we can immediately tell the dimension of its null space, and this works even for a matrix we don’t know explicitly: as long as we know one of the two quantities, the Rank–Nullity theorem gives us the dimension of the image or of the null space.
