Linear Algebra and Systems of Equations#
- Linear algebra can help you solve systems of linear equations, which makes it very useful in many domains.
- Here is an example of a system of equations:
  - Let’s say you have a list of unknown variables: x, y, z
  - and equations relating them: $$ \begin{aligned} 2x + 5y + 3z &= -3 \\ 4x + 0y + 8z &= 0 \\ 1x + 3y + 0z &= 2 \end{aligned} $$
- A set of equations like the one above is called a linear system of equations. It looks a lot like matrix-vector multiplication, so we can represent it as: $$ \overbrace{ \begin{bmatrix} 2 & 5 & 3 \\ 4 & 0 & 8 \\ 1 & 3 & 0 \end{bmatrix} }^{A} \overbrace{ \begin{bmatrix} x \\ y \\ z \end{bmatrix} }^{\vec{x}} = \overbrace{ \begin{bmatrix} -3 \\ 0 \\ 2 \end{bmatrix} }^{\vec{v}} $$
- So, we can write it as: $$ A \vec{x} = \vec{v} $$
- This means we are looking for a vector \(\vec{x}\) which, after the transformation, lands on the vector \(\vec{v}\) (a small numerical sketch follows below).
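A minimal sketch of this setup, assuming NumPy is available (the array contents just mirror the system above):

```python
import numpy as np

# Coefficient matrix A and target vector v from the system above
A = np.array([[2, 5, 3],
              [4, 0, 8],
              [1, 3, 0]], dtype=float)
v = np.array([-3, 0, 2], dtype=float)

# For any candidate x = [x, y, z], A @ x evaluates the left-hand side of
# all three equations at once, so "A x = v" is exactly the original system.
x_guess = np.array([1.0, -1.0, 0.0])
print(A @ x_guess)  # [-3.  4. -2.] -> this guess satisfies only the first equation
print(v)            # [-3.  0.  2.] -> the right-hand side we want to hit
```

Solving the system then amounts to finding the \(\vec{x}\) for which `A @ x` equals `v`, which is what the next section is about.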
Solving Linear Equations#
Consider the 2D case of \(A \vec{x} = \vec{v}\). There are two cases:
- Case 1: A keeps things 2D, i.e. A has a non-zero determinant: \(det(A) \neq 0\)
- Case 2: A squishes things to a lower dimension, like a line or a point, i.e. A has zero determinant: \(det(A) = 0\). The determinant check is sketched below.
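A minimal sketch of that check, assuming NumPy is available:

```python
import numpy as np

A = np.array([[2.0, 5.0],
              [4.0, 10.0]])  # second row is 2x the first, so A squishes 2D onto a line

# np.linalg.det computes the determinant; a (near-)zero value means case 2
print(np.linalg.det(A))                   # ~0.0
print(np.isclose(np.linalg.det(A), 0.0))  # True -> A has no inverse
```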
Case 1: \(det(A) \neq 0\)#
In this case, there will always be one and only one vector \(\vec{x}\) which, after the transformation, lands on the vector \(\vec{v}\).
Inverse transformation: A transformation which reverses the effect of another transformation.
The inverse of A undoes exactly what A does, and it is denoted as \(A^{-1}\).
If you do \(A^{-1} A\), you get the matrix which basically does nothing, called the identity matrix, denoted as \(I\). Example:
$$ I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} $$
So, we can write: \(A^{-1} A = I\)
Note: The reason I does nothing is that its columns are just the basis vectors, left unchanged.
Note: Matrix multiplication is not commutative in general, but the inverse works from both sides: \(A A^{-1} = A^{-1} A = I\)
Once we find this inverse (which, in practical applications, is done using computers), we can solve for \(\vec{x}\) as: $$ \begin{aligned} A \vec{x} &= \vec{v} \\ A^{-1} A \vec{x} &= A^{-1} \vec{v} \\ I \vec{x} &= A^{-1} \vec{v} \\ \vec{x} &= A^{-1} \vec{v} \end{aligned} $$
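Numerically, the same chain of steps looks like this (a sketch assuming NumPy; in practice `np.linalg.solve` is preferred over forming \(A^{-1}\) explicitly, because it is faster and more numerically stable):

```python
import numpy as np

A = np.array([[2, 5, 3],
              [4, 0, 8],
              [1, 3, 0]], dtype=float)
v = np.array([-3, 0, 2], dtype=float)

A_inv = np.linalg.inv(A)                  # the inverse transformation A^{-1}
print(np.allclose(A_inv @ A, np.eye(3)))  # True: A^{-1} A = I
print(np.allclose(A @ A_inv, np.eye(3)))  # True: A A^{-1} = I as well

x = A_inv @ v                             # x = A^{-1} v
print(np.allclose(A @ x, v))              # True: x lands on v after the transformation
print(np.allclose(x, np.linalg.solve(A, v)))  # True: solve() gives the same answer
```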
Case 2: \(det(A) = 0\)#
- In this case, there is no inverse transformation, because A squishes things to a lower dimension, and you cannot unsquish it back.
- Similarly, for the 3D case, if A squishes things to a plane or a line, there is no inverse transformation.
- A solution can still exist if you are lucky enough that the vector \(\vec{v}\) happens to lie on that line or plane (see the sketch below).
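A minimal sketch of this situation, assuming NumPy: a rank-deficient A has no inverse, but the least-squares routine can still tell you whether \(\vec{v}\) lies in its output line.

```python
import numpy as np

# A squishes all of 2D onto the line spanned by [1, 2], so det(A) = 0 and there is no inverse
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

v_on_line  = np.array([3.0, 6.0])  # lies on that line -> solutions exist
v_off_line = np.array([3.0, 5.0])  # does not          -> no exact solution

for v in (v_on_line, v_off_line):
    # lstsq returns the x minimising ||A x - v||; if A x reproduces v exactly,
    # then v was in the column space and the system is solvable
    x, *_ = np.linalg.lstsq(A, v, rcond=None)
    print(v, "solvable:", np.allclose(A @ x, v))
```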
Rank, Column Space and Null Space#
- We have specific terms to describe the squishing effect of A.
- The rank of a matrix A is the dimension of the space that A maps to.
- When the output of a transformation is a line (1D), the rank is 1.
- When the output of a transformation is a plane (2D), the rank is 2.
- When the output of a transformation is all of 3D space (3D), the rank is 3.
- The word rank means the same as dimension in this context.
- The set of all possible outputs for your matrix is called the column space of the matrix.
- So, the set of all possible outputs \(A \vec{v}\) is the column space of A. The columns of the matrix A tell you where the basis vectors land after the transformation, and the span of those transformed basis vectors gives you all possible outputs. In other words, the column space is the span of the columns of A.
- A more precise definition of rank is that it's the number of dimensions in the column space.
- When this rank is as high as possible, the matrix is called full rank.
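The rank can be computed directly; a minimal sketch, assuming NumPy:

```python
import numpy as np

full_rank = np.array([[2.0, 5.0, 3.0],
                      [4.0, 0.0, 8.0],
                      [1.0, 3.0, 0.0]])

rank_two = np.array([[1.0, 0.0, 1.0],
                     [0.0, 1.0, 1.0],
                     [1.0, 1.0, 2.0]])  # third column = first column + second column

print(np.linalg.matrix_rank(full_rank))  # 3 -> full rank, output is all of 3D space
print(np.linalg.matrix_rank(rank_two))   # 2 -> output is squished onto a plane
```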
NOTE: The zero vector will always be included in the column space, because a linear transformation must keep the origin fixed.
- Now, we can see that for matrices which are not full rank, many vectors can land on the origin after the transformation. For example: in 2D, if a transformation squishes everything onto a line, then there is a whole line of vectors which fall on the origin after the transformation.
- Similarly, if a 3D transformation squishes things onto a plane, then there is a whole line of vectors which fall on the origin after the transformation.
- If a 3D transformation squishes things onto a line, then there is a whole plane of vectors which fall on the origin after the transformation.
- This set of all vectors which fall on the origin after the transformation is called the null space or kernel of the matrix.
- When \(\vec{v}\) happens to be the zero vector: $$ A \vec{x} = \vec{0} $$ then the null space gives you all of the possible solutions to this equation.
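A basis for the null space can be computed numerically; a minimal sketch, assuming SciPy is available (`scipy.linalg.null_space` returns an orthonormal basis of the null space):

```python
import numpy as np
from scipy.linalg import null_space

# Rank-1 matrix: it squishes 3D onto a line, so the null space is a whole plane
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [3.0, 6.0, 9.0]])

N = null_space(A)   # columns form an orthonormal basis of the null space
print(N.shape)      # (3, 2): two basis vectors -> a plane of solutions to A x = 0

# Any linear combination of those basis vectors solves A x = 0
x = 1.5 * N[:, 0] - 2.0 * N[:, 1]
print(np.allclose(A @ x, 0))  # True
```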