VECTORS
Span: The span of a set of vectors is the kind of "space" we can explore by taking all linear combinations of the original
vectors.
If a vector doesn't help the other vectors of the same dimension reach their maximum span, then these vectors are called
linearly dependent.
Ex- I am a student in the maths domain. One friend is also in the maths domain, so our conversation will be limited to the
maths domain. My other friend is in the physics domain; with him I can discuss all possible use cases of maths in physics and vice
versa, and we unlock a higher-dimensional space comprising all possible linear combinations of Maths + Physics.
If the vectors are linearly independent, their span fills the largest possible space; if they are linearly dependent, their span is limited to a
subspace of that larger space.
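A quick numpy sketch of this idea (the vectors below are illustrative): stacking vectors into a matrix and taking its rank counts how many of them are actually independent, i.e. how many contribute to the span.

import numpy as np

v1 = np.array([1.0, 2.0])
v2 = np.array([2.0, 4.0])   # v2 = 2 * v1, so it adds nothing to v1's span
v3 = np.array([0.0, 1.0])   # not a multiple of v1

# The rank counts the linearly independent vectors in the stack.
print(np.linalg.matrix_rank(np.vstack([v1, v2])))  # 1 -> dependent, span is only a line
print(np.linalg.matrix_rank(np.vstack([v1, v3])))  # 2 -> independent, span is the whole plane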
Matrix: In terms of vectors, a matrix is like an operator used to perform a linear transformation on a vector. If we see vectors as the ingredients of a dish, we
can say the matrix is like the recipe for that dish.
Matrix Multiplication: Suppose more than one linear transformation is applied to a vector. To calculate the overall effect of all the
transformations, we can multiply all the matrices together; transforming the vector with this single product matrix gives the same result.
Ex: To make tea, we have our traditional way, in which we add fresh milk, tea leaves, sugar, elaichi, etc. to boiling water one by one. The other way is to use a
chai premix and just stir it into boiling water. The premix combines all the items together in such a way that both cups of chai taste the
same.
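Here is a minimal numpy sketch of the "premix" idea (the rotation and scaling matrices are illustrative): applying two transformations one after another gives the same result as applying their matrix product once.

import numpy as np

v = np.array([1.0, 0.0])

rotate = np.array([[0.0, -1.0],     # rotate 90 degrees counterclockwise
                   [1.0,  0.0]])
scale = np.array([[2.0, 0.0],       # stretch everything by a factor of 2
                  [0.0, 2.0]])

step_by_step = scale @ (rotate @ v)   # apply the transformations one at a time
premix = (scale @ rotate) @ v         # "premix" the matrices, then apply once

print(step_by_step, premix)           # both print [0. 2.]

Note the order: scale @ rotate means "rotate first, then scale", since the matrix nearest the vector acts first.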
Determinant: The factor by which a linear transformation changes area (or volume) is called the determinant of that transformation.
https://youtu.be/fDAPJ7rvcUw
https://youtu.be/4HHUGnHcDQw
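A small numpy check of this area-scaling interpretation (the matrix is illustrative):

import numpy as np

A = np.array([[3.0, 0.0],
              [0.0, 2.0]])   # stretches x by 3 and y by 2

# The unit square (area 1) maps to a 3-by-2 rectangle (area 6),
# which matches the determinant of the transformation.
print(np.linalg.det(A))      # 6.0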
• Eigenvalues and eigenvectors are concepts from linear algebra that deal with how matrices transform vectors. Imagine you have a special
machine that takes vectors as input and stretches, shrinks, or flips them in some way. Eigenvectors are special directions (the vectors) that
get scaled (stretched or shrunk) by this machine, but don't get rotated or flipped. The amount by which they are scaled is called the
eigenvalue.
• Here's a breakdown of the terms:
• Eigenvalue (λ): A scalar value (a number) associated with a matrix that tells you how much an eigenvector gets stretched or shrunk by the
matrix transformation.
• |λ| > 1: The vector gets stretched; |λ| < 1: the vector gets shrunk.
• Negative eigenvalue: The vector gets flipped (and scaled by |λ|).
• Eigenvalue of 1: The vector keeps its length and direction (no change).
• Eigenvector (v): A non-zero vector that keeps its direction (possibly with a flip) after being transformed by the matrix. It only gets scaled by
the corresponding eigenvalue (a quick numeric check follows the applications list below).
• Here's an analogy: Imagine the matrix as a stretching machine. An eigenvector is like a special kind of fabric that only gets longer or shorter
when stretched, without changing its orientation. The eigenvalue tells you by what factor the fabric gets stretched.
• Eigenvalues and eigenvectors are useful in many fields, including:
• Computer graphics: 3D rotations, image compression.
• Data analysis: Principal component analysis (PCA) uses eigenvectors of the data's covariance matrix to reduce the dimensionality of data while preserving the most
significant information. It's widely used for feature selection and data compression.
• Google PageRank Algorithm: PageRank uses eigenvectors to rank web pages in search results. The web pages are treated as nodes in a
network, and an eigenvector of the link matrix helps determine their importance.
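The defining property A v = λ v is easy to verify numerically; a minimal sketch with numpy (the matrix is illustrative):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # eigenvalues 3 and 1

# Each column of `eigenvectors` satisfies A v = lambda * v:
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))         # True, True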
• Rank of a matrix- A matrix is said to be of rank r if it has at least one non-zero minor of order r and all minors of order higher
than r vanish. Equivalently, the rank of a matrix is the maximum number of linearly independent rows (or
columns) in the matrix.
• For an n*n matrix, this means that out of the n row (or column) vectors, r are linearly independent and hence add to the span, and the remaining n-r
vectors are linearly dependent and hence add nothing to the span (see the sketch below).
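A quick rank computation with numpy (the matrix is illustrative):

import numpy as np

# The third row is the sum of the first two, so only 2 rows are independent.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

print(np.linalg.matrix_rank(A))   # 2: one row adds nothing to the span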
• Solution of linear equations- Determined solely by the ranks of the coefficient matrix and the augmented matrix (here m = number of unknowns):
• Unique sol: Rank of augmented matrix = Rank of coefficient matrix = m.
• No solution: Rank of augmented matrix != Rank of coefficient matrix.
• Infinite no of sol: If Rank of augmented matrix = Rank of coefficient matrix = r < m, then we may express r variables
in terms of the remaining m-r variables, and for all the infinitely many arbitrary values of those m-r variables we get corresponding values of the r
variables.
WHY/HOW:
Suppose we form the coefficient matrix and the augmented matrix and reduce them to row echelon form by elementary
row transformations. In the last row, the first m-1 entries become 0 and the last entry is either zero or non-zero. Three cases arise (a small code sketch follows):
• If the last entry is zero in the coefficient matrix but non-zero in the augmented matrix, the row says
zero equals some non-zero number, which is inconsistent, so there is no solution.
• If the last entry is zero in both the coefficient matrix and the augmented matrix, we get no condition on the last variable, so it can take any
value, since 0 = 0 holds for all values of the last variable (infinitely many solutions).
• If the last entry is non-zero in both the coefficient matrix and the augmented matrix, we get a unique value for the last variable, and hence, by back-substitution, for all the variables.
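A small sketch of this rank test in numpy (the helper classify and the example systems are illustrative, not from the notes):

import numpy as np

def classify(A, b):
    # Classify the system A x = b by comparing ranks, as described above.
    aug = np.column_stack([A, b])        # augmented matrix [A | b]
    r_coef = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(aug)
    m = A.shape[1]                       # number of unknowns
    if r_coef != r_aug:
        return "no solution"
    return "unique solution" if r_coef == m else "infinitely many solutions"

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
print(classify(A, np.array([2.0, 4.0])))   # infinitely many solutions
print(classify(A, np.array([2.0, 5.0])))   # no solution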
This formula is quite useful in finding the nth power of a matrix A: if A is diagonalizable as A = P D P^(-1), where D is the diagonal matrix of eigenvalues and P has the eigenvectors as columns, then A^n = P D^n P^(-1).
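Checking this with numpy (the matrix and the power are chosen for illustration):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, P = np.linalg.eig(A)                     # columns of P are eigenvectors
A5 = P @ np.diag(eigenvalues**5) @ np.linalg.inv(P)   # P D^5 P^(-1)

print(np.allclose(A5, np.linalg.matrix_power(A, 5)))  # True

The payoff is that D^n only requires raising the diagonal entries (the eigenvalues) to the nth power, which is much cheaper than repeated matrix multiplication.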
Not all matrices are diagonalizable. If the number of independent eigenvectors for an eigenvalue is less than the number of times that eigenvalue
appears as a root of the characteristic equation, then the matrix is not diagonalizable, and such
eigenvalues are called defective eigenvalues.
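A classic example is the shear matrix below (illustrative): the eigenvalue 1 is a double root of the characteristic equation but has only one independent eigenvector.

import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                          # [1. 1.] -> eigenvalue 1 appears twice
# Numerically, the two returned eigenvectors are (nearly) parallel,
# so the eigenvector matrix has rank 1 and A cannot be diagonalized.
print(np.linalg.matrix_rank(eigenvectors))  # 1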