Topic 18 – Linear Algebra

Why do I need to learn about linear algebra?

Linear algebra is a fundamental tool for understanding many modern theories and techniques such as artificial intelligence, machine learning, deep learning, data mining, security, digital image processing, and natural language processing.

Linear algebra provides a powerful language that unifies algebra, geometry, and computation. It enables compact representation, allowing an entire system of equations to be written as a single matrix equation, with the coefficients stored in one 2D array. It also facilitates convenient manipulation, as algebraic operations on vectors and matrices naturally correspond to geometric transformations. By linking algebra, geometry, and computation within a single framework, linear algebra serves as a foundation for both geometric interpretation and computational implementation. A small example of this compactness is sketched below.
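
For instance, here is a minimal sketch of that compactness, using Python with NumPy (my choice of tool, not something the referenced texts prescribe): two equations in two unknowns collapse into one matrix equation Ax = b.

```python
import numpy as np

# The system   2x + 3y = 8
#              5x +  y = 7
# becomes a single matrix equation  A @ x = b.
A = np.array([[2.0, 3.0],
              [5.0, 1.0]])
b = np.array([8.0, 7.0])

x = np.linalg.solve(A, b)   # solve A x = b in one call
print(x)                    # [1. 2.]  ->  x = 1, y = 2
```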

What can I do after finishing learning about linear algebra?

You will be prepared to learn the modern theories and techniques needed to create security, machine learning, data mining, image processing, or natural language processing software.

That sounds useful! What should I do now?

Linear algebra can be difficult if you try to memorize all of its formulas. The best way to study it is to focus on the systems of equations in the problems that interest you, and then look for notations and concepts that make it easier to analyze or solve those systems.

Please read this book to grasp the core concepts of linear algebra: David C. Lay et al. (2022). Linear Algebra and Its Applications. Pearson Education.

Alternatively, please audit the course and read its lecture notes: MIT 18.06 – Linear Algebra, Spring 2005 (Lecture Notes).

While auditing this course, refer to this book for a better understanding of some complex topics: Gilbert Strang (2016). Introduction to Linear Algebra. Wellesley-Cambridge Press.

Terminology Review:

  • Linear Equations.
  • Row Picture.
  • Column Picture.
  • A triangular matrix is a square matrix in which all the entries either above or below the main diagonal are zero.
  • Lower Triangular Matrices.
  • Upper Triangular Matrices.
  • A diagonal matrix is a matrix in which the entries outside the main diagonal are all zero.
  • Tridiagonal Matrices.
  • Identity Matrices.
  • Transpose of a Matrix.
  • Symmetric Matrices.
  • Pivot Columns.
  • Pivot Variables.
  • Augmented Matrix.
  • Echelon Form.
  • Reduced Row Echelon Form.
  • Elimination Matrices.
  • Inverse Matrix.
  • Factorization into A = LU (see the LU sketch after this list).
  • Free Columns.
  • Free Variables.
  • Gauss-Jordan Elimination.
  • Vector Spaces.
  • Rank of a Matrix.
  • Permutation Matrices.
  • Subspaces.
  • Column space: C(A) consists of all linear combinations of the columns of A and is a subspace of ℝᵐ.
  • Nullspace: N(A) consists of all solutions x of the equation Ax = 0 and is a subspace of ℝⁿ.
  • Row space: C(Aᵀ) consists of all linear combinations of the rows of A and forms a subspace of ℝⁿ; it equals the column space of Aᵀ.
  • Left nullspace: N(Aᵀ) is the nullspace of Aᵀ and is a subspace of ℝᵐ.
  • Linearly Dependent Vectors.
  • Linearly Independent Vectors.
  • Linear Span of Vectors.
  • A basis for a vector space is a sequence of vectors with two properties:
    • They are independent.
    • They span the vector space.
  • Given a space, every basis for that space has the same number of vectors; that number is the dimension of the space.
  • Dimension of a Vector Space.
  • Dot Product.
  • Orthogonal Vectors.
  • Orthogonal Subspaces.
  • The row space of A is orthogonal to the nullspace of A.
  • Matrix Spaces.
  • Rank-One Matrices.
  • Orthogonal Complements.
  • Projection Matrices: P = A(AᵀA)⁻¹Aᵀ. Properties of a projection matrix: Pᵀ = P and P² = P. Projection of b onto the column space: p = Pb = A(AᵀA)⁻¹Aᵀb = Ax̂, where x̂ = (AᵀA)⁻¹Aᵀb.
  • Linear regression, least squares, and normal equations: instead of solving Ax = b (which may have no solution), we solve Ax̂ = p, i.e. the normal equations AᵀAx̂ = Aᵀb (see the least-squares sketch after this list).
  • Linear Regression.
  • Orthogonal Matrices.
  • Orthogonal Basis.
  • Orthonormal Vectors.
  • Orthonormal Basis.
  • Gram–Schmidt process (see the sketch after this list).
  • Determinant: a number associated with any square matrix. It tells us whether the matrix is invertible, appears in the formula for the inverse matrix, and gives the volume of the parallelepiped whose edges are the column vectors of A. The determinant of a triangular matrix is the product of its diagonal entries (the pivots).
  • The big formula for computing the determinant.
  • The cofactor formula rewrites the big formula for the determinant of an n by n matrix in terms of the determinants of smaller matrices.
  • Formula for Inverse Matrices.
  • Cramer’s Rule.
  • Eigenvectors are nonzero vectors x for which Ax is parallel to x: Ax = λx, where λ is an eigenvalue of A. The eigenvalues satisfy det(A − λI) = 0.
  • Diagonalizing a matrix: AS = SΛ 🡲 S⁻¹AS = Λ 🡲 A = SΛS⁻¹, where S is the matrix of n linearly independent eigenvectors and Λ is the diagonal matrix of eigenvalues (see the eigendecomposition sketch after this list).
  • Matrix exponential eᴬᵗ.
  • Markov Matrices: All entries are non-negative and each column adds to 1.
  • Symmetric Matrices: Aᵀ = A.
  • Positive Definite Matrices: symmetric matrices with all eigenvalues positive; equivalently, all pivots are positive, or all upper-left determinants (leading principal minors) are positive.
  • Similar Matrices: A and B are similar if B = M⁻¹AM for some invertible matrix M.
  • Singular Value Decomposition (SVD) of a matrix: A = UΣVᵀ, where U is orthogonal, Σ is diagonal, and V is orthogonal.
  • Linear Transformations: T(v + w) = T(v) + T(w) and T(cv) = cT(v). For any linear transformation T we can find a matrix A so that T(v) = Av.
  • Change-of-basis Matrix.
  • Left Inverse Matrices: LA = I; Right Inverse Matrices: AR = I.
  • Pseudoinverse Matrices: A⁺ = VΣ⁺Uᵀ (see the SVD and pseudoinverse sketch after this list).
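
A few of the items above can be made concrete with short code sketches. These use Python with NumPy/SciPy; that choice, and the specific matrices, are my own illustrations rather than examples taken from the books or the course. First, the factorization A = LU: SciPy's scipy.linalg.lu returns a permutation P, a unit lower triangular L, and an upper triangular U with A = PLU.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[ 2.0,  1.0, 1.0],
              [ 4.0, -6.0, 0.0],
              [-2.0,  7.0, 2.0]])

# SciPy factors A = P @ L @ U: P permutes rows, L is lower triangular
# with unit diagonal, U is upper triangular (its diagonal holds the pivots).
P, L, U = lu(A)

print(U)                              # the pivots sit on U's diagonal
print(np.allclose(A, P @ L @ U))      # True
```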
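
The Gram–Schmidt process can be sketched in a few lines. This is the classical classroom version (the function name is mine, and it is not numerically robust): from each vector we subtract its components along the already-accepted orthonormal directions, then normalize what remains.

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a list of linearly independent vectors into an orthonormal basis."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for q in basis:
            w = w - (q @ v) * q        # remove the component of v along q
        basis.append(w / np.linalg.norm(w))
    return np.column_stack(basis)      # columns are orthonormal

Q = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                  np.array([1.0, 0.0, 1.0])])
print(np.allclose(Q.T @ Q, np.eye(2)))   # True: the columns are orthonormal
```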
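
For the projection matrix and the normal equations, here is a minimal least-squares sketch: fitting a line b ≈ C + Dt to three points (the particular numbers are only for illustration).

```python
import numpy as np

# Fit b = C + D*t to the points (t, b) = (0, 6), (1, 0), (2, 0).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

# Normal equations: A^T A x_hat = A^T b.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)                                 # [ 5. -3.]  ->  best line b = 5 - 3t

# Projection matrix P = A (A^T A)^-1 A^T; the projection is p = P b = A x_hat.
P = A @ np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(P @ b, A @ x_hat))                  # True
print(np.allclose(P.T, P), np.allclose(P @ P, P))     # True True
```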
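
Diagonalization A = SΛS⁻¹ can be checked numerically. A minimal sketch with numpy.linalg.eig, which returns the eigenvalues and a matrix whose columns are eigenvectors:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# eig returns the eigenvalues and a matrix S whose columns are eigenvectors.
eigvals, S = np.linalg.eig(A)
Lam = np.diag(eigvals)

print(eigvals)                                        # eigenvalues 5 and 2 (order may vary)
print(np.allclose(A @ S, S @ Lam))                    # A S = S Lambda
print(np.allclose(A, S @ Lam @ np.linalg.inv(S)))     # A = S Lambda S^-1
```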
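
Finally, the SVD A = UΣVᵀ and the pseudoinverse A⁺ = VΣ⁺Uᵀ, checked against numpy.linalg.pinv (NumPy returns the singular values as a 1-D array, so Σ is rebuilt with np.diag):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0],
              [0.0, 0.0]])

# Economy-size SVD: U is 3x2, s holds the 2 singular values, Vt is 2x2.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(np.allclose(A, U @ np.diag(s) @ Vt))       # A = U Sigma V^T

# Pseudoinverse: invert only the nonzero singular values, then V Sigma^+ U^T.
s_plus = np.array([1.0 / x if x > 1e-12 else 0.0 for x in s])
A_plus = Vt.T @ np.diag(s_plus) @ U.T
print(np.allclose(A_plus, np.linalg.pinv(A)))    # matches NumPy's pinv
```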

After finishing linear algebra, please click on Topic 19 – Probability & Statistics to continue.

 
