Why do I need to learn about linear algebra?
Linear algebra is a fundamental tool for understanding many modern theories and techniques, such as artificial intelligence, machine learning, deep learning, data mining, security, digital image processing and natural language processing.
What can I do after learning linear algebra?
You will be prepared to learn the modern theories and techniques needed to build security, machine learning, data mining, image processing or natural language processing software.
That sounds useful! What should I do now?
Please read David C. Lay et al. (2016), Linear Algebra and Its Applications, Pearson Education.
Alternatively, watch the MIT 18.06 Linear Algebra, Spring 2005 course (lecture notes available) and read Gilbert Strang (2016), Introduction to Linear Algebra, Wellesley-Cambridge Press.
- Triangular matrix.
- Column space, C(A), consists of all combinations of the columns of A and is a subspace of ℝᵐ.
- Nullspace, N(A), consists of all solutions x of the equation Ax = 0 and is a subspace of ℝⁿ.
- Row space, C(Aᵀ), consists of all combinations of the rows of A and forms a subspace of ℝⁿ; it equals the column space of the transpose Aᵀ, hence the notation C(Aᵀ).
- Left nullspace, N(Aᵀ), is the nullspace of Aᵀ and is a subspace of ℝᵐ.
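The dimensions of these four subspaces follow from the rank r of A: the column space and row space both have dimension r, the nullspace has dimension n − r, and the left nullspace has dimension m − r. A minimal NumPy sketch (the matrix A here is just an illustrative example of rank 2):

```python
import numpy as np

# A hypothetical 3x4 matrix of rank 2 (chosen for illustration):
# the second row is twice the first.
A = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 8.],
              [1., 0., 1., 0.]])
m, n = A.shape
r = np.linalg.matrix_rank(A)

# Dimensions of the four fundamental subspaces.
dim_col = r           # C(A)    lives in R^m
dim_null = n - r      # N(A)    lives in R^n
dim_row = r           # C(A^T)  lives in R^n
dim_leftnull = m - r  # N(A^T)  lives in R^m

print(dim_col, dim_null, dim_row, dim_leftnull)  # 2 2 2 1
```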
- A basis for a vector space is a sequence of vectors with two properties:
• They are independent.
• They span the vector space.
- Given a space, every basis for that space has the same number of vectors; that number is the dimension of the space.
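Both basis properties can be tested at once: n vectors in ℝⁿ form a basis exactly when the matrix having them as columns has rank n. A short NumPy sketch (the candidate vectors are an arbitrary example):

```python
import numpy as np

# Hypothetical candidate basis vectors for R^3, stored as columns.
vectors = np.array([[1., 0., 1.],
                    [0., 1., 1.],
                    [1., 1., 0.]]).T

# Independent and spanning R^3  <=>  rank equals 3.
is_basis = np.linalg.matrix_rank(vectors) == vectors.shape[0]
print(is_basis)  # True
```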
- Dot product.
- Orthogonal vectors.
- Orthogonal subspaces.
- Row space of A is orthogonal to nullspace of A.
- Orthogonal complements.
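The orthogonality of row space and nullspace is easy to verify numerically: every solution of Ax = 0 has zero dot product with every row of A. A quick NumPy check (A and x are hand-picked illustrations):

```python
import numpy as np

# Hypothetical matrix; its rows span the row space C(A^T).
A = np.array([[1., 2., 1.],
              [2., 4., 2.]])

# A nullspace vector found by inspection: x = (2, -1, 0) satisfies Ax = 0.
x = np.array([2., -1., 0.])

print(A @ x)                # both entries are 0, so x is in N(A)
print(A[0] @ x, A[1] @ x)   # each row of A is orthogonal to x
```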
- Projection matrix: P = A(AᵀA)⁻¹Aᵀ. Properties: Pᵀ = P and P² = P. Projection of b: p = Pb = A(AᵀA)⁻¹Aᵀb = Ax̂, where x̂ = (AᵀA)⁻¹Aᵀb.
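The two defining properties of a projection matrix can be confirmed numerically; a NumPy sketch with a hypothetical 3×2 matrix A whose columns are independent:

```python
import numpy as np

# Project onto the column space of a hypothetical tall matrix A.
A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])
P = A @ np.linalg.inv(A.T @ A) @ A.T

# Defining properties of a projection matrix.
print(np.allclose(P, P.T))    # True: symmetric
print(np.allclose(P @ P, P))  # True: projecting twice changes nothing

# Projecting b: p = Pb = A x_hat with x_hat = (A^T A)^{-1} A^T b.
b = np.array([6., 0., 0.])
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(np.allclose(P @ b, A @ x_hat))  # True
```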
- Linear regression, least squares, and normal equations: when Ax = b has no solution, we solve the normal equations AᵀAx̂ = Aᵀb and project b onto the column space: p = Ax̂.
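As a concrete sketch, fitting a line C + Dt to three points via the normal equations (the data points are illustrative), cross-checked against NumPy's built-in least-squares solver:

```python
import numpy as np

# Fit the line C + D t to three hypothetical points (t, b).
t = np.array([0., 1., 2.])
b = np.array([6., 0., 0.])
A = np.column_stack([np.ones_like(t), t])  # Ax = b is unsolvable here

# Normal equations: A^T A x_hat = A^T b.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)  # best-fit intercept C and slope D

# Same answer from NumPy's least-squares routine.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_hat, x_ls))  # True
```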
- Orthogonal matrix.
- Orthogonal basis.
- Orthonormal vectors.
- Orthonormal basis.
- Gram–Schmidt process.
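The Gram–Schmidt process can be sketched in a few lines of NumPy (the input vectors here are an arbitrary illustration, assumed independent):

```python
import numpy as np

def gram_schmidt(V):
    """Orthonormalize the columns of V (assumed independent)."""
    Q = np.zeros_like(V, dtype=float)
    for j in range(V.shape[1]):
        v = V[:, j].astype(float)
        for i in range(j):
            # Subtract the projection onto each earlier orthonormal vector.
            v = v - (Q[:, i] @ V[:, j]) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

V = np.array([[1., 1.],
              [1., 0.],
              [0., 1.]])
Q = gram_schmidt(V)
print(np.allclose(Q.T @ Q, np.eye(2)))  # True: columns are orthonormal
```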
- Determinant: a single number associated with any square matrix that encodes a great deal of information about it (for example, A is invertible exactly when det A ≠ 0).
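Two of the facts that number encodes, checked with NumPy (the matrices are chosen arbitrarily): a nonzero determinant means invertibility, and determinants multiply, det(AB) = det(A)·det(B).

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])
B = np.array([[0., 1.],
              [1., 0.]])

# det(A) = 2*3 - 1*1 = 5, so A is invertible.
print(np.isclose(np.linalg.det(A), 5.0))  # True

# The product rule: det(AB) = det(A) * det(B).
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))  # True
```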
After finishing the books, please click Topic 18 – Probability & Statistics to continue.