Linear Algebra, in a nutshell
Vectors
There are these things called vectors. They don’t exist in isolation. Rather, they’re defined as parts of vector spaces. A vector space $V$ has objects called vectors $u, v, w, \ldots$ and scalars $a, b, \ldots$ such that $u + v$ is a vector and $a v$ is a vector whenever $u, v$ are vectors and $a$ is a scalar. Vector addition and scalar multiplication behave as you’d expect: they commute, associate, distribute, etc. Don’t think too hard about what vectors are, just what they do.
The canonical example of a vector is a list of real or complex numbers. These vectors exist in $\mathbb{R}^n$ or $\mathbb{C}^n$. We will be focusing on these vectors soon, but for now we work with generic vector spaces.
We say a set of vectors $v_1, \ldots, v_n$ is linearly independent if the only solution to $a_1 v_1 + \cdots + a_n v_n = 0$ is $a_i = 0$ for all $i$. We say a set of vectors $v_1, \ldots, v_n$ spans a vector space $V$ if for any vector $v \in V$, there is some solution to $a_1 v_1 + \cdots + a_n v_n = v$. If $v_1, \ldots, v_n$ is linearly independent and spans $V$, then we say $v_1, \ldots, v_n$ is a basis of $V$.
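If you model vectors concretely as columns of numbers, both conditions can be checked numerically: stack the vectors as columns of a matrix and look at its rank. Here’s a minimal NumPy sketch (the vectors are made-up examples):

```python
import numpy as np

# Columns are the candidate vectors in R^3 (made-up example data).
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 0.0]])

n_vectors = A.shape[1]
dim_space = A.shape[0]
rank = np.linalg.matrix_rank(A)

# Linearly independent iff the only combination giving 0 has all-zero
# coefficients, i.e. the rank equals the number of vectors.
print("independent:", rank == n_vectors)          # False: col3 = 2*col1 + 3*col2

# Spans R^3 iff every vector is some combination, i.e. the rank equals dim R^3.
print("spans R^3:", rank == dim_space)            # False

# A basis is both at once.
print("basis:", rank == n_vectors == dim_space)   # False
```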
All bases of a vector space have the same length (i.e. the same number of vectors). Therefore we define the dimension $\dim V$ of a vector space $V$ to be the length of any basis.
A subspace $U$ of a vector space $V$ is a vector space whose elements are all in $V$ and whose addition/multiplication operations are identical. That is, if $u + v = w$ under $U$, then $u + v = w$ under $V$ as well, etc.
Linear Maps
A linear map is a function $T : V \to W$ such that $T(u + v) = T(u) + T(v)$ and $T(a v) = a T(v)$ for all vectors $u, v \in V$ and all scalars $a$. ($V$ and $W$ are vector spaces.)
The most boring map is $I$, the identity map. It is the function $I : V \to V$ such that $I v = v$.
The null space of a linear map $T : V \to W$ is the set of vectors $v \in V$ such that $T v = 0$. As the term implies, the null space is a vector space itself. The range of a linear map $T$ is the set of vectors $w \in W$ such that there exists some vector $v \in V$ satisfying $T v = w$. In other words, the range is the set of outputs possible from $T$. (Take note that $\operatorname{null} T \subseteq V$ and $\operatorname{range} T \subseteq W$.) The Rank-Nullity Theorem states $\dim V = \dim \operatorname{null} T + \dim \operatorname{range} T$.
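If $T$ is represented by a matrix, the rank of the matrix is the dimension of the range, and the nullity follows from Rank-Nullity. A small NumPy sketch with a made-up matrix:

```python
import numpy as np

# A linear map T : R^4 -> R^3 written as a 3x4 matrix (made-up example).
T = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])   # third row = first + second, so rank 2

rank = np.linalg.matrix_rank(T)        # dim range T
nullity = T.shape[1] - rank            # dim null T, via Rank-Nullity

print("dim V =", T.shape[1])           # 4
print("dim range T =", rank)           # 2
print("dim null T =", nullity)         # 2
print("rank + nullity == dim V:", rank + nullity == T.shape[1])   # True
```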
Also, linear maps can be injective, surjective, and invertible. $T$ is injective if $T u = T v$ implies $u = v$. ($T$ injective is equivalent to $\operatorname{null} T = \{0\}$.) $T$ is surjective if $\operatorname{range} T = W$. $T$ is bijective if it is both injective and surjective. And $T$ is invertible if there is a map $T^{-1}$ with $T^{-1} T = I$ and $T T^{-1} = I$. Obviously, $T$ must be bijective for $T$ to be invertible (and in fact, bijective and invertible are equivalent).
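For a matrix representation of $T$, these conditions reduce to rank checks: injective iff the rank equals the number of columns, surjective iff it equals the number of rows. A rough sketch (the matrix is a made-up example):

```python
import numpy as np

def is_injective(A):
    # Injective iff the null space is {0} iff rank equals the number of columns.
    return np.linalg.matrix_rank(A) == A.shape[1]

def is_surjective(A):
    # Surjective iff the range is the whole codomain iff rank equals the number of rows.
    return np.linalg.matrix_rank(A) == A.shape[0]

def is_invertible(A):
    # Invertible iff bijective: square and full rank.
    return A.shape[0] == A.shape[1] and np.linalg.matrix_rank(A) == A.shape[0]

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])    # a map R^2 -> R^3
print(is_injective(A), is_surjective(A), is_invertible(A))   # True False False
```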
A linear endomorphism is a map of the form $T : V \to V$. In other words, the domain and codomain are identical; this is the definition of an endomorphism. For an endomorphism (of a finite-dimensional space), injectivity, surjectivity, and invertibility are all equivalent.
Now we discuss invariant subspaces of linear endomorphisms. A subspace $U$ is invariant under a transformation $T$ if $T u \in U$ for all $u \in U$. You can decompose a vector space into invariant subspaces.
The most useful kind of invariant subspace is that with dimension $1$. These subspaces are essentially equivalent to eigenvectors. We say a nonzero vector $v$ is an eigenvector if $T v = \lambda v$ for some scalar $\lambda$. Also, we call $\lambda$ the associated eigenvalue. If $v$ is an eigenvector, then $\operatorname{span}(v)$ is an invariant subspace with dimension $1$.
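Numerically, eigenpairs of a matrix can be computed with `np.linalg.eig`, and the defining equation $T v = \lambda v$ is easy to verify directly. A quick sketch with a made-up matrix:

```python
import numpy as np

# An endomorphism of R^2 (made-up example).
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(T)   # eigenvectors are the columns

for lam, v in zip(eigenvalues, eigenvectors.T):
    # Verify T v = lambda v for each eigenpair.
    print(lam, np.allclose(T @ v, lam * v))    # 3.0 True, then 1.0 True
```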
A set of eigenvectors with distinct eigenvalues is linearly independent.
Every endomorphism of a complex vector space has an eigenvalue. Every endomorphism of a real vector space with odd dimension has an eigenvalue. The former is essentially equivalent to the Fundamental Theorem of Algebra, and the latter is equivalent to the fact that all real polynomials with odd degree have a root.
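A rotation of the plane by 90 degrees is the classic illustration: over $\mathbb{R}^2$ (even dimension) it has no eigenvalue, but regarded as a map on $\mathbb{C}^2$ it picks up the eigenvalues $\pm i$. A quick check:

```python
import numpy as np

# Rotation of the plane by 90 degrees.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigenvalues, _ = np.linalg.eig(R)
print(eigenvalues)                    # [0.+1.j 0.-1.j]: eigenvalues are +-i
print(np.isreal(eigenvalues).any())   # False: no real eigenvalue, as expected
```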
Inner-Product Spaces
Up until now any arbitrary vector space sufficed: the scalars could come from any field. Now we must specifically deal with real or complex vector spaces, such as $\mathbb{R}^n$ and $\mathbb{C}^n$.
An inner product $\langle \cdot, \cdot \rangle$ satisfies the following:
- $\langle v, v \rangle \ge 0$, with equality iff $v = 0$
- $\langle u + v, w \rangle = \langle u, w \rangle + \langle v, w \rangle$
- $\langle a u, v \rangle = a \langle u, v \rangle$
- $\langle u, v \rangle = \overline{\langle v, u \rangle}$
(To clarify, $\overline{z}$ is the complex conjugate of $z$.)
These are our axioms, but one can also deduce that $\langle u, v + w \rangle = \langle u, v \rangle + \langle u, w \rangle$ and $\langle u, a v \rangle = \overline{a} \langle u, v \rangle$.
The standard example of an inner product is the dot product. Note that the complex dot product is not quite what you’d expect; $\langle u, v \rangle = u_1 \overline{v_1} + \cdots + u_n \overline{v_n}$. The motivation is so that $\langle v, v \rangle = |v_1|^2 + \cdots + |v_n|^2$ is a non-negative real number.
Also, the norm of $v$ is $\|v\| = \sqrt{\langle v, v \rangle}$.
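Here’s a quick numerical check of the conjugation convention and the norm, using `np.vdot` (which conjugates its *first* argument, so the arguments are swapped to match the convention above); the vectors are made up:

```python
import numpy as np

u = np.array([1 + 2j, 3 - 1j])
v = np.array([2 - 1j, 1 + 1j])

# <u, v> = u_1 * conj(v_1) + ... + u_n * conj(v_n).
inner_uv = np.vdot(v, u)
print(np.isclose(inner_uv, np.sum(u * np.conj(v))))       # True

# <v, v> is a non-negative real number, and the norm is its square root.
print(np.vdot(v, v))                                      # (7+0j)
print(np.linalg.norm(v), np.sqrt(np.vdot(v, v).real))     # both sqrt(7) ~ 2.6458
```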
Some (in)equalities you might expect from dealing with the dot product:
- Pythagorean Theorem: If $\langle u, v \rangle = 0$, then $\|u + v\|^2 = \|u\|^2 + \|v\|^2$.
- Cauchy-Schwarz: $|\langle u, v \rangle| \le \|u\| \, \|v\|$, with equality when one of $u$, $v$ is a scalar multiple of the other.
- Triangle Inequality: $\|u + v\| \le \|u\| + \|v\|$, with equality when one of $u$, $v$ is a non-negative multiple of the other.
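A quick numerical spot-check of all three with random real vectors (for the Pythagorean Theorem, we first replace the second vector by its component orthogonal to the first):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(5)
v = rng.standard_normal(5)

# Cauchy-Schwarz: |<u, v>| <= ||u|| ||v||
print(abs(u @ v) <= np.linalg.norm(u) * np.linalg.norm(v))               # True

# Triangle Inequality: ||u + v|| <= ||u|| + ||v||
print(np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v))    # True

# Pythagorean Theorem: w is the component of v orthogonal to u.
w = v - (u @ v) / (u @ u) * u
print(np.isclose(np.linalg.norm(u + w) ** 2,
                 np.linalg.norm(u) ** 2 + np.linalg.norm(w) ** 2))       # True
```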
With the inner product defined, we can construct orthonormal bases of a vector space. In an orthonormal basis $e_1, \ldots, e_n$, $\langle e_i, e_j \rangle$ is $1$ if $i = j$ and $0$ if $i \neq j$. An orthonormal basis always exists, and can be explicitly constructed from any given basis with the Gram-Schmidt Procedure.
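Here’s a minimal sketch of the Gram-Schmidt Procedure for real vectors in NumPy: project each new vector away from the ones already processed, then normalize. (The starting basis is a made-up example.)

```python
import numpy as np

def gram_schmidt(basis):
    """Turn a linearly independent list of real vectors into an orthonormal list."""
    orthonormal = []
    for v in basis:
        # Subtract the component of v along each vector we already have...
        for e in orthonormal:
            v = v - np.dot(v, e) * e
        # ...then normalize what remains.
        orthonormal.append(v / np.linalg.norm(v))
    return orthonormal

# Made-up starting basis of R^3.
basis = [np.array([1.0, 1.0, 0.0]),
         np.array([1.0, 0.0, 1.0]),
         np.array([0.0, 1.0, 1.0])]

E = gram_schmidt(basis)
# <e_i, e_j> should be 1 when i = j and 0 otherwise.
gram = np.array([[np.dot(ei, ej) for ej in E] for ei in E])
print(np.allclose(gram, np.eye(3)))   # True
```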
Spectral Theorem
A linear functional is a linear map from $V$ to the scalars, i.e. $\varphi : V \to \mathbb{R}$ or $\varphi : V \to \mathbb{C}$. For every linear functional $\varphi$, there is a unique vector $u \in V$ such that $\varphi(v) = \langle v, u \rangle$ for all $v \in V$.
Fix some vector $w \in W$. For a linear map $T : V \to W$, observe that $v \mapsto \langle T v, w \rangle$ is a linear functional on $V$, so there exists some $u \in V$ such that $\langle T v, w \rangle = \langle v, u \rangle$ for all $v$. Each $w$ is associated with a unique such $u$, so we can consider the function $T^* : W \to V$ sending $w$ to $u$; furthermore, $T^*$ is a linear map. We call $T^*$ the adjoint of $T$.
Say now that $T$ is a linear endomorphism, i.e. $T : V \to V$. Then we say $T$ is self-adjoint if $T = T^*$ and $T$ is normal if $T T^* = T^* T$. Obviously all self-adjoint operators are normal.
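For matrices with the standard inner product, the adjoint works out to be the conjugate transpose (a fact I won’t prove here). A quick numerical check of the defining property $\langle T v, w \rangle = \langle v, T^* w \rangle$, plus self-adjoint implying normal, with made-up random data:

```python
import numpy as np

def inner(x, y):
    # <x, y> = x_1 * conj(y_1) + ... + x_n * conj(y_n)
    return np.vdot(y, x)

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
T_adj = T.conj().T    # for matrices, the adjoint is the conjugate transpose

v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# Defining property of the adjoint: <T v, w> = <v, T* w>.
print(np.isclose(inner(T @ v, w), inner(v, T_adj @ w)))    # True

# A self-adjoint matrix (S = S*) is automatically normal (S S* = S* S).
S = T + T_adj
print(np.allclose(S, S.conj().T))                          # True: self-adjoint
print(np.allclose(S @ S.conj().T, S.conj().T @ S))         # True: normal
```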
The Spectral Theorem states that $T$ has a diagonal matrix (with respect to some orthonormal basis) iff
- In a complex vector space: $T$ is normal
- In a real vector space: $T$ is self-adjoint
I’ve been trying really hard to avoid matrices, but bringing them up is very natural here. To use terms we have defined before, “$T$ has a diagonal matrix” is equivalent to “$V$ has an orthonormal basis consisting of eigenvectors of $T$”.
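Concretely, for a real symmetric (self-adjoint) matrix, `np.linalg.eigh` produces exactly this data: real eigenvalues and an orthonormal basis of eigenvectors that diagonalizes the matrix. A quick sketch with a made-up matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
S = A + A.T    # a real symmetric (self-adjoint) matrix

# eigh is built for self-adjoint matrices: real eigenvalues, orthonormal eigenvectors.
eigenvalues, Q = np.linalg.eigh(S)

# The columns of Q form an orthonormal basis of eigenvectors...
print(np.allclose(Q.T @ Q, np.eye(4)))                     # True
# ...and in that basis the matrix of S is diagonal.
print(np.allclose(Q.T @ S @ Q, np.diag(eigenvalues)))      # True
```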
Conclusion
So that’s linear algebra! A lot of detail has been left out, and we haven’t even properly touched matrices, but it’s still quite doable, right?