Linear Algebra, in a nutshell

Vectors

There are these things called vectors. They don’t exist in isolation. Rather, they’re defined as parts of vector spaces. A vector space has objects called vectors $u$, $v$ and scalars $k$ such that $u+v$ and $kv$ are again vectors. Vector addition and scalar multiplication behave as you’d expect: they commute, associate, distribute, etc. Don’t think too hard about what vectors are, just what they do.

The canonical example of a vector is a list of real or complex numbers. These vectors exist in $\mathbb{R}^n$ or $\mathbb{C}^n$. We will be focusing on these vectors soon, but for now we work with generic vector spaces.
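
If you want to poke at these axioms concretely, here’s a minimal sketch, assuming Python with numpy (any numeric library would do just as well):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])   # a vector in R^3
v = np.array([4.0, 5.0, 6.0])
k = 2.5                          # a scalar

# Vector addition and scalar multiplication stay inside R^3,
# and they commute/associate/distribute as promised.
assert np.allclose(u + v, v + u)                   # commutativity
assert np.allclose(k * (u + v), k * u + k * v)     # distributivity
```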

We say a set of vectors $\{v_i\}$ is linearly independent if the only solution to $\sum a_iv_i = 0$ is $a_i=0$ for all $i$. We say a set of vectors spans a vector space $V$ if for any vector $v\in V$, there is some solution to $\sum a_iv_i = v$. If $\{v_i\}$ is linearly independent and spans $V$, then we say $\{v_i\}$ is a basis of $V$.
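
A quick way to check these properties numerically, assuming vectors in $\mathbb{R}^n$ and numpy: stack the vectors as the columns of a matrix and look at its rank (the rank equals the dimension of the span, a fact we haven’t formally introduced here).

```python
import numpy as np

# Candidate vectors in R^3, stacked as the columns of a matrix.
vectors = np.column_stack([[1., 0., 1.],
                           [0., 1., 1.],
                           [1., 1., 0.]])

n = vectors.shape[0]                       # dimension of the ambient space
rank = np.linalg.matrix_rank(vectors)      # dimension of the span

independent = (rank == vectors.shape[1])   # only the trivial solution to sum a_i v_i = 0
spans       = (rank == n)                  # every v in R^3 is some combination sum a_i v_i
print(independent and spans)               # True: these three vectors form a basis of R^3
```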

All bases of a vector space have the same length (i.e. the same number of vectors). Therefore we define the dimension of a vector space to be the length of any basis.

A subspace $U$ of a vector space $V$ is a vector space whose elements are all in $V$ and whose addition/multiplication operations are identical. That is, if $u+v=w$ under $U$, then $u+v=w$ under $V$ as well, etc.

Linear Maps

A linear map $T$ is a function $V\to W$ such that $T(u+v)=Tu+Tv$ and $T(av) = aTv$ for all vectors $u$, $v$ and all scalars $a$. ($V$ and $W$ are vector spaces.)

The most boring map is $I$, the identity map. It is the function $V\to V$ such that $Iv = v$.

The null space of a linear map $T$ is the set of vectors $v$ such that $Tv=0$. As the term implies, the null space is a vector space itself. The range of a linear map is the set of vectors $w$ such that there exists some vector $u$ satisfying $Tu=w$. In other words, the range is the set of outputs possible from $T$. (Take note that $\text{null } T \subseteq V$ and $\text{range } T \subseteq W$.) The Rank-Nullity Theorem states $\text{dim null } T + \text{dim range } T = \text{dim } V$.
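
Here’s a minimal numerical illustration of Rank-Nullity, assuming numpy/scipy and a map given by a matrix (the matrix is an arbitrary rank-deficient example):

```python
import numpy as np
from scipy.linalg import null_space

# T: R^4 -> R^3, acting by v |-> T @ v.  The second row is twice the first, so the rank is 2.
T = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 8.],
              [1., 0., 1., 0.]])

dim_V     = T.shape[1]                    # dimension of the domain R^4
dim_range = np.linalg.matrix_rank(T)      # dim range T  -> 2
dim_null  = null_space(T).shape[1]        # dim null T   -> 2 (orthonormal null-space basis via SVD)

print(dim_null + dim_range == dim_V)      # True: Rank-Nullity
```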

Also, linear maps can be injective, surjective, and invertible. $T$ is injective if $Ta = Tb$ implies $a = b$. ($T$ injective is equivalent to $\text{null } T = \{0\}$.) $T$ is surjective if $\text{range } T = W$. $T$ is bijective if it is both injective and surjective. And $T$ is invertible if there is a map $T^{-1}\colon W\to V$ with $TT^{-1} = T^{-1}T = I$; this happens exactly when $T$ is bijective, which forces $\text{dim } V = \text{dim } W$.
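
A quick numpy sketch of invertibility (the matrix here is just an arbitrary example with nonzero determinant):

```python
import numpy as np

# An endomorphism of R^2.
T = np.array([[2., 1.],
              [1., 1.]])

T_inv = np.linalg.inv(T)
I = np.eye(2)

# T T^{-1} = T^{-1} T = I, so T is invertible (equivalently, bijective).
assert np.allclose(T @ T_inv, I) and np.allclose(T_inv @ T, I)
```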

A linear endomorphism is a map of the form $V\to V$. In other words, the domain and codomain are identical; this is the definition of an endomorphism. For an endomorphism of a finite-dimensional space, injectivity, surjectivity, and invertibility are all equivalent.

Now we discuss invariant subspaces of linear endomorphisms. A subspace $U$ is invariant under a transformation $T$ if $Tu \in U$ for all $u \in U$. You can decompose a vector space into invariant subspaces.

The most useful kind of invariant subspace is that with dimension $1$. These subspaces are essentially equivalent to eigenvectors. We say a nonzero vector $v$ is an eigenvector if $Tv = \lambda v$ for some scalar $\lambda$, and we call $\lambda$ the associated eigenvalue. If $v$ is an eigenvector, then $\{av\}$, the span of $v$, is an invariant subspace of dimension $1$.
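
In coordinates, eigenvalues and eigenvectors are easy to compute numerically; here’s a sketch with numpy, where np.linalg.eig does the real work:

```python
import numpy as np

T = np.array([[2., 1.],
              [1., 2.]])

# Columns of `vecs` are eigenvectors; `vals` holds the matching eigenvalues.
vals, vecs = np.linalg.eig(T)

for lam, v in zip(vals, vecs.T):
    # T v = lambda v, so span{v} is a 1-dimensional invariant subspace.
    assert np.allclose(T @ v, lam * v)

print(vals)   # eigenvalues 3 and 1 (in some order)
```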

A set of eigenvectors corresponding to distinct eigenvalues is linearly independent.

Every operator on a (nonzero, finite-dimensional) complex vector space has an eigenvalue. Every operator on a real vector space of odd dimension has an eigenvalue. The former is essentially equivalent to the Fundamental Theorem of Algebra, and the latter is equivalent to the fact that all real polynomials of odd degree have a root.

Inner-Product Spaces

Up until now any arbitrary vector space sufficed: the scalars could come from any field. Now the scalars must come from $\mathbb{R}$ or $\mathbb{C}$, as in the vector spaces $\mathbb{R}^n$ and $\mathbb{C}^n$.

An inner product $\langle u, v\rangle$ takes two vectors and returns a scalar, and satisfies the following:

- Positivity: $\langle v, v\rangle \geq 0$, with equality exactly when $v = 0$.
- Linearity in the first argument: $\langle au + bw, v\rangle = a\langle u, v\rangle + b\langle w, v\rangle$.
- Conjugate symmetry: $\langle u, v\rangle = \overline{\langle v, u\rangle}$.

(To clarify, $\overline{z}$ is the complex conjugate of $z$.)

These are our axioms, but one can also deduce that $\langle u, v+w\rangle = \langle u, v\rangle + \langle u, w\rangle$ and $\langle v, aw\rangle = \overline{a}\langle v, w\rangle$.

The standard example of an inner product is the dot product. Note that the complex dot product is not quite what you’d expect; $\langle (w_i), (z_i)\rangle = \sum w_i\overline{z_i}$. The motivation is so that $\langle z, z\rangle = \sum |z_i|^2$, a nonnegative real number.

Also, the norm $\|v\|$ of $v$ is $\sqrt{\langle v, v\rangle}$.
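
Here’s a small numpy check of the complex inner product and the norm, using the conjugation convention above:

```python
import numpy as np

w = np.array([1 + 2j, 3 - 1j])
z = np.array([2 - 1j, 1 + 1j])

inner = np.sum(w * np.conj(z))     # <w, z> = sum of w_i * conj(z_i)

# <w, w> comes out real and nonnegative, which is the point of the conjugation,
# and its square root agrees with numpy's built-in norm.
ww = np.sum(w * np.conj(w))
assert np.isclose(ww.imag, 0.0) and ww.real >= 0
assert np.isclose(np.sqrt(ww.real), np.linalg.norm(w))
```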

Some (in)equalities you might expect from dealing with the dot product:

- Cauchy-Schwarz: $|\langle u, v\rangle| \leq \|u\|\,\|v\|$.
- Triangle inequality: $\|u+v\| \leq \|u\| + \|v\|$.

With the inner product defined, we can construct orthonormal bases of a vector space. In an orthonormal basis, $\langle e_i, e_j\rangle$ is $1$ if $i=j$ and $0$ if $i\neq j$. An orthonormal basis always exists, and can be explicitly constructed given some basis with the Gram-Schmidt Procedure.
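
Here’s a sketch of the Gram-Schmidt Procedure in numpy, following the usual recipe (subtract off projections onto the vectors already built, then normalize); the helper name gram_schmidt is my own:

```python
import numpy as np

def gram_schmidt(basis):
    """Turn a list of linearly independent vectors into an orthonormal list."""
    orthonormal = []
    for v in basis:
        v = v.astype(complex)
        # Subtract the component along each vector already produced...
        for e in orthonormal:
            v = v - np.vdot(e, v) * e     # np.vdot conjugates its first argument
        # ...then normalize what is left.
        orthonormal.append(v / np.linalg.norm(v))
    return orthonormal

basis = [np.array([1., 1., 0.]), np.array([1., 0., 1.]), np.array([0., 1., 1.])]
e = gram_schmidt(basis)
print(np.round(np.abs([[np.vdot(a, b) for b in e] for a in e]), 10))   # identity matrix
```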

Spectral Theorem

A linear functional is a map $V\to F$, where $F$ is the field of scalars ($\mathbb{R}$ or $\mathbb{C}$). For every linear functional $\phi$, there is a unique vector $v$ such that $\phi(u) = \langle u, v\rangle$ for all $u\in V$.
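
A quick numerical check, with the coefficient vector $a$ chosen arbitrarily: for $\phi(u) = \sum a_i u_i$ on $\mathbb{C}^n$, the representing vector is $v = \overline{a}$.

```python
import numpy as np

# An arbitrary linear functional on C^3, written via coefficients a: phi(u) = sum a_i u_i.
a = np.array([2 + 1j, -1j, 3 + 0j])
phi = lambda u: np.sum(a * u)

# Its representing vector: phi(u) = <u, v> = sum u_i conj(v_i) forces v = conj(a).
v = np.conj(a)

inner = lambda x, y: np.sum(x * np.conj(y))
u = np.array([1 - 1j, 2j, -1 + 0j])
assert np.isclose(phi(u), inner(u, v))
```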

Fix some vector $w\in W$. For a linear map $T\colon V\to W$, observe that $v \mapsto \langle Tv, w\rangle$ is a linear functional on $V$, so there exists some $T^{\star} w\in V$ such that $\langle Tv, w\rangle = \langle v, T^{\star}w\rangle$ for all $v$. Each $w$ is associated with a unique $T^{\star}w$, so we can consider $T^{\star}$ a function from $W\to V$, and furthermore, $T^{\star}$ is a linear map.
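
With the standard inner product on $\mathbb{C}^n$, the adjoint of a matrix is its conjugate transpose; here’s a sketch verifying $\langle Tv, w\rangle = \langle v, T^{\star}w\rangle$ on random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))   # T : C^2 -> C^3

T_star = T.conj().T     # the adjoint, as a matrix, is the conjugate transpose

inner = lambda x, y: np.sum(x * np.conj(y))    # <x, y> = sum x_i conj(y_i)

v = rng.normal(size=2) + 1j * rng.normal(size=2)
w = rng.normal(size=3) + 1j * rng.normal(size=3)

assert np.isclose(inner(T @ v, w), inner(v, T_star @ w))     # <Tv, w> = <v, T*w>
```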

Say now that $T$ is a linear endomorphism, i.e. $T\colon V\to V$. Then we say $T$ is self-adjoint if $T=T^{\star}$ and $T$ is normal if $TT^{\star}=T^{\star}T$. Obviously all self-adjoint operators are normal.

The Spectral Theorem states that $T$ has a diagonal matrix iff $T$ is normal (when the scalars are complex) or iff $T$ is self-adjoint (when the scalars are real).

I’ve been trying really hard to avoid matrices, but bringing them up is very natural here. To use terms we have defined before, “has a diagonal matrix” is equivalent to “has an orthonormal basis of eigenvectors”.
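
As a sketch of the self-adjoint complex case: np.linalg.eigh hands back real eigenvalues and an orthonormal basis of eigenvectors, which is exactly the diagonalization the theorem promises (the matrix below is just an arbitrary Hermitian example):

```python
import numpy as np

# A self-adjoint (Hermitian) operator on C^3: A equals its conjugate transpose.
A = np.array([[2.0,    1 + 1j, 0.0],
              [1 - 1j, 3.0,    -1j],
              [0.0,    1j,     1.0]])
assert np.allclose(A, A.conj().T)

eigenvalues, Q = np.linalg.eigh(A)      # columns of Q: orthonormal eigenvectors

# Q is unitary (its columns form an orthonormal basis), and Q* A Q is diagonal
# with the (real) eigenvalues on the diagonal.
assert np.allclose(Q.conj().T @ Q, np.eye(3))
assert np.allclose(Q.conj().T @ A @ Q, np.diag(eigenvalues))
```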

Conclusion

So that’s linear algebra! A lot of detail has been left out, and we haven’t even touched matrices, but it’s still quite doable, right?