# Linear Algebra, in a nutshell

## Vectors

There are these things called vectors. They don’t exist in isolation.
Rather, they’re defined as parts of *vector spaces*. A vector
space has objects called *vectors*
$u$,
$v$
and *scalars*
$k$
such that
$u+v$
is a vector and
$kv$
is a vector. Vector addition and scalar multiplication behave as you’d
expect: they commute, associate, distribute, etc. Don’t think too hard
about what vectors *are*, just what they *do*.
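To make the "what they *do*" attitude concrete, here is a minimal sketch (my own, not from the text) modeling vectors in $\mathbb{R}^n$ as tuples and spot-checking a few axioms on sample values:

```python
# Model vectors in R^n as tuples; `add` and `scale` are hypothetical
# helper names, not standard functions.

def add(u, v):
    """Vector addition, componentwise."""
    return tuple(a + b for a, b in zip(u, v))

def scale(k, v):
    """Scalar multiplication."""
    return tuple(k * a for a in v)

u, v, w = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0)
k = 3.0

# Commutativity: u + v == v + u
assert add(u, v) == add(v, u)
# Associativity: (u + v) + w == u + (v + w)
assert add(add(u, v), w) == add(u, add(v, w))
# Distributivity: k(u + v) == ku + kv
assert scale(k, add(u, v)) == add(scale(k, u), scale(k, v))
```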

The canonical example of a vector is a list of real or complex numbers. These vectors exist in $\mathbb{R}^n$ or $\mathbb{C}^n$. We will be focusing on these vectors soon, but for now we work with generic vector spaces.

We say a set of vectors
$\{v_i\}$
is *linearly independent* if the only solution to
$\sum a_iv_i = 0$
is
$a_i=0$
for all
$a_i$.
We say a set of vectors *spans* a vector space
$V$
if for any vector
$v\in V$,
there is some solution to
$\sum a_iv_i = v$.
If
$\{v_i\}$
is linearly independent and spans
$V$,
then we say
$\{v_i\}$
is a *basis* of
$V$.

All bases of a vector space have the same length (i.e. the same number
of vectors). Therefore we define the *dimension* of a vector space
to be the length of any basis.
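As a worked example (mine, and it does lean on the $2\times 2$ determinant, a matrix idea the text otherwise avoids): two vectors form a basis of $\mathbb{R}^2$ exactly when they are linearly independent, which for $\mathbb{R}^2$ reduces to a nonzero determinant.

```python
# `is_basis_r2` is a hypothetical helper name. In R^2, two linearly
# independent vectors automatically span, so independence alone makes
# them a basis.

def is_basis_r2(v1, v2):
    """Return True iff {v1, v2} is a basis of R^2."""
    det = v1[0] * v2[1] - v1[1] * v2[0]
    return det != 0

assert is_basis_r2((1, 0), (0, 1))      # the standard basis
assert is_basis_r2((1, 1), (1, -1))     # another basis
assert not is_basis_r2((1, 2), (2, 4))  # (2,4) = 2*(1,2): dependent
```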

A *subspace*
$U$
of a vector space
$V$
is a vector space whose elements are all in
$V$
and whose addition/multiplication operations are identical. That is, if
$u+v=w$
under
$U$,
then
$u+v=w$
under
$V$
as well, etc.

## Linear Maps

A linear map $T$ is a function $V\to W$ such that $T(u+v)=Tu+Tv$ and $T(av) = aTv$ for all vectors $u$, $v$ and all scalars $a$. ($V$ and $W$ are vector spaces.)

The most boring map is $I$, the identity map. It is the function $V\to V$ such that $Iv = v$ for all $v$.

The *null space* of a linear map is the set of vectors
$v$
such that
$Tv=0$.
As the term implies, the null space is a vector space itself. The
*range* of a linear map is the set of vectors
$v$
such that there exists some vector
$u$
satisfying
$Tu=v$.
In other words, the range is the set of outputs possible from
$T$.
(Take note that
$\text{null } T \subseteq V$
and
$\text{range } T \subseteq W$.)
The *Rank-Nullity Theorem* states
$\text{dim } \text{null } T + \text{dim } \text{range } T = \text{dim } V$.
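Here is a numeric check of Rank-Nullity (my own example; it represents $T$ by a matrix of rows, which the text is avoiding, because that makes the dimensions computable): $\text{dim range } T$ is the rank, and $\text{dim null } T$ is $n$ minus the rank.

```python
# `rank` is a hypothetical helper: computes matrix rank by Gaussian
# elimination in pure Python.

def rank(rows):
    """Rank of a matrix given as a list of rows."""
    m = [list(map(float, row)) for row in rows]
    r = 0  # number of pivots found so far
    cols = len(m[0]) if m else 0
    for c in range(cols):
        # find a pivot in column c at or below row r
        pivot = next((i for i in range(r, len(m)) if abs(m[i][c]) > 1e-12), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][c]) > 1e-12:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Projection of R^3 onto the xy-plane: T(x, y, z) = (x, y, 0).
T = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 0]]
n = 3
dim_range = rank(T)        # 2
dim_null = n - dim_range   # 1
assert dim_null + dim_range == n   # Rank-Nullity
```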

Also, linear maps can be injective, surjective, and invertible. $T$ is injective if $Ta = Tb$ implies $a = b$. ($T$ injective is equivalent to $\text{null } T = \{0\}$.) $T$ is surjective if $\text{range } T = W$. $T$ is bijective if it is both injective and surjective. And $T$ is invertible if there is a map $T^{-1}$ with $TT^{-1} = T^{-1}T = I$; this happens exactly when $T$ is bijective, which in particular forces $\text{dim } V = \text{dim } W$.

A *linear endomorphism* is a map of the form
$V\to V$.
In other words, the domain and codomain are identical; this is the
definition of an endomorphism. For endomorphisms of a finite-dimensional
space, injectivity, surjectivity, and invertibility are all equivalent.

Now we discuss *invariant subspaces* of linear endomorphisms.
A subspace
$U$
is invariant under a transformation
$T$
if
$Tu \in U$
for all
$u \in U$.
You can decompose a vector space into invariant subspaces.

The most useful kind of invariant subspace is that with dimension
$1$.
These subspaces are essentially equivalent to *eigenvectors*. We
say a nonzero vector
$v$
is an eigenvector if
$Tv = \lambda v$
for some scalar
$\lambda$.
Also, we call
$\lambda$
the associated *eigenvalue*. If
$v$
is an eigenvector, then
$\{av\}$
is an invariant subspace with dimension
$1$.
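A concrete check (my own example, not from the text): for the map $T(x, y) = (2x, 3y)$ on $\mathbb{R}^2$, $v = (1, 0)$ is an eigenvector with eigenvalue $2$, and the line $\{av\}$ is invariant under $T$.

```python
# T(x, y) = (2x, 3y): a diagonal map on R^2.
def T(v):
    return (2 * v[0], 3 * v[1])

v = (1, 0)
lam = 2
assert T(v) == (lam * v[0], lam * v[1])   # Tv = 2v

# Every multiple of v maps to another multiple of v, so {av} is an
# invariant subspace of dimension 1.
for a in (-1.5, 0.0, 2.0, 7.0):
    image = T((a * v[0], a * v[1]))
    assert image == (lam * a * v[0], lam * a * v[1])
```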

A set of eigenvectors corresponding to distinct eigenvalues is linearly independent.

Every operator on a nonzero finite-dimensional complex vector space has an eigenvalue, as does every operator on a real vector space of odd dimension. The former is essentially equivalent to the Fundamental Theorem of Algebra, and the latter to the fact that every real polynomial of odd degree has a real root.

## Inner-Product Spaces

Up until now any arbitrary vector space sufficed: the scalars could come from any field. Now we must specifically deal with either the real or complex vector spaces $\mathbb{R}^n$ and $\mathbb{C}^n$.

An inner product $\langle u, v\rangle$ satisfies the following:

- $\langle v, v\rangle \geq 0$, with equality iff $v=0$
- $\langle u+v, w\rangle = \langle u, w\rangle + \langle v, w\rangle$
- $\langle av, w\rangle = a\langle v, w\rangle$
- $\langle v, w\rangle = \overline{\langle w, v\rangle}$

(To clarify, $\overline{z}$ is the complex conjugate of $z$.)

These are our *axioms*, but one can also deduce that
$\langle u, v+w\rangle = \langle u, v\rangle + \langle u, w\rangle$
and
$\langle v, aw\rangle = \overline{a}\langle v, w\rangle$.

The standard example of an inner product is the dot product. Note that the complex dot product is not quite what you’d expect; $\langle (w_i), (z_i)\rangle = \sum w_i\overline{z_i}$. The motivation is so that $\langle z, z\rangle = \sum |z_i|^2$, which is real and non-negative.
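A quick check with Python’s built-in complex numbers (my own example): conjugating the second argument makes $\langle z, z\rangle$ real and non-negative, whereas the naive sum $\sum z_iz_i$ need not be.

```python
# `inner` is a hypothetical helper implementing <w, z> = sum w_i * conj(z_i).
def inner(w, z):
    return sum(wi * zi.conjugate() for wi, zi in zip(w, z))

z = [complex(1, 2), complex(0, -1)]

# <z, z> = sum |z_i|^2: real and non-negative
assert inner(z, z).imag == 0 and inner(z, z).real > 0

# ...whereas the unconjugated sum z_i * z_i is not even real here:
naive = sum(zi * zi for zi in z)
assert naive.imag != 0
```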

Also, the *norm*
$||v||$
of
$v$
is
$\sqrt{\langle v, v\rangle}$.

Some (in)equalities you might expect from dealing with the dot product:

- Pythagorean Theorem: If $\langle u, v\rangle = 0$, then $||u+v||^2 = ||u||^2+||v||^2$.
- Cauchy-Schwarz: $|\langle u, v\rangle| \leq ||u||\,||v||$, with equality iff one of $u$, $v$ is a scalar multiple of the other.
- Triangle Inequality: $||u+v||\leq ||u||+||v||$, with equality iff one of $u$, $v$ is a non-negative multiple of the other.
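Spot-checking the three (in)equalities with the real dot product on $\mathbb{R}^2$ (sample vectors of my own choosing):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

u, v = (3.0, 4.0), (-4.0, 3.0)   # orthogonal pair: dot(u, v) == 0

# Pythagorean Theorem: ||u+v||^2 == ||u||^2 + ||v||^2 when <u, v> = 0
assert dot(u, v) == 0
s = tuple(a + b for a, b in zip(u, v))
assert abs(norm(s)**2 - (norm(u)**2 + norm(v)**2)) < 1e-9

# Cauchy-Schwarz and the Triangle Inequality for a few sample w
for w in [(1.0, 2.0), (0.0, -5.0), (2.0, 2.0)]:
    assert abs(dot(u, w)) <= norm(u) * norm(w) + 1e-9
    sw = tuple(a + b for a, b in zip(u, w))
    assert norm(sw) <= norm(u) + norm(w) + 1e-9
```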

With the inner product defined, we can construct *orthonormal*
bases of a vector space. In an orthonormal basis,
$\langle e_i, e_j\rangle$
is
$1$
if
$i=j$
and
$0$
if
$i\neq j$.
An orthonormal basis always exists, and can be explicitly constructed
given some basis with the *Gram-Schmidt Procedure*.
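A sketch of the Gram-Schmidt Procedure for $\mathbb{R}^n$ (pure Python; the function names are my own): subtract from each vector its components along the vectors already produced, then normalize.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Turn a basis of R^n into an orthonormal basis."""
    ortho = []
    for v in vectors:
        w = list(v)
        for e in ortho:
            c = dot(w, e)  # component of w along e
            w = [wi - c * ei for wi, ei in zip(w, e)]
        n = math.sqrt(dot(w, w))
        ortho.append([wi / n for wi in w])  # normalize to unit length
    return ortho

e1, e2 = gram_schmidt([(1.0, 1.0), (1.0, 0.0)])
assert abs(dot(e1, e2)) < 1e-9          # orthogonal
assert abs(dot(e1, e1) - 1) < 1e-9      # unit length
assert abs(dot(e2, e2) - 1) < 1e-9
```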

## Spectral Theorem

A *linear functional* is a linear map
$V\to F$,
where $F$ is the field of scalars.
For all linear functionals
$\phi$,
there is a unique vector
$v$
such that
$\phi(u) = \langle u, v\rangle$
for all
$u\in V$.

Fix some vector $w\in W$. For a linear map $T\colon V\to W$, observe that $v \mapsto \langle Tv, w\rangle$ is a linear functional on $V$, so there exists some $T^{\star} w\in V$ such that $\langle Tv, w\rangle = \langle v, T^{\star}w\rangle$. Each $w$ is associated with a unique $T^{\star}w$, so we can consider $T^{\star}$ a function from $W\to V$; furthermore, $T^{\star}$ (called the *adjoint* of $T$) is itself a linear map.
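A concrete check (my own example): over $\mathbb{R}$ with the dot product, the adjoint of a map given by a matrix is its transpose, i.e. $\langle Tv, w\rangle = \langle v, T^{\star}w\rangle$ with $T^{\star}$ the transpose.

```python
# `apply` and `transpose` are hypothetical helper names; a matrix is a
# list of rows.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def apply(M, v):
    """Apply the matrix M to the vector v."""
    return [dot(row, v) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

T = [[1, 2],
     [3, 4]]
v, w = [1, -1], [2, 5]

# <Tv, w> == <v, T^T w>
assert dot(apply(T, v), w) == dot(v, apply(transpose(T), w))
```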

Say now that
$T$
is a linear endomorphism,
i.e. $T\colon V\to V$.
Then we say
$T$
is *self-adjoint* if
$T=T^{\star}$
and
$T$
is *normal* if
$TT^{\star}=T^{\star}T$.
Obviously all self-adjoint operators are normal.

The Spectral Theorem states that $T$ has a diagonal matrix with respect to some orthonormal basis iff

- In a complex vector space: $T$ is normal
- In a real vector space: $T$ is self-adjoint

I’ve been trying really hard to avoid matrices, but bringing them up is very natural here. To use terms we have defined before, “has a diagonal matrix” is equivalent to “has an orthonormal basis of eigenvectors”.
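A worked instance (my own example): the self-adjoint (symmetric) map with matrix $\begin{pmatrix}2&1\\1&2\end{pmatrix}$ has the orthonormal eigenbasis $(1,1)/\sqrt{2}$, $(1,-1)/\sqrt{2}$ with eigenvalues $3$ and $1$, just as the Spectral Theorem promises.

```python
import math

def apply(M, v):
    """Apply the matrix M (list of rows) to the vector v."""
    return [sum(a * b for a, b in zip(row, v)) for row in M]

A = [[2.0, 1.0],
     [1.0, 2.0]]
s = 1 / math.sqrt(2)
e1, e2 = [s, s], [s, -s]

# A e1 = 3 e1 and A e2 = 1 e2
assert all(abs(x - 3 * y) < 1e-9 for x, y in zip(apply(A, e1), e1))
assert all(abs(x - 1 * y) < 1e-9 for x, y in zip(apply(A, e2), e2))

# the eigenbasis is orthonormal
assert abs(sum(a * b for a, b in zip(e1, e2))) < 1e-9
assert abs(sum(a * a for a in e1) - 1) < 1e-9
```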

## Conclusion

So that’s linear algebra! A lot of detail has been left out, and we’ve
barely *touched* matrices, but it’s still quite doable, right?