# Additional Exercises to 2nd Edition of Axler

The notation used will not be exactly the same as Axler. For instance, we will denote the set $\{a_1, \ldots, a_n\}$ as $\{a_i\}$ out of laziness.

## Preliminaries

Axler does many things right. It jumps straight into the main protagonists of linear algebra, vector spaces, without boring the reader with the formal details of what a field is. This is all sensible for a first read, but it is useful to eventually learn what a field and a vector space actually are.

### Field axioms

Definition. A field is a set of elements $F$, including two special, distinct elements $0$ and $1$, with two binary operators $+$ and $\cdot$ such that

Commutativity:

- $a + b = b + a$
- $a\cdot b = b\cdot a$

Associativity:

- $(a + b) + c = a + (b + c)$
- $(a\cdot b) \cdot c = a \cdot (b \cdot c)$

Distributivity:

- $a\cdot (b + c) = a \cdot b + a \cdot c$

Identity:

- $a + 0 = a$
- $a \cdot 1 = a$

Inverse:

- For all $a$, there exists an element $b$ such that $a + b = 0$.
- For all $a \neq 0$, there exists an element $b$ such that $a \cdot b = 1$.

for all elements $a$, $b$, $c \in F$.

Even though the field is really the triple $(F, +, \cdot)$, we will simply refer to the field as $F$ for brevity.

Typically we omit $\cdot$ when it is clear, e.g. we will write $a\cdot b$ as $ab$.

Examples of fields include $\mathbb{Q}$ and $\mathbb{R}$, as well as the integers modulo a prime $p$, i.e. $\mathbb{F}_p$. Note that $\mathbb{Z}$ is *not* a field: $2$, for instance, has no multiplicative inverse in $\mathbb{Z}$.
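As a sanity check (a brute-force sketch, not from Axler), we can verify the multiplicative-inverse axiom for $\mathbb{F}_5$ directly, and see where $\mathbb{Z}$ fails:

```python
# Brute-force check of the multiplicative-inverse axiom in F_5:
# every nonzero a has exactly one b with a * b = 1 (mod 5).
p = 5
for a in range(1, p):
    inverses = [b for b in range(p) if (a * b) % p == 1]
    assert len(inverses) == 1
    print(f"{a}^(-1) = {inverses[0]} in F_{p}")

# By contrast, 2 has no multiplicative inverse in Z, so Z is not a field.
assert not any(2 * b == 1 for b in range(-1000, 1000))
```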

A couple of results you should try to prove:

- Prove that if $a + b = a + c$, then $b = c$.
- Conclude that the additive inverse is unique, i.e. if $a + b = a + c = 0$, then $b = c$.

- Prove that if $ab = ac$, then either $a = 0$ or $b = c$.
- Conclude that the multiplicative inverse of any non-zero element is unique, i.e. if $ab = ac = 1$ and $a\neq 0$, then $b = c$.

- Prove that $0a = 0$.
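If you want to see these statements in action before proving them, here is an exhaustive check in $\mathbb{F}_7$ (an illustration only; the exercises ask for proofs from the axioms, valid in every field):

```python
# Exhaustive check of the cancellation laws and 0*a = 0 in F_7.
p = 7
for a in range(p):
    assert (0 * a) % p == 0                      # 0 * a = 0
    for b in range(p):
        for c in range(p):
            if (a + b) % p == (a + c) % p:
                assert b == c                    # additive cancellation
            if (a * b) % p == (a * c) % p:
                assert a == 0 or b == c          # multiplicative cancellation
print("cancellation laws verified in F_7")
```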

### Vector space axioms

Definition. A vector space consists of a set $V$ of *vectors*, including a special vector $0$, over a field $F$ of *scalars*, and two binary operators $+ \colon V \times V \to V$ and $\cdot \colon F\times V \to V$ which satisfy

Commutativity:

- $u + v = v + u$

Associativity:

- $(u + v) + w = u + (v + w)$
- $(a \cdot b) \cdot v = a \cdot (b \cdot v)$

Distributivity:

- $(a + b) \cdot v = a \cdot v + b \cdot v$
- $a \cdot (u + v) = a \cdot u + a \cdot v$

Identity:

- $v + 0 = v$
- $1 \cdot v = v$

Inverse:

- For all $v$, there exists some $w$ such that $v + w = 0$.

for all $u$, $v$, $w \in V$ and $a$, $b \in F$.

Note that $+$ can either represent the binary operator $+ \colon F \times F \to F$ or $+ \colon V \times V \to V$. Similarly, $\cdot$ can either represent the binary operator $\cdot \colon F \times F \to F$ or $\cdot \colon F \times V \to V$. In the interest of conciseness, we will not explicitly differentiate the two; you will have to infer which is meant from context.

Furthermore, we denote the vector space as $V$ for brevity, even though it really also involves a field and two binary operators.

A result you should try to prove:

- Prove that $0\cdot v = 0$.

## Notes

### Existence of direct complement for infinite vector spaces

Given a finite-dimensional vector space $V$ and a subspace $U$ of $V$, there exists some subspace $W$ of $V$ such that $U \oplus W = V$. (We can prove this by explicitly extending a basis of $U$.) However, what if $V$ is **infinite-dimensional**? Then our explicit extension of a basis doesn't work, because infinite-dimensional vector spaces do not have a finite basis.

In fact, in the infinite-dimensional case the existence of this direct complement is guaranteed not by an explicit construction but by the Axiom of Choice. (Indeed, this statement is equivalent to the Axiom of Choice.)

## Chapter 2

- Suppose that $\{v_1, \ldots, v_n\}$ forms a basis of a vector space $V$. Prove that the resulting set is still a basis after
  - replacing any element $v_i$ with $kv_i$, where $k$ is a non-zero scalar, or
  - replacing any element $v_i$ with $v_i+v_j$, where $j\neq i$.
- Prove that every vector in a vector space has a unique representation in any basis. More concretely, given $\vec{v}$ in a vector space $V$ and a basis $\{\vec{b_i}\}$ of $V$, show that there is only one choice of scalars $\{k_i\}$ such that $\vec{v}=\sum k_i\vec{b_i}$.
- Prove that if $U$ is a subspace of $V$ and $\dim U = \dim V$, then $U = V$.
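For finite-dimensional spaces over $\mathbb{R}$, these exercises can be spot-checked numerically: $n$ vectors form a basis of $\mathbb{R}^n$ exactly when the matrix with those vectors as columns is invertible. A NumPy sketch (the specific basis below is an arbitrary choice, not from the text):

```python
import numpy as np

# Columns of B form a basis of R^3 iff det(B) != 0.  We spot-check the
# two basis-preserving operations and uniqueness of coordinates.
B = np.array([[1., 0., 0.],
              [2., 1., 0.],
              [0., 1., 1.]]).T        # columns b_1, b_2, b_3

assert abs(np.linalg.det(B)) > 1e-9   # B is a basis

B1 = B.copy()
B1[:, 0] *= 5.0                       # replace b_1 with 5 * b_1
assert abs(np.linalg.det(B1)) > 1e-9  # still a basis

B2 = B.copy()
B2[:, 0] += B[:, 1]                   # replace b_1 with b_1 + b_2
assert abs(np.linalg.det(B2)) > 1e-9  # still a basis

# Unique representation: coordinates of v are the unique solution of B k = v.
v = np.array([3., 2., 1.])
k = np.linalg.solve(B, v)
assert np.allclose(B @ k, v)
```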

## Chapter 3

- Suppose $T$ is invertible. Show that if $\{v_i\}$ is a basis of $V$, then so is $\{Tv_i\}$.
- Prove that for any invertible linear map $T$, $kT$ is also invertible for all non-zero scalars $k$.
- Prove that for any non-invertible linear map $T$, $kT$ is also non-invertible for all scalars $k$.
- Suppose $T \colon V\to V$ is a linear map. Show that there exists some map $S\colon V \to V$ such that $TST = T$.
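For the last exercise, one concrete (though not the only) choice of $S$ in coordinates is the Moore-Penrose pseudoinverse. A NumPy illustration for a singular $T$ (a numerical sketch, not the intended proof):

```python
import numpy as np

# One concrete S with T S T = T is the Moore-Penrose pseudoinverse
# (np.linalg.pinv).  T below has rank 2, so it is NOT invertible,
# yet such an S still exists.
T = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [0., 1., 1.]])

S = np.linalg.pinv(T)
assert np.allclose(T @ S @ T, T)
```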

### Matrix exercises (feel free to skip)

- Show that the matrix of $P^{-1}TP$ with respect to a basis $B$ is the matrix of $T$ with respect to the basis $P$. (Here "the basis $P$" means the basis of vectors $\{Pb_1, \ldots, Pb_n\}$, where $\{b_1, \ldots, b_n\}$ is the basis $B$.)
- Show that the product of two square upper-triangular matrices $A$ and $B$ is an upper-triangular square matrix $C$. Also, show that the $i$th entry on the diagonal of $C$ is the product of the $i$th entries on the diagonals of $A$ and $B$.
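A quick numerical spot-check of the upper-triangular exercise (random matrices via NumPy; the exercise itself asks for a general proof):

```python
import numpy as np

# Random upper-triangular A and B: their product C is upper triangular,
# and diag(C) = diag(A) * diag(B) entrywise.
rng = np.random.default_rng(0)
A = np.triu(rng.standard_normal((4, 4)))
B = np.triu(rng.standard_normal((4, 4)))
C = A @ B

assert np.allclose(C, np.triu(C))                        # C upper triangular
assert np.allclose(np.diag(C), np.diag(A) * np.diag(B))  # diagonals multiply
```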

## Chapter 5

- Suppose that $\{U_i\}$ is a collection of invariant subspaces of $V$ under a linear map $T$. Show that $\sum U_i$ is also invariant under $T$.
- (CMU 21341 Final, Spring 2011) Let $V$ be a finite-dimensional vector space over $\mathbb{C}$. Suppose $S, T\in \mathcal{L}(V, V)$ are such that $ST=TS$. Let $\lambda \in \mathbb{C}$ be an eigenvalue of $S$. Show that there exists $\mu \in \mathbb{C}$ and $v\in V$ with $v\neq 0$, such that *both* $Sv=\lambda v$ and $Tv = \mu v$.
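A tiny numerical instance of the CMU problem, taking $T$ to be a polynomial in $S$ (which automatically commutes with $S$; my choice of $S$ below is arbitrary):

```python
import numpy as np

# If T is a polynomial in S (here T = S^2 + I), then ST = TS, and any
# eigenvector of S is automatically an eigenvector of T as well.
S = np.array([[2., 1.],
              [0., 3.]])
T = S @ S + np.eye(2)
assert np.allclose(S @ T, T @ S)       # S and T commute

lam, vecs = np.linalg.eig(S)
v = vecs[:, 0]                         # eigenvector of S with eigenvalue lam[0]
mu = lam[0] ** 2 + 1                   # the corresponding eigenvalue of T

assert np.allclose(S @ v, lam[0] * v)
assert np.allclose(T @ v, mu * v)
```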

## Chapter 6

- Prove the Extended Triangle Inequality: $\left\|\sum v_i\right\| \leq \sum \|v_i\|$, with equality if and only if all $v_i$ are non-negative scalar multiples of a single vector.
- Prove that for any inner product space $V$ over the real or complex numbers, $||v_1+\cdots+v_n||^2 \leq n(||v_1||^2+\cdots+||v_n||^2)$ (where $v_i$ are elements of $V$).
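A random spot-check of the second inequality in $\mathbb{R}^5$ (illustration only; the exercise asks for a proof in any inner product space):

```python
import numpy as np

# Check ||v_1 + ... + v_n||^2 <= n (||v_1||^2 + ... + ||v_n||^2)
# for n = 4 random vectors in R^5.
rng = np.random.default_rng(1)
n = 4
vs = rng.standard_normal((n, 5))

lhs = np.linalg.norm(vs.sum(axis=0)) ** 2
rhs = n * sum(np.linalg.norm(v) ** 2 for v in vs)
assert lhs <= rhs + 1e-12
```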

## Chapter 7

### Commentary

Theorem 7.25 states that any normal operator on a real inner-product space has, with respect to some orthonormal basis, a block diagonal matrix with blocks of size $1$ or $2$ (and the size-$2$ blocks are scalar multiples of rotation matrices).

This statement isn't terrible (knowing the explicit representation of a linear map is *useful*, I guess), but there's a much more natural way to state it. Return to the Spectral Theorem: (normal/self-adjoint) operators in ($\mathbb{C}$/$\mathbb{R}$) have an orthonormal basis of eigenvectors. A (somewhat contrived) reformulation of the Spectral Theorem is that $T$ can be decomposed into invariant orthogonal subspaces of $\text{dim } 1$. And the restriction of $T$ to any subspace of $\text{dim } 1$ is trivially self-adjoint (and thus normal), so we can say

$T$ can be decomposed into normal invariant orthogonal subspaces of dimension $1$ iff it is (normal/self-adjoint) in ($\mathbb{C}$/$\mathbb{R}$).

So the equivalent reformulation of 7.25 would be

$T$ can be decomposed into normal invariant orthogonal subspaces of dimension $1$ or $2$ iff it is normal in $\mathbb{R}$.

- (Corollary to 7.6) Show that $\text{null } T = \text{null } T^\star$ if $T$ is normal. Also, show that the converse does not hold.
- Show that $\text{null } T = \text{null } \sqrt{T^\star T}$.
- (Generalization of uniqueness of polar decomposition) If $R_1$ and $R_2$ are positive operators such that $||R_1v|| = ||R_2v||$ for all $v$, show that $R_1=R_2$.
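The second exercise hinges on the identity $\|Tv\| = \|\sqrt{T^\star T}\,v\|$ for every $v$, which forces the two null spaces to coincide. A NumPy sketch, building $\sqrt{T^\star T}$ from the eigendecomposition of the positive semidefinite matrix $T^\star T$ (the specific $T$ is an arbitrary singular example; real case, so $T^\star = T^\mathsf{T}$):

```python
import numpy as np

# ||Tv||^2 = <T*Tv, v> = ||Rv||^2 where R = sqrt(T*T).
T = np.array([[1., 2.],
              [0., 0.]])               # singular: null T is spanned by (2, -1)

w, Q = np.linalg.eigh(T.T @ T)         # eigendecomposition of PSD matrix T^T T
R = Q @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ Q.T
assert np.allclose(R @ R, T.T @ T)     # R really is sqrt(T^T T)

rng = np.random.default_rng(2)
for _ in range(5):
    v = rng.standard_normal(2)
    assert np.isclose(np.linalg.norm(T @ v), np.linalg.norm(R @ v))

v0 = np.array([2., -1.])               # in null T, hence also in null R
assert np.allclose(T @ v0, 0) and np.allclose(R @ v0, 0)
```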

## Chapter 8

### Commentary

Let’s restate 8.5 and 8.9 in their full forms, which Axler alludes to later in the chapter.

- (Higher-powered 8.5) There exists some non-negative integer $m$ such that $\text{null } T^k \subsetneq \text{null } T^{k+1}$ for $k < m$ and $\text{null } T^k = \text{null } T^{k+1}$ for $k \geq m$.
- (Higher-powered 8.9) There exists some non-negative integer $m$ such that $\text{range } T^k \supsetneq \text{range } T^{k+1}$ for $k < m$ and $\text{range } T^k = \text{range } T^{k+1}$ for $k \geq m$.
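Both restatements can be watched in action on a single nilpotent Jordan block, where $\text{null } T^k$ grows strictly until $T^m = 0$ and then stabilizes (a NumPy illustration; the block size $4$ is an arbitrary choice):

```python
import numpy as np

# For the 4x4 nilpotent Jordan block T (so T^4 = 0 but T^3 != 0),
# dim null T^k strictly increases for k < 4 and is constant afterwards;
# equivalently, rank T^k strictly decreases until it hits 0.
T = np.diag([1., 1., 1.], k=1)

dims = []
M = np.eye(4)
for k in range(6):
    dims.append(4 - np.linalg.matrix_rank(M))   # dim null T^k
    M = M @ T

print(dims)   # [0, 1, 2, 3, 4, 4]
```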

- Consider a vector space $V$ and a linear map $T\colon V\to V$. Given two invariant subspaces $U_1$, $U_2$ whose intersection consists only of the $0$ vector, show that $\text{char}(U_1 + U_2) = \text{char} U_1 \cdot \text{char} U_2$ and $\text{minpoly}(U_1 + U_2) = \text{lcm}(\text{minpoly} U_1, \text{minpoly} U_2)$.
- Suppose $X$ has eigenvalues $\lambda_1, \ldots, \lambda_n$ with multiplicities $m_1, \ldots, m_n$. Show that $X^k$ has eigenvalues $\lambda_1^k, \ldots, \lambda_n^k$ with multiplicities $m_1, \ldots, m_n$ for all $k \geq 1$ (if some of the $\lambda_i^k$ coincide, their multiplicities add).
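A spot-check of the last exercise with $k = 2$ (NumPy, arbitrary example matrix):

```python
import numpy as np

# The eigenvalues of X^2 are the squares of the eigenvalues of X
# (here X is triangular with distinct eigenvalues 2 and 5, each of
# multiplicity 1).
X = np.array([[2., 1.],
              [0., 5.]])

eig_X = np.linalg.eigvals(X)
eig_X2 = np.linalg.eigvals(X @ X)
assert np.allclose(sorted(eig_X ** 2), sorted(eig_X2))
```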