Two Points of Pedantry

Linear Maps vs. Matrices

You probably learned in high school pre-calculus that matrices are arrays of numbers: arrays that you can add, and on which you can perform this magical “multiplication”. And then later down the road, you learn that matrices are actually linear maps.

That, too, is a lie. A white lie, given that matrices and linear maps are deeply related. (In fact, once bases are fixed, there is an isomorphism between matrices and linear maps.) But they are not quite the same.

Let me use “change of basis” as an example. You have this “array of numbers” $T$, and you can “magically” “transform” it into something “similar” called $P^{-1}TP$ by “changing the basis”. Note all the air quotes; you’re not exactly changing the basis. You’re just doing something analogous.

When you’re considering a linear map $T \colon V \to V$, you analyze what happens to basis vectors for different choices of bases. So in a “change of basis” you don’t actually change the linear map $T$ whatsoever. You are just considering the behavior of a different set of vectors.

Meanwhile, when you have a similar matrix $P^{-1}TP$, it is actually different. Yes, it’s different the same way $\frac{4}{6}$ is different from $\frac{2}{3}$, but it is different. These are different representations; the array of numbers is literally not the same. Sure, these similar matrices belong to the same equivalence class (where two matrices are equivalent if they are similar), but two similar matrices are not actually identical. A “change of basis”, from the perspective of a matrix, is actually a change.
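A minimal sketch of this distinction, with an arbitrarily chosen map $T$ and basis-change matrix $P$ (both hypothetical, just for illustration), using exact rational arithmetic:

```python
from fractions import Fraction as Fr

def matmul(A, B):
    # Product of two 2x2 matrices.
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def inv(A):
    # Inverse of a 2x2 matrix via the adjugate formula.
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [[ A[1][1]/det, -A[0][1]/det],
            [-A[1][0]/det,  A[0][0]/det]]

# A linear map on Q^2, written as an array of numbers T in the standard basis.
T = [[Fr(2), Fr(1)],
     [Fr(0), Fr(3)]]

# The columns of P form a new (arbitrarily chosen) basis.
P = [[Fr(1), Fr(1)],
     [Fr(0), Fr(1)]]

# The "same" map written in the new basis: literally a different array.
T_new = matmul(inv(P), matmul(T, P))
assert T != T_new            # different arrays of numbers...
assert T_new == [[2, 0],     # ...yet both represent one underlying map
                 [0, 3]]
```

The two arrays stand in the same relation as $\frac{4}{6}$ and $\frac{2}{3}$: distinct representatives of one equivalence class. (Here $P$ happens to diagonalize $T$, which is exactly why one bothers with a change of basis in practice.)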

These distinctions are subtle. Obviously there is still a very close connection between matrices and linear maps. But, especially as you go further, the distinction between a matrix and a linear map can be important. Rather than mechanically grinding through matrix calculations, or turning a blind eye to concepts that arise more naturally from matrices (yet still have solid grounding on the theoretical, “only consider linear transformations” side of things), it is most fruitful to understand how both perspectives work in tandem.

…but that won’t change the fact that, the first time you learn linear algebra, you should try to avoid matrices as much as possible. Learning the matrix perspective after you learn the theory perspective is not so hard. The other way around is much worse.

Polynomials vs. Polynomial Functions

For more details see StackExchange.

You are likely familiar with these things that look like $a_0 + \cdots + a_n x^n$, where each of the $a_i$ is an element of some field $F$. But what is $x$? It turns out $x$ is just some meaningless formal symbol.

Then you have these things called polynomial functions that look like $f(x) = a_0 + \cdots + a_n x^n$. And initially you might think, “Okay, this is just a distinction without a difference; surely a polynomial like $x^2$ corresponds to the polynomial function $f(x) = x^2$.” And yes, this intuition usually holds: over any infinite field, there is a bijection between polynomials and polynomial functions. But this is not the case over a finite field. Consider the polynomials $0$ and $x^2 + x$ over $\mathbb{F}_2$. Clearly these are different polynomials, yet the polynomial functions $f(x) = 0$ and $f(x) = x^2 + x$ are identical; they exhibit identical behavior.¹
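You can check this counterexample directly. In the sketch below (my own representation, not from any particular library), a polynomial is a tuple of coefficients, while a polynomial function is determined by its values on $\mathbb{F}_2 = \{0, 1\}$:

```python
def poly_eval(coeffs, x, p=2):
    # Evaluate a polynomial mod p, where coeffs[i] is the coefficient of x^i.
    return sum(c * x**i for i, c in enumerate(coeffs)) % p

zero = (0,)        # the zero polynomial
q    = (0, 1, 1)   # the polynomial x + x^2

# As polynomials -- i.e., as coefficient data -- they are plainly different...
assert zero != q

# ...but as functions on F_2, they agree at every point.
assert all(poly_eval(zero, x) == poly_eval(q, x) for x in (0, 1))
```

The map from polynomials to polynomial functions is still surjective here; it just fails to be injective, since the nonzero polynomial $x^2 + x$ lands on the zero function.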

So our notions of equality for polynomials and for polynomial functions do not always coincide. (Although, as the StackExchange link above explains, the definitions can be modified so that the two notions once again agree.)


  1. This is because $x(x+1)$ is always divisible by 2.↩︎