Eager evaluation makes functions hard to reason about mathematically
In CMU’s 15-150 Functional Programming, we’re using a language called SML. SML uses eager evaluation, which means that when you have some code like
f(x) = x + 1;
f(1 div 0)
SML will say, “OK, let us look at 1 div 0 first”. And then it will raise a Div exception because hey, you can’t do that!
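Here is roughly what that looks like in actual SML syntax (a small sketch, assuming the SML/NJ REPL; f is just the function from the snippet above):
fun f (x : int) : int = x + 1;
f (1 div 0);
(* => uncaught exception Div [divide by zero] *)
(* The argument gets evaluated before f’s body ever runs. *)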
A language like Haskell, in contrast, will use lazy evaluation. What Haskell will do is first expand f without regard to its input, to get something like
f(1 div 0) => (1 div 0) + 1 => Div error
(OK, Haskell actually gives you Infinity if you use floating-point division; integer div raises its own divide-by-zero error. But pretend that we’re dealing with SML, except it does lazy evaluation instead.)
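SML itself is eager, but you can fake the lazy behavior by hand with a thunk: pass a function of type unit -> int and only call it when (and if) the value is needed. (A sketch; lazyAdd and lazyConst are made-up names, not anything built in.)
(* Forcing the thunk still hits the division, much like the expansion above: *)
fun lazyAdd (x : unit -> int) : int = x () + 1;
(* lazyAdd (fn () => 1 div 0);  raises Div once the thunk is forced *)

(* A function that never forces its argument, though, is perfectly happy: *)
fun lazyConst (x : unit -> int) : int = 1;
lazyConst (fn () => 1 div 0);  (* evaluates to 1; the division never happens *)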
Why does this matter?
Often we will want to prove one piece of code is equivalent to another, via structural induction or whatever. If we have a function like
f(a, b) = a + b
then we should be able to say that
f(g(a), g(b)) = g(a) + g(b)
But no! We cannot! Because the way equality works is (roughly) that two pieces of code have to evaluate to the same thing. So you actually have to prove that g(a) and g(b) even evaluate to something, as opposed to raising an exception.
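To see the problem concretely, take a (hypothetical) g that happens to raise an exception:
fun f (a : int, b : int) : int = a + b;
fun g (x : int) : int = raise Fail "g never returns a value";

(* Neither side of  f (g 1, g 2) = g 1 + g 2  evaluates to a value: *)
(* both raise Fail before any addition happens. So before you can use *)
(* the equation, you first have to argue about what g a and g b do. *)
f (g 1, g 2);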
It turns out that in this case it is equal, regardless of what you set g, a, and b to, because f is a simple enough function. But it is annoying to prove, which is exactly my point, and if we set f to something more complicated we would not be able to prove this.
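For instance, here is a hypothetical “more complicated” f: a function that ignores its second argument. Mathematically you would want first (x, y) = x for every x and y, but eager evaluation insists on evaluating y anyway:
fun first (a : int, b : int) : int = a;
first (1, 2);          (* evaluates to 1 *)
first (1, 1 div 0);    (* raises Div, even though b is never used *)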
A case where (with eager evaluation) mathematical equality breaks is with
f(x) = raise Fail ""
(A quick definition: raise Fail "" just raises the exception Fail.)
You’d want to be able to say that f(x) evaluates to Fail; in other words, f(x) = Fail (in the mathematical sense) regardless of what x is, right? It’d only make sense for f(x) = raise Fail "" in code to mean the same thing as f(x) = Fail.
But no! Plug in 1 div 0 and all of a sudden f(1 div 0) = Div. Arghhh!!! This function had one job, to just raise the exception Fail! Why can’t it do something that simple?
This makes it annoying to mathematically reason about functions. Now you have to handle edge cases (i.e. where errors pop up), or just work in the world where everything behaves as you want and never handle said edge cases. The former is unappealing, and the latter misses the point. For a paradigm of programming all about formal verification and relying on mathematics, not being able to rely on the very definition of a function isn’t very mathematical.
Eager evaluation has its advantages…
but mathematical verification is not one of them. I understand that when writing real-world code, which is chock-full of side effects, lazy evaluation might make things a bit harder to reason about. But also running real-world code should not require an eternity and then some, so I’m not sure “practicality in the real world” is exactly the main concern of functional languages :)
Appendix: run it yourself!
If you have Standard ML of New Jersey (SML/NJ) installed on your machine, run
fun f (x : int) : int = raise Fail "";
f(1);
f(1 div 0);
in the SML REPL. You should get Fail and Div errors, respectively.