I just finished a Great Courses series about differential equations. This is kind of a big deal for me because in college, I changed my major away from Physics largely because the linear algebra course exhausted me and I looked at DiffEq coming up next and went "NOPE".
But the approach of this lecture series was highly visual, using a lot of diagrams and computer tools to show how the functions behaved. Turns out this fits my style of learning a lot better than trying to parse text equations. I was captivated by the course and managed to follow the material all the way to the end. This is not to say that I could necessarily pass an exam; I generally didn't do the homework problems, so I admit my understanding isn't terribly rigorous. But the subject no longer intimidates me. I can even look at differential equations in physics (like Maxwell's equations, for instance) and go "yeah, I see what's going on there."
Moreover, there's a sort of... flavor... to math at this level that I find really interesting: One thing I realized at some point in the course is that a lot of differential equations are sort of "unsolvable", or at least they're far too complicated to be easily sorted out, compared to something like 2x - 1 = 7. And yet, there are still tools that can be used to understand a lot about them, like the large-scale shape of solutions, or the behavior at critical points, and that's enough to be able to deal with the equations usefully. It feels both cool to know, and also somehow profound.
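To make that concrete with a toy example: for the logistic equation dy/dt = y(1 − y), you can read the critical points y = 0 and y = 1 straight off the right-hand side without solving anything, and a crude simulation confirms that solutions starting anywhere above 0 all flow toward 1. A minimal sketch using Euler's method (the step size, duration, and starting points here are arbitrary choices of mine):

```python
def euler(f, y0, dt=0.01, steps=2000):
    # Crudely step dy/dt = f(y) forward in time (Euler's method).
    y = y0
    for _ in range(steps):
        y += dt * f(y)
    return y

def logistic(y):
    return y * (1 - y)   # critical points where f(y) = 0: y = 0 and y = 1

for y0 in (0.01, 0.5, 2.0):
    print(y0, "->", round(euler(logistic, y0), 4))   # all drift toward y = 1
```

The point isn't the numbers themselves; it's that the large-scale behavior (everything positive converges to 1) was already visible from the critical points before running anything.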
is how much stuff doesn't have a solution
but also, how much you start to wonder what a "solution" is supposed to be
like, what is the square root of 2? that's not a number, not quite; it's a minimal recipe for how to get to a number, and it's written in a way that lets you know a lot of convenient properties of that number. if you want the actual number, we know a bunch of ways to get as much precision out of it as you want, but it does feel like something has been lost then
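a minimal sketch of one such way: Newton's method, a few lines of code, squeezes √2 to whatever precision you want (the starting guess and iteration count here are arbitrary choices of mine)

```python
from fractions import Fraction

def sqrt2(iterations=6):
    # Newton's method on f(x) = x^2 - 2; each step roughly
    # doubles the number of correct digits.
    x = Fraction(3, 2)             # arbitrary positive starting guess
    for _ in range(iterations):
        x = (x + 2 / x) / 2
    return x

approx = sqrt2()
print(float(approx))               # → 1.4142135623730951
print(float(approx * approx - 2))  # leftover error, vanishingly small
```

using exact fractions means nothing is rounded along the way; you only lose precision at the very end, when you decide how many digits to look at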
but integrals we can't evaluate and differential equations we can't solve also fit that definition, even if the list of convenient properties is much shorter
and then you find out that quintic and higher polynomials also have no general solutions in the familiar operations, and while their roots can still be found and written out, the formulas use things called hypergeometric functions that are themselves just very general recipes for evaluating a number to as much precision as you want
and this is like 9th grade algebra and it turns out they quietly hid this from you the whole time, maybe to avoid instilling teenagers with the disillusionment of knowing that our teeny tiny mathematical language can only express the answers to a zero-measure slice of all possible questions
but if you look at it from the other direction, it's actually quite impressive that we managed to look out across the chaos of numbers and somehow eke out a couple tools that are useful for any of it at all
Hot take: √2 is a computational recipe in precisely the same sense as 2 is a computational recipe. While we're used to thinking of numbers as strings of digits written in some base, that's a notation that itself implies a computational recipe (10₂ = 1 × 2¹ + 0 × 2⁰ = 2) in terms of lower-level and more fundamental recipes like "0," "1," and "2," as well as operations like "+," "×," and "^" that can combine number-recipes to get new number-recipes. Math doesn't come with any pre-defined notion, though, of what those recipes are.
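That 10₂ example can be spelled out as an actual recipe. A small sketch (the function name is mine) that folds a digit string through "multiply by the base, then add the next digit":

```python
def evaluate(digits, base=2):
    # Positional notation as a recipe: each digit position implies
    # a multiplication by the base and an addition.
    value = 0
    for d in digits:
        value = value * base + int(d)
    return value

print(evaluate("10"))             # → 2, i.e. 1 × 2¹ + 0 × 2⁰
print(evaluate("101"))            # → 5
print(evaluate("42", base=10))    # → 42
```

The string "10" isn't the number 2; it's instructions for building 2 out of lower-level recipes like "base," "×," and "+."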
Concretely, you need some other set of axioms to define what numbers are. One way to do so is to define two functions zero and succ and then use those functions to construct 0 = zero(), 1 = succ(zero()), 2 = succ(succ(zero())), and so forth. The exact definitions of those functions can be a bit subtle, but they give you a way to turn any non-negative integer into a computational recipe involving only "0" and "+ 1," reducing substantially how many different concepts you need to take as axiomatic.
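A minimal sketch of that construction, where numbers are literally nothing but nested succ applications over zero, and a decoder translates back into familiar digits only at the very end (the tuple representation is my own arbitrary choice):

```python
def zero():
    return ()                # the empty recipe

def succ(n):
    return (n,)              # "the number after n"

def decode(n):
    # Translate back: count how many succ layers wrap zero.
    count = 0
    while n != ():
        n = n[0]
        count += 1
    return count

two = succ(succ(zero()))
print(decode(two))           # → 2
print(decode(succ(two)))     # → 3
```

Nothing in zero or succ knows anything about digits; "2" only appears when we choose to decode the recipe into our usual notation.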
This is all made much more explicit by the construction of surreal numbers. A surreal number is written as two sets, stating what other surreal numbers a given number "knows" are less than it and what other numbers are greater than it. For example, 0 is written as { | }, indicating that zero is fundamental enough that it doesn't use any other numbers in its construction. By contrast, 1 is written as { 0 | }, indicating that it is larger than zero, but doesn't require any other numbers to define. Following the pattern, 2 = { 0, 1 | } and 3 = { 0, 1, 2 | }, while –1 = { | 0 } by the same argument.
What makes the surreal numbers unique is that you can assign each a birthday, roughly corresponding to how many dependencies your computational recipe has. Zero has birthday 0, since { | } doesn't require any other numbers to be defined yet. Similarly, both 1 = { 0 | } and –1 = { | 0 } have birthday 1, since you only need one other number to define them. Numbers like 2 = { 0, 1 | } and ½ = { 0 | 1 } have birthday 2, and so forth.
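The day-by-day picture can be simulated directly: start from { | } = 0 on day 0, and on each later day add one new number just past each end of the line plus a midpoint between each adjacent pair. A sketch (this only finds the finite birthdays of dyadic rationals, and the function name is mine):

```python
from fractions import Fraction

def birthday(x, max_day=12):
    # Simulate Conway's day-by-day construction of finite-birthday
    # surreal numbers: each day, new numbers appear one beyond each
    # end of the line and at midpoints between existing neighbors.
    x = Fraction(x)
    born = []                                  # sorted numbers born so far
    for day in range(max_day + 1):
        if day == 0:
            new = [Fraction(0)]                # { | } = 0
        else:
            new = [born[0] - 1, born[-1] + 1]  # new endpoints
            new += [(a + b) / 2 for a, b in zip(born, born[1:])]  # midpoints
        if x in new:
            return day
        born = sorted(born + new)
    return None   # not a dyadic rational with birthday <= max_day

print(birthday(2))               # → 2, matching 2 = { 0, 1 | }
print(birthday(Fraction(1, 2)))  # → 2, matching ½ = { 0 | 1 }
print(birthday(Fraction(3, 4)))  # → 3
```

Numbers like √2 never show up in this loop no matter how large max_day gets, which is exactly the "birthday at least ω" claim below.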
From that perspective, what makes 2 and √2 different from each other as computational recipes is that 2 has a finite birthday, while √2 does not. To define something like √𝑥, you need an infinitely long sum of terms, each giving you a more precise "answer." Each term in that sum looks like a binomial coefficient times 𝑥^𝑛 (the binomial series for the square root), meaning that our recipe at a minimum depends on each 𝑛 we need for the terms in that sum. As a result, √2 is at least as complex to write down as a list of every non-negative integer (that is, √2 has a birthday at least as large as ω = { ℕ | } = { 0, 1, 2, … | }). Importantly, there's no shorter recipe for √2: we don't have any way of defining that number that's simpler than by reference to the "√(·)" and "2" recipes. By contrast, √4 could be defined in the same way as √2, but there's also the much simpler recipe that we already gave above: 2 = { 0, 1 | }.
This is absolutely A Lot™, but my point is that there's always a computational recipe hidden in our concept of what a number is. For the things that we like to think of as numbers, that's short and compact enough that we can for the most part ignore it, but when we get to more involved operations like √(·) that require infinite sums over polynomials to compute exactly, the complexity of that computational recipe is much harder to ignore.
