I just finished a Great Courses series about differential equations. This is kind of a big deal for me because in college, I changed my major away from Physics largely because the linear algebra course exhausted me and I looked at DiffEq coming up next and went "NOPE".
But the approach of this lecture series was highly visual, using a lot of diagrams and computer tools to show how the functions behaved. Turns out this fits my style of learning a lot better than trying to parse text equations. I was captivated by the course and managed to follow the material all the way to the end. This is not to say that I could necessarily pass an exam; I generally didn't do the homework problems, so I admit that my understanding isn't terribly rigorous. But the subject no longer intimidates me. I can even look at differential equations in physics (like Maxwell's equations, for instance) and go "yeah, I see what's going on there."
Moreover, there's a sort of... flavor... to math at this level that I find really interesting: One thing I realized at some point in the course is that a lot of differential equations are sort of "unsolvable", or at least far too complicated to be solved cleanly the way something like 2x - 1 = 7 can be. And yet, there are still tools that can be used to understand a lot about them, like the large-scale shape of the solutions, or the behavior at critical points, and that's enough to be able to deal with the equations usefully. It feels both cool to know, and also somehow profound.
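Just to make that concrete for myself, here's a tiny Python sketch of the kind of qualitative analysis I mean; the equation (dy/dt = y(1 − y)) and all the numbers are just ones I picked for illustration. It finds the critical points and asks whether nearby solutions flow toward or away from them, without ever writing down a formula for y(t).

```python
# Qualitative analysis of dy/dt = f(y) for the logistic-style equation f(y) = y*(1 - y):
# find the critical points (where f is zero) and classify their stability from the
# sign of f on either side -- no formula for y(t) needed.

def f(y):
    return y * (1 - y)

def find_critical_points(f, lo=-2.0, hi=2.0, steps=4000):
    """Scan [lo, hi] for zeros of f; bisect each sign-change bracket."""
    points = []
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    for a, b in zip(xs, xs[1:]):
        if f(a) == 0:
            points.append(a)
        elif f(a) * f(b) < 0:
            for _ in range(60):          # bisect the bracket down to rounding error
                m = (a + b) / 2
                if f(a) * f(m) <= 0:
                    b = m
                else:
                    a = m
            points.append((a + b) / 2)
    return points

def classify(f, p, eps=1e-4):
    """Stable if solutions on both sides move toward p: f > 0 just below, f < 0 just above."""
    return "stable" if f(p - eps) > 0 and f(p + eps) < 0 else "unstable"

for p in find_critical_points(f):
    print(f"critical point y = {p:.6f} is {classify(f, p)}")
# expected: y = 0 is unstable, y = 1 is stable
```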
what really gets me is how much stuff doesn't have a solution
but also, how much you start to wonder what a "solution" is supposed to be
like, what is the square root of 2? that's not a number, not quite; it's a minimal recipe for how to get to a number, and it's written in a way that lets you know a lot of convenient properties of that number. if you want the actual number, we know a bunch of ways to get as much precision out of it as you want, but it does feel like something has been lost then
but integrals we can't evaluate and differential equations we can't solve also fit that definition, even if the list of convenient properties is much shorter
and then you find out that quintic and higher polynomials also have no general solution in radicals (no analogue of the quadratic formula in the familiar operations), and while their roots can still be found and written out, doing so takes things called hypergeometric functions, which are themselves just very general recipes for evaluating a number to as much precision as you want
and this is like 9th grade algebra and it turns out they quietly hid this from you the whole time, maybe to avoid instilling teenagers with the disillusionment of knowing that our teeny tiny mathematical language can only express the answers to a zero-measure slice of all possible questions
but if you look at it from the other direction, it's actually quite impressive that we managed to look out across the chaos of numbers and somehow eke out a couple tools that are useful for any of it at all
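to make the "recipe" framing concrete, here's a rough python sketch (the tolerances and the particular equations are just ones i grabbed for illustration) where √2, an integral with no elementary antiderivative, and a root of a quintic with no solution in radicals all get the same treatment: procedures you can keep running until you have as many digits as you want

```python
# three "numbers" that are really just recipes you can run to any precision you like
# (tolerances and examples picked arbitrarily for illustration)
from math import exp

def sqrt2(tol=1e-12):
    """Newton's method for x**2 = 2: guess, improve, repeat."""
    x = 1.0
    while abs(x * x - 2) > tol:
        x = (x + 2 / x) / 2
    return x

def gaussian_integral(a=0.0, b=1.0, n=100_000):
    """Integral of exp(-x**2) from a to b: no elementary antiderivative,
    but the midpoint rule happily grinds out digits anyway."""
    h = (b - a) / n
    return sum(exp(-(a + (i + 0.5) * h) ** 2) for i in range(n)) * h

def quintic_root(tol=1e-12):
    """The real root of x**5 - x - 1 = 0 (no solution in radicals), by bisection."""
    f = lambda x: x**5 - x - 1
    lo, hi = 1.0, 2.0                      # f(1) = -1 < 0, f(2) = 29 > 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

print(sqrt2())              # ~1.414213562...
print(gaussian_integral())  # ~0.746824...
print(quintic_root())       # ~1.167303...
```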
Hot take: √2 is a computational recipe in precisely the same sense as 2 is a computational recipe. While we're used to thinking of numbers as strings of digits written in some base, that's a notation that itself implies a computational recipe (10₂ = 1 × 2¹ + 0 × 2⁰ = 2) in terms of lower-level and more fundamental recipes like "0," "1," and "2," as well as operations like "+," "×," and "^" that can combine number-recipes to get new number-recipes. Math doesn't come with any pre-defined notion, though, of what those recipes are.
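As a throwaway illustration (the function name here is mine, purely for this sketch), positional notation really is a tiny program: a digit string plus a base tells you exactly which multiplications and additions to perform.

```python
# a digit string plus a base is already a tiny program:
# "scale each digit by a power of the base, then add everything up"
def run_positional_recipe(digits: str, base: int) -> int:
    total = 0
    for d in digits:
        total = total * base + int(d)   # same as summing digit * base**position
    return total

print(run_positional_recipe("10", 2))   # 1 * 2**1 + 0 * 2**0 = 2
print(run_positional_recipe("2", 10))   # the one-step recipe "2"
```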
Concretely, you need some other set of axioms to define what numbers are. One way to do so is to define two functions zero and succ and then use those functions to construct 0 = zero(), 1 = succ(zero()), 2 = succ(succ(zero())), and so forth. The exact definitions of those functions can be a bit subtle, but they give you a way to turn any non-negative integer into a computational recipe involving only "0" and "+ 1," reducing substantially how many different concepts you need to take as axiomatic.
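Here's a minimal sketch of that construction (not any standard library's encoding of the naturals, just an illustration): the only primitives are zero and succ, and every other non-negative integer is a recipe built from them.

```python
# Peano-style naturals: the only primitives are zero() and succ(n).
# Every non-negative integer becomes a recipe built from just those two.

def zero():
    return ()                 # an arbitrary token standing for "nothing yet"

def succ(n):
    return (n,)               # wrap the previous recipe one more time

def to_recipe(k: int):
    """Turn an ordinary int into succ(succ(...succ(zero())...))."""
    n = zero()
    for _ in range(k):
        n = succ(n)
    return n

def run_recipe(n) -> int:
    """Evaluate a recipe back into an ordinary int by counting the wrappers."""
    count = 0
    while n != zero():
        n = n[0]
        count += 1
    return count

two = succ(succ(zero()))
print(run_recipe(two))            # 2
print(run_recipe(to_recipe(7)))   # 7
```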
This is all made much more explicit by the construction of the surreal numbers. A surreal number is written as two sets, stating which other surreal numbers a given number "knows" are less than it and which are greater than it. For example, 0 is written as { | }, indicating that zero is fundamental enough that it doesn't use any other numbers in its construction. By contrast, 1 is written as { 0 | }, indicating that it is larger than zero but doesn't require any other numbers to define. Following the pattern, 2 = { 0, 1 | } and 3 = { 0, 1, 2 | }, while −1 = { | 0 } by the same argument.
What makes the surreal numbers unique is that you can assign each a birthday, roughly corresponding to how many dependencies your computational recipe has. Zero has birthday 0, since { | } doesn't require any other numbers to be defined yet. Similarly, both 1 = { 0 | } and −1 = { | 0 } have birthday 1, since you only need one other number to define them. Numbers like 2 = { 0, 1 | } and ½ = { 0 | 1 } have birthday 2, and so forth.
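Here's a toy version of that bookkeeping in code, taking each written form at face value: the birthday of { L | R } is one more than the largest birthday it mentions, which matches the canonical forms above. (The real theory also identifies equivalent forms, like { −1 | 1 } = 0; this sketch ignores that.)

```python
# Toy surreal forms: a "number" is just (left options, right options).
# Birthday here = 1 + the largest birthday among the options it mentions,
# which agrees with the canonical forms in the post ({ | } = 0, { 0 | } = 1, ...).

def surreal(left=(), right=()):
    return (tuple(left), tuple(right))

def birthday(x):
    left, right = x
    options = left + right
    if not options:
        return 0                      # { | } mentions nothing: born on day 0
    return 1 + max(birthday(o) for o in options)

ZERO = surreal()                          # { | }      = 0
ONE = surreal(left=[ZERO])                # { 0 | }    = 1
NEG_ONE = surreal(right=[ZERO])           # { | 0 }    = -1
TWO = surreal(left=[ZERO, ONE])           # { 0, 1 | } = 2
HALF = surreal(left=[ZERO], right=[ONE])  # { 0 | 1 }  = 1/2

for name, x in [("0", ZERO), ("1", ONE), ("-1", NEG_ONE), ("2", TWO), ("1/2", HALF)]:
    print(f"{name} has birthday {birthday(x)}")
# expected: 0 -> 0, 1 -> 1, -1 -> 1, 2 -> 2, 1/2 -> 2
```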
From that perspective, what makes 2 and √2 different from each other as computational recipes is that 2 has a finite birthday, while √2 does not. To define something like √x, you need an infinitely long sum of terms, each giving you a more precise "answer." Each term in that sum is an nth Taylor term, something of the shape f⁽ⁿ⁾(a) · (x − a)ⁿ / n!, meaning that our recipe at a minimum depends on every n we need for the terms in that sum. As a result, √2 is at least as complex to write down as a list of every non-negative integer is (that is, √2 has a birthday at least as large as ω = { ℕ | } = { 0, 1, 2, … | }). Importantly, there's no shorter recipe for √2: we don't have any way of defining that number that's simpler than by reference to the "√(·)" and "2" recipes. By contrast, √4 could be defined in the same way as √2, but there's also the much simpler recipe that we already gave above: 2 = { 0, 1 | }.
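To watch that recipe grind along, here are partial sums of one such infinite sum for √2 (the Taylor/binomial series for √(1 + t) evaluated at t = 1; the term counts are arbitrary). Every extra n buys a bit more precision, and no finite prefix of the sum is ever "done."

```python
# Partial sums of the binomial (Taylor) series (1 + t)**0.5 = sum over n of C(1/2, n) * t**n,
# evaluated at t = 1: one concrete infinite-sum recipe for sqrt(2).
# More terms -> more correct digits. (This particular series converges slowly;
# it's here for the idea, not the speed.)

from math import sqrt

def sqrt2_partial_sum(terms: int) -> float:
    total, coeff = 0.0, 1.0           # coeff starts at C(1/2, 0) = 1
    for n in range(terms):
        total += coeff                # t = 1, so the nth term is just the coefficient
        coeff *= (0.5 - n) / (n + 1)  # C(1/2, n+1) = C(1/2, n) * (1/2 - n) / (n + 1)
    return total

for terms in (5, 50, 500, 5000):
    approx = sqrt2_partial_sum(terms)
    print(f"{terms:>5} terms: {approx:.10f}   error {abs(approx - sqrt(2)):.1e}")
```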
This is absolutely A Lot™, but my point is that there's always a computational recipe hidden in our concept of what a number is. For the things that we like to think of as numbers, that recipe is short and compact enough that we can for the most part ignore it, but when we get to more involved operations like √(·) that require an infinite sum of terms to compute exactly, the complexity of that computational recipe is much harder to ignore.
I've written elsewhere that I dislike the phrase "imaginary" number because it implies that some numbers aren't just part of human imagination.
But I want to explore this by pulling on a thread from your post.
While we're used to thinking of numbers as strings of digits written in some base, that's a notation that itself implies a computational recipe [...] Math doesn't come with any pre-defined notion, though, of what those recipes are.
Decimal notation is arbitrary, and there are other totally-valid notations. Not just bullshit silly joke bases, but useful and convenient ones like Continued Fractions. And due to something that seems very adjacent to the Sapir-Whorf Hypothesis, I propose it's the notation that gives us a vibe that some numbers are "more finite" than others.
The Continued Fraction representation of √2 is just [1; 2, 2, 2...]. Just a buncha twos. Repeating twos forever. If you want to know the millionth digit of the decimal notation of √2 you have to use Algorithms and Tricks, but if you want to know the millionth term of the continued-fraction notation of √2, the answer is "2."
This is still infinite, but now it's infinite in the way that ⅓ is in decimal. It's infinity, but just one thing an infinite number of times. You can wrap your head around it easily. And it's qualitatively different from something like e (non-periodic but regular) or π (no known pattern).
I think this is really cool! In decimal notation, √2 "looks like" π, i.e. an inscrutable sequence of digits. But they're not the same. They're totally different types of numbers: √2 is algebraic (a root of a polynomial, in fact a quadratic irrational), and π is transcendental. And continued fractions make that much more apparent: a repeating continued fraction always means a quadratic irrational. There's a rational argument, for people who like both rationals and arguments, that continued fractions are a more "natural" notation than decimals.
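If you want to see those three personalities side by side, here's a quick sketch that just runs the standard "take the integer part, flip the remainder" loop at high precision (it leans on the mpmath library for good values of e and π; the precision and term counts are arbitrary).

```python
# Continued-fraction expansions of sqrt(2), e, and pi.
# Expected shapes: sqrt(2) -> [1; 2, 2, 2, ...]                 (periodic)
#                  e       -> [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]  (patterned, not periodic)
#                  pi      -> [3; 7, 15, 1, 292, ...]           (no known pattern)

from mpmath import mp, floor, sqrt, e, pi

mp.dps = 200                     # plenty of working digits for a few dozen terms

def continued_fraction(x, terms=20):
    out = []
    for _ in range(terms):
        a = int(floor(x))
        out.append(a)
        frac = x - a
        if frac == 0:
            break                # rational input: the expansion terminates
        x = 1 / frac
    return out

for name, value in [("sqrt(2)", sqrt(2)), ("e", +e), ("pi", +pi)]:
    print(f"{name:8} {continued_fraction(value)}")
```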
So that's three notation systems so far: Decimal, Surreals, and Continued Fraction. They all agree that 2 is "easy" and √2 is "hard." But they disagree in other particulars: Surreals basically say that 3 and ∞ are equally-easy, while the other two say that ∞ is "harder." And Decimal notation says that √2, e, and π are equally-hard, while Continued Fractions separates them (not sure what Surreal numbers say on this subject).
So my main point is that some numbers seem more complicated than others, but it's often a question of perspective. And personally I consider decimal notation a type of brain-worms that fixes you into a particular perspective of what's more or less number-ish, and it's sometimes useful to break out of that perspective.
Nonetheless, it's still true that a whole buncha numbers are basically inaccessible with finite means. Most of them, actually. That way lies Kolmogorov Complexity, and saying that the definition of a number is literally the Turing machine that prints its digits.
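In that spirit, here's the degenerate endpoint of the whole thread, as a toy: √2 "defined" as nothing more than a short program that prints its digits (math.isqrt does the heavy lifting).

```python
# "The number sqrt(2)" as literally the program that prints its digits:
# floor(sqrt(2) * 10**k) = isqrt(2 * 10**(2k)), so each extra k yields one more digit.

from math import isqrt

def sqrt2_digits(count: int) -> str:
    """First `count` digits of sqrt(2), via exact integer square roots."""
    n = isqrt(2 * 10 ** (2 * (count - 1)))   # floor(sqrt(2) * 10**(count - 1))
    s = str(n)
    return s[0] + "." + s[1:]

print(sqrt2_digits(50))
# 1.4142135623730950488016887242096980785696718753769
```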
