This is not necessarily true. It is possible for all real numbers (and indeed all mathematical objects) to be definable under ZFC. It is also possible for that not to be the case. ZFC is mum on the issue.
Basically you can't do a standard countability argument because you can't enumerate the definable objects, and you can't do that because you can't uniformly define "definability": the naive definition falls prey to Liar's-Paradox-type problems.
I think you're overthinking it. Define a "number definition system" to be any (maybe partial) mapping from finite-length strings on a finite alphabet to numbers. The string that maps to a number is the number's definition in the system. Then for any number definition system, almost all real numbers have no definition.
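To make that concrete, here is a minimal sketch of one such "number definition system" (the tiny expression language and the restricted `eval` interpreter are illustrative assumptions, not anything canonical). Since there are only countably many finite strings over a finite alphabet, the image of any such map is at most countable, so almost all reals have no definition in the system:

```python
def define(s):
    """Return the number the string s defines in this system, or None if none."""
    # Toy definition language (an assumption): arithmetic expressions only.
    if not set(s) <= set("0123456789.+-*/()"):
        return None
    try:
        v = eval(s, {"__builtins__": {}})  # restricted interpreter
        return v if isinstance(v, (int, float)) else None
    except Exception:
        return None

assert define("3.14") == 3.14
assert define("1/3") == 1/3
assert define("sqrt(2)") is None  # outside this system's language
```

A richer system (English sentences, first-order formulas) changes the interpreter but not the counting: the domain is still a countable set of finite strings.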
Sure, you can do that. The parent's point is that if you want this mapping to obey the rules that an actual definition in (say) first-order logic must obey, you run into trouble. In order to talk about definability without running into paradoxes, you need to do it "outside" your actual theory. And then statements about cardinalities - for example "There's more real numbers than there are definitions." - don't mean exactly what you'd intuitively expect. See the result about ZFC having countable models (as seen from the "outside") despite being able to prove uncountable sets exist (as seen from the "inside").
No, this is a standard fallacy that is covered in most introductory mathematical logic courses (under Tarski's undefinability of truth result).
> Define a "number definition system" to be any (maybe partial) mapping from finite-length strings on a finite alphabet to numbers.
At this level of generality with no restrictions on "mapping", you can define a mapping from finite-length strings to all real numbers.
In particular there is the Löwenheim-Skolem theorem, one of whose corollaries is that if you have access to powerful enough maps, the real numbers become countable. (The downward Löwenheim-Skolem theorem says that there is a countable model of ZFC, assuming ZFC has any model at all; more generally, if a countable first-order theory has an infinite model, it has models of every infinite cardinality.)
Normally you don't have to be careful about defining maps in an introductory analysis course because it's usually difficult to accidentally create maps that are beyond the ability of ZFC to define. However, you have to be careful in your definition of maps when dealing with things that have the possibility of being self-referential because that can easily cross that barrier.
Here's an easy example showing why "definable real number" is not well-defined (or more directly that its complement "non-definable real number" is not well-defined). By the axiom of choice in ZFC we know that there is a well-ordering of the real numbers. Fix this well-ordering. The set of all undefinable real numbers is a subset of the real numbers and therefore well-ordered. Take its least element. We have uniquely identified a "non-definable" real number. (Variations of this technique can be used to uniquely identify ever larger swathes of "non-definable" real numbers and you don't need choice for it, it's just more involved to explain without choice and besides if you don't have choice, cardinality gets weird).
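This minimization trick has a computable cousin, Berry's paradox, which you can demonstrate directly. In the sketch below (the toy expression language and the length bound of 3 are assumptions for illustration), the program computes the least natural number with no definition of length at most 3 in the toy language, and in doing so uniquely identifies that "undefinable" number with a short description:

```python
from itertools import product

ALPHABET = "0123456789+*"  # toy definition language (an assumption)

def definable(max_len):
    """Naturals defined by some expression of length <= max_len."""
    vals = set()
    for n in range(1, max_len + 1):
        for chars in product(ALPHABET, repeat=n):
            try:
                v = eval("".join(chars), {"__builtins__": {}})
            except Exception:
                continue  # not a well-formed definition, e.g. "9+*"
            if isinstance(v, int) and v >= 0:
                vals.add(v)
    return vals

vals = definable(3)
# The least natural with no definition of length <= 3 in the toy language...
least = min(n for n in range(10**5) if n not in vals)
# ...has now been uniquely identified by this short program: the Berry trap.
```

The same tension, one level up, is exactly what blocks a uniform definition of "definable real number" inside ZFC.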
Again, as soon as you start talking about concepts that have the potential to be self-referential such as "definability," you have to be very careful about what kinds of arguments you're making, especially with regards to cardinality.
Cardinality is a "relative" concept. The common intuition (arising from the property that set cardinality forms a total ordering under ZFC) is that all sets have an intrinsic "size" and cardinality is that "size." But this intuition occasionally falls apart, especially when we start playing with the ability to "inject" more maps into our mathematical system.
Another way to think about cardinality is as a generalization of computability that measures how "scrambled" a set is.
We can think of indexing by the natural numbers as "unscrambling" a set back to the natural numbers.
We begin with complexity theory where we have different computable ways of "unscrambling" a set back to the natural numbers that take more and more time.
Then we go to computability theory where we end up at non-computably enumerable sets, that is sets that are so scrambled that there is no way to unscramble them back to the natural numbers via a Turing Machine. But we can still theoretically unscramble them back to the natural numbers if we drop the computability requirement. At this point we're at definability in our chosen mathematical theory and therefore cardinality: we can define some function that lets us do the unscrambling even if the actual unscrambling is not computable. But there are some sets that are so scrambled that even definability in our theory is not strong enough to unscramble them. This doesn't necessarily mean that they're actually any "bigger" than the natural numbers! Just that they're so scrambled we don't know how to map them back to the natural numbers within our current theory.
This intuition lets us nicely resolve why there aren't "more" rational numbers than natural numbers but there are "more" real numbers than natural numbers. In either case it's not that there's "more" or "less"; it's just that the rational numbers are less scrambled than the real numbers. The former is orderly enough that we can unscramble it back to the natural numbers with a highly inefficient, but nonetheless computable, process. The latter is so scrambled that we have no way in ZFC to unscramble them back (but if you gave us access to even more powerful maps, we could unscramble the real numbers back to the natural numbers, hence Löwenheim-Skolem).
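The "highly inefficient but computable unscrambling" of the rationals can be made concrete with the standard zig-zag enumeration (a sketch of the classic construction, shown here for positive rationals; extending to all rationals is routine):

```python
from fractions import Fraction
from math import gcd

def rationals():
    """Enumerate every positive rational exactly once: walk the diagonals
    of the (numerator, denominator) grid, skipping unreduced duplicates."""
    s = 2  # s = numerator + denominator, one diagonal at a time
    while True:
        for p in range(1, s):
            q = s - p
            if gcd(p, q) == 1:  # skip duplicates like 2/4 of 1/2
                yield Fraction(p, q)
        s += 1

gen = rationals()
first = [next(gen) for _ in range(6)]
# first is [1, 1/2, 2, 1/3, 3, 1/4]: the set is "unscrambled" back to 0, 1, 2, ...
```

No such computable, or even ZFC-definable, unscrambling exists for the reals; that, on this view, is all their larger cardinality asserts.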
It doesn't mean that in some deep Platonic sense this map doesn't exist. Maybe it does! Our theory might just be too weak to be able to recognize the map. Indeed, there are logicians who believe that in some deep sense, all sets are countable! It's just the limitations of theories that prevent us from seeing this. (See for example the sketch laid out here: https://plato.stanford.edu/entries/paradox-skolem/#3.2). Note that this is a philosophical belief and not a theorem (since we are moving away from formal definitions of "countability" and more towards philosophical notions of "what is 'countability' really?"). But it does serve to show how it might be philosophically plausible for all real numbers, and indeed all mathematical objects, to be definable.
I'll repeat Hamkins' lines from the Math Overflow post because they nicely summarize the situation.
> In these pointwise definable models, every object is uniquely specified as the unique object satisfying a certain property. Although this is true, the models also believe that the reals are uncountable and so on, since they satisfy ZFC and this theory proves that. The models are simply not able to assemble the definability function that maps each definition to the object it defines.
> And therefore neither are you able to do this in general. The claims made in both in your question and the Wikipedia page [no longer on the Wikipedia page] on the existence of non-definable numbers and objects, are simply unwarranted. For all you know, our set-theoretic universe is pointwise definable, and every object is uniquely specified by a property.
I think I understand your argument (you could define "the smallest 'undefinable' number" and now it has a definition) but I still don't see how it overcomes the fact that there are a countable number of strings and an uncountable number of reals. Can you exhibit a bijection between finite-length strings and the real numbers? It seems like any purported such function could be diagonalized.
My other reply is so long that HN collapsed it, but addresses your particular question about how to create the mapping between finite-length strings and the real numbers.
Here's another lens that doesn't answer that question, but offers another intuition of why "the fact that there are a countable number of strings and an uncountable number of reals" doesn't help.
For convenience I'm going to distinguish between "collections" which are informal groups of elements and "sets" which are formal mathematical objects in some kind of formal foundational set theory (which we'll assume for simplicity is ZFC, but we could use others).
My argument demonstrates that "the definable real numbers" does not define a set. A corollary of this is that the subcollection of finite strings that serve as definitions of unique real numbers is not necessarily an actual subset of the finite strings.
Your appeal that such definitions are themselves clearly finite strings is only enough to demonstrate that they are a subcollection, not a subset. You can only demonstrate that they are a subset if you could demonstrate that the definable real numbers form a subset of the real numbers which as I prove you cannot.
Then any cardinality arguments fail, because cardinality only applies to sets, not collections (which ZFC can't even talk about).
After all, strictly speaking, an uncountable set does not mean that such a set is necessarily "larger" than a countable set. All it means is that our formal system prevents us from counting its members.
There are subcollections of the set of finite strings that cannot be counted by any Turing Machine (non-computably enumerable sets). It's not so crazy that there might be subcollections of the set of finite strings that cannot be counted by ZFC. And then there's no way of comparing the cardinality of such a subcollection with the reals.
Another way of putting it is this: you can diagonalize your way out of any purported injection between the reals and the natural numbers. I can just the same diagonalize my way out of any purported injection between the collection of definable real numbers and the natural numbers. Give me such an enumeration of the definable real numbers. I change every digit diagonally. This uniquely defines a new real number not in your enumeration.
Perhaps even more shockingly, I can diagonalize my way out of any purported injection from the collection of finite strings uniquely identifying real numbers to the set of all natural numbers. You purport to give me such an enumeration. I add a new string that says "create the real number such that the nth digit is different from the real number of the nth definition string." Hence such a collection is an uncountable subcollection of a countable set.
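Both diagonal moves above can be sketched mechanically. In the sketch below, reals are represented (an encoding assumption) by their digit functions; given any purported enumeration, we build a real that disagrees with the nth entry at the nth digit:

```python
def diagonal(enum):
    """Given enum(n) -> (k -> kth digit of the nth real), return the digit
    function of a real differing from every enumerated real on the diagonal."""
    def digit(k):
        # Swap between 5 and 6: avoids 0 and 9, dodging the 0.0999... = 0.1 issue.
        return 5 if enum(k)(k) != 5 else 6
    return digit

# Any attempted enumeration, e.g. "the nth real has every digit equal to n mod 10":
enum = lambda n: (lambda k: n % 10)
d = diagonal(enum)
# d differs from the nth real at digit n, so it appears nowhere in the enumeration.
assert all(d(n) != enum(n)(n) for n in range(100))
```

The point of the paragraphs above is that the same mechanical move applies whether the purported enumeration covers "the reals" or "the definable reals."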
> Can you exhibit a bijection between finite-length strings and the real numbers? It seems like any purported such function could be diagonalized.
Let's start with a mirror statement. Can you exhibit a bijection between definitions and the subset of the real numbers they are supposed to refer to? It seems like any purported such bijection could be made incoherent by a similar minimization argument.
In particular, no such function from the finite strings to the real numbers can exist according to the axioms of ZFC, but a more abstract mapping might. In much the same way, no such function from definitions to (even a subset of) the real numbers can exist according to the axioms of ZFC, but you seem to believe a more abstract mapping might.
I think your thoughts are maybe something along these lines:
"Okay so fine maybe the function that surjectively maps definitions to the definable real numbers cannot exist, formally. It's a clever little trick that whenever you try to build such a function you can prove a contradiction using a version of the Liar's Paradox [minimality]. Clearly it definitely exists though right? After all the set of all finite strings is clearly smaller than the real numbers and it's gotta be one of the maps from finite strings to the real numbers, even if the function can't formally exist. That's just a weird limitation of formal mathematics and doesn't matter for the 'real world'."
But I can derive an almost exactly analogous thing for cardinality.
"Okay so fine maybe the function that surjectively maps the natural numbers to the real numbers cannot exist, formally. It's a clever little trick that whenever you try to build such a function you can prove a contradiction using a version of the Liar's Paradox [diagonalization]. Clearly it definitely exists though right? After all the set of all natural numbers is clearly just as inexhaustible as the real numbers and it's gotta be one of the maps from the natural numbers to the real numbers, even if the function can't formally exist. That's just a weird limitation of formal mathematics and doesn't matter for the 'real world'."
I suspect that you feel more comfortable with the concept of cardinality than definability and therefore feel that "the set of all finite strings is clearly 'smaller' than the real numbers" is a more "solid" base. But actually, as hopefully my phrasing above suggests, the two scenarios are quite similar to each other. The formalities that prevent you from building a definability function are no less artificial than the formalities that prevent you from building a surjection from the natural numbers to the real numbers (and indeed fundamentally are the same: the Liar's Paradox).
So, to understand how I would build a map that maps the set of finite strings to the real numbers, when no such map can formally exist in ZFC, let's begin by understanding how I would rigorously build a map that maps all sets to themselves (i.e. the identity mapping), even when no such map can formally exist as a function in ZFC (because there is no set of all sets).
(I'm choosing the word "map" here intentionally; I'll treat "function" as a formal object which ZFC can prove exists and "map" as some more abstract thing that ZFC may believe cannot exist).
We'll need a detour through model theory, where I'll use monoids as an illustrative example.
The definition of an (algebraic) monoid can be thought of as a list of logical axioms and vice versa. Anything that satisfies a list of axioms is called a model of those axioms. So e.g. every monoid is a model of "monoid theory," i.e. the axioms of a monoid. Interestingly, the elements of a monoid can themselves be monoids! For example, take the set {{}, {0}, {0, 1}, {0, 1, 2}, ...} as the underlying set of a monoid whose operation is set union; each element {0, ..., n-1} is itself a monoid under addition mod n.
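The toy example can be sanity-checked mechanically (a finite truncation only, as a sketch; the chain {}, {0}, {0, 1}, ... is of course infinite):

```python
# Parent structure: elements {}, {0}, {0,1}, ..., operation = set union.
elems = [frozenset(range(n)) for n in range(6)]  # finite truncation

def op(a, b):
    return a | b

e = frozenset()  # identity element {}
assert all(op(e, a) == a == op(a, e) for a in elems)                 # identity law
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a in elems for b in elems for c in elems)             # associativity
assert all(op(a, b) in elems for a in elems for b in elems)          # closure

# And each element is itself a monoid under addition mod n, e.g. n = 3:
assert all((a + b) % 3 in range(3) for a in range(3) for b in range(3))
```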
In this case not only is the parent monoid a model of monoid theory, each of its elements is also a model of monoid theory. We can then in principle use the parent monoid to "analyze" each of its individual elements and find out attributes of those elements. In practice this is basically impossible with monoid theory, because you can't say many interesting things with the monoid axioms. Let's turn instead to set theory.
What does this mean for ZFC? Well, ZFC is a list of axioms, which means it can also be viewed as a definition of a mathematical object: in this case a set universe (not just a single set!). And just like a monoid can contain elements which are themselves monoids, a set universe can contain sets that are themselves set universes.
In particular, for a given set universe of ZFC, there must (granting the usual caveats about consistency assumptions) be a countable set in that universe which itself satisfies the ZFC axioms and is therefore a set universe in and of itself (and moreover such a countable set's members are themselves all countable sets)!
Using these "miniature" models of ZFC lets us understand a lot of things that we cannot talk about directly within ZFC. For example, we can't formally make functions that map all sets to all sets in ZFC (because the domain and the codomain of a function must both be sets and there is no set of all sets), but we can talk about functions from all sets of our small countable set S (which models ZFC) to all sets of S, and then use them to deduce facts about our larger background model. Crucially, though, such a function on all the sets of S cannot itself be a member of S; otherwise S would violate the axioms of ZFC and would no longer be a model of ZFC! More broadly, there are many sets in S that, thanks to functions present in our background model but not in S, are countable from the perspective of the background model, yet are uncountable within S, because S lacks the function that would realize the bijection.
This is what we mean when we talk about an "external" view that uses objects outside of our miniature model to analyze its internal objects, and an "internal" view that only uses objects inside of our miniature model.
Indeed this is how I can rigorously reason about an identity map that maps all sets to themselves, even when no such identity function exists in ZFC (because again the domain and codomain of a function must be sets and there is no set of all sets!). I create an "external" identity map that is only a function in my external model of ZFC, but does not exist at all in my set S (and hence S can generate no contradiction to the ZFC axioms it claims to model because it has no such function internally).
And that is how we can talk about the properties of a definability map rigorously without being able to construct one formally. I can construct a map, which is a function in my external model but not in S, that takes the finite strings of S (encoded as sets, as all things are if you take ZFC as your foundation) that form definitions to some subset of the real numbers of S. But there are multiple such maps! Some maps from the finite strings of S to the real numbers "run out of finite strings," but we know that all the elements of S are themselves countable, including the real numbers of S (or at least S's conception of the real numbers)! Therefore we can construct a bijective mapping from the finite strings of S to the real numbers of S. Remember, no such function exists in S, but this is a function in our external model of ZFC.
Since this mapping is not a function within S, there is no contradiction of Cantor's Theorem. But it does mean that such a mapping from the finite strings of S to the real numbers of S exists, even if it's not as a formal function within S. And hence we have to grapple with the problem of whether such a mapping likewise exists in our background model (i.e. "reality"), even if we cannot formally construct such a mapping as a function within our background model.
And this is what I mean when I say it is possible for all objects to have definitions and for there to be a mapping from finite strings to all real numbers, even if no such formal function exists. Cardinality of sets is not an absolute property of sets; it is relative to what kinds of functions you can construct. Viewed through this lens, the fact that there is no definability function that maps definitions to the real numbers is just as real a fact as the fact that there is no surjective function from the natural numbers to the real numbers. It is strange to say that the former is just a "formality" and the latter is "real."
For more details on all this, read about Skolem's Paradox.
Leads to really fun statements like "there exists a proof that all reals are equal to themselves" and "there does not exist a proof for every real number that it is equal to itself" (because `x=x`, for most real numbers, can't even be written down, there are more numbers than proofs).
Any HN comment is a finite expression, so it's impossible for me to specify a particular one. But the number of finite expressions is countable, and the number of reals is vastly more than a countable number, so most reals cannot be described in any human sense.
I think (I am not a mathematician) that depends on whether you accept non-constructive proofs as valid. Normally you reason that any mapping from natural numbers onto the reals is incomplete (eg Cantor's argument), and that the sets of computable or describable numbers are countable, and therefore there exist indescribable real numbers. But if you don't like that last step, you do have company:
There are more infinite sequences than finite ones.
So not all infinite sequences can be uniquely specified by a finite description.
Like √2 is a finite description, and so is the definition of π, but since there is no way to map the set of finite descriptions surjectively onto the set of infinite sequences, any one approach will leave holes.
But doesn't this assume what you intend to show? Of course you can't specify an infinite and non-repeating sequence, but how do you know that is a number?
Slightly longer answer: decimal numbers between 0 and 1 can be written as the sum a_1*10^-1 + a_2*10^-2 + ... + a_i*10^-i + ..., where each a_i is one of 0,1,2,3,4,5,6,7,8,9. For series of this shape you can prove that two such sums are equal if and only if their digit sequences are all the same (up to the slight complication that 0.0999... = 0.1 and similar).
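The expansion (with negative exponents, a_i*10^-i, since the number is below 1) can be checked with exact rational arithmetic; a sketch using Python's `Fraction` to avoid float rounding (the digit-list encoding is an assumption for illustration):

```python
from fractions import Fraction

def decimal_value(digits):
    """Exact value of 0.d1 d2 d3 ... for a finite digit list: sum of d_i * 10**-i."""
    return sum(Fraction(d, 10 ** i) for i, d in enumerate(digits, start=1))

assert decimal_value([1]) == Fraction(1, 10)
# Finite truncations of 0.0999... approach 0.1 from below, gap shrinking to zero:
assert Fraction(1, 10) - decimal_value([0] + [9] * 6) == Fraction(1, 10 ** 7)
```

In the limit the gap vanishes, which is exactly the 0.0999... = 0.1 identification mentioned above.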
You can't know. However, it is a consequence of the axiom of choice (AC). You can't know if AC is true either; but mathematics without it is really, really hard, so it is usually assumed.
Even crazier than that: almost all numbers cannot be defined with any finite expression.