Anti-anti Banach-Tarski arguments

Many people, more often than not people from analysis or worse (read: physicists, who are in general not bad, but I am bothered when they think they have a say in how theoretical mathematics should be done), from pseudo-mathematical, non-mathematical, or philosophical communities, and from time to time actual mathematicians, will say ridiculous things like “We need to omit the axiom of choice and keep only Dependent Choice, since the axiom of choice is a source of constant bookkeeping in the form of non-measurable sets”.

People often like to cite the paradoxical decomposition of the unit sphere given by Banach-Tarski. “Yes, it doesn’t make any sense, therefore the axiom of choice needs to be omitted”.

To those people I say that they know too little. The axiom of choice is not at fault here. The axiom of infinity is. Infinite objects are weird. Period. End of discussion.

Don’t believe me? Here’s my favorite rebuttal:

Theorem (ZF+DC). Suppose that all sets of reals are Lebesgue measurable. Then there is a partition of the real line into strictly more parts than elements.

Proof. If $\aleph_1\leq2^{\aleph_0}$ then there is a non-measurable set (this is a theorem of Raisonnier, building on work of Shelah). Therefore $\aleph_1\nleq2^{\aleph_0}$. However, there is a definable surjection from $\Bbb R$ onto $\omega_1$:

Fix a bijection $e\colon(0,1)\to(0,1)^\omega$. If $r\in(0,1)$ is such that the range of $e(r)$ is well-ordered under the natural order of the real numbers, map $r$ to the order type of that range; map every other real to $0$. This is a surjection onto $\omega_1$: every countable ordinal order-embeds into $\Bbb Q\cap(0,1)$, and every countable subset of $(0,1)$ is the range of some $e(r)$.
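Spelled out as a single formula, with $\operatorname{otp}$ for order type and $\operatorname{ran}$ for the range of a sequence (notation I am attaching here, not in the proof above), the map is
$$f(r)=\begin{cases}\operatorname{otp}(\operatorname{ran}(e(r)))&\text{if }r\in(0,1)\text{ and }\operatorname{ran}(e(r))\text{ is well-ordered by }<,\\0&\text{otherwise.}\end{cases}$$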

Consider the partition consisting of the singletons from $\Bbb R\setminus(0,1)$ together with the preimages of each countable ordinal under the surjection above. This has $2^{\aleph_0}+\aleph_1$ equivalence classes, and since $2^{\aleph_0}$ and $\aleph_1$ are incomparable as cardinals, this is a strictly larger partition. $\square$
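In case the strictness seems slippery in the absence of choice, here is the one-line verification, using nothing beyond ZF:
$$2^{\aleph_0}+\aleph_1=2^{\aleph_0}\quad\Longrightarrow\quad\aleph_1\leq2^{\aleph_0}+\aleph_1=2^{\aleph_0},$$
which the first line of the proof rules out; and $2^{\aleph_0}\leq2^{\aleph_0}+\aleph_1$ is witnessed by the singleton classes alone. So there are strictly more parts than elements.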

We can do other crazy partitions too. It all depends on how much you are willing to work, and how much more you are willing to assume.

How is this not a paradoxical result? More parts than elements, all of which are non-empty? Is this not worse than the Banach-Tarski paradox, or at least comparably horrible? In fact, the mere fact that we can partition $\Bbb R$ into $\aleph_1$ parts, a number of parts incomparable with the number of elements, should be alarming.

Many people will disregard that, but disregarding this sort of paradox is exactly what we do when we restrict ourselves to Borel sets, or to Lebesgue measurable sets: we disregard the part that bothers us. And the axiom of choice has been so good to us in so many ways that discarding it only for the sake of not having to cope with the Banach-Tarski paradox is plain stupid.

Ramsey cardinals are large large small large cardinals

There is no well-defined notion of what a large cardinal is. In some contexts these are inaccessible cardinals, in others these are critical points of elementary embeddings, and sometimes $\aleph_\omega$ is a large cardinal.

But we can clearly discern various degrees of largeness by how much structure the existence of the cardinal imposes. The existence of an inaccessible cardinal proves that there is a model of second-order ZFC, Ramsey cardinals imply $V\neq L$, and strongly compact cardinals even imply that $\forall A(V\neq L[A])$.

So we can humorously classify these notions. For us, large cardinals will from the start be regular limit cardinals.

We begin with the large large cardinals: these are critical points of elementary embeddings (into transitive classes, of course). The first measurable is a small large large cardinal, and a strongly compact cardinal is a large large large cardinal. Woodin cardinals lie somewhere in between (although they are not quite critical points, since the first Woodin is not weakly compact), and they have quite a rich structure, so we can say that they are large small large large cardinals. If we examine the higher levels of Cantor’s Attic, then supercompacts are small large large large cardinals, extendible cardinals are large large large large cardinals, and superhuge cardinals are large large large large large cardinals.

On the other side of the scale are the small large cardinals. Among them, the small small large cardinals are those compatible with $V=L$ and with few structural consequences: inaccessible cardinals are small small small large cardinals; remarkable, subtle, and ineffable cardinals are large small small large cardinals, being closer to the point of enforcing $V\neq L$; and weakly compact cardinals, along with the indescribable cardinals, make up the bulk of the middle of the small small large cardinals.

Finally we arrive at the gap between $0^\#$ and measurable cardinals; there lie the large small large cardinals, from the $\omega_1$-Erdős cardinals, which are small large small large cardinals, to the titular Ramsey cardinals, which are large large small large cardinals.
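To keep score, here is the whole ladder in one place (my own recap of the classification above, roughly in increasing order of largeness):

inaccessible: small small small large
weakly compact, indescribable: the middle of the small small large
remarkable, subtle, ineffable: large small small large
$\omega_1$-Erdős: small large small large
Ramsey: large large small large
measurable: small large large
Woodin: large small large large
strongly compact: large large large
supercompact: small large large large
extendible: large large large large
superhuge: large large large large large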

My love-hate relationship with forcing

Forcing is great. Forcing is an amazing method. If you can think about it, then you can probably force to make it happen. All it requires is some creativity and rudimentary understanding of the objects that you are working with.

Forcing is horrible. If you can think about it, you can encode it into generic objects. If you can’t think about it, you can encode it into generic objects. If you think that you can’t encode it into generic objects, then you are probably wrong, and you can still encode it into generic objects.

It gets even worse when talking about names and their properties. On the one hand, forcing is awesome. It allows us to talk with relative certainty about objects “in the next world”, and that is great. On the other hand, forcing is horrible, because when you really want to talk about the objects, you can’t: they don’t exist yet, except by name, until some generic deity breathes life into them in the form of interpretation.

But then again, you can still encode all those crazy ideas into these names. How awesful is that?