# What a long strange trip it’s been…

As some of you may have noticed, I don’t use this blog to write about my papers in the “traditional way” that math bloggers summarize and explain their recent work. I think my papers are prosaic enough to do that on their own. I do use this blog as an outlet when I have to complain about the arduous toil of being a mathematician (which has an immensely bright light side, of course, so in the big picture I’m quite happy with it).

This morning I woke up to see that my paper about the Bristol model was announced on arXiv. But unbeknownst to the common arXiv follower, this also marks the end of my thesis. The Hebrew University is kind enough to allow you to just stitch together a bunch of your papers (along with an added introduction) and call it a thesis. And by “stitch” I mean literally. If the papers were published, you’re even allowed to use the published .pdf (on the condition that no copyright infringement occurs).

My dissertation is composed of three papers, all of which are on arXiv (links in the “Papers” page of this site):

1. Iterating symmetric extensions;
2. Fodor’s lemma can fail everywhere; and
3. The Bristol model: an abyss called a Cohen real.

Of course, the ideal situation would be for all three papers to have been accepted for publication, but all three of them are still under review. This puts me in the odd situation of having essentially four sets of referees (one for each paper, and then two additional referees for the thesis), so the text may end up differing between the resulting dissertation and the published papers. But that’s fine.

In any case, those of you who are interested in reading my thesis can find it in those three papers. I will probably post the final thesis online once it is approved, but the only thing you’re currently missing out on is an introduction with some minor historical background and a summary of the three papers. So if you read all three, you don’t really need that introduction anyway.

Good. So what next? I have a few things lined up. More news will follow as reality unfolds itself like a reverse origami.

# Stationary preserving permutations are the identity on a club

This is not something particularly interesting, I think. But it’s a nice exercise in Fodor’s lemma.

Theorem. Suppose that $\kappa$ is regular and uncountable, and $\pi\colon\kappa\to\kappa$ is a bijection mapping stationary sets to stationary sets. Then there is a club $C\subseteq\kappa$ such that $\pi\restriction C=\operatorname{id}$.

Proof. Note that the set $\{\alpha\mid\pi(\alpha)<\alpha\}$ is non-stationary: otherwise, by Fodor's lemma, $\pi$ would be constant on a stationary subset, contradicting injectivity. This means that $\{\alpha\mid\pi(\alpha)\geq\alpha\}$ contains a club. The same argument applied to $\pi^{-1}$ shows that $\{\beta\mid\pi^{-1}(\beta)<\beta\}$ is non-stationary; its preimage under $\pi$ is exactly $\{\alpha\mid\pi(\alpha)>\alpha\}$, and this preimage is non-stationary as well, since otherwise $\pi$ would map it onto a stationary set. So $\{\alpha\mid\pi(\alpha)\leq\alpha\}$ also contains a club. The intersection of the two clubs is a club on which $\pi$ is the identity. $\square$
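To spell out the bookkeeping (the labels $S_<$ and $S_>$ below are mine, not standard notation), the proof amounts to showing that both of the following sets are non-stationary:

$$S_<=\{\alpha<\kappa\mid\pi(\alpha)<\alpha\},\qquad S_>=\{\alpha<\kappa\mid\pi(\alpha)>\alpha\}=\pi^{-1}\left[\{\beta<\kappa\mid\pi^{-1}(\beta)<\beta\}\right].$$

Fodor's lemma together with injectivity handles $S_<$ directly; applying the same argument to $\pi^{-1}$, and then using the hypothesis that $\pi$ maps stationary sets to stationary sets, handles $S_>$. Then $\kappa\setminus(S_<\cup S_>)=\{\alpha<\kappa\mid\pi(\alpha)=\alpha\}$ contains a club.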

This is just something I was thinking about intermittently for the past few years, but now I finally spent enough energy to figure it out. And it’s cute. (Soon I will post more substantial posts, on far more exciting topics! Don’t worry!)

If you follow my blog, you probably know that I am a big fan of Michael Stevens from the VSauce channel, who in the past year or so has released several very good videos about mathematics, and about infinity in particular. For someone who is not a trained mathematician, Michael is doing an incredible job.

Non-mathematicians often tend to be Platonists “by default”, so they will assume that every question has an answer, and sometimes it’s just that we don’t know that answer, but it’s out there. It’s a fine approach, but it can somewhat fly in the face of independence results if you are not trained to think about the difference between true and provable.

This morning, as I was watching the new Physics Girl video, there was an announcement about a new math channel. So of course I went to look into that channel. It’s fairly new, and there are only a handful of videos, but they have already tackled some nice topics. The videos are written and hosted by a Cornell grad student, Kelsey Houston-Edwards. I watched the one about a hierarchy of infinities, and while I was a bit skeptical after the first minute, I was quite happy at the end, when the discussion moved from just the fact that the reals are uncountable (although without a proof, and that’s fine, I guess; there are plenty of those on the internet) to a discussion of the continuum hypothesis.

In another video, Kelsey tackles mathematical Platonism and its somewhat-opposite, formalism. And it’s done well. Kelsey doesn’t lean to one side or the other, because at the end of the day, mathematicians—as opposed to philosophers of mathematics—do mathematics, and that is their main concern. The philosophy is mostly a spice to add some taste and meaning to your work.

In any case, I enjoyed watching the few videos that are up, and I hope that you will as well. I’m sure that this is not the last you’ll see me talk about Kelsey and her channel.

I’ve been given the chance to teach the course in axiomatic set theory in Jerusalem this semester. Today I gave my first lecture as a teacher. It went fine; I even covered more than I expected to, which is good, I guess. I am also preparing lecture notes, which I will probably post here when the semester ends. These are predicated on some rudimentary understanding of logic and basic set theory, so there might be holes in them for people unfamiliar with the basic course (at least the one that I gave with Azriel Levy for the past three years).

Yesterday, however, I spent most of my day thinking about how we—as a collective of set theorists—teach axiomatic set theory. About that usual course: axioms, ordinals, induction, well-founded sets, reflection, $V=L$ and the consistency of $\mathsf{GCH}$ and $\mathsf{AC}$, some basic combinatorics (clubs, Fodor’s lemma, maybe Solovay’s or even Silver’s theorem). Up to some rudimentary permutation.

Is this the right way to approach axiomatic set theory? This path is not easy to justify. Sure, you can justify studying well-founded sets by arguing that this is how we motivate the Axiom of Foundation. And you could argue that this is a rudimentary foray into inner model theory, and that this is important. And you would be absolutely right. But on the other hand, I feel that engaging the students should involve more set theory which is “interactive”, where you obtain actual results, rather than just consistency of axioms, especially axioms towards which the students have very little motivation.

I mean, look at how we teach (or learn) about algebraic structures. We don’t spend all semester just proving things from the axioms of groups or rings. We also see a lot of examples, and a lot of ways in which these structures interact with the rest of mathematics. Set theory doesn’t have this luxury: we don’t have natural models to work with, and their interaction with mathematics is meta-theoretical, rather than direct as is the case with groups and rings.

So set theory, in essence, should be taught as a mixture of motivating examples and consistency proofs. I am taking this from my advisor, who is a wonderful teacher, as anyone who ever sat in his lectures can witness. A couple of years ago, Menachem gave a course about stationary tower forcing. In most texts about stationary tower forcing, you spend the first several dozen pages on technical concepts like completely Jónsson cardinals, and so on. But Menachem started with the motivation: universally Baire sets and their properties. Once you understand those, stationary tower forcing becomes much easier to digest, because it comes with a purpose. Last year, and again next semester, Menachem has been talking about inner models, and again a lot of motivation is given for the fine structural considerations, mainly square-ish ones for the basic fine structure of $L$; but also, through mice, we get a good intuition as to what $K^{DJ}$ is supposed to be.

Right. So what can we do about the basic axiomatic set theory course? Well, my initial approach is to take $\mathsf{ZF}$ for starters. Motivate Foundation by talking about induction, and then prove that Foundation adds no new contradictions. After that, we’ll see; the next step is again motivation, for either choice or reflection principles. In either case, I feel that having motivation interspersed with consistency proofs is the key here.

So now, let me ask you, my fellow set theorists, who have taught courses in axiomatic set theory. What is your experience on the matter? What is your take on my approach? This is my first time doing this, and I will definitely be reporting again during the semester and afterwards. But I also want to hear what you have to say on the matter. I will leave the comments open, but also feel free to contact me over email.

# Zornian Functional Analysis or: How I Learned to Stop Worrying and Love the Axiom of Choice

Back in the fall semester of 2015-2016 I had taken a course in functional analysis. One of the reasons I wanted to take that course (other than needing the credits to finish my Ph.D.) is that I was always curious about the functional analytic results related to the axiom of choice, and my functional analysis wasn’t strong enough to sift through these papers.

I was very happy when the professor, Matania Ben-Artzi, allowed me to write a final paper about the usage of the axiom of choice in the course, instead of taking an exam.

I have decided to finally post this paper online. It covers some possible disasters in functional analysis without the axiom of choice, or with “seemingly nice” assumptions (such as automatic continuity). You can find it in the Papers section.

My goal was to make something readable for analysts, rather than to provide a retread of older set-theoretic proofs (and some model-theoretic proofs). So some things are left with only a reference, and some set-theoretic statements are formulated in a rather unusual way. If you are interested, I’d be happy to hear any remarks on this paper, or suggestions for improvements, in the comments below or over email. If you know analysts who might be interested in reading this, please let them know of the paper’s existence.

For the set theorists the paper can be seen as a nice historical overview of these results, and perhaps it can be of use in other ways.

# In praise of some history

Teaching pure mathematics is not a trivial thing. You have to overcome the barriers constructed by K-12 education, which teaches that mathematics is a bunch of “fit this problem into that mold” exercises.

I recently had a chat with James Cummings about teaching. He said something that I had known long before: that being a good teacher requires a bit of theatricality. My best teacher from undergrad, Uri Onn, told me when I started teaching that being a good teacher is the same as being a good storyteller: you need to be able to mesmerize your audience and keep them on the edge of their seats, wanting more.

My answer to James was something that I had in mind for a while, but never put into words until then. You should know a bit of history of the topic you’re teaching.

If you look at [pure] mathematical education—at least undergrad—it is quite flat. You just have a list of theorems, each extending the previous one, building this wonderful structure. But it’s a flat building; it’s a drawing. The theorems come one after the other, after the other… Historically, however, there were many decades (if not centuries) between one theorem and the next. Rolle’s theorem came in the late 17th century, but Lagrange’s mean value theorem only came about a century later, and its modern rigorous treatment had to wait for Cauchy in the 19th century. So in one lecture, we covered some 150 years of mathematical progress. That is amazing, if you can point it out properly to your students. Not to mention the opposition people had to infinitesimal calculus in Rolle’s day, which makes the story interesting, and which contributed to the definitions given by Cauchy as solid foundations for analysis.

Similarly, the history of the Cantor-Bernstein theorem is incredible, as is the history behind König’s theorem (about cardinal arithmetic). It is amazing what sorts of motivations and mistakes people had back then, when these fields were fresh.

The more I thought about it, the more I realized that there are two important reasons that one should always spice up their teaching with some historical facts.

1. The first is that mathematical education is flat, as I remarked above. We learn the theorems, one after another. In one lecture, you can cover decades or even centuries of mathematical progress. And it feels dry; it feels like a “why are you teaching me all this?” sort of thing. I still remember that feeling as a student, I do.

But with a bit of history, suddenly everything becomes three-dimensional; it becomes something that involved actual progress. It shines a light on the fact that there is a notion of mathematical progress, something that engineering students, for example, are often baffled by.

2. The second reason is that theorems and motivations often came from attempts to disprove something. König, whom we mentioned already, proved his theorem in an attempt to show that the real numbers cannot be well-ordered. Baire, Borel and Lebesgue rejected the axiom of choice because they felt it was preposterous that there are non-measurable sets.

When you explain this to students, you show them that their confusion about a topic, especially an abstract and confusing one, is natural. You show them that a lot of smart people made the same mistakes before them. And while today we know better, their instinctive recoil makes sense. This reinforces the idea that they haven’t misunderstood something, that they are not stupid, and that mathematics is often surprising (at least when dealing with the infinite).

So we have these two reasons. And I think these are excellent reasons for adding some historical references when talking about mathematics. Of course, you shouldn’t put more than a pinch of cumin in your stew, because cumin is not the main part of your dish, it’s just what makes a good meal into a great meal (well, at least good cumin). You shouldn’t talk only about history in a mathematical course. This should be the slight addition that gives taste, flavor and volume to your material.

Historical anecdotes are what turn flat material into a fleshed-out story of progress, from one theorem to the next. Use them sparingly, use them well. But use them.

# Constructive proof that large cardinals are consistent

I am not a Platonist, as I keep pointing out. Existence, even outside of mathematics, is relative and confusing to begin with, so I don’t pretend to try and understand it in a meaningful way.

However, we have a proof, a constructive proof that large cardinals are consistent. And they exist in an inner model of our universe.

Recall that $0^\#$ exists if and only if there exists a non-trivial mouse. Now recall that such mice exist. Vacanti mice.

I’m sorry to all those who claim that inaccessible cardinals are inconsistent. Your claim is that reality is inconsistent. Which might just be the case…

Now you can ask whether or not large cardinals are a human construct. Here we run into a problem, as these non-trivial mice are a human construct themselves…

# Some thoughts about “automated theorem searching”

Let me begin by giving a spoiler warning. If you haven’t watched “The Prisoner”, you might be spoiled about one of the episodes. Not that it matters for a show from nearly fifty years ago, but you should definitely watch it. It is a wonderful show. And even if you haven’t watched it, it’s just one episode, not the whole show, so you can keep on reading.

So, I’m fashionably late to the party (with a good excuse; see my previous post), but after the recent 200-terabyte proof about the coloring of Pythagorean triples, the same old questions are being raised about whether or not computers will at some point be better than us at finding new theorems, and at proving them too.

My answer is a cautious yes, with the caveat that we can still end up regressing to the Middle Ages with a side of nuclear winter. Or some other catastrophe. But to quote Rick Sanchez, “That’s planning for failure, it’s even stupider than regular planning”, and who am I to argue with the smartest man in the universe? (Although, granted, Rick doesn’t care about humanity and he has his portal gun, so that’s a solid backup plan…) But I digress.

Assuming that we continue on the path on which we are set, and we don’t collapse under our own weight, it is probable that in a few centuries computers will be able to do mathematics better than people. They will probably be able to search for, and prove, theorems that we wouldn’t even think about.

And here is where “The Prisoner” comes in. In the episode “The General”, Number Two devises a plan for super-saturated quick learning through flickering TV static or something. The plan is to educate everyone and achieve a measure of uniformity in the general populace. Number Six retorts, however, that if you just transmit facts to people, what you get is still just a row of cabbages, and Number Two answers, “Indeed, knowledgeable cabbages”. Knowing a bunch of historical facts does not mean that you know history. You are devoid of context and intricacies, and you are left only with dry and meaningless facts. It is us, the people, who generate context, who weave the seams between the names, dates and numbers. We give history meaning, because it is meaningful to us, specifically. It has no meaning otherwise.

In a show about the struggle of the individual against a society that wants them to conform and be a subdued and productive member, this is one of the episodes in which that message is brought to the surface most strongly.

Now, who writes all these lectures? Ah, that would be The General: a supercomputer capable of answering every question, from advanced mathematics to crop spraying. And before Number Two can feed the computer a question which was driving part of the plot, the claim of the supercomputer’s power is contested by Number Six, who then types a short question and feeds it into the computer.

The computer crashes, causing a bit of a kerfuffle. As things cool down, the following dialogue ensues:

Number Two: What was the question?
Number Six: It’s insoluble for man or machine.
Number Two: What was it?
Number Six: W, H, Y, question mark.
Number Two: “Why?”
Number Six: Why.
Number Two: …why?

Setting aside the epistemological awesomeness of Number Six, this is what separates, in my opinion, self-aware creatures from robots. We are capable of wondering why something is the way it is, and why it is even important. And here we circle back to automated theorem searching and proving. Until we develop a conscious machine that can appreciate the intricate beauty of mathematical questions, and actively select whether or not a theorem is worth searching for, based on past knowledge, past interest and the like (and any of this will have to be a learning algorithm, because there is just no rigid definition of what a beautiful mathematical statement is), automated searching for theorems is doomed to produce a row of cabbages (even if knowledgeable cabbages), rather than the mathematical results that would have been produced by mathematicians.

Because mathematics is meaningful to us, as humans, more than it is meaningful in any other non-existent way. And until computers can endow their existence with the search for meaning, they cannot appreciate why something like coloring of Pythagorean triples is interesting, or why the proof of independence of the continuum hypothesis from the axioms of set theory is beautiful.

Until then, mathematics is a human activity, and a social one too.

(Note, by the way, I didn’t say anything about computer verified proofs. That is a different story altogether, and I have different, albeit equally strong, opinions there.)

# Iterating Symmetric Extensions

I don’t usually like to write about new papers. I mean, it’s a paper, you can read it, you can email me and ask about it if you’d like. It’s there. And indeed, for my previous papers, I didn’t even mention them being posted on arXiv/submitted/accepted/published. This paper is a bit different; but don’t worry, this is not your typical “new paper” post.

If you don’t follow arXiv very closely, I have posted a paper titled “Iterating Symmetric Extensions”. This is going to be the first part of my dissertation. The paper is concerned with developing a general framework for iterating symmetric extensions, which, oddly enough, is something that we didn’t really know how to do until now. There is a refinement of the general framework into something I call “productive iterations”, which imposes some additional requirements, but allows greater freedom in the choice of the filters used to interpret the names. There is an example of a class-length iteration, which effectively takes everything that was done in the paper and uses it to produce a class-length iteration (and thus a class-length sequence of models) where, slowly but surely, Kinna-Wagner principles fail more and more. This means that we are forcing “diagonally” away from the ordinals, so the models produced there will not be definable from their sets of ordinals, sets of sets of ordinals, and so on.

One seemingly unrelated theorem extends a theorem of Grigorieff, and shows that if you take an iteration of symmetric extensions, as defined in the paper, then the full generic extension is one homogeneous forcing away. This is interesting, as it has applications to ground model definability for models obtained via symmetric extensions and iterations thereof.

But again, all that is in the paper. We’re not here to discuss these results. We’re not here to read some funny comic with a T-Rex and a befuddled audience either. We’re here to talk about how the work came to fruition. Well, parts of that process. Because I feel that often we don’t talk about these things. We present the world with a completed work, or some partial work, and we move on. We don’t stop to dwell on the hardship we’ve endured. We assume, probably correctly, that most people have endured similar difficulties at one time or another, so there is no need to explain or expose any of the background details. Well, screw that. This is my blog, and I can write about it if I want to. And I do.

So, the idea of iterating symmetric extensions came to me when I was finishing my masters. I was thinking about a way to extend symmetric extensions, because it seemed to me that we had run this tool pretty much into the ground, and I was looking for a tool that would enable us to dig deeper into the world of non-AC models. It was good timing, too. Menachem [Magidor] had told me, at some workshop, about this interesting model that they had constructed in Bristol, and it seemed like a good test subject (dubbed “the Bristol model” from that point onward). When I settled into this idea, and Menachem explained to me the vague details of the construction, it immediately seemed to me like an iteration of symmetric extensions. So I set out to develop a method that would enable me to formalize and reconstruct this model (I did that, and while I have a set of notes with a written account, I will soon start transforming those into a proper paper, so I hope that by the end of July I will have something to show for it).

The first idea came to me when I was in Vienna in September of 2013. I was sure it was going to work easy peasy, and so I left it in order to focus on other issues of the hour. When I came back to it a few months later, Menachem and I talked about it and identified a few possible weak spots. Somehow we managed to convince ourselves that these were not real issues, and I started working out the details. Headstrong and cocksure, I was certain there were just a few small technical details which would be solved with a couple of days’ worth of work. But math had other plans, and I spent about a year and a half before things worked out.

Specifically, I kept running into small problems. Whenever I wrote that some statement was “clear” or “obvious”, there was trouble with it later. Whenever I was sure that something had to be true, it turned out to be false. And I had to rewrite my notes many times over, usually more or less from scratch. Luckily for me, Martin Goldstern was visiting Jerusalem for a few months during the spring semester of 2015, and he was kind enough to hear my ideas and point out a lot of these problems. “Oh, just make sure that such and such is true,” he would say, and the next day I’d find him and say something along the lines of “Yeah, it turned out that it’s false, so I had to do this and that to circumvent the problem, but now it simplified these proofs”. And the process repeated itself. This long process is one of the great sources for this blog post of mine, and this post and also that post.

Closing in on the summer, Yair [Hayut] was listening to whatever variant I had at the time, and at some point he disagreed with one of the things I had to say. “Surely you can’t disagree with this theorem; it only relies on the lemma that I showed you as the first lemma, and you’ve agreed to that.” He pondered a little bit, and said, “No, I actually disagree with the lemma”. We paused, we thought about it, and we came up with one or two counterexamples to that lemma. It was exactly the issue Menachem and I had identified, and suddenly all the problems that had been plaguing me became obvious consequences of that very problem.

I worked very hard over the course of the next two months, and I managed to salvage the idea from oblivion. It was a good thing, too, because shortly afterwards I visited the Newton Institute, and I had the chance to present this over the course of 8 hours to whoever was interested. And a few people were. But the definition was just terrible. I was happy it was working, though, so I left it aside to cool down for a bit while I worked on other projects of my thesis.

And now, I have sat down to write this paper. And as I was writing it, I realized how to simplify some of the horrible details, which is great. This made some of the proofs clearer, better, and more like what you’d expect of such proofs. And that’s all I ever wanted, really. It took me two years, but it feels good to be done, I hope. Now we wait for the referee report… and a year from now, when I’ve forgotten all about this, I’ll probably grunt, groan, and revise the damn thing when the report shows up. Or maybe sooner.

Well… I’m done venting. Next stop, writing up The Bristol Model paper.

Okay, maybe this sounds like I’m treating this as a rare process. And to some extent, it is. This is my first big piece of research; you can only have one first of those. Yes, mathematical research is a process. A long and slow process. I’m not here to complain about this, or argue otherwise. I’m here to point out the obvious, and to complain that I never hear people talking about these sorts of slow processes. Only about the “So he hopped on a plane, came over here, and we banged this thing together in a couple of weeks” stories, which are really awesome and sort of exciting. But someone has to stand up and say, “No, this was a slow and torturous process that drained the life out of me for the better part of two years”.

# Syntactic T-Rex: Irregularized

One of my huge pet peeves is people who think that writing $1+2+3+\ldots=-\frac1{12}$ is a reasonable thing to do without context. Convention dictates that when no context is set, we interpret infinite summation as the usual convergence of a series, namely the limit of the partial sums, if it exists (and of course $1+2+3+\ldots$ does not converge to any real number). However, a lot of people who are [probably] not mathematicians per se insist that just because you can set up a context in which the above equality holds, e.g., Ramanujan summation or zeta function regularization, it is automatically perfectly fine to write it out of nowhere, without context, and that it should not be treated as wrong.
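To make the “context matters” point concrete, here is a small Python sketch (my own illustration, not part of the original argument). In the default context, the partial sums of $1+2+3+\ldots$ simply diverge; in the zeta-regularization context, the series is assigned the value $\zeta(-1)$, which we can compute exactly via Hasse’s globally convergent series for $\zeta$, a series that terminates at integer arguments $s<1$:

```python
from fractions import Fraction
from math import comb

def zeta_hasse(s: int, terms: int = 10) -> Fraction:
    """Riemann zeta at an integer s < 1, via Hasse's globally convergent series:
        zeta(s) = 1/(s-1) * sum_{n>=0} 1/(n+1) * sum_{k=0}^n (-1)^k C(n,k) (k+1)^(1-s).
    For integer s < 1 the inner sum is an n-th finite difference of a polynomial
    of degree 1-s, so it vanishes for n > 1-s and the result is an exact rational."""
    total = Fraction(0)
    for n in range(terms):
        inner = sum((-1) ** k * comb(n, k) * (k + 1) ** (1 - s)
                    for k in range(n + 1))
        total += Fraction(inner, n + 1)
    return total / (s - 1)

# Default context: the partial sums n(n+1)/2 grow without bound,
# so the series has no value as an ordinary limit.
print([n * (n + 1) // 2 for n in (10, 100, 1000)])  # [55, 5050, 500500]

# Zeta-regularization context: the series is assigned zeta(-1) = -1/12.
print(zeta_hasse(-1))  # -1/12
```

As a sanity check, the same function yields the classical values $\zeta(0)=-\frac12$ and $\zeta(-3)=\frac1{120}$. None of this makes $1+2+3+\ldots=-\frac1{12}$ true in the default context; it only shows how a context in which it holds can be set up.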

But those people forget that $0=1$ is also very true in the ring with a single element; or, you know, in any structure for a language including the two constant symbols $0$ and $1$ in which both constants are interpreted as the same object. And hey, who even said that $0$ and $1$ have to denote constants? Why not ternary relations, or some other thing?

Well, the short answer is that they are not used for anything other than constants because the readers are mostly human (sometimes computers, and sometimes cats), and they take strong hints from the choice of letters as setting up context. If I use $n$ for some index, it hints to the reader that this is a natural number; $\varepsilon$ hints at a very small quantity when it comes to analysis. If I use $\kappa$, at least in set theory, it hints at a cardinal. Whenever I work with people, we have a running joke that, as far as large cardinals go, $\delta$ is generally a Woodin cardinal, sometimes an extendible, and rarely a supercompact. And $\kappa$ is always regular, unless it is a measurable that we singularized somehow.

The point is that $$\lim_{\varepsilon\to\infty}\int_\pi^{\frac1{\omega}}\int_\delta^\kappa\varepsilon\cdot\aleph_0(\omega_3,\Omega,\Bbb R)\operatorname d\Bbb R\operatorname d\Omega=42$$ is a valid mathematical statement, which should cause most, if not all, mathematicians to cringe, look away, and possibly burst into tears. Because it feels wrong.

But hey, don’t leave the site just yet. I know that you didn’t come here to read my tirade against people who misunderstand the whole point of an implicit context. You came here for a Mathematical T-Rex comic!

(Thanks to Matt Inman of The Oatmeal for the template, which can be found here.)