On the other hand, I find the act of reading the scholarship of math education to be dreadful and unpleasant. It is filled with jargon and hero-worship.
That being said, I’ve been extremely lucky to have great mentors and colleagues to bounce ideas off of. I’ve collected some of this advice in a Reddit post, which I’ll recreate here.
Here is some vocabulary that is commonly used when discussing math pedagogy, or pedagogy in general. In general the literature is pretty annoying and frustrating; there’s lots of jargon and lots of stuff is too high-level.
So, this one has been on the back burner for a while. And it actually started as two separate projects that merged and separated and merged again. Continue reading...
Joint work with Chris Lambie-Hanson.
Abstract. The productivity of the $\kappa$-chain condition, where $\kappa$ is a regular, uncountable cardinal, has been the focus of a great deal of set-theoretic research.
In the 1970s, consistent examples of $\kappa$-cc posets whose squares are not $\kappa$-cc were constructed by Laver, Galvin, Roitman and Fleissner. Later, ZFC examples were constructed by Todorcevic, Shelah, and others. The most difficult case, that in which $\kappa = \aleph_2$, was resolved by Shelah in 1997.
In this work, we obtain analogous results regarding the infinite productivity of strong chain conditions, such as the Knaster property. Among other results, for any successor cardinal $\kappa$, we produce a ZFC example of a poset with precaliber $\kappa$ whose $\omega^{\mathrm{th}}$ power is not $\kappa$-cc.
To do so, we carry out a systematic study of colorings satisfying a strong unboundedness condition. We prove a number of results indicating circumstances under which such colorings exist, in particular focusing on cases in which these colorings are moreover closed.
So I recently wrote about a fragment of mathematical content and a big part of it was the problem of stretchy braces. After building the “plain” HTML+CSS example at the end (reusing an extremely clever solution from the upcoming MathJax v3), I kept thinking: this should be easier. Luckily, this year I’m dedicating a chunk of my spare time to the MathOnWeb Community Group’s new task force focused on CSS, looking for (old and new) ideas that might help simplify equation layout using CSS.
So one thing led to another and I found myself coming back to an old thought of mine.
Stretchy characters like those braces, what are they really? Like, really really?
Let’s look at what they are called. As a matter of fact, they are called various things, but the most generic term is possibly bracket. However, in the context of equation layout, the more common terminology might be delimiter and fence. In particular, MathML provides an <mfenced> tag (though for various reasons the equivalent <mrow>+<mo> constructions tend to be preferred by most tools).
Now brackets, fences and delimiters all sound awfully similar to a very common concept. Where do you usually put up a fence? Where do you delimit something? At a border. It’s a small idea, obviously, but what if we could solve the problem of stretchy constructions using borders?
What if somebody else already has?
Well, you could go visit codepen and simply search for brace and, lo and behold, you find 4 perfectly fine specimens in CSS. Turns out, designers love pretty things, who’d have thunk.
If you dig a little deeper, you’ll end up with basically three approaches.
The first one (with several interesting forks) is by Lauren Herda.
See the Pen Single-Element Curly Brace by Lauren Herda (@lrenhrda) on CodePen.
It is really pretty – look Ma, a single div! (Except that it doesn’t quite work on Chrome since an <hr> gets overflow:hidden from the user agent style sheet.)
That was fun. Let’s do two more: one from Jakob Christoffersen
See the Pen curly braces css by Jakob Christoffersen (@MasterThrasher) on CodePen.
and one from @mexn:
See the Pen CSS Curly Brace by Markus (@mexn) on CodePen.
Both are slightly more complicated than the first one. Instead of the radial gradient for the middle piece, they both use 6 elements with border-radius (though the last one has only two elements with pseudo-elements). If you dive into their forks, you’ll find lots of interesting variations, too.
The point is: this problem has in a very real sense actually been solved in CSS and you can do lots of fun variations yourself.
Such as this one
See the Pen stretchy brace by Peter Krautzberger (@pkra) on CodePen.
or this one
See the Pen stretchy brace, single-div by Peter Krautzberger (@pkra) on CodePen.
(Fun fact: using percentages in the border radius leads to some really cute behavior across sizes.)
Now you might say it hasn’t solved the real problem. Here are a couple of counterarguments:
It has no character! Gasp! It’s true that in typical print equation layout engines you’ll still have a character there. Well, you could just add a hidden one, no?
It doesn’t work well at small sizes! In typical print equation layout, you’ll see several sizes of a brace being used for smaller heights (with possibly slight design variations for readability), after which the layout would switch over to a stretchy construction (made up of several glyphs stitched together). This is a very interesting problem to solve. And you know what? This touches on one of the hottest topics of CSS discussions in the past few years: it is a perfect use case for container queries. Go add a use case and push the web forward for everyone!
But perhaps current CSS is sufficient and someone will find a clever approach to achieve a similar effect. As I mentioned above, percentages in border radius have a neat effect; there is a lot of room to play with once you stop thinking about everything in terms of print traditions.
It’s not semantic! Gosh. What exactly does a (stretched) brace represent, semantically speaking? And, should you have decided to imbue it with such rich meaning yourself, are you really unable to expose the relevant information using the web platform’s rich accessibility stack? No? Excellent – you should file a bug with ARIA and help push the web forward for everyone!
It can’t look like font x! Some fonts have a really tricky curly brace with basically an S shape in each half. I admit my CSS-foo is not good enough to do that. But besides the fact that a better designer might find a solution, I find the trade-off acceptable. And if there’s a limitation in CSS, please file a bug with the CSS WG and help push the web forward for everyone!
It can’t do delimiter y! There are quite a few brackets, some more complex than others (Mathematical left white tortoise shell bracket anyone?) but few of those are used in stretchy ways and fewer still occur often (for comparison, the STIX2 fonts support ~30 delimiters). I really don’t have a problem with such edge cases remaining difficult for the time being if we can solve a practical problem for 99% of use cases. And if you do, … you know what to do.
So let’s do two more, the most important ones:
Parentheses,
See the Pen Stretchy parenthesis by Peter Krautzberger (@pkra) on CodePen.
and square brackets
See the Pen Stretchy brackets by Peter Krautzberger (@pkra) on CodePen.
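If you’d rather not click through, the core trick behind the square bracket really is tiny: draw only three of a box’s four borders. A minimal sketch (the class name and measurements are made up for this post, not taken from the pens above):

```css
/* A left square bracket built from borders alone: keep the top, left
   and bottom edges of a box and leave the right edge open. Any height
   works, so the "glyph" stretches for free. */
.left-bracket {
  display: inline-block;
  width: 0.4em;
  height: 3em; /* or 100% of a container */
  border: 0.12em solid currentColor;
  border-right: none;
}
```

A parenthesis is then roughly the same box with a generous border-radius on the open side, which is essentially what the pens above do.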
See now, that wasn’t so hard?
I suspect that if we work a bit harder to unstuck ourselves from the traditions of (print) equation layout engines, then we might just find a lot of interesting solutions like this; solutions that help make equation layout on the web as easy as designing a good page layout with CSS; solutions that work with the grain of the web; solutions that perhaps even lack but help identify (and resolve) shortcomings in the web platform that affect a much wider community; solutions that help move the web forward.
PS: I’ve started a little collection on codepen. Ping me if you see something that might fit!
Abstract: We show that for any countable homogeneous ordered graph $G$, the conjugacy problem for automorphisms of $G$ is Borel complete. In fact we establish that each such $G$ satisfies a strong extension property called ABAP, which implies that the isomorphism relation on substructures of $G$ is Borel reducible to the conjugacy relation on automorphisms of $G$.
But I want my own problems page, and it's my site. So to celebrate the new website, I created just that. For the first couple of problems, I've chosen to focus on the axiom of choice. And I don't think that I have much choice, but to keep that interest running. But I can promise that this is not the only type of problems that I will add there. Continue reading...
The dissertation studies the ordinary (complex) character theory of $M_{11}$ and $M_{12}$; it includes the foundations of character theory, as well as details on how to construct $M_{11}$ and $M_{12}$ via the notion of “transitive extension”. I think Sam has done a beautiful job and should be congratulated!
We are in the process of writing up a paper including some of Sam’s results. In fact the paper comes from a slightly different point of view. Our main result is the following:
Theorem
The point of this theorem is that we are able to construct the character table of G using only the assumption about multiple transitivity – there is no direct reference to the Mathieu groups in this paper.
In the course of this research, I asked a question on MathOverflow here. Now seems a good time to thank the contributors to that discussion, especially Frieder Ladisch, for their help!
It is a static website, because I have been tired of the WordPress format for a long long time now. So for the occasion, I also got a new domain, karagila.org. Isn't this nice? The old domain and all the links should work, at least for the foreseeable future. So there's nothing to worry about linkrot for now. But please do update your links! Continue reading...
This is part of a series of posts aimed at helping my mom, who is not a scientist, understand what I’m up to as a mathematician.
Lately, Artificial Intelligence (AI) has reached some remarkable milestones. There are computers that are better than humans at the strategy board game Go and at poker. Computers can turn pictures into short moving clips and can “enhance” blurry pictures as in television crime shows. They can also produce new music in the style of Bach or customized to your tastes. It’s all very exciting, and it feels pretty surreal; remember back when Skype video calling felt like the future?
I’m going to give you a broad overview of how these types of AI work, and how they learn. There won’t be any equations or algebra.
Before we jump into the computer stuff, let’s make our very first AI. Well, this will be more “I” than “AI”, because I want you to play a game. You are going to be the “AI” that’s going to learn a task!
I want you to play Zrist for about 5 minutes (or longer if you like it). It’s a fun little platform game. See how far you can get. My best score was 37 400. We’ll use this experience to help describe how AI works. Okay, go play now!
Welcome back! I hope you had fun playing that game.
I want you to think about these questions, and give an answer to each of them. (It’s not a test, there are no wrong answers.)
We’ll come back to your answers in a moment. For now, I want you to watch a bit of a video of an AI (called Mar I/O) learning to play the original 1985 Super Mario Brothers. Watch maybe the first 4 or 5 minutes, and then skip to the middle of the video. You only need to watch a little bit to get the sense of what’s going on.
(If you like this, you can watch a livestream of Mar I/O’s attempts to beat the game level by level.)
First of all, this program starts off only knowing a couple of things:
Here are some things it doesn’t know:
If you’re interested, Mar I/O is a Recurrent Neural Net. There are other types of AI, but this is the one we’ll look at today.
So at first it tries random stuff to increase its fitness score: jumping, standing still, ducking, running left, and none of these seem to increase its fitness. Then, when it presses right, mario starts progressing in the level and its fitness score goes up.
This is called training the AI. It measures its progress against a fitness score and reinforces behaviour that increases that score; i.e., it starts to favour pressing right because that seems to increase its fitness score.
This works great until it gets to the first enemy and mario runs right into it and dies. After a couple more tries, it starts to experiment some more (just like it was trying random things at the beginning of the level). Around the 2:20 mark of the video, Mar I/O presses the jump button right before the enemy and successfully clears it, allowing mario to move further right and increase its fitness score.
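If you’ll forgive a little code in a no-math post, this trial-and-error loop is surprisingly small to sketch. This is not Mar I/O’s actual algorithm (it evolves neural networks); it’s a toy version of the “tweak randomly, keep what scores at least as well” idea, with every name and number made up:

```python
import random

BUTTONS = ["left", "right", "jump", "duck", "still"]

def train_step(policy, fitness_fn, mutation_rate=0.2):
    """One trial: randomly tweak a few button presses in the current
    sequence, and keep the tweak only if it scores at least as well."""
    candidate = [random.choice(BUTTONS) if random.random() < mutation_rate else b
                 for b in policy]
    return candidate if fitness_fn(candidate) >= fitness_fn(policy) else policy
```

Start with a do-nothing policy and a fitness score that rewards moving right, and after a few hundred trials the policy drifts toward pressing right, just like in the video.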
To recap:
Let’s go back to the platform game you played and look at how you learned to play the game.
What was the goal of the game?
How did you know you were doing well at the game?
I asked you to get as far in the level as you could; that was your goal. The game kept track of it by telling you your current high score. That was your fitness score!
How did you adapt to the rules changes? Did you get them on the first try?
If you’re anything like me, when the rules changed for the first time you thought, “Oh crap, what’s this?”, and then promptly died when the screen said “Mode: lag”. What were you supposed to do?! No one told you what to do!
When my character turned invisible, the screen stopped scrolling and I wasn’t sure what to do. At that point I just pressed buttons until it started to scroll again; i.e. I tried random things when I got stuck. As I continued to get stuck and unstuck, I recognized that I was getting stuck at the short walls, and that jumping over them saved me then. Trying the same trick saved me again when I was invisible. i.e. I was training on the short walls.
This is very similar to how Mar I/O trains and learns.
For comparison, here’s a video of one of the best Mario players in the world, CarlSagan42, taking 18 hours to beat an extremely difficult fan-made level. (Warning: there are a bunch of swear words.)
Notice a couple things:
These are all in common with Mar I/O.
How did you make decisions about what to do next? (What did you look for, and what did you ignore?)
In Zrist, you were probably looking for gaps (to jump over), those horrible red death blocks, and big walls to slide under. For each of these you developed a reaction: “When I see a gap, then I press C (to jump over it)”.
For each of these you had to remember a task: If I see a gap, then I jump over it.
For AI like Mar I/O, it stores these tasks by associating visual cues and inputs with button presses. For example, when it sees a wide open space it learns to press the right button. When it sees a gap in the ground it learns to press the A button (to jump).
Now Mar I/O doesn’t have any extra code which tells it “this is what a pit looks like” or “this is what a pipe is” or anything like that, (although it can see enemies as black tiles, it doesn’t know what an enemy is).
Each time it succeeds at increasing its fitness score it strengthens the connections between the visual cues and the sequence of button presses that got it there. Each connection like this is stored in the AI as an “artificial neuron”. So when you were playing Zrist, you probably developed a neuron relating to gaps (“If gap, then jump”), one for tall walls (“If tall wall, then slide”), and many others.
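A cartoon version of such a “neuron” table can be sketched in a few lines. This is not how Mar I/O actually stores its networks; it’s just an illustration of strengthening cue-to-button connections, with invented names throughout:

```python
def reinforce(connections, cue, button, reward, rate=0.5):
    """Strengthen (or weaken) the cue->button connection depending on
    whether the fitness score went up (positive reward) or down."""
    key = (cue, button)
    connections[key] = connections.get(key, 0.0) + rate * reward
    return connections

def decide(connections, cue, buttons):
    """Pick the button with the strongest connection for this cue."""
    return max(buttons, key=lambda b: connections.get((cue, b), 0.0))
```

After a few rewarded "gap"→"jump" trials and a punished "gap"→"duck" trial, `decide` starts choosing "jump" whenever it sees a gap.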
The very cool thing about modern AI is that you typically don’t need to tell it what or how many artificial neurons to make ahead of time; Mar I/O adds neurons as it learns. It’s just like how you didn’t need to know how many types of obstacles you would face in Zrist: you built up a list as you went. This is very powerful!
The flip side to this is that after Mar I/O learns to beat a level, we humans will have a hard time understanding what it’s using to make its decisions. It won’t always be clear to us what visual elements (called “features”) it’s using to make its decisions.
Hopefully you see some of the parallels between the way AIs learn things and the way humans learn things. There are a lot of similarities. Mimicking human learning has been very useful for creating AIs.
I’m going to point out a couple other ways that humans learn that help illustrate ways in which AI can learn.
Have you ever driven somewhere familiar and then forgotten how you got there? You were on autopilot. Similarly, have you ever been doing something with your hands, like playing the piano, but when you stop to think about what you’re actually doing, the task suddenly becomes much harder. This sort of muscle memory is very similar to what Mar I/O is doing. It learns sequences of moves and button presses, but there is no underlying reasoning.
I skipped over a big part of Mar I/O’s learning, which is that it actually contains many different “styles” of players (called species); it’s not just a single mario learning. After each species completes about 10 attempts at beating the level, we rank the species by which achieved the highest fitness. We then delete the bottom 10% of the species and replace them by blending some of the best species (in a process called breeding). This ensures that if one of the mediocre species discovers something useful (like shooting fireballs can kill enemies) it still has a chance to give that idea to the best performers. Similarly, the best performers get to share their ideas with the mediocre performers.
One of these processes is called a generation. For easy levels, Mar I/O only needed 40 or so generations. For difficult levels, Mar I/O needed over 250 generations! It can take a long time for these random mutations to produce helpful effects.
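The generation step – rank, cull, breed – can also be sketched in a toy form. Again, the names and numbers here are made up for illustration; the real Mar I/O code works on neural networks, not little dictionaries:

```python
import random

def run_generation(population, fitness_fn, cull_fraction=0.10):
    """One toy 'generation': rank species by fitness, delete the bottom
    performers, and refill the population by blending ('breeding')
    pairs of surviving species."""
    ranked = sorted(population, key=fitness_fn, reverse=True)
    n_cull = max(1, int(len(ranked) * cull_fraction))
    survivors = ranked[:len(ranked) - n_cull]
    while len(survivors) < len(population):
        a, b = random.sample(survivors, 2)
        # A child inherits each cue->button "idea" from one of its parents.
        child = {cue: random.choice([a[cue], b[cue]]) for cue in a}
        survivors.append(child)
    return survivors
```

Because the top performer always survives, the best fitness in the population never goes down from one generation to the next; the breeding step is what lets good ideas spread.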
If this feels a lot like evolution, well that’s because it is! These AI learn by evolving and refining their strategies. This is a very deep and powerful idea, but I’ve already gone on long enough, so I’ll save it for another time.
The advancement of AI evokes many feelings: Awe and wonder, but also fear and skepticism. So I’ll end this post talking about what the future might look like.
AI are machines. The term artificial intelligence might better be described as artificial skill. Mar I/O is only able to maximize a fitness score. It’s quite good at that, but that’s the only thing it can do. This AI is highly specialized to Super Mario Brothers. While it’s possible that the underlying Mar I/O code can be adapted to other games (like Mario Kart), it requires human knowledge, judgement and skill to adapt it to other settings.
We don’t expect that Mar I/O will ever turn into a killer robot. At its core, Mar I/O is a (complicated) machine that presses buttons and is good at increasing a number (its fitness score).
If you want to learn more about AI, here are some good resources based on your background.
I just want nice pictures and videos. NO MATH!:
I am comfortable with the topics described here, but want a bit more substance:
I have a degree in math or computer science and want all the details. Leave no stone unturned:
Abstract: Lebesgue introduced a notion of density point of a set of reals and proved that any Borel set of reals has the density property, i.e. it is equal to the set of its density points up to a null set. We introduce alternative definitions of density points in Cantor space (or Baire space) which coincide with the usual definition of density points for the uniform measure on ${}^{\omega}2$ up to a set of measure $0$, and which depend only on the ideal of measure $0$ sets but not on the measure itself. This allows us to define the density property for the ideals associated to tree forcings analogous to the Lebesgue density theorem for the uniform measure on ${}^{\omega}2$. The main results show that among the ideals associated to well-known tree forcings, the density property holds for all such ccc forcings and fails for the remaining forcings. In fact we introduce the notion of being stem-linked and show that every stem-linked tree forcing has the density property.
This is joint work with Philipp Schlicht, David Schrittesser and Thilo Weinert.
Abstract: Given a set-theoretic property $\mathcal P$ characterized by the existence of elementary embeddings between some first-order structures, we say that $\mathcal P$ holds virtually if the embeddings between structures from $V$ characterizing $\mathcal P$ exist somewhere in the generic multiverse. We showed with Schindler that virtual versions of supercompact, $C^{(n)}$-extendible, $n$-huge and rank-into-rank cardinals form a large cardinal hierarchy consistent with $V=L$. Sitting atop the hierarchy are virtual versions of inconsistent large cardinal principles such as the existence of an elementary embedding $j:V_\lambda\to V_\lambda$ for $\lambda$ much larger than the supremum of the critical sequence. The Silver indiscernibles, under $0^\sharp$, which have a number of large cardinal properties in $L$, are also natural examples of virtual large cardinals. With Bagaria, Hamkins and Schindler, we investigated properties of the virtual version of Vopěnka’s Principle, which is consistent with $V=L$, and established some surprising differences from Vopěnka’s Principle, stemming from the failure of Kunen’s Inconsistency in the virtual setting. A recent new direction in the study of virtual large cardinal principles involves asking that the required embeddings exist in forcing extensions preserving a large segment of the cardinals. In the talk, I will discuss a mixture of results about the virtual large cardinal hierarchy and virtual Vopěnka’s Principle. Time permitting, I will give an overview of Woodin’s new results on virtual large cardinals in cardinal preserving extensions.
@ARTICLE{FriedmanGitman:ModelOfACNotDC,
author = {Sy-David Friedman and Victoria Gitman},
title = {A model of second-order arithmetic satisfying {AC} but not {DC}},
note = {manuscript under review},
url = {},
pdf={https://boolesrings.org/victoriagitman/files/2018/03/ModelOfACNotDC.pdf},
}
Models of second-order arithmetic are two-sorted structures, having two types of objects, which we think of as numbers and sets of numbers. Their properties are formalized using a two-sorted logic with separate variables and quantifiers for numbers and sets. By convention, we will denote number variables by lowercase letters and set variables by uppercase letters. The language of second-order arithmetic is the language of first-order arithmetic $\mathcal L_A=\{+,\cdot,<,0,1\}$ together with a membership relation $\in$ between numbers and sets. A multitude of second-order arithmetic theories, as well as the relationships between them, have been extensively studied (see [1]).
An example of a weak second-order arithmetic theory is ${\rm ACA}_0$, whose axioms consist of the modified Peano axioms, where instead of the induction scheme we have the single second-order induction axiom $$\forall X [(0\in X\wedge \forall n(n\in X\rightarrow n+1\in X))\rightarrow \forall n (n\in X)],$$ and the comprehension scheme for first-order formulas. The latter is a scheme of assertions stating for every first-order formula, possibly with set parameters, that there is a set whose elements are exactly the numbers satisfying the formula. One of the strongest second-order arithmetic theories is ${\rm Z}_2$, often referred to as full second-order arithmetic, which strengthens comprehension for first-order formulas in ${\rm ACA}_0$ to full comprehension for all second-order assertions. This means that for a formula with any number of second-order quantifiers, there is a set whose elements are exactly the numbers satisfying the formula. The reals of any model of ${\rm ZF}$ form a model of ${\rm Z}_2$. We can further strengthen the theory ${\rm Z}_2$ by adding choice principles for sets: the choice scheme and the dependent choice scheme.
The choice scheme is a scheme of assertions, which states for every second-order formula $\varphi(n,X,A)$ with a set parameter $A$ that if for every number $n$, there is a set $X$ witnessing $\varphi(n,X,A)$, then there is a single set $Y$ collecting witnesses for every $n$, in the sense that $\varphi(n,Y_n,A)$ holds, where $Y_n=\{m\mid \langle n,m\rangle\in Y\}$ and $\langle n,m\rangle$ is any standard coding of pairs. More precisely, an instance of the choice scheme for the formula $\varphi(n,X,A)$ is $$\forall n\exists X\varphi(n,X,A)\rightarrow \exists Y\forall n\varphi(n,Y_n,A).$$ We will denote by $\Sigma^1_n$-${\rm AC}$ the fragment of the choice scheme for $\Sigma^1_n$-assertions, making an analogous definition for $\Pi^1_n$, and we will denote the full choice scheme by $\Sigma^1_\infty$-${\rm AC}$. The reals of any model of ${\rm ZF}+{\rm AC}_\omega$ (countable choice) satisfy ${\rm Z}_2+\Sigma^1_\infty$-${\rm AC}$. It is a folklore result, going back possibly to Mostowski, that the theory ${\rm Z}_2+\Sigma^1_\infty$-${\rm AC}$ is bi-interpretable with the theory ${\rm ZFC}^-$ (${\rm ZFC}$ without the powerset axiom, with Collection instead of Replacement) together with the statement that every set is countable.
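For concreteness, one standard coding of pairs that works here is Cantor’s pairing function:

```latex
\langle n,m\rangle \;=\; \frac{(n+m)(n+m+1)}{2} + m
```

which is a bijection between pairs of numbers and numbers and is definable by a first-order formula of arithmetic, so the sets $Y_n$ can be extracted from $Y$ by comprehension.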
The dependent choice scheme is a scheme of assertions, which states for every second-order formula $\varphi(X,Y,A)$ with set parameter $A$ that if for every set $X$, there is a set $Y$ witnessing $\varphi(X,Y,A)$, then there is a single set $Z$ making infinitely many dependent choices according to $\varphi$. More precisely, an instance of the dependent choice scheme for the formula $\varphi(X,Y,A)$ is $$\forall X\exists Y\varphi(X,Y,A)\rightarrow \exists Z\forall n\varphi(Z_n,Z_{n+1},A).$$ We will denote by $\Sigma^1_n$-${\rm DC}$ the dependent choice scheme for $\Sigma^1_n$-assertions, with an analogous definition for $\Pi^1_n$, and we will denote the full dependent choice scheme by $\Sigma^1_\infty$-${\rm DC}$. The reals of a model of ${\rm ZF}+{\rm DC}$ (dependent choice) satisfy ${\rm Z}_2+\Sigma^1_\infty$-${\rm DC}$.
It is not difficult to see that the theory ${\rm Z}_2$ implies $\Sigma^1_2$-${\rm AC}$, the choice scheme for $\Sigma^1_2$-assertions. Models of ${\rm Z}_2$ can build their own version of Gödel’s constructible universe $L$. If a model of ${\rm Z}_2$ believes that a set $\Gamma$ is a well-order, then it has a set coding a set-theoretic structure constructed like $L$ along the well-order $\Gamma$. It turns out that models of ${\rm Z}_2$ satisfy a version of Shoenfield’s absoluteness with respect to their constructible universes. For every $\Sigma^1_2$-assertion $\varphi$, a model of ${\rm Z}_2$ satisfies $\varphi$ if and only if its constructible universe satisfies $\varphi$ with set quantifiers naturally interpreted as ranging over the reals. All of the above generalizes to constructible universes $L[A]$ relativized to a set parameter $A$. Thus, given a $\Sigma^1_2$-assertion $\varphi(n,X,A)$ for which the model satisfies $\forall n\exists X\varphi(n,X,A)$, the model can go to its constructible universe $L[A]$ to pick the least witness $X$ for $\varphi(n,X,A)$ for every $n$, because $L[A]$ agrees when $\varphi$ is satisfied, and then put the witnesses together into a single set using comprehension. So long as the unique witnessing set can be obtained for each $n$, comprehension suffices to obtain a single set of witnesses. How much more of the choice scheme follows from ${\rm Z}_2$? The reals of the classical Feferman-Lévy model of ${\rm ZF}$ (see [2], Theorem 8), in which $\aleph_1$ is a countable union of countable sets, form a $\beta$-model of ${\rm Z}_2$ in which $\Pi^1_2$-${\rm AC}$ fails. This is a particularly strong failure of the choice scheme because, as we explain below, $\beta$-models are meant to strongly resemble the full standard model $P(\omega)$.
There are two ways in which a model of second-order arithmetic can resemble the full standard model $P(\omega)$. A model of second-order arithmetic is called an $\omega$-model if its first-order part is $\omega$, and it follows that its second-order part is some subset of $P(\omega)$. But even an $\omega$-model can poorly resemble $P(\omega)$ because it may be wrong about well-foundedness by missing $\omega$-sequences. An $\omega$-model of second-order arithmetic which is correct about well-foundedness is called a $\beta$-model. The reals of any transitive ${\rm ZF}$-model form a $\beta$-model of ${\rm Z}_2$. One advantage to having a $\beta$-model of ${\rm Z}_2$ is that the constructible universe it builds internally is isomorphic to an initial segment $L_\alpha$ of the actual constructible universe $L$.
The theory ${\rm Z}_2$ also implies $\Sigma^1_2$-${\rm DC}$ (see [1], Theorem VII.9.2), the dependent choice scheme for $\Sigma^1_2$-assertions. In this article, we construct a symmetric submodel of a forcing extension of $L$ whose reals form a model of second-order arithmetic in which ${\rm Z}_2$ together with $\Sigma^1_\infty$-${\rm AC}$ holds, but $\Pi^1_2$-${\rm DC}$ fails. The forcing notion we use is a tree iteration of a forcing for adding a real due to Jensen.
Jensen’s forcing, which we will call here $\mathbb P^J$, introduced by Jensen in [3], is a subposet of Sacks forcing constructed in $L$ using the $\diamondsuit$ principle. The poset $\mathbb P^J$ has the ccc and adds a unique generic real over $L$. The collection of all $L$-generic reals for $\mathbb P^J$ in any model is $\Pi^1_2$-definable. Jensen used his forcing to show that it is consistent with ${\rm ZFC}$ that there is a $\Sigma^1_3$-definable non-constructible real [3]. Recently Lyubetsky and Kanovei extended the “uniqueness of generic filters” property of Jensen’s forcing to finite-support products of $\mathbb P^J$ [4]. They showed that in a forcing extension $L[G]$ by the $\omega$-length finite-support product of $\mathbb P^J$, the only $L$-generic reals for $\mathbb P^J$ are the slices of the generic filter $G$. The result easily extends to $\omega_1$-length finite-support products as well.
We in turn extend the “uniqueness of generic filters” property to tree iterations of Jensen’s forcing. We first define finite iterations $\mathbb P^J_n$ of Jensen’s forcing $\mathbb P^J$, and then define an iteration of $\mathbb P^J$ along a tree $\mathcal T$ to be a forcing whose conditions are functions from a finite subtree of $\mathcal T$ into $\bigcup_{n<\omega}\mathbb P_n^J$ such that nodes on level $n$ get mapped to elements of the $n$-length iteration $\mathbb P_n^J$ and conditions on higher nodes extend conditions on lower nodes. The functions are ordered by extension of domain and strengthening on each coordinate. We show that in a forcing extension $L[G]$ by the tree iteration of $\mathbb P^J$ along the tree ${}^{\lt\omega}\omega_1$ (or the tree ${}^{\lt\omega}\omega$) the only $L$-generic filters for $\mathbb P_n^J$ are the restrictions of $G$ to level $n$ nodes of the tree. We proceed to construct a symmetric submodel of $L[G]$ which has the tree of $\mathbb P_n^J$-generic filters added by $G$ but no branch through it. The symmetric model we construct satisfies ${\rm AC}_\omega$ and the tree of $\mathbb P_n^J$-generic filters is $\Pi^1_2$-definable in it. The reals of this model thus provide the desired $\beta$-model of ${\rm Z}_2$ in which $\Sigma^1_\infty$-${\rm AC}$ holds, but $\Pi^1_2$-${\rm DC}$ fails.
Our results also answer a longstanding open question of Zarach from [5] about whether the Reflection Principle holds in models of ${\rm ZFC}^-$. The Reflection Principle states that every formula can be reflected to a transitive set, and holds in ${\rm ZFC}$ by the Lévy-Montague reflection because every formula is reflected by some $V_\alpha$. In the absence of the von Neumann hierarchy, it is not clear how to realize reflection, and indeed we show that it fails in the $H_{\omega_1}$ of the symmetric model we construct, which satisfies ${\rm ZFC}^-$.
@book {simpson:secondorderArithmetic,
AUTHOR = {Simpson, Stephen G.},
TITLE = {Subsystems of second order arithmetic},
SERIES = {Perspectives in Logic},
EDITION = {Second},
PUBLISHER = {Cambridge University Press, Cambridge; Association for
Symbolic Logic, Poughkeepsie, NY},
YEAR = {2009},
PAGES = {xvi+444},
    ISBN = {978-0-521-88439-6},
  MRCLASS = {03F35 (03-02 03B30)},
MRNUMBER = {2517689 (2010e:03073)},
DOI = {10.1017/CBO9780511581007},
URL = {http://dx.doi.org/10.1017/CBO9780511581007},
}
@incollection {levy:choicescheme,
AUTHOR = {L{\'e}vy, Azriel},
TITLE = {Definability in axiomatic set theory. {II}},
BOOKTITLE = {Mathematical {L}ogic and {F}oundations of {S}et {T}heory
({P}roc. {I}nternat. {C}olloq., {J}erusalem, 1968)},
     PAGES = {129--145},
 PUBLISHER = {North-Holland, Amsterdam},
YEAR = {1970},
MRCLASS = {02.60},
MRNUMBER = {0268037 (42 \#2936)},
MRREVIEWER = {G. Kreisel},
}
@incollection {jensen:real,
AUTHOR = {Jensen, Ronald},
TITLE = {Definable sets of minimal degree},
BOOKTITLE = {Mathematical logic and foundations of set theory ({P}roc.
{I}nternat. {C}olloq., {J}erusalem, 1968)},
     PAGES = {122--128},
 PUBLISHER = {North-Holland, Amsterdam},
YEAR = {1970},
MRCLASS = {02K05},
MRNUMBER = {0306002 (46 \#5130)},
MRREVIEWER = {D. A. Martin},
}
@ARTICLE {kanovei:productOfJensenReals,
AUTHOR = {Kanovei, Vladimir and Lyubetsky, Vassily},
TITLE = {A countable definable set of reals containing no definable elements},
EPRINT ={1408.3901}}
@incollection {Zarach1996:ReplacmentDoesNotImplyCollection,
AUTHOR = {Zarach, Andrzej M.},
TITLE = {Replacement {$\nrightarrow$} collection},
BOOKTITLE = {G\"odel '96 ({B}rno, 1996)},
    SERIES = {Lecture Notes in Logic},
VOLUME = {6},
     PAGES = {307--322},
PUBLISHER = {Springer},
ADDRESS = {Berlin},
YEAR = {1996},
MRCLASS = {03E30 (03E35)},
MRNUMBER = {1441120 (98g:03120)},
}
Abstract: Lebesgue introduced a notion of density point of a set of reals and proved that any Borel set of reals has the density property, i.e. it is equal to the set of its density points up to a null set. We introduce alternative definitions of density points in Cantor space (or Baire space) which coincide with the usual definition of density points for the uniform measure on ${}^{\omega}2$ up to a set of measure $0$, and which depend only on the ideal of measure $0$ sets but not on the measure itself. This allows us to define the density property for the ideals associated to tree forcings, in analogy with the Lebesgue density theorem for the uniform measure on ${}^{\omega}2$. The main results show that among the ideals associated to well-known tree forcings, the density property holds for all such ccc forcings and fails for the remaining forcings. In fact we introduce the notion of being stem-linked and show that every stem-linked tree forcing has the density property.
This is joint work with Philipp Schlicht, David Schrittesser and Thilo Weinert.
Well. Actually no. When I was a dewy-eyed freshman, I had taken all my classes with 300 students from computer science and software engineering (Ben-Gurion University has changed that since then). Our discrete mathematics professor was renowned for being somewhat careless about details in questions and the like (my older brother took calculus with the same professor about ten years earlier; one day the professor didn't show up to class, and when my brother and two others went to see if he was in his office, he was surprised to find out that it was Tuesday). Continue reading...
In the meantime, here’s a nice graph. It answers a question posed on Reddit that uses chromatic numbers to solve a real-life problem!
Here’s another irrelevant picture.
Abstract: A linear order is called scattered if the rational order doesn’t embed into it. Scattered linear orders admit a derivative operation and an ordinal rank. In this talk we introduce some machinery needed to study the complexity of the classification of scattered linear orders of a given countable rank.
Abstract: In his PhD thesis Wadge characterized the notion of continuous reducibility on the Baire space ${}^\omega\omega$ in the form of a game and analyzed it in a systematic way. He defined a refinement of the Borel hierarchy, called the Wadge hierarchy, showed that it is well-founded, and (assuming determinacy for Borel sets) proved that every Borel pointclass appears in this classification. Later Louveau found a description of all levels in the Borel Wadge hierarchy using Boolean operations on sets. Fons van Engelen used this description to analyze Borel homogeneous spaces.
In this talk, we will discuss the basics behind these results and show the first steps towards generalizing them to the projective hierarchy, assuming projective determinacy (PD). In particular, we will outline that under PD every homogeneous projective space is in fact strongly homogeneous.
This is joint work with Raphaël Carroy and Andrea Medini.
It’s also difficult because most people in this field like this confusion, especially if they have a stake in it. It’s obviously a better sales pitch to say you’re helping all of STEM, even if you’re actually working on a set of (arguably tricky) visual/print layout techniques. I don’t want to sound too cynical here; for many people it does come from the heart: they believe they are helping STEM this way, and that is what drives them. Besides, as they say, you cannot change others, only yourself.
These days I spend much more time on the document level and, mostly, on mathematical documents. That brings up a slew of interesting problems, but many are too ephemeral to share. The other day I came across a particularly interesting piece of content, one that highlights some aspects of this identification problem.
In this paper you find the following:
The layout captured in this image combines a label (5.4) with an ordered list of three mathematical statements, one of which includes a sublist of two items. Of course, these statements contain quite a few bits of equational content, but those aren't that important here. Instead, what's interesting is that a stretchy brace is used as a visual cue that connects the single label with the list of statements, aligning its center with the label and stretching to the height of the list.
How do you realize this kind of layout on the web? (And, for that matter, in LaTeX?) Before answering that, it’s worth diving a little deeper.
There are two conflicting details here. On the one hand, the label (as per source and context) is actually an equation label. This means the authors intended this list of statements (each being a self-contained sentence with several equational elements interspersed) to be treated as a single piece of equational content. Much like tables, images, or (since we’re in a math paper) theorem environments, this is an important piece of structural information and should not be lost.
On the other hand, the list is a (nested) ordered (text) list, and it is encoded as such by the authors. This, too, is an important piece of structural information and should not be lost.
And that’s a bit of a problem, both for the web and for LaTeX: there’s no system for equation layout with a built-in concept of ordered lists. And there’s no system for text layout with stretchy braces.
If you look in the TeX source of the paper, you’ll see how this was hacked using \parbox. On the web, you have a harder time, since in practical terms you can’t really pull off this kind of hack of switching from equation layout to text layout. In theory (i.e., HTML5 spec dream land), you could try something like this:
<math side="left">
<mtable>
<mlabeledtr>
<mtd>
<mtext>(5.4)</mtext>
</mtd>
<mtd>
<mo>{</mo>
<mtext>
<ol>
...
</ol>
</mtext>
</mtd>
</mlabeledtr>
</mtable>
</math>
Now this won’t work that well in real life. But the real question for me is: is that even correct? (And in which sense?) This is a <math> element consisting really only of text, while the purely visual brace is the only element with “semantic” markup. Hm…
I find this one interesting because the problem is a case of visual layout clouding one’s judgement. You want to use stretchy braces, so in TeX you need math mode and the rest follows pretty “rationally”, no matter the hackiness. After all, it’s print; no need to care about anything but the looks.
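To make the TeX-side hack concrete, here is a rough sketch of the kind of \parbox trick involved. This is a hypothetical reconstruction for illustration (label and list contents invented), not the paper’s actual source:

```latex
% A paragraph-mode list smuggled into an equation via \parbox.
% The \left\{ ... \right. pair produces the stretchy brace, which
% sizes itself to the height of the box it encloses.
\begin{equation}\label{eq:conditions}% hypothetical label
  \left\{\;
  \parbox{0.8\linewidth}{%
    \begin{enumerate}
      \item a first statement, with inline math such as $f(x)=0$;
      \item a second statement;
      \item a third statement, possibly with a nested sublist.
    \end{enumerate}}%
  \right.
\end{equation}
```

Note the inversion: the equation environment supplies only the label and the brace, while all the actual content lives in text mode inside the box — exactly the confusion of structure discussed above.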
On the one hand, there’s the gut reaction to say that authors should not do things like this. This may be based on the simple principle that, when you need to hack around a lot, you’re probably doing something wrong.
A less toxic response may be to criticize the content structure: should this really be an equation label? Isn’t it more like a theorem environment anyway? If not, should this enumeration not be numbered as subequations? And isn’t the brace a legacy of organizing content on a blackboard, rather than something for print layout to mimic (let alone web layout)?
If I were one of the authors, I’d probably respond grumpily: how dare you question that this is the best (perhaps not good, but best) way to represent this particular piece of mathematical content that I arrived at after years of study of a deep and complex research topic?
And they’d be right because this really only evades the two actual problems: the confusion of “equation” and “mathematical fragment” and the problem of stretchy characters.
On the one hand, it’s clear that this is a (complicated) unit of mathematical information. It must be treated as one. And while I would argue it is not an equation/formula (and certainly not in the sense of “equational layout”, let alone MathML’s idea of it), if the authors want to count it as such, there should be a way. But on the web we’re severely limited when it comes to marking anything as “an equation”, especially when structures like regular lists come into play.
From a layout perspective, however, the only notable problem is the stretched brace. It has no meaning here (if it ever has); it’s merely a stylistic device to help visually connect a list with a label. It is not “mathematics” or even “equational” in any sense of the word. And yet, with the current state of web technology, the only way to realize it is with tools specialized for precisely equation layout (and usually with misleading “semantics” to boot).
But we should be able to do this, no?
Here’s an example (using a technique of pure CSS stretchy braces developed by Davide Cervone for MathJax v3).
See the Pen case study: arxiv.org/1412.8106 by Peter Krautzberger (@pkra) on CodePen.
Pass on what you have learned. Strength, mastery. But weakness, folly, failure also. Yes, failure most of all. The greatest teacher, failure is.
Abstract: Questions about infinity are fascinating, and can lead into deep mathematical topics in set theory. The mathematics of infinite sets wasn’t clearly understood until Cantor defined cardinal numbers in the late 19th century, stating that two sets are the same size if there is a one-to-one correspondence between them. One surprising result from set theory, first proved by Cantor in 1873, is that there are precisely as many rational numbers (fractions) as there are counting numbers. Over one hundred years later, mathematicians Neil Calkin and Herbert S. Wilf published a more elegant proof of this fact.
This article is the result of our work to develop the ideas in the Calkin–Wilf proof, so that they would be accessible to the teachers in our three different Math Teachers’ Circles. We designed an investigation into the hyperbinary numbers (itself a 19th century topic that predates Cantor’s work on cardinality) and developed the Tree of Fractions, much in the style of Calkin and Wilf. We asked teachers to make observations, ask questions, and convince each other of the veracity of their claims.
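For readers unfamiliar with the construction: in the standard Calkin–Wilf tree the root is $1/1$, and each fraction (always in lowest terms) has two children. The article’s Tree of Fractions is “much in the style of Calkin and Wilf”, so its presentation may differ, but the generating rule can be summarized as:

```latex
% The Calkin-Wilf tree: root 1/1; each vertex a/b has two children.
\frac{a}{b}
  \;\longmapsto\;
  \underbrace{\frac{a}{a+b}}_{\text{left child}}
  \qquad\text{and}\qquad
  \underbrace{\frac{a+b}{b}}_{\text{right child}}
% Reading the tree breadth-first enumerates every positive rational
% exactly once: 1/1, 1/2, 2/1, 1/3, 3/2, 2/3, 3/1, 1/4, ...
```

Since every positive rational appears exactly once in this breadth-first listing, the tree itself exhibits the one-to-one correspondence between counting numbers and fractions.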
First off, there’s Equations ≠ Math (Or: Equation layout as a print artifact) (archive.org). This is somewhat of a continuation (and hopefully a refinement) of #196.
You should also totally register for my upcoming workshop on equation rendering in ebooks at Ebookcraft in March!
I’ve never been one for looking back at the end of a year. But since the last year was complex (and this one is set up to be equally so), I thought maybe I should motivate myself by looking ahead to the things I want to write about this year (including things in my actual schedule for 2018).
Ok, maybe stop here; it’s a lot already.
Suppose that $f$ is a transcendental entire function. In 2014, Rippon and Stallard showed that the union of the escaping set with infinity is always connected. In this paper we consider the related question of whether the union with infinity of the bounded orbit set, or the bungee set, can also be connected. We give sufficient conditions for these sets to be connected, and an example of a transcendental entire function for which all three sets are simultaneously connected. This function lies, in fact, in the Speiser class.
It is known that for many transcendental entire functions the escaping set has a topological structure known as a spider’s web. We use our results to give a large class of functions in the Eremenko–Lyubich class for which the escaping set is not a spider’s web. Finally we give a novel topological criterion for certain sets to be a spider’s web.
The Fatou–Julia iteration theory of rational and transcendental entire functions has recently been extended to quasiregular maps in more than two real dimensions. Our goal in this paper is similar; we extend the iteration theory of analytic self-maps of the punctured plane to quasiregular self-maps of punctured space.
We define the Julia set as the set of points for which the complement of the forward orbit of any neighbourhood of the point is a finite set. We show that the Julia set is nonempty, and shares many properties with the classical Julia set of an analytic function. These properties are stronger than those known to hold for the Julia set of a general quasiregular map of space.
We define the quasi-Fatou set as the complement of the Julia set, and generalise a result of Baker concerning the topological properties of the components of this set. A key tool in the proof of these results is a version of the fast escaping set. We generalise various results of Martí-Pete concerning this set, for example showing that the Julia set is equal to the boundary of the fast escaping set.
Abstract: In this talk I presented the notation and machinery of forcing, the statement of Martin’s axiom, and some well-known applications in the areas of Baire category and measure theory.
Müller, S., & Sargsyan, G. (2018). HOD in inner models with Woodin cardinals.
We analyze the hereditarily ordinal definable sets $\operatorname{HOD}$ in the canonical inner model with $n$ Woodin cardinals $M_n(x,g)$ for a Turing cone of reals $x$, where $g$ is generic over $M_n(x)$ for the Lévy collapse up to its bottom inaccessible cardinal. We prove that assuming $\boldsymbol\Pi^1_{n+2}$-determinacy, for a Turing cone of reals $x$, $\operatorname{HOD}^{M_n(x,g)} = M_n(M_{\infty}, \Lambda),$ where $M_\infty$ is a direct limit of iterates of an initial segment of $M_{n+1}$ and $\Lambda$ is a partial iteration strategy for $M_{\infty}$. This implies that under the same hypothesis $\operatorname{HOD}^{M_n(x,g)}$ is a fine-structural model and therefore satisfies $\operatorname{GCH}$. These results generalize to $\operatorname{HOD}^M$ for self-iterable canonical inner models $M$, for example $M_\omega$, the least mouse with $\omega$ Woodin cardinals, or initial segments of the least non-tame mouse $M_{nt}$.
The restriction to the study of only the definable large collections of sets is a limitation of first-order set theory, which prevents us from exploring some natural properties of set-theoretic universes. For instance, consider the long-standing open question whether Reinhardt cardinals are consistent with ${\rm ZF}$, which was revisited only a few days ago in this article. A Reinhardt cardinal is the critical point of an elementary embedding $j:V\to V$. It is not difficult to show that there cannot be a definable elementary $j:V\to V$ in a model of ${\rm ZF}$, so the open question is about the existence of an undefinable such embedding. Other recent examples of the use of general classes come from the study of inner model reflection principles. Motivated by a question of Neil Barton, Barton, Caicedo, Fuchs, Hamkins, and Reitz recently introduced and studied the Inner Model Reflection Principle, stating that every first-order formula reflects to a proper inner model [1]. The statement of the principle cannot be expressed in first-order set theory because it requires quantifying over classes. Along similar lines, Friedman had previously introduced the Inner Model Hypothesis, which states that if a first-order sentence holds in an outer model (extension universe) of an inner model, then it already holds in some inner model [2]. For a long time it was not clear in what framework this principle could be formalized, because it requires not only quantifying over classes but also referring to classes that are potentially outside the universe itself.
So how do we undertake a general study of classes? What is the framework in which we can have undefinable classes and where we can study the properties of classes in the same way we study sets? This framework is second-order set theory, formalized in a two-sorted logic with separate objects and quantifiers for sets and classes. Models of second-order set theory are triples $\mathscr V=\langle V,\in,\mathcal C\rangle$ where $\mathcal C$ is the collection of classes of $\mathscr V$. One of the weakest reasonable axiomatizations of second-order set theory is the Gödel–Bernays set theory ${\rm GBC}$, whose axioms consist of the ${\rm ZFC}$ axioms for sets, the extensionality, replacement, and existence-of-a-global-well-order axioms for classes, together with a weak comprehension scheme stating that every first-order formula defines a class. If a universe of set theory has a definable global well-order, then it together with its definable classes is a model of ${\rm GBC}$. Indeed, ${\rm GBC}$ is equiconsistent with ${\rm ZFC}$ and has the same first-order consequences as ${\rm ZFC}$. If we just add to ${\rm GBC}$ comprehension for $\Sigma^1_1$-formulas (formulas with a single existential class quantifier), we get a much stronger theory with many desirable properties. The theory ${\rm GBC}$ + $\Sigma^1_1$-Comprehension implies that any two meta-ordinals (class well-orders) are comparable, that we can iterate the $L$-construction along any meta-ordinal, that there is an iterated truth predicate along any meta-ordinal, that determinacy holds for open class games, and that the class forcing theorem holds.
A truth predicate is a class of Gödel codes of first-order formulas obeying the Tarskian truth conditions. Tarski’s theorem on the undefinability of truth implies that a truth predicate cannot be definable, and therefore ${\rm GBC}$, because it can have models with only the definable classes, cannot imply the existence of such a class. Indeed, the existence of a truth predicate class implies $\text{Con}({\rm ZFC})$, $\text{Con}(\text{Con}({\rm ZFC}))$, and much more. ${\rm GBC}$ + $\Sigma^1_1$-Comprehension implies that there is a truth predicate for every structure $\langle V,\in, A\rangle$ with a class $A$. In particular, if $T_0$ is the truth predicate for $\langle V,\in,A\rangle$, then we have a truth predicate $T_1$ for the structure $\langle V,\in, T_0,A\rangle$; that is, we have truth about truth. How far can we iterate the truth operation? ${\rm GBC}$ + $\Sigma^1_1$-Comprehension implies that we get an iterated truth predicate along any meta-ordinal.
By analogy with games on $X^\omega$, for a set $X$, where the players take turns playing elements from $X$ for $\omega$-many steps, in the second-order context we can consider games on ${\rm ORD}^\omega$. It turns out that ${\rm GBC}$ + $\Sigma^1_1$-Comprehension implies determinacy for all such open class games [3].
The strength of the forcing construction comes from the Forcing Theorem, which states that the forcing relation (for a fixed first-order formula) is definable. The analogue of the Forcing Theorem for class partial orders says that the forcing relation (for a fixed first-order formula) is a class. The Class Forcing Theorem can fail in a model of ${\rm GBC}$, because there are class forcing notions from whose forcing relation for atomic formulas we can define a truth predicate. But ${\rm GBC}$ + $\Sigma^1_1$-Comprehension implies the Class Forcing Theorem.
Surprisingly, in ${\rm GBC}$ + $\Sigma^1_1$-Comprehension we can even formalize Friedman’s Inner Model Hypothesis, because the properties of outer models can be expressed via a strong logic, called $V$-logic, whose proof system is expressible in this theory [4].
Indeed, it turns out that most of these principles are implied by a weaker natural theory, ${\rm GBC}$ + ${\rm ETR}$ (elementary transfinite recursion). The principle ${\rm ETR}$, which is an analogue of the Recursion Theorem of first-order set theory, states that every first-order definable recursion along a meta-ordinal has a solution. The principle ${\rm ETR}$ implies over ${\rm GBC}$ that we can iterate the $L$-construction along any meta-ordinal. Over ${\rm GBC}$, the principle ${\rm ETR}$ is equivalent to determinacy for clopen class games and to the existence of an iterated truth predicate along any meta-ordinal [3]. The Class Forcing Theorem is equivalent, over ${\rm GBC}$, to the principle ${\rm ETR}_{{\rm ORD}}$, stating that we can perform recursions along ${\rm ORD}$ [5]. The amount of available ${\rm ETR}$ gives a natural hierarchy of second-order set theories above ${\rm GBC}$, with ${\rm ETR}_\omega$ already implying the existence of a truth predicate.
The only principle we have considered so far which is known to be stronger than ${\rm ETR}$ is open determinacy, a result due to Sato [6]. Hamkins and Woodin showed recently that open determinacy implies that forcing does not add meta-ordinals, a natural analogue of the statement that forcing does not add ordinals (personal communication).
Of course, the amount of available comprehension itself gives a hierarchy of second-order set theories, culminating with the Kelley–Morse set theory ${\rm KM}$, which consists of ${\rm GBC}$ together with the full comprehension scheme for all second-order formulas. Beyond ${\rm KM}$ are theories which include choice principles for classes, such as the choice scheme and the dependent choice scheme. These theories have the advantage of bi-interpretability with extensions of the well-understood first-order set theory ${\rm ZFC}^-_I$ (${\rm ZFC}$ without powerset and with the existence of a largest cardinal, which is inaccessible). An even stronger principle, which endows classes with more set-like properties, is the existence of a canonically definable well-order of the classes. The existence of a definable well-ordering on classes makes it possible, for instance, to carry out the Boolean-valued model forcing construction for class forcing notions (work in progress with Carolin Antos and Sy-David Friedman).
Are there natural second-order set-theoretic principles between ${\rm GBC}$ + $\Sigma^1_1$-Comprehension and ${\rm KM}$? What natural principles lie beyond ${\rm KM}$ together with the choice scheme and the dependent choice scheme?
@ARTICLE{BartonCaicedoFuchsHamkinsReitz:Innermodelreflectionprinciples,
author = {Neil Barton and Andr\'es Eduardo Caicedo and Gunter Fuchs and Joel David Hamkins and Jonas Reitz},
title = {Inner-model reflection principles},
journal = {ArXiv eprints},
year = {2017},
note = {manuscript under review},
keywords = {underreview},
eprint = {1708.06669},
archivePrefix = {arXiv},
primaryClass = {math.LO},
url = {http://jdh.hamkins.org/innermodelreflectionprinciples},
}
@article {Friedman2006:InternalConsistencyAndIMH,
    AUTHOR = {Friedman, Sy-David},
TITLE = {Internal consistency and the inner model hypothesis},
JOURNAL = {Bull. Symbolic Logic},
FJOURNAL = {Bulletin of Symbolic Logic},
VOLUME = {12},
YEAR = {2006},
NUMBER = {4},
     PAGES = {591--600},
      ISSN = {1079-8986},
MRCLASS = {03E35 (03E45 03E55)},
MRNUMBER = {2283091 (2007j:03065)},
MRREVIEWER = {Qi Feng},
URL = {http://projecteuclid.org/getRecord?id=euclid.bsl/1164056808},
}
@INCOLLECTION{GitmanHamkins:OpenDeterminacyForClassGames,
author = {Victoria Gitman and Joel David Hamkins},
title = {Open determinacy for class games},
booktitle = {Foundations of Mathematics, Logic at Harvard, Essays in Honor of Hugh Woodin's 60th Birthday},
publisher = {American Mathematical Society},
year = {(expected) 2016},
editor = {Andr\'es E. Caicedo and James Cummings and Peter Koellner and Paul Larson},
series = {Contemporary Mathematics},
note = {Newton Institute preprint ni15064},
url = {http://jdh.hamkins.org/opendeterminacyforclassgames},
eprint = {1509.01099},
archivePrefix = {arXiv},
primaryClass = {math.LO},
pdf= {http://boolesrings.org/victoriagitman/files/2016/09/Properclassgames.pdf},
}
@ARTICLE{AntosBartonFriedman:VLogic,
author = {Neil Barton and Carolin Antos and Sy-David Friedman},
title = {Universism and extensions of {V}},
note={Preprint},
eprint = {1708.05751},
journal = {ArXiv eprints},
}
@ARTICLE{GitmanHamkinsHolySchlichtWilliams:ForcingTheorem,
AUTHOR= {Victoria Gitman and Joel David Hamkins and Peter Holy and Philipp Schlicht and Kameryn Williams},
TITLE= {The exact strength of the class forcing theorem},
PDF={https://boolesrings.org/victoriagitman/files/2017/07/Forcingtheorem.pdf},
Note ={Submitted},
EPRINT ={1707.03700},
}
@ARTICLE{Sato:determinacy,
author = {Kentaro Sato},
title = {Inductive dichotomy: separation of open and clopen class determinacies},
note={Preprint},
}
The impediment to action advances action. What stands in the way becomes the way.
Catalog description: Euclidean, non-Euclidean, and projective geometries from an axiomatic point of view.
Catalog description: The real number system, completeness and compactness, sequences, continuity, foundations of the calculus.
Don’t fear failure. Not failure, but low aim, is the crime. In great attempts it is glorious even to fail.
This quote appears on page 121 of Striking Thoughts: Bruce Lee’s Wisdom for Daily Living. For more great quotes, check out the Wikiquote page for Bruce Lee.
The case for support document from my grant application gives details of this conjecture, its importance, and the strategies that I hope to employ to work on it.
Excitingly, the university has agreed to fund a PhD student as part of this research. I’ll post a brief description of what the PhD will focus on below. If you are interested, please get in touch!
This programme of doctoral research is within the study of finite permutation group theory. Motivated by questions in model theory, about 20 years ago Cherlin introduced the notion of the relational complexity of a permutation group G; this is a positive integer which, roughly speaking, gives an indication of how easily the group G can act homogeneously on a relational structure. Cherlin’s conjecture concerns binary primitive permutation groups, i.e. primitive permutation groups which have relational complexity equal to 2. It is hoped that this conjecture might be proved in the next couple of years.
In light of this, one naturally asks whether we can classify groups with larger relational complexity, or whether we can calculate the relational complexity of important families of permutation groups. Calculating the relational complexity of a permutation group can be surprisingly tricky, so these sorts of questions can hide many mysteries!
In the process of working in this area, the student can expect to learn a great deal about the structure of finite simple groups (especially the simple classical groups) and, in particular, will study and make use of one of the most famous theorems in mathematics, the Classification of Finite Simple Groups.
Abstract: An essential question regarding the theory of inner models is the analysis of the class of all hereditarily ordinal definable sets $\operatorname{HOD}$ inside various inner models $M$ of the set-theoretic universe $V$ under appropriate determinacy hypotheses. Examples for such inner models $M$ are $L(\mathbb{R})$, $L[x]$ and $M_n(x)$. Woodin showed that under determinacy hypotheses these models of the form $\operatorname{HOD}^M$ contain large cardinals, which motivates the question whether they are fine-structural, as for example the models $L(\mathbb{R})$, $L[x]$ and $M_n(x)$ are. A positive answer to this question would yield that they are models of $\operatorname{CH}, \Diamond$, and other combinatorial principles.
The first model which was analyzed in this sense was $\operatorname{HOD}^{L(\mathbb{R})}$, under the assumption that every set of reals in $L(\mathbb{R})$ is determined. In the 1990s Steel and Woodin were able to show that $\operatorname{HOD}^{L(\mathbb{R})} = L[M_\infty, \Lambda]$, where $M_\infty$ is a direct limit of iterates of the canonical mouse $M_\omega$ and $\Lambda$ is a partial iteration strategy for $M_\infty$. Moreover Woodin obtained a similar result for the model $\operatorname{HOD}^{L[x,G]}$ assuming $\Delta^1_2$ determinacy, where $x$ is a real of sufficiently high Turing degree, $G$ is $\operatorname{Col}(\omega, {<}\kappa_x)$-generic over $L[x]$ and $\kappa_x$ is the least inaccessible cardinal in $L[x]$.
In this talk I will give an overview of these results (including some background on inner model theory) and outline how they can be extended to the model $\operatorname{HOD}^{M_n(x,g)}$ assuming $\boldsymbol\Pi^1_{n+2}$ determinacy, where $x$ again is a real of sufficiently high Turing degree, $g$ is $\operatorname{Col}(\omega, {<}\kappa_x)$-generic over $M_n(x)$ and $\kappa_x$ is the least inaccessible cardinal in $M_n(x)$.
This is joint work with Grigor Sargsyan.
The mathematician does not study pure mathematics because it is useful; he studies it because he delights in it, and he delights in it because it is beautiful.
Joint work with Ari Meir Brodsky.
Abstract. Schimmerling asked whether $\square^*_\lambda$ together with GCH entails the existence of a $\lambda^+$-Souslin tree, for a singular cardinal $\lambda$. Here, we provide an affirmative answer under the additional assumption that there exists a nonreflecting stationary subset of $E^{\lambda^+}_{\neq cf(\lambda)}$.
As a bonus, the resulting $\lambda^+$-Souslin tree is moreover free.
Downloads:
Abstract: In 2009, Roman Kossak and I showed that the classification problem for countable models of arithmetic (PA) is Borel complete, which means it is as complex as possible. The proof is elementary modulo Gaifman’s construction of so-called canonical I-models. Recently Sam Dworetzky, John Clemens, and I adapted the method to show that the classification problem for countable models of set theory (ZFC) is Borel complete too. In this talk I’ll give the background needed to state such results, and then give an outline of the two very similar proofs.
Joint work with Gunter Fuchs.
Abstract. It is well-known that the square principle $\square_\lambda$ entails the existence of a nonreflecting stationary subset of $\lambda^+$, whereas the weak square principle $\square^*_\lambda$ does not.
Here we show that if $\mu^{cf(\lambda)}<\lambda$ for all $\mu<\lambda$, then $\square^*_\lambda$ entails the existence of a nonreflecting stationary subset of $E^{\lambda^+}_{cf(\lambda)}$ in the forcing extension for adding a single Cohen subset of $\lambda^+$.
It follows that indestructible forms of simultaneous stationary reflection entail the failure of weak square. We demonstrate this by settling a question concerning the subcomplete forcing axiom (SCFA), proving that SCFA entails the failure of $\square^*_\lambda$ for every singular cardinal $\lambda$ of countable cofinality.
Downloads:
Six months after I turned in my dissertation, I have finally received approval for the damn thing. Continue reading...
A student proposed to me the following strong form of König’s lemma:
Conjecture. Suppose that $G=(V,E)$ is a countable graph, and there is a partition of $V$ into countably many pieces $V=\bigcup_{n<\omega}V_n$, such that:
Then there exists an infinite $K\subseteq V$ such that $[K]^2\subseteq E$.
In this post, I will quickly address this “conjecture”. Thus, if you prefer to think about it by yourself, read no more.
Refutation. Consider the graph $G=(\mathbb N,E)$ where $\{n,m\}\in E$ iff $n+m\equiv1\pmod2$. It is easy to see that for every 3-sized set $\{n,m,l\}$, we have $[\{n,m,l\}]^2\nsubseteq E$: two of the three integers must share the same parity, and then their sum is even. On the other hand, letting $V_n:=\{2n,2n+1\}$ for all $n<\omega$ yields a partition satisfying the above-mentioned properties.
I gave an invited talk at the 14th International Workshop on Set Theory in Luminy in Marseille, October 2017.
Talk Title: Distributive Aronszajn trees
Abstract: It is well-known that the statement “all $\aleph_1$-Aronszajn trees are special” is consistent with ZFC (Baumgartner, Malitz, and Reinhardt), and even with ZFC+GCH (Jensen). In contrast, Ben-David and Shelah proved that, assuming GCH, for every singular cardinal $\lambda$: if there exists a $\lambda^+$-Aronszajn tree, then there exists a non-special one. Furthermore:
Theorem (Ben-David and Shelah, 1986). Assume GCH and that $\lambda$ is a singular cardinal. If there exists a special $\lambda^+$-Aronszajn tree, then there exists a $\lambda$-distributive $\lambda^+$-Aronszajn tree.
This suggests the following stronger statement:
Conjecture. Assume GCH and that $\lambda$ is a singular cardinal.
If there exists a $\lambda^+$-Aronszajn tree, then there exists one which is $\lambda$-distributive.
The assumption that there exists a $\lambda^+$-Aronszajn tree is a very mild square-like hypothesis (that is, $\square(\lambda^+,\lambda)$). In order to bloom a $\lambda$-distributive tree from it, there is a need for a toolbox, each tool taking an abstract square-like sequence and producing a sequence which is slightly better than the original one. For this, we introduce the monoid of postprocessing functions and study how it acts on the class of abstract square sequences. We establish that, assuming GCH, the monoid contains some very powerful functions. We also prove that the monoid is closed under various mixing operations.
This allows us to prove a theorem which is just one step away from verifying the conjecture:
Theorem 1. Assume GCH and that $\lambda$ is a singular cardinal.
If $\square(\lambda^+,<\lambda)$ holds, then there exists a $\lambda$-distributive $\lambda^+$-Aronszajn tree.
Another proof, involving a 5-step chain of applications of postprocessing functions, yields the following theorem.
Theorem 2. Assume GCH. If $\lambda$ is a singular cardinal and $\square(\lambda^+)$ holds, then there exists a $\lambda^+$-Souslin tree which is coherent mod finite.
This is joint work with Ari Brodsky.
Downloads:
Abstract:
Given a set-theoretic property $\mathcal P$ characterized by the existence of elementary embeddings between some first-order structures, let’s say that $\mathcal P$ holds virtually if the embeddings between structures from $V$ characterizing $\mathcal P$ exist somewhere in the generic multiverse. We showed with Schindler that virtual versions of supercompact, $C^{(n)}$-extendible, $n$-huge and rank-into-rank cardinals form a large cardinal hierarchy consistent with $V=L$. Included in the hierarchy are virtual versions of inconsistent large cardinal notions such as the existence of an elementary embedding $j:V_\lambda\to V_\lambda$ for $\lambda$ much larger than the supremum of the critical sequence. The Silver indiscernibles, under $0^\sharp$, which have a number of large cardinal properties in $L$, are also natural examples of virtual large cardinals. Virtual versions of forcing axioms, including ${\rm PFA}$, ${\rm SCFA}$, and resurrection axioms, have been studied by Schindler and Fuchs, who showed that they are equiconsistent with virtual large cardinals. We showed with Bagaria and Schindler that the virtual version of Vopěnka’s Principle is consistent with $V=L$. Bagaria had shown that Vopěnka’s Principle holds if and only if the universe has a proper class of $C^{(n)}$-extendible cardinals for every $n\in\omega$. We almost generalized his result by showing that the virtual version is equiconsistent with the existence, for every $n\in\omega$, of a proper class of virtually $C^{(n)}$-extendible cardinals. With Hamkins we showed that Bagaria’s result cannot generalize by constructing a model of virtual Vopěnka’s Principle in which there are no virtually extendible cardinals. The difference arises from the failure of Kunen’s Inconsistency in the virtual setting. In the talk, I will discuss a mixture of results about the virtual large cardinal hierarchy and virtual Vopěnka’s Principle.
Abstract: Borel complexity theory is the study of the relative complexity of classification problems in mathematics. At the heart of this subject is invariant descriptive set theory, which is the study of equivalence relations on standard Borel spaces and their invariant mappings. The key notion is that of Borel reducibility, which identifies when one classification is just as hard as another. Though the Borel reducibility ordering is wild, there are a number of well-studied benchmarks against which to compare a given classification problem. In this talk we will introduce Borel complexity theory, present several concrete examples, and explore techniques and recent developments surrounding each.
We survey the dynamics of functions in the Eremenko-Lyubich class. Among transcendental entire functions, those in this class have properties that make their dynamics markedly accessible to study. Many authors have worked in this field, and the dynamics of functions in this class is now particularly well-understood and well-developed. There are many striking and unexpected results. Several powerful tools and techniques have been developed to help progress this work. We consider the fundamentals of this field, review some of the most important results, techniques and ideas, and give stepping-stones to deeper inquiry.
As I wrote last time, the usual way to describe MathML’s double-spec is this: Presentation MathML is for layout and Content MathML is for semantics.
Last time I wrote about how semantics are effectively absent from MathML on the web. Unfortunately, layout does not fare much better.
So at first the spec will tell you that’s absolutely not true:
Presentation markup […] is used to display mathematical expressions; and Content markup […] is used to convey mathematical meaning.
So you will naturally start by thinking Presentation MathML is what you’re after regarding equation layout (not mathematics).
The spec, however, throws you a curveball:
MathML presentation elements only recommend (i.e., do not require) specific ways of rendering; this is in order to allow for mediumdependent rendering and for individual preferences of style.
So the Presentation MathML spec is about layout, but it does not actually specify how that layout should work.
This is obviously a problem when you want to see standards-compliant implementations in all major web browsers (even if it’s just 4 engines). Usually (say with CSS or SVG), you can assume that a standard ensures that developers are able to get consistent results across systems. Of course any standard will have gaps and edge cases but then, at least, specs can be clarified and either fixed in both standards and implementations, or a standard can be identified as problematic (and ideally a less inconsistent standard can replace it).
However, this is not some kind of accident and you can easily find many statements in the same vein throughout the spec. For example, the section for <mfrac> says effectively nothing about the spacing between numerator, fraction line, and denominator.
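To see what that means in practice, here is a minimal fraction (a made-up example, not taken from the spec). The markup is perfectly valid, yet nothing in the spec pins down the vertical gaps a renderer must leave around the fraction line:

```xml
<!-- A hypothetical minimal fraction. The spec does not say how much
     space must separate numerator, fraction line, and denominator,
     so two conforming renderers may show this differently. -->
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mfrac>
    <mrow><mi>a</mi><mo>+</mo><mn>1</mn></mrow>
    <mi>b</mi>
  </mfrac>
</math>
```

Two conforming renderers can therefore space this quite differently, and both can claim to be “correct”.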
Or you get gems like this one from <mscarries>
This means that the second row, even if it does not draw, visually uses some (undefined by this specification) amount of space when displayed.
In contrast, pick any random part of contemporary CSS, e.g., flex containers, and you start down a rabbit hole of layout specifics that are the result of quite meticulous discussion.
In other words, Presentation MathML does not even want to give you the same (messy) path to improvements as we’re used to on the web (and we’re still ignoring the practical problem that the Working Group is dead in the water so no fixes can be made).
At this point you might be wondering how that could be possible. After all, there are plenty of equation rendering engines out there that handle MathML. How do you reconcile this?
I think it is fairly simple (yet no less problematic). Presentation MathML assumes an implementor already knows how equation layout is supposed to work; in fact, reading the spec you will get the feeling that it assumes you already have an equation layout engine at your disposal and you are merely adding MathML support, interpreting it in your engine.
In other words, Presentation MathML does not specify layout but is an abstraction layer, an exchange format for equation layout engines, a format that a rendering engine can (easily) make sense of within its already existing system.
(And yes, you could troll MathML enthusiasts by saying that Chrome and Edge support all layout requirements of the MathML spec. But please don’t.)
Since I considered the value of Presentation MathML’s semantics in the previous post, it’s only prudent to double-check the value of Content MathML for layout. Unsurprisingly, Content MathML really does not want to help either. The spec speaks quite clearly:
[…] encoding the underlying mathematical structure explicitly, without regard to how it is presented aurally or visually,
So no visual layout anywhere.
By the way, it seems easy to misunderstand this point in the spec. Of course we can render MathML content; lots of tools do. But what no tool can rely on is the MathML spec when it comes to deciding how to render Content MathML visually. As I already mentioned, few rendering engines are “MathML-based” because they literally cannot be; they need to base their layout decisions on a more reliable source.
The other side of that coin is that you might disagree about how to visually render Content MathML. In real life (at MathJax), we’ve actually had one or two complaints over the years about how our Content-to-Presentation conversion is wrong.
This is really just the core, the fundamental issue around MathML layout on the web. Even if you make the assumption that an equation layout engine should be added to browsers, there are more problems. And then we’re still not talking about the problems of the shoddy implementations in Gecko and WebKit. Let’s see if I’ll get around to that. For now, let’s continue the 10,000 ft view a bit longer.
Peter Holy and Philipp Schlicht recently introduced a robust hierarchy of Ramsey-like cardinals $\kappa$ using games in which player I plays an increasing sequence of $\kappa$-models and player II responds by playing an increasing sequence of $M$-ultrafilters for some cardinal $\alpha\leq\kappa$ many steps, with player II winning if she is able to continue finding the required filters [1]. The entire hierarchy sits below a measurable cardinal and intertwines with Ramsey cardinals, as well as the Ramsey-like cardinals I introduced in [2]. The cardinals in the hierarchy can also be defined by the existence of the kinds of elementary embeddings characterizing Ramsey cardinals and other cardinals in that neighborhood. Before getting to their hierarchy and the filter games, we need some background.
Large cardinals $\kappa$ below a measurable cardinal tend to be characterized by the existence of certain elementary embeddings of weak $\kappa$-models or $\kappa$-models. A weak $\kappa$-model is a transitive model of ${\rm ZFC}^-$ of size $\kappa$ and height above $\kappa$, which we should think of as a mini-universe of set theory; a $\kappa$-model is additionally closed under $\lt\kappa$-sequences. Given a weak $\kappa$-model $M$, we call $U\subseteq P(\kappa)\cap M$ an $M$-ultrafilter if the structure $\langle M,\in, U\rangle$ satisfies that $U$ is a normal ultrafilter. (Note that since an $M$-ultrafilter is only $\lt\kappa$-complete for sequences from $M$, the ultrapower by it need not be well-founded.) Obviously, if the ultrapower of a weak $\kappa$-model $M$ by an $M$-ultrafilter on $\kappa$ is well-founded, then we get an elementary embedding of $M$ into a transitive model $N$, and conversely if there is an elementary embedding $j:M\to N$ with $N$ transitive and critical point $\kappa$, then $U=\{A\in M\mid A\subseteq\kappa\text{ and }\kappa\in j(A)\}$ is an $M$-ultrafilter with a well-founded ultrapower. These types of elementary embeddings characterize, for instance, weakly compact cardinals. If $\kappa^{\lt\kappa}=\kappa$, then $\kappa$ is weakly compact whenever every $\kappa$-model $M$ has an $M$-ultrafilter on $\kappa$ (and hence a well-founded ultrapower).
An $M$-ultrafilter $U$, for a weak $\kappa$-model $M$, is called weakly amenable if for every $X\in M$, which $M$ thinks has size $\kappa$, $X\cap U\in M$. Because a weakly amenable $M$-ultrafilter is partially internal to $M$, we are able to define its iterates and iterate the ultrapower construction as we would do with a measure on $\kappa$. If $j:M\to N$ is the ultrapower by a weakly amenable $M$-ultrafilter on $\kappa$, then $M$ and $N$ have the same subsets of $\kappa$, and conversely if $M$ and $N$ have the same subsets of $\kappa$ and $j:M\to N$ is an elementary embedding with critical point $\kappa$, then the induced $M$-ultrafilter is weakly amenable. In a striking contrast with the characterization of weakly compact cardinals, it is inconsistent to assume that every $\kappa$-model $M$ has a weakly amenable $M$-ultrafilter! Looking at this from the perspective of the corresponding elementary embeddings $j:M\to N$, this happens because there is too much reflection between $M$ and $N$ for objects of size $\kappa$.
The existence of weakly amenable $M$-ultrafilters for some weak $\kappa$-models characterizes Ramsey cardinals. A cardinal $\kappa$ is Ramsey whenever every $A\subseteq\kappa$ is an element of a weak $\kappa$-model $M$ which has a weakly amenable countably complete $M$-ultrafilter. If we assume that every $A\subseteq\kappa$ is an element of a $\kappa$-model $M$ which has such an $M$-ultrafilter, then we get a stronger large cardinal notion, the strongly Ramsey cardinal. If we further assume that every $A\subseteq\kappa$ is an element of a $\kappa$-model $M\prec H_{\kappa^+}$ for which there is such an $M$-ultrafilter, then we get an even stronger notion, the super Ramsey cardinal. Both notions are still weaker than a measurable cardinal. If we instead weaken our requirements and assume that every $A\subseteq\kappa$ is an element of a weak $\kappa$-model for which there is a weakly amenable $M$-ultrafilter with a well-founded ultrapower, we get a weakly Ramsey cardinal, which sits between ineffable and Ramsey cardinals. I introduced these notions and showed that a super Ramsey cardinal is a limit of strongly Ramsey cardinals, which is in turn a limit of Ramsey cardinals, which is in turn a limit of weakly Ramsey cardinals, which is in turn a limit of completely ineffable cardinals [2]. I also called weakly Ramsey cardinals $1$-iterable because they are the first step in a hierarchy of $\alpha$-iterable cardinals for $\alpha\leq\omega_1$, which all sit below a Ramsey cardinal (see [3] for definitions and properties). What happens if we consider intermediate versions between Ramsey and strongly Ramsey cardinals where we stratify the closure on the model $M$, considering models with $M^\alpha\subseteq M$ for cardinals $\alpha<\kappa$? What happens if we consider models $M\prec H_\theta$ for large $\theta$ and not just models $M\prec H_{\kappa^+}$?
Obviously we cannot have a weak $\kappa$-model $M$ elementary in $H_\theta$ for $\theta>\kappa^+$. So let’s drop the requirement of transitivity from the definition of a weak $\kappa$-model, but only require that $\kappa+1\subseteq M$. Now it makes sense to ask for a weak $\kappa$-model $M\prec H_\theta$ for arbitrarily large $\theta$. Suppose $\alpha\leq\kappa$ is a regular cardinal. Holy and Schlicht defined that $\kappa$ is $\alpha$-Ramsey if for every $A\subseteq\kappa$ and arbitrarily large regular $\theta>\kappa$, there is a weak $\kappa$-model $M\prec H_\theta$, closed under $\lt\alpha$-sequences, with $A\in M$ for which there is a weakly amenable $M$-ultrafilter on $\kappa$ (in the lone case $\alpha=\omega$, add that the ultrapower must be well-founded) [1]. It is not difficult to see that it is equivalent to require that the models exist for all regular $\theta>\kappa$. Also, for a fixed $\theta$, it suffices to have a single such weak $\kappa$-model $M\prec H_\theta$, meaning that the requirement that every $A$ is an element of such a model is superfluous. An $\omega$-Ramsey cardinal is a limit of weakly Ramsey cardinals, and I showed that it is weaker than a $2$-iterable cardinal, and hence much weaker than a Ramsey cardinal. An $\omega_1$-Ramsey cardinal is a limit of Ramsey cardinals. A $\kappa$-Ramsey cardinal is a limit of super Ramsey cardinals. I will say where the strongly Ramsey cardinals fit in below.
It turns out that the $\alpha$-Ramsey cardinals have a game-theoretic characterization! To motivate it, let’s consider the following natural strengthening of the characterization of weakly compact cardinals. Suppose that whenever $M$ is a weak $\kappa$-model, $F$ is an $M$-ultrafilter and $N$ is another weak $\kappa$-model extending $M$, then we can find an $N$-ultrafilter $\bar F\supseteq F$. What is the strength of this property? I showed that it is inconsistent. Roughly, it implies the existence of too many weakly amenable $M$-ultrafilters, which we already saw leads to inconsistency (see [1] for proof). So here is instead a game version of extending models and filters formulated by Holy and Schlicht.
Let us say that a filter is any subset of $P(\kappa)$ with the property that the intersection of any finite number of its elements has size $\kappa$. We will say that a filter $F$ measures $A\subseteq \kappa$ if $A\in F$ or $\kappa\setminus A\in F$, and we will say that $F$ measures $X\subseteq P(\kappa)$ if $F$ measures all $A\in X$. If $M$ is a weak $\kappa$-model, we will say that a filter $F$ is $M$-normal if $F\cap M$ is an $M$-ultrafilter.
Suppose $\kappa$ is weakly compact. Given an ordinal $\alpha\leq\kappa^+$ and a regular $\theta>\kappa$, consider the following two-player game of perfect information $G^\theta_\alpha(\kappa)$. Two players, the challenger and the judge, take turns to play $\subseteq$-increasing sequences $\langle M_\gamma\mid \gamma<\alpha\rangle$ of $\kappa$-models, and $\langle F_\gamma\mid\gamma<\alpha\rangle$ of filters on $\kappa$, such that the following hold for every $\gamma<\alpha$.
Let $M_\alpha=\bigcup_{\gamma<\alpha}M_\gamma$ and $F_\alpha=\bigcup_{\gamma<\alpha}F_\gamma$. If $F_\alpha$ is an $M_\alpha$-normal filter, then the judge wins, and otherwise the challenger wins. Note that in order to have any hope of winning, the judge must play a filter $F_\gamma$ at each stage such that $F_\gamma\cap M_\gamma$ is an $M_\gamma$-ultrafilter.
Holy and Schlicht showed that if the challenger has a winning strategy in $G^\theta_\alpha(\kappa)$ for a single $\theta$, then the challenger has a winning strategy for all $\theta$, and similarly for the judge. Thus, we will say that $\kappa$ has the $\alpha$-filter property if the challenger has no winning strategy in the game $G^\theta_\alpha(\kappa)$ for some (all) regular $\theta>\kappa$. [1]
Holy and Schlicht showed that for regular $\alpha>\omega$, $\kappa$ has the $\alpha$-filter property if and only if $\kappa$ is $\alpha$-Ramsey! Using the game characterization, they showed that $\kappa$ is $\alpha$-Ramsey ($\alpha>\omega$) if and only if every $A\in H_{2^{\kappa^+}}$ is an element of a weak $\kappa$-model $M\prec H_{2^{\kappa^+}}$, closed under $\lt\alpha$-sequences, for which there is an $M$-ultrafilter. [1] Thus, we actually only need a single $\theta=2^{\kappa^+}$! So instead of $H_{\kappa^+}$ as in the definition of super Ramsey cardinals, the natural stopping point is $H_{2^{\kappa^+}}$. With the new characterization, we can also show that a strongly Ramsey cardinal is a limit of $\alpha$-Ramsey cardinals for every $\alpha<\kappa$.
So now we have, in order of increasing strength: weakly Ramsey, $\omega$-Ramsey, $\alpha$-iterable for $2\leq\alpha\leq\omega_1$, Ramsey, $\alpha$-Ramsey for $\omega_1\leq\alpha<\kappa$, strongly Ramsey, super Ramsey, $\kappa$-Ramsey, measurable.
Why the restriction $\alpha>\omega$? I showed that an $\omega$-Ramsey cardinal is a limit of cardinals with the $\omega$-filter property (see [1] for proof). The problem arises because even if the judge wins the game $G^\theta_\omega(\kappa)$, the ultrapower of $M_\omega$ by $F_\omega$ need not be well-founded. The same problem arises for any singular cardinal of cofinality $\omega$. The solution seems to be to consider a stronger version of the game for cardinals $\alpha$ of cofinality $\omega$, where it is required that the final filter $F_\alpha$ produces a well-founded ultrapower. Let’s call this game $wfG^\theta_\alpha(\kappa)$. The well-founded games don’t seem to behave as nicely as $G^\theta_\alpha(\kappa)$. For instance, it is not known whether having a winning strategy for a single $\theta$ is equivalent to having a winning strategy for all $\theta$. I conjecture that it is not the case. Still, with the well-founded games, the arguments now generalize to show that $\kappa$ is $\omega$-Ramsey if and only if $\kappa$ has the well-founded $\omega$-filter property for every $\theta$.
Finally, what about $\alpha$-Ramsey cardinals for singular $\alpha$? Well, since a weak $\kappa$-model $M$ that is closed under $\lt\alpha$-sequences for a singular $\alpha$ is also closed under $\lt\alpha^+$-sequences, $\alpha$-Ramsey for a singular $\alpha$ implies $\alpha^+$-Ramsey. So instead Holy and Schlicht defined that $\kappa$ is $\alpha$-Ramsey for a singular $\alpha$ if $\kappa$ has the well-founded $\alpha$-filter property (the well-founded part is only needed for $\alpha$ of cofinality $\omega$) [1]. Now we have the $\alpha$-Ramsey hierarchy for all cardinals $\alpha\leq\kappa$. Holy and Schlicht showed that this is a strict hierarchy of large cardinal notions: if $\kappa$ is $\alpha$-Ramsey and $\beta<\alpha$, then $V_\kappa$ is a model of proper class many $\beta$-Ramsey cardinals, and moreover if $\beta$ is regular, then $\kappa$ is indeed a limit of $\beta$-Ramsey cardinals [1].
@ARTICLE{HolySchlicht:HierarchyRamseyLikeCardinals,
AUTHOR= {Peter Holy and Philipp Schlicht},
TITLE= {A hierarchy of {R}amsey-like cardinals},
NOTE= {To appear in Fundamenta Mathematicae},
}
@ARTICLE {gitman:ramsey,
AUTHOR = {Victoria Gitman},
TITLE = {{R}amsey-like cardinals},
JOURNAL = {The Journal of Symbolic Logic},
VOLUME = {76},
YEAR = {2011},
NUMBER = {2},
PAGES = {519--540},
EPRINT={0801.4723},
PDF={http://boolesrings.org/victoriagitman/files/2011/08/ramseylikecardinals.pdf},
ISSN = {0022-4812},
CODEN = {JSYLA6},
MRCLASS = {03E55},
MRNUMBER = {2830415 (2012e:03110)},
MRREVIEWER = {Bernhard A. K{\"o}nig},
DOI = {10.2178/jsl/1305810762},
URL = {http://dx.doi.org/10.2178/jsl/1305810762},
}
@ARTICLE{gitman:welch,
AUTHOR= {Victoria Gitman and Philip D. Welch},
TITLE= {{R}amsey-like cardinals {II}},
JOURNAL = {The Journal of Symbolic Logic},
VOLUME = {76},
YEAR = {2011},
NUMBER = {2},
PAGES = {541--560},
PDF={http://boolesrings.org/victoriagitman/files/2011/08/ramseylikecardinalsii.pdf},
EPRINT ={1104.4448},
ISSN = {0022-4812},
CODEN = {JSYLA6},
MRCLASS = {03E55},
MRNUMBER = {2830435 (2012e:03111)},
MRREVIEWER = {Bernhard A. K{\"o}nig},
DOI = {10.2178/jsl/1305810763},
URL = {http://dx.doi.org/10.2178/jsl/1305810763},
}
Here is the video: Continue reading...
Abstract: Many classification problems in mathematics may be identified with an equivalence relation on a standard Borel space. In earlier talks we have been introduced to the notion of Borel reducibility of equivalence relations, as well as to some of the most important equivalence relations studied. In this talk we will introduce several natural classification problems and identify where they lie in the Borel reducibility order.
Yet, the fact that mathematical objects are real is the daily experience of mathematicians (though few would ever claim this, because they are much too cautious). I’d like to try to explain this experience. Since I am not a philosopher, there will be no robust philosophical arguments. I will not discuss ontology. Try not to be disappointed.
Imagine you were an astronomer. (No, go on. Give it a go.) You point your telescope up in the air and – lo – a new star appears. You call a friend, and tell her the news. She points her telescope in the same place and – lo – the same star. You write up your discovery, and a team of astronomers in Belgium train their more powerful telescopes on the same spot, and describe the colour and size of the star. You have another look, and see they are correct. An international team in Chile use radioastronomy to discover that your star is actually two stars, orbiting around each other. It is later discovered that there is a large exoplanet orbiting one of these stars.
Now – I guess – it could be argued that there is no star. It could be argued that you invented it, and then let everyone else know how to do the same. The star is some sort of socially constructed illusion. In my view this is the purest nonsense. There is a real star, it is really out there. That, after all, is the belief of (most) astronomers. Otherwise, we might as well give up the whole astronomy thing altogether.
So I am getting to my point. Thanks for being patient.
My point is that this is also the daily experience of mathematicians. Let’s suppose I am studying transcendental dynamics (as I do), and I study a new set which seems of interest (well, you never know). I email a colleague, and they confirm the set looks as I said, and maybe they spot something else; perhaps it has dimension one, or is dense in the plane, or something technical like that. We write a paper. A team of Belgian mathematicians read our paper, and note that, in fact, our set has other interesting properties. They email us and we find that this is indeed the case. More papers follow, and then someone (in Chile, perhaps) observes that our set is actually the union of two interesting sets, and gives some further properties of each. When we look into it, we see that this is indeed the case. This is how (pure) maths is done.
Essentially this story (for it is a story; I have not discovered any sets of interest to Belgians) is no different to the story about the star. And it is very difficult not to believe the punchline is the same; the set exists ‘outside our heads’, just as the star exists ‘outside the heads of the astronomers’. (I’m not trying to claim mathematical proof here; I’m just trying to communicate how it feels to do mathematics).
A real-life example of this story is the famous Mandelbrot set. This was first discovered in the 1970s, when it was very difficult to draw a picture of it. But mathematician talked unto mathematician, and more and more properties were discovered. Technology has moved on, and now highly detailed pictures exist. It is a remarkable object: for example, the set is so intricate that if you try to draw a line around the edge, you will find that your ‘line’ is actually two-dimensional. It is even more intricate than the coast of Norway. Nonetheless, all mathematicians would agree they have been studying ‘the same thing’ all this time.
So it seems undoubtedly true that mathematical objects exist. I am as confident in the existence of the Mandelbrot set, or the sine function, or Riemann surfaces of genus zero as I am in the existence of Belgium. When we study mathematical objects, we discover them – we do not invent them. There are things that exist that are not material objects.
You may feel that this is silly, because if they exist, then where is their home? (It is probably not Belgium). How do we see them? What are they made of? These are good questions.
Abstract: We identify the complexity of the classification problem for automorphisms of a given countable regularly branching tree up to conjugacy. We consider both the rooted and unrooted cases. Additionally, we calculate the complexity of the conjugacy problem in the case of automorphisms of several non-regularly branching trees.
This one’s slightly tricky. And I also have a confession to make. In the first two parts I pretended I’ve written about MathML when I really only wrote with half of it in mind.
One problem of the MathML spec in general is that it’s really two, quite distinct specs: Presentation MathML and Content MathML.
Now the common description is: Presentation describes layout and Content describes semantics. I think one of the problems for MathML in general is that it is not that easy.
So obviously that’s wrong. After all, there is Content MathML and it specifies an enormous amount of semantics. Such an enormous amount, actually, that you can express lambda calculus. You also get a whole bunch of fantastic elements (for <reals>
) and on top of that built-in, infinite extensibility via content dictionaries. So you can do quite literally everything in Content MathML.
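As a sketch (my own example, not one from the spec), here is how the statement “$x$ is real and $x^2\geq 0$” might be encoded; note that it is pure structure, with no layout information at all:

```xml
<!-- Content MathML sketch of "x ∈ ℝ and x² ≥ 0": apply/ci/cn plus
     operator elements like <and/>, <in/>, <reals/>, <geq/>, <power/>.
     Nothing here says anything about how this should look. -->
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <apply><and/>
    <apply><in/><ci>x</ci><reals/></apply>
    <apply><geq/>
      <apply><power/><ci>x</ci><cn>2</cn></apply>
      <cn>0</cn>
    </apply>
  </apply>
</math>
```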
So what’s the problem?
It’s the simplest and most practical problem: Content MathML plays no significant role in real world documents. You can find it in niche projects (such as NIST’s hand-crafted DLMF), you can find it hidden in commercial enclosures (such as Pearson’s assessment system, where I wonder why you’d need its expressiveness), you can also get it by exporting it from computational tools (Maple, Mathematica etc.). But in real world documents, it’s nonexistent.
I can’t really tell you why that is. Perhaps, like most formal abstractions of mathematical knowledge, it ignores the practicalities of humans communicating knowledge. And when it comes to its computational prowess, it probably fails on the web because it cannot compete with the practicality of JavaScript or server-based computation (à la Jupyter Notebooks).
I also have heard repeatedly that it’s simply too difficult to create. And from my limited experience with MathJax users it doesn’t help that the spec itself warns people that it encodes structure without regard to how it is presented aurally or visually
, i.e., it’s sometimes not clear how Content MathML should be rendered.
Ultimately, lack of content (pardon the pun) makes Content MathML of little relevance on the web. (An interesting but separate question might be whether the way Content MathML expresses semantics fits into the style that HTML has adopted in recent years; another time perhaps.)
But there’s actually a second problem for MathML and semantics on the web here: Presentation MathML.
It’s easy to think that Presentation MathML specifies at least some semantics. And if it specifies some, maybe it’s a good basis to build upon. After all, how semantic was HTML really, back in the day?
For example, there’s the <mfrac>
element and you might think it specifies a fraction. Unfortunately, you’d be wrong. The spec itself speaks of fraction-like objects such as binomial coefficients and the Legendre symbol, which are about as far from fractions as you can think of. Of course you can find even more egregious examples in the wild, such as plain vectors encoded with mfrac
. Similarly, <msqrt>
does not represent a square root but a root without an index, and it is used accordingly in the wild (while <mroot><mrow>...</mrow><none/></mroot>
constructions are practically unheard of).
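To illustrate (my own reconstruction of the usual convention), here is a binomial coefficient encoded with <mfrac> by suppressing the fraction bar; nothing in the markup distinguishes it from a fraction:

```xml
<!-- A binomial coefficient "n choose k": visually a stacked pair in
     parentheses, conventionally produced with <mfrac> and a bar of
     zero thickness. Nothing marks it as "not a fraction". -->
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mrow>
    <mo>(</mo>
    <mfrac linethickness="0"><mi>n</mi><mi>k</mi></mfrac>
    <mo>)</mo>
  </mrow>
</math>
```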
The point is that you can’t complain about some kind of abuse of markup because Presentation MathML does not make this kind of a distinction.
Now for a long time, I thought there might just be enough semantics in Presentation MathML to get away with. Working with Volker Sorge and his speech-rule-engine and integrating SRE’s semantic analysis into MathJax meant a deep dive into what kind of structure you can find in Presentation MathML. And as amazing as its heuristics are, it becomes clear how brittle they remain and how quickly you find (real world) examples that break things. This isn’t to say you can’t guess the meaning of a large selection of real world content. It just makes it clear that you are working with a format void of semantic information. (And we’re not talking about tricking machine learning models here, just run-of-the-mill content.)
When you get down to it, I would say that there are effectively only two elements in Presentation MathML that appear reliably semantic in the real world: <mn> and <mroot>. And even these examples are stretching it. For the former, the spec suggests that <mn>twenty one</mn> is sensible markup. For the latter, it seems to be mostly accidental that roots simply haven’t been sufficiently abused in the literature (yet) and thereby retain a unique place as a visual layout feature that is used consistently to describe (many different concepts of) “rootness”. (For the record, there’s also <merror>, which is pretty solid, semantically speaking; just not very mathematical.)
There are other, more indirect signs of the failure of MathML to specify semantics. For example the absence of typical benefits of semantic content such as usable search engines or knowledge management tools. But that’s a very different problem to discuss.
Anyway, so MathML that specifies semantics could exist but does not. On to layout.
One advantage of MathML on the web is that it’s XML, i.e., it looks a lot like HTML and SVG and does not require a lot of extra tooling (e.g., parsers). In addition, since you can preserve its structure when converting to HTML or SVG, you can hack MathML markup to improve the result on the web, e.g., by adding CSS or ARIA.
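As a hypothetical illustration of that kind of hack (the labels and styles here are made up), the MathML can survive the trip into HTML unchanged while ARIA and CSS patch up accessibility and styling:

```html
<!-- Hypothetical sketch: MathML preserved inside HTML, then augmented
     with an ARIA label for assistive technology and CSS for styling. -->
<style>
  math { font-size: 1.1em; }
</style>
<p>
  The area is
  <math xmlns="http://www.w3.org/1998/Math/MathML"
        aria-label="pi r squared">
    <mi>π</mi><msup><mi>r</mi><mn>2</mn></msup>
  </math>.
</p>
```

The point is not that this is good practice, only that the markup’s structure is preserved well enough that such after-the-fact improvements are possible at all.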
Still, being XML is obviously not enough to make anything a good web standard.
Obviously this depends a lot on what qualities you are after, but I’ve found it to be a common misconception that MathML is somehow universally superior to other ways of marking up equations. That misconception gets it backwards.
Like any exchange format, MathML’s design is more that of a least common denominator between document systems and, in particular, between visual rendering engines for equational content. By definition, this means it is the least expressive, least flexible, and least powerful format.
A good exchange format would of course be a great thing to have and it can still be very powerful if the ecosystem’s diversity is not too great. Unfortunately, that’s not the case for MathML where rendering engines for equational content exist and vary considerably between ancients like troff or TeX, modern word processors, computer algebra systems, and more.
So while it is easy to create MathML from other equation input formats, it is effectively dumbed down in the process. Conversely, it is not easily interpreted in another system without significant loss of information. This is of course nothing special; just look at binary image formats or text processing. But this is a problem for MathML because it is designed for this purpose; yet it neither reaches the quality of, say, SVG as an exchange format for vector graphics, nor does it provide real-life advantages over, say, subsets of LaTeX notation (e.g., in jats4reuse) or even ASCII-style notation.
A particular example of this loss of information is that importing MathML into other systems, while often possible, rarely yields something reusable. This is a bit like importing a binary image format into another editor: yes, it works, but there are limits to how well you can edit the import without redoing the whole thing. To give a simple example, David Carlisle’s pmml2tex produces perfectly nice visual output in print but rather unusual TeX markup.
The fact that after 20 years there are virtually no rendering systems out there that use MathML internally indicates that MathML fails to provide a decent solution for another basic use case.
After these basic, to some degree social problems, let’s talk about core problems of the spec itself next.
Abstract: An essential question regarding the theory of inner models is the analysis of the class of all hereditarily ordinal definable sets $\operatorname{HOD}$ inside various inner models $M$ of the set-theoretic universe $V$ under appropriate determinacy hypotheses. Examples of such inner models $M$ are $L(\mathbb{R})$, $L[x]$ and $M_n(x)$. Woodin showed that under determinacy hypotheses these models of the form $\operatorname{HOD}^M$ contain large cardinals, which motivates the question whether they are fine-structural, as for example the models $L(\mathbb{R})$, $L[x]$ and $M_n(x)$ are. A positive answer to this question would yield that they are models of $\operatorname{CH}$, $\Diamond$, and other combinatorial principles.
The first model which was analyzed in this sense was $\operatorname{HOD}^{L(\mathbb{R})}$ under the assumption that every set of reals in $L(\mathbb{R})$ is determined. In the 1990s, Steel and Woodin were able to show that $\operatorname{HOD}^{L(\mathbb{R})} = L[M_\infty, \Lambda]$, where $M_\infty$ is a direct limit of iterates of the canonical mouse $M_\omega$ and $\Lambda$ is a partial iteration strategy for $M_\infty$. Moreover, Woodin obtained a similar result for the model $\operatorname{HOD}^{L[x,G]}$ assuming $\Delta^1_2$ determinacy, where $x$ is a real of sufficiently high Turing degree, $G$ is $\operatorname{Col}(\omega, {<}\kappa_x)$-generic over $L[x]$ and $\kappa_x$ is the least inaccessible cardinal in $L[x]$.
In this talk I will give an overview of these results and outline how they can be extended to the model $\operatorname{HOD}^{M_n(x,g)}$ assuming $\boldsymbol\Pi^1_{n+2}$ determinacy, where $x$ again is a real of sufficiently high Turing degree, $g$ is $\operatorname{Col}(\omega, {<}\kappa_x)$-generic over $M_n(x)$ and $\kappa_x$ is the least inaccessible cutpoint in $M_n(x)$ which is a limit of cutpoints in $M_n(x)$.
This is joint work with Grigor Sargsyan.
This abstract will be published in the Bulletin of Symbolic Logic (BSL). My slides can be found here. A preprint containing these results will be uploaded on my webpage soon.
And that was fine. All three options are roughly equivalent, in the sense that they present the material to you in a very structured way (or at least they intend to). You don't reach the definition of \(\aleph_0\) before you have defined what equipotency and cardinality are. You don't reach the definition of a derivative before you have some semblance of a notion of continuity. Knowledge was built in a very structured way. Sometimes you use crutches (e.g. some naive understanding of the natural numbers before you formally introduce them later on as finite ordinals), but for the most part there is a method to the madness. Continue reading...
After finishing “MathML as a failed web standard” last year, I’ve been meaning to write a follow-up to discuss fundamental issues I see with MathML as a web standard. I found it very difficult, even painful, to do so. Over the past few years I realized that most people simply don’t know much about both MathML and modern web technology. I don’t claim I’m a great expert myself, but running MathJax for the past 5 years has given me some ideas.
Caveat Emptor. The problems I hope to outline may seem to be a general rejection of MathML as a whole; that’s not what I’m after. It’d actually be silly to try to bash MathML because it is simply too successful. I also actually kind of like MathML, despite its many horrors; I think it was a great idea 20 years ago and it’s still useful to hack it to get to better things.
Primarily, what follows is the result of me trying to understand why MathML failed on the web. I think there are a few key reasons for its failure. My motivation is to form an opinion on whether MathML is salvageable as a web standard or fundamentally unfit to be part of today’s web technology (and should then best be deprecated).
The success outside of the web is an important factor, as it limits how much MathML can realistically change. So let’s start there.
MathML is the dominant format for storing equations in XML document workflows today. It’s a reasonable assumption that the vast majority of equational content today is available in (or ready to convert to) MathML: virtually all STEM publishers use MathML in their workflows, major tools like Microsoft Word (favored throughout education) use formats intentionally close to MathML, and most other forms of equation input can be converted more or less easily.
MathML has a long history as a W3C standard and it’s natural to think that MathML’s success is somehow connected to the web’s success.
However, that’s not the case (except perhaps by making an ultimately empty promise). The <math> tag was first proposed in HTML 3.0 in 1995 but was removed from HTML 3.2 in 1997. It was transformed into one of the first XML applications, and MathML was born in 1998 and lived in XML/XHTML limbo for the next decade. Finally, MathML returned to HTML proper with HTML5 in 2014.
It should seem obvious that because MathML was not part of HTML (or any other web standard implemented by browsers), it could not have succeeded because of the web’s success. Instead, it was MathML’s success outside of the web that allowed it to survive and eventually make it back into HTML5.
So there naturally was a disconnect. Unfortunately, even when MathML came back in HTML5, that disconnect remained effectively unchanged. A simple example is the timeline. MathML 3’s first public working draft was published in 2007, the year the HTML WG was being rechartered to bring about HTML5 (which took 7 years). The difference between the early working drafts of MathML 3 and the eventual REC (in 2010) seems to include little fundamental change (lots of details were hashed out, but the core seems in place pretty early on). Only a handful of changes were made between 2010 and 2016 (when the Math Working Group shut down). It seems only mild hyperbole to say that MathML 3 was effectively done before HTML5 was really getting started.
Overall, it seems clear from the various specs that the return to HTML5 had little influence on MathML, or vice versa. For example, there is no hint of giving MathML the “CSS treatment” that HTML got (e.g., clarifying HTML layout like tables via CSS), nor is there a sign that HTML and CSS ever considered what MathML brought to the table in terms of semantics and layout. This disconnect (and the lack of interest in overcoming it later on) is likely the root cause of MathML’s failure.
I think one of the reasons why this disconnect was not overcome is the success of MathML and where that success occurred.
If you speak to early adopters of MathML, you will notice that MathML’s success was due to its efficacy in print workflows (with rendering to binary images perhaps being a nice extra in the pitch). That’s what XML workflows were producing, and while the web was a nice thing to hope for, if MathML hadn’t done a good job in print, it would not have gone anywhere in XML-land. This also means that MathML suffers from the general problem of equational content (shameless self-plug).
I suspect this success made the MathML community a bit blind to the fact that the web platform was moving away from any common ancestry there may have been, especially on the implementation level, but perhaps more importantly in terms of being a rapidly growing technology practiced by a similarly growing group of specialists (aka web developers).
A sign of this effect is that (especially among non-experts) many people seem to have confused the hopes for MathML in HTML5 with a promise, and in extreme cases with some sort of moral obligation for browser vendors to implement MathML support natively. In retrospect, I think there may have been a short window where things could have turned out differently (and I hope I’ll get to that idea later on). More likely, my brain is playing tricks on me because I shared that hope.
In any case I find the history to be rather odd, overall. A failed web standard became successful in print production and that success was so significant that it was reintroduced to HTML.
What I think is often missed when discussing MathML is how the success outside the web took its toll on the MathML specification. Its development was focused almost entirely on legacy (print) content and completely detached from the direction (and the random twists and turns) of the more successful web standards, first and foremost HTML and CSS. Still, MathML neither tried to align its own direction with the platform, nor did it try to take inspiration from or to influence those developments.
Finally, I think the particulars of print (and image) rendering of MathML have produced a crucial misconception about MathML: the fact that MathML works well in those settings does not imply that it works well as a web technology.
Next I’ll try to step a bit back and maybe talk about some of the basics of the spec.
Joint work with Chris Lambie-Hanson.
Abstract. We derive a forcing axiom from the conjunction of square and diamond, and present a few applications, primary among them being the existence of super-Souslin trees.
It follows that for every uncountable cardinal $\lambda$, if $\lambda^{++}$ is not a Mahlo cardinal in Gödel’s constructible universe, then $2^\lambda = \lambda^+$ entails the existence of a $\lambda^+$-complete $\lambda^{++}$-Souslin tree.
Downloads:
Abstract: We analyze $\operatorname{HOD}$ in the inner model $M_n(x,g)$ for reals $x$ of sufficiently high Turing degree and suitable generics $g$. Our analysis generalizes to other canonical minimal mice with Woodin and strong cardinals. This is joint work with Grigor Sargsyan.
Notes taken by Ralf Schindler during my talk can be found here. These notes include a sketch of the proof of our main result, the corresponding preprint will be uploaded on my webpage soon.
When I look back at some of the proofs I wrote when I started work on my PhD, I realise how much I have learned. My supervisors – who were very gracious, very helpful, and very dedicated – used to cover my early work in red ink. I then learned how to write a proof through an iterative (and very painful) process, in which I would write something, receive the red ink, fix those problems, receive further red ink, and so on. I became very familiar with red ink. Very, very familiar.
In this note I’d like to comment on how one might spot problems oneself, rather than depend on one’s supervisors in this way. This is not a trivial task, but a really important one. Perhaps I can offer a few pointers which might be of help.
Let’s suppose you have proved a result. You’ve written it all up to your own satisfaction and wish to share your achievement with your fellows. I began to make a list of the things you should do, but it was very long, exceedingly tedious, and all boiled down to the word check. Which is a bit boring. So let’s try the following, which is less prescriptive if possibly less all-encompassing. It’s just three words. How hard could that be?
First, forget. In developing your proof you, no doubt, came up with all sorts of ideas and intuitions and implications and pictures. You now have to (somehow) lay these all to one side. Your reader will not have any of this in front of them, so you have to be sure that none of your work now depends on or uses anything other than the words in front of you. (Incidentally, the best way to do this is to put your proof to one side for a few months and then come back to it. You’ll be astonished how terrible it will look.)
Second, focus. Focus on the words in front of you, and on what they say. This is easier said than done; because you expect your words to say one thing, you will tend to interpret them that way. Try not to. Look at what is written and nothing else.
Third, check. Read what you have written, word by word, sentence by sentence, and ask yourself the question “why on earth does that follow?” Notice the negation; if you expect things to be wrong, you are more likely to spot mistakes than if you expect them to be correct. In my personal experience, they probably are incorrect.
I could probably make a list of common mistakes, but it really is hard to make that interesting. So I will highlight just three (three is a useful number here):
The word “clearly”: It is very easy to make the mistake of writing “clearly XYZ” when what you mean is “XYZ seems pretty darned obvious to me but I can’t quite work out why”. If you can’t work out why XYZ is true, chances are that it isn’t.
Things that are true but don’t actually follow: This is a very easy mistake to make; you write something like “Since X, then Y” and assume it is OK because Y really is true. But you are not asserting here that Y is true, and that is not what you need to check. You need to check that Y follows from X and nothing else!
Failure to satisfy all necessary conditions: If you use another result (maybe a book result, or a lemma of your own from earlier) you need to be sure that all the conditions are checked. This is especially true of a book result – if that says something like “If A, B, C, D and E, then F”, then there is no chance to use this result if only A, B, C and D are true.
Yes, this is all amazingly tedious. Yes, this is a very lengthy process. No, there is no alternative (apart from asking a friend to check). Yes, you will be a better mathematician when you can do all this. No, I do not claim to be able to do this all the time myself. Yes, I welcome feedback and other suggestions.
Hamkins' multiverse is essentially taking a very ill-founded model and closing it under forcing extensions, thus obtaining a multiverse which is more of a philosophical justification; for example, every model is a countable model in another one, and every model is ill-founded from the point of view of another model. The problem with this multiverse is that if we remove the requirement for genericity, then everything else can be satisfied by the same model. Namely, \(\{(M,E)\}\) would be an entire multiverse. That's quite silly. Moreover, we sort of give up on a concrete notion of natural numbers that way, and this seems a bit... off-putting. Continue reading...
When people speak about math content in the context of the web they usually mean equational content (or simply equations). That is, they don’t mean content in a mathematical field (which often enough does not qualify as equations), they simply mean something that looks like an equation.
Now you might argue that an equation in physics is still basically mathematical content, but in reality both mathematicians and physicists will frequently disagree with you (and each other, possibly explosively so). You quickly get to the edge when considering chemical equations, and if you want to classify the nonsense notations in the life sciences, you might question your sanity.
It’s not hard to understand why this is. For example, most typesetting tools with support for equations will have some kind of math mode for them. But I think it’s worthwhile to differentiate the two, so I’ll try my best to stick to equational content. On the one hand, the importance of math on the web is often exaggerated, because it is really non-mathematical equational content that makes up the majority (and even that is a blip on the radar). On the other hand, it does not help to confuse a field of study with what effectively comes down to a layout tradition.
Also, sorry-not-sorry for misleading you with the title here.
The fundamental problem of equational content is that, well, it’s simply pretty terrible all around. It’s convoluted, extremely compressed, archaic, and generally undecipherable. It destroys academic careers by the millions, and it can often only be understood when you can see it written live (i.e., animated). At their best, equations are like good abstract drawings; at worst (usually?), they’re deafening gibberish.
Stray thoughts.
One. I always thought Bret Victor’s (in)famous Kill Math was largely wrong about the specifics of his criticism (for one, he seems to dismiss the incredible power of compression that differential equations exhibit, along with the obvious problems that stem from compression). But he is of course utterly right with his incredible work exploring how modern media like the web allow for a much richer expression of human thought, one that opens the content up to more people, often by adding means of interacting with it, especially means for untrained people (like tiny humans).
Two. Every once in a while I’ve wondered: what if Tim Berners-Lee had given the web some basic building blocks for equations? Just a fraction and a square root; maybe instead of image renditions of print equations we’d have immediately seen the same creativity applied to equations as there was with hacking general layout (1px GIF, anyone?). Of course, that’s hopelessly romanticizing the evolution of the web. Why can’t I stop wondering.
Three. On and off (and I’ve come full circle on this several times) I’ve wondered whether math is ahead of other sciences on the web. I mean, the <math> tag was proposed in fricking HTML 3. So is math ahead? Maybe. But then why is scientific content so much more vibrant and transformative on the web compared to math?
The most obvious flaw of equational content is that it’s deeply rooted in print. Given the limitations of print technology, equational content has needed to adopt bad practices for such a long time that many people consider them good.
I’m not (just) thinking about the problem of general comprehension, as it is too tainted by poorly trained practitioners on all levels. Sure, equational content is often more difficult to parse than necessary, but that’s no different from poorly phrased prose.
The main problem is the tradition of abusing print technology to get more and more variations of notation squeezed into the medium. The constant abuse of sub- and superscripts is a great example; if you need to add a variant of an object you’ve already introduced in your notation, just slap some sub-/superscripts around it, et voilà, a new object.
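A caricature of the pattern (these particular symbols are made up, but the shape will be familiar from any overloaded paper):

```latex
% Six "different" objects minted from a single letter, purely via
% sub-/superscript decoration (a made-up but typical example):
G,\quad G',\quad G^{*},\quad \widehat{G},\quad G_{i},\quad G^{(n)}_{i,j}
```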
The abuse of letters with different fonts is another horror in equational content. If you have ever run into a paper where a dozen variations of G appear, denoting a convoluted set of somewhat related concepts, you’ll know this horror well. Unbelievably enough, Unicode has deemed this abuse of notation important enough that we now have such wonders as a dedicated code point for the mathematical bold italic G in the Mathematical Alphanumeric Symbols block.
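For instance, a quick sketch using Python’s standard unicodedata module shows that the bold italic G really is its own character, and that compatibility normalization simply throws the styling (and with it the intended distinction) away:

```python
import unicodedata

# U+1D46E lives in the Mathematical Alphanumeric Symbols block
# (U+1D400 through U+1D7FF).
bold_italic_g = "\U0001D46E"
print(unicodedata.name(bold_italic_g))
# MATHEMATICAL BOLD ITALIC CAPITAL G

# NFKC normalization folds the styled letter back to plain ASCII,
# discarding whatever meaning the author attached to the styling.
print(unicodedata.normalize("NFKC", bold_italic_g))
# G
```

This is one reason searching over such content is unreliable: tools that normalize lose the distinction, and tools that don’t can’t match the plain letter.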
Another historic accident is stylistic separation. For example, in print it’s abhorred to make math content bold when the surrounding content is bold (e.g., in a heading), yet on the web people complain that an equation in a link doesn’t get the correct text decoration (what would that be??).
Obviously, there’s little point in criticizing the historic development of equational content. Given that print was mostly limited to (at best) grayscale with a limited character set, naturally people had to be creative. It is amazing what this accomplished.
The real problem comes up when pretending that this tradition should do more than vaguely inform a medium such as the web. The web has so far developed without much influence from equational content. It has adopted a rather different approach to separating content and presentation, and the traditions of equational content are essentially incompatible with the web’s approach.
I can find no argument for why the web stack should bend over backwards to accommodate these mostly quite bad traditions of equational content for print. This is perhaps similar to the situation of CSS paged media.
Obviously, it’s not like you shouldn’t be able to put traditional equational content on the web; you should (and you can very well today). But I’ve come to think it’s perfectly fine, in fact appropriate, that this continues to be a difficult problem. For example, traditional equational content is almost always inaccessible (without heuristic algorithms, i.e., guessing around); it’s basically a bunch of glyphs placed in weird 2D patterns (like above and below a line which, in turn, is magically centered on some baseline and may or may not indicate that it corresponds to the notion of a mathematical fraction). Pretending that this is a basis for accessible rendering on the web strikes me as foolish (or ridiculously zealous).
If you think that all equational content should be limited to the traditions of the print era, fine. I think humanity can do better on the web. Though I think we would need to acknowledge that the (print) traditions enshrined in equational content are flawed and should (and invariably will) be replaced with better concepts and narratives that are appropriate for this medium.
@ARTICLE{GitmanHamkinsHolySchlichtWilliams:ForcingTheorem,
  AUTHOR = {Victoria Gitman and Joel David Hamkins and Peter Holy and Philipp Schlicht and Kameryn Williams},
  TITLE = {The exact strength of the class forcing theorem},
  PDF = {https://boolesrings.org/victoriagitman/files/2017/07/Forcingtheorem.pdf},
  NOTE = {Submitted},
  EPRINT = {1707.03700},
}
We shall characterize the exact strength of the class forcing theorem, which asserts that every class forcing notion $\mathbb P$ has a corresponding forcing relation $\Vdash_{\mathbb P}$ satisfying the relevant recursive definition. When there is such a forcing relation, then statements true in any corresponding forcing extension are forced and forced statements are true in those extensions.
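For orientation, the recursion in question runs roughly as follows at the atomic level, in one standard formulation (a sketch; presentations of the details vary):

```latex
% One common formulation of the membership atomic case (a sketch):
p \Vdash_{\mathbb P} \sigma \in \tau
  \quad\text{iff}\quad
  \{\, q \le p \;:\; \exists \langle \pi, r \rangle \in \tau \,
      ( q \le r \,\wedge\, q \Vdash_{\mathbb P} \pi = \sigma ) \,\}
  \ \text{is dense below } p,
% with p \Vdash_{\mathbb P} \sigma = \tau defined by a simultaneous
% recursion on names, and the forcing relation for compound formulas
% built up from the atomic cases in the usual way.
```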
Unlike the case of set-sized forcing, where one may prove in ${\rm ZFC}$ that every set forcing notion $\mathbb P$ has its corresponding forcing relations, in the case of class forcing it is consistent with Gödel-Bernays set theory ${\rm GBC}$ that there is a proper class forcing notion $\mathbb P$ lacking a corresponding forcing relation, even merely for the atomic formulas. For certain forcing notions, the existence of an atomic forcing relation implies ${\rm Con}({\rm ZFC})$ and much more (see [1]), and so the consistency strength of the class forcing theorem goes strictly beyond ${\rm GBC}$, if this theory is consistent. Nevertheless, the class forcing theorem is provable in stronger theories, such as Kelley-Morse set theory. What is the exact strength of the class forcing theorem?
Our project here is to identify the exact strength of the class forcing theorem by situating it in the rich hierarchy of theories between ${\rm GBC}$ and ${\rm KM}$, displayed in part in the above diagram, with the class forcing theorem highlighted in blue. It turns out that the class forcing theorem is equivalent over ${\rm GBC}$ to an attractive collection of several other natural settheoretic assertions. So it is a robust axiomatic principle.
The main theorem is naturally part of the emerging subject we call the reverse mathematics of second-order set theory, a higher analogue of the perhaps more familiar reverse mathematics of second-order arithmetic. In this new research area, we are concerned with the hierarchy of second-order set theories between ${\rm GBC}$ and ${\rm KM}$ and beyond, analyzing the strength of various assertions in second-order set theory, such as the principle ${\rm ETR}$ of elementary transfinite recursion, the principle of $\Pi^1_1$-comprehension or the principle of determinacy for clopen class games, and so on. We fit these set-theoretic principles into the hierarchy of theories over the base theory ${\rm GBC}$. The main theorem of this article does exactly this with the class forcing theorem, by finding its exact strength in relation to nearby related theories in this hierarchy.
Specifically, extending the analysis of [1] and [2], we show in our main theorem that the class forcing theorem is equivalent over ${\rm GBC}$ to the principle of elementary transfinite recursion ${\rm ETR}_{\rm Ord}$ for transfinite class recursions of length ${\rm Ord}$; it is equivalent to the existence of a truth predicate for the infinitary language of set theory $\mathcal{L}_{{\rm Ord},\omega}(\in,A)$, with any fixed class parameter $A$; to the existence of a truth predicate in the more generous infinitary language $\mathcal{L}_{{\rm Ord},{\rm Ord}}(\in,A)$; to the existence of ${\rm Ord}$-iterated truth predicates for the first-order language $\mathcal{L}_{\omega,\omega}(\in,A)$; to the existence of set-complete Boolean class completions of any separative class partial order; and to the principle of determinacy for clopen class games of rank at most ${\rm Ord}+1$. We shall prove several of the separations indicated in the figure above, such as the fact that the class forcing theorem is strictly stronger in consistency strength than having ${\rm ETR}_\alpha$ simultaneously for all ordinals $\alpha$ and strictly weaker than ${\rm ETR}_{{\rm Ord}\cdot\omega}$. The principle ${\rm ETR}_\omega$ is already sufficient to produce truth predicates for first-order truth, relative to any class parameter. Thus, our results locate the class forcing theorem somewhat finely in the hierarchy of second-order set theories.
Main Theorem: The following are equivalent over Gödel-Bernays set theory ${\rm GBC}$.
@article{PeterHolyRegulaKrapfPhilippLuckeAnaNjegomirPhilippSchlicht:classforcing1,
  AUTHOR = {Peter Holy and Regula Krapf and Philipp L\"{u}cke and Ana Njegomir and Philipp Schlicht},
  TITLE = {Class Forcing, the Forcing Theorem and Boolean Completions},
  NOTE = {To appear in the Journal of Symbolic Logic}
}

@article{PeterHolyRegulaKrapfPhilippSchlicht:classforcing2,
  AUTHOR = {Peter Holy and Regula Krapf and Philipp Schlicht},
  TITLE = {Characterizations of Pretameness and the {O}rd-cc},
  NOTE = {Preprint}
}
I gave a 3-lecture tutorial at the 6th European Set Theory Conference in Budapest, July 2017.
Title: Strong colorings and their applications.
Abstract. Consider the following questions.
It turns out that all of the above questions can be decided (in one way), provided that there exists a certain “strong coloring” (or “wild partition”) of a corresponding uncountable graph.
In this tutorial, we shall present some of the techniques involved in constructing such strong colorings, and demonstrate how partial orders/topological spaces/algebraic structures may be derived from these colorings.
Lecture 1 ** Lecture 2 ** Lecture 3
There is a nontrivial percentage of the population which have some sort of color vision deficiency. Myself included. Statistically, I believe, if you have 20 male participants, then one of them is likely to have some sort of color vision issues. Add this to the fairly imperfect color fidelity of most projectors, and you get something that can be problematic. Continue reading...
Abstract: We introduce alternative definitions of density points in Cantor space (or Baire space) which coincide with the usual definition of density points for the uniform measure on ${}^{\omega}2$ up to a set of measure $0$, and which depend only on the ideal of measure $0$ sets but not on the measure itself. This allows us to define the density property for the ideals associated to tree forcings, analogous to the Lebesgue density theorem for the uniform measure on ${}^{\omega}2$. The main results show that among the ideals associated to well-known tree forcings, the density property holds for all such ccc forcings and fails for the remaining forcings. In fact we introduce the notion of being stem-linked and show that every stem-linked tree forcing has the density property.
This is joint work with Philipp Schlicht, David Schrittesser and Thilo Weinert.
Still, the things you can do well, you obviously should. And yet, every once in a while, somebody throws you a curveball and you just have to shout: This is why we can’t have good things!
The other day on a client project, the QA specialist pointed out that the content was consistently using <em> where it should be using <i>. Can we fix that?
The semantics of these and related HTML5 tags is a bit subtle, but there is a difference and it should be easy to just replace one with the other, right? Right? Famous last words.
At first sight, this was easy. The HTML came out of some JATSlike XML, which was using <italic>
elements. So map to <i>
, right? But hold on, you’ll say, HTML5 reinterpreted <i>
to no longer indicate layout but semantics; it now indicates a change of voice. Unfortunately, JATS’s <italic>
is focused on the typographic aspects, so it does not really help. Then again, it could help a little bit more, because <italic>
allows for a toggle
attribute to indicate emphasis. Sadly, the actual XML did not provide that information.
Since the piece of the tool chain that turned <italic>
into <em>
was actually my doing, I was clearly at fault. However, I had my reasons. Namely, all of this came from a LaTeX source, and in this real-world LaTeX content, \emph{}
and its brethren were the dominant source for <italic>
. So clearly that should be <em>
in the end?
Now of course, almost all LaTeX authors don’t give a damn beyond getting that PDF to look how they want it, so while they mostly use \emph{}
like macros, they mix it freely (and inconsistently) with \textit{}
and its brethren. So the conversion (written by an absolute expert) rightly says “screw it, all I can say is: it wants italics here”, thus merging the two together.
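By way of illustration, the merge step amounts to something like the following sketch (entirely hypothetical; the actual converter is far more involved, and the function name and the target tag here are my own choices):

```python
import re

# Both the semantic \emph{} and the presentational \textit{} collapse
# into a single target element, losing the author's (possible) intent.
# Mapping to <i> here mirrors the decision described in the text.
ITALIC_MACROS = re.compile(r"\\(emph|textit)\{([^{}]*)\}")

def merge_italics(tex: str) -> str:
    """Map every italic-producing macro to <i>, regardless of intent."""
    return ITALIC_MACROS.sub(r"<i>\2</i>", tex)

print(merge_italics(r"The term \emph{prose content} is \textit{defined} above."))
```

The point of the sketch is precisely what the text laments: once both macros funnel through one pattern, no downstream consumer can tell emphasis from typographic italics.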
It’s my job to dig deeper than that so I took the time to look through the actual content available. Not the TeX, not the XML but the actual writing.
Lo and behold, the actual text use is pretty different: by far, most occurrences of <em>
happened in the context of quick, inline definitions. Invariably, you find these in introductions of mathematical research articles where you include commonly known definitions from a field so as not to cause bloat (because publishers and editorial boards continue to care more about page numbers than well-documented research results).
A definition does not really fit either <i>
or <em>
. The closest you get in the spec is an example of using <i>
to reference a past definition.
<p>The term <i>prose content</i> is defined above.</p>
To make matters worse, there is of course an entirely different element that fits perfectly:
The <dfn> element represents the defining instance of a term.
Perfect match for the vast majority of the content in question. So we should switch everything over, right?
The answer is, of course, no. Not because some content would end up with the wrong semantics (scroll to top) but because that was not the only use I found: almost without exception, the samples include the use as a definition alongside the use as <em>
or <i>
.
And that is why we can’t have good things.
All of this is about as surprising as finding a handwritten table of contents in a Word document. TeX is for print layout and font styles are used for all manner of cruelty. The question I had to answer with my client was: can we do anything about it?
In the end, beauty lies in the eye of the beholder and semantics in the eyes of the reader. We did, in fact, switch to <i>
with the plan to expose more information from the original source regarding emphasis so we can gather more data on its usage. Fundamentally, this won’t help because it doesn’t solve the problem of inline definitions. Still, some analysis might reveal pragmatic improvements down the line.
Then again, it’s not hard to argue that a definition that is well known in the field and that is done inline in the introduction of an article is more like the kind of reference to a definition seen in the spec example above (in fact, often enough it appears in the vicinity of a bibliographic reference). Of course, we’re still conflating \emph
and \textit
.
Now zealots (pardon, idealists) will argue that authors “just” have to learn to use semantic macros in TeX. After all, there are plenty of “semantic” LaTeX packages out there; just start writing good markup already!
Besides the lack of pragmatism, the only viable solution I can see would be a LaTeX package specifically matching HTML5 markup. After all, we have the tags and they have established definitions; any “semantics” beyond that will only cause issues down the line (what if a tag is introduced to HTML but with a slightly different meaning?). Even then, it doesn’t solve the social problem at the heart of so many publishing technology issues: who would make the effort and use it? It’s extra work and does nothing for print; why would an author do extra work when they think print rules?
I think only someone interested in creating HTML output would make the effort. And at that point you have to ask: why would those authors bother with an archaic programming language like TeX to write HTML? They will invariably find it easier to just write HTML, or their favorite lightweight markup for creating HTML, especially given the speed at which HTML-to-PDF solutions are improving. Building tools for LaTeX to solve this would just create extra work but help nobody. Just build better tools for writing HTML.
But that is another story, and shall be told another time.
]]>This past semester I taught the course for the second time. You can find the syllabus, list of problems, etc. for the Spring 2017 semester by going here. On the students’ final exam, I asked them which problem was their favorite from the semester. Below is the list of problems that they mentioned including the number of votes that each received. The level of difficulty of the problems covers the spectrum. Some of these are not easy. Have fun playing!
A while back I wrote a similar post that highlighted 15 fun problems from the first time I taught the course. You’ll notice that there is some overlap between the two lists.
]]>@ARTICLE{GitmanHamkins:GVP,
AUTHOR= {Victoria Gitman and Joel David Hamkins},
TITLE= {A model of the generic Vop\v enka principle in which the ordinals are not $\Delta_2$-Mahlo},
PDF={https://boolesrings.org/victoriagitman/files/2017/06/GenericVopenkawithOrdnotMahlo.pdf},
Note ={To appear in the {A}rchive for {M}athematical {L}ogic},
EPRINT ={1706.00843},
}
The Vopěnka principle is the assertion that for every proper class of first-order structures in a fixed language, one of the structures embeds elementarily into another. This principle can be formalized as a single second-order statement in Gödel-Bernays set theory ${\rm GBC}$, and it has a variety of useful equivalent characterizations. For example, the Vopěnka principle holds precisely when for every class $A$, the universe has an $A$-extendible cardinal, and it is also equivalent to the assertion that for every class $A$, there is a stationary proper class of $A$-extendible cardinals [1]. In particular, the Vopěnka principle implies that ${\rm ORD}$ is Mahlo: every class club contains a regular cardinal and indeed, an extendible cardinal and more.
To define these terms, recall that a cardinal $\kappa$ is extendible if for every $\lambda>\kappa$, there is an ordinal $\theta$ and an elementary embedding $j:V_\lambda\to V_\theta$ with critical point $\kappa$. It turns out that, in light of the Kunen inconsistency, this weak form of extendibility is equivalent to a stronger form, where one insists also that $\lambda<j(\kappa)$; but there is a subtle issue about this that will come up later in our treatment of the virtual forms of these axioms, where the virtual weak and virtual strong forms are no longer equivalent. Relativizing to a class parameter, a cardinal $\kappa$ is $A$-extendible for a class $A$, if for every $\lambda>\kappa$, there is an elementary embedding
$$j:\langle V_\lambda, \in, A\cap V_\lambda\rangle\to \langle V_\theta,\in,A\cap V_\theta\rangle$$
with critical point $\kappa$, and again one may equivalently insist also that $\lambda<j(\kappa)$. Every such $A$-extendible cardinal is therefore extendible and hence inaccessible, measurable, supercompact and more. These are amongst the largest large cardinals.
In the first-order ${\rm ZFC}$ context, set theorists commonly consider a first-order version of the Vopěnka principle, which we call the Vopěnka scheme, the scheme making the Vopěnka assertion of each definable class separately, allowing parameters. That is, the Vopěnka scheme asserts, of every formula $\varphi$, that for any parameter $p$, if $\{\,x\mid \varphi(x,p)\,\}$ is a proper class of first-order structures in a common language, then one of those structures elementarily embeds into another.
The Vopěnka scheme is naturally stratified by the assertions ${\rm VP}(\Sigma_n)$, for the particular natural numbers $n$ in the metatheory, where ${\rm VP}(\Sigma_n)$ makes the Vopěnka assertion for all $\Sigma_n$-definable classes. Using the definable $\Sigma_n$-truth predicate, each assertion ${\rm VP}(\Sigma_n)$ can be expressed as a single first-order statement in the language of set theory.
Hamkins [1] proved that the Vopěnka principle is not provably equivalent to the Vopěnka scheme, if consistent, although they are equiconsistent over ${\rm GBC}$ and furthermore, the Vopěnka principle is conservative over the Vopěnka scheme for first-order assertions. That is, over ${\rm GBC}$ the two versions of the Vopěnka principle have exactly the same consequences in the first-order language of set theory.
In this article, we are concerned with the virtual forms of the Vopěnka principles. The main idea of virtualization, due to Schindler, is to weaken elementary-embedding existence assertions to the assertion that such embeddings can be found in a forcing extension of the universe. Gitman and Schindler [2] emphasized that the remarkable cardinals, for example, instantiate the virtualized form of supercompactness via the Magidor characterization of supercompactness. This virtualization program has now been undertaken with various large cardinals, leading to fruitful new insights (see [2], [3]).
Carrying out the virtualization idea with the Vopěnka principles, we define the generic Vopěnka principle to be the second-order assertion in ${\rm GBC}$ that for every proper class of first-order structures in a common first-order language, one of the structures admits, in some forcing extension of the universe, an elementary embedding into another. That is, the structures themselves are in the class in the ground model, but you may have to go to the forcing extension in order to find the elementary embedding.
Similarly, the generic Vopěnka scheme, introduced in [3], is the assertion (in ${\rm ZFC}$ or ${\rm GBC}$) that for every first-order definable proper class of first-order structures in a common first-order language, one of the structures admits, in some forcing extension, an elementary embedding into another.
On the basis of their work in [3], Bagaria, Gitman and Schindler had asked the following question:
Question: If the generic Vopěnka scheme holds, must there be a proper class of remarkable cardinals?
There seemed good reason to expect an affirmative answer, even assuming only ${\rm gVP}(\Sigma_2)$, based on strong analogies with the non-generic case. Specifically, in the non-generic context Bagaria had proved that ${\rm VP}(\Sigma_2)$ was equivalent to the existence of a proper class of supercompact cardinals, while in the virtual context, Bagaria, Gitman and Schindler proved that the generic form ${\rm gVP}(\Sigma_2)$ was equiconsistent with a proper class of remarkable cardinals, the virtual form of supercompactness. Similarly, higher up, in the non-generic context Bagaria had proved that ${\rm VP}(\Sigma_{n+2})$ is equivalent to the existence of a proper class of $C^{(n)}$-extendible cardinals, while in the virtual context, Bagaria, Gitman and Schindler proved that the generic form ${\rm gVP}(\Sigma_{n+2})$ is equiconsistent with a proper class of virtually $C^{(n)}$-extendible cardinals.
But further, they achieved direct implications, with an interesting bifurcation feature that specifically suggested an affirmative answer to the above question. Namely, what they showed at the $\Sigma_2$-level is that if there is a proper class of remarkable cardinals, then ${\rm gVP}(\Sigma_2)$ holds, and conversely if ${\rm gVP}(\Sigma_2)$ holds, then there is either a proper class of remarkable cardinals or a proper class of virtually rank-into-rank cardinals. And similarly, higher up, if there is a proper class of virtually $C^{(n)}$-extendible cardinals, then ${\rm gVP}(\Sigma_{n+2})$ holds, and conversely, if ${\rm gVP}(\Sigma_{n+2})$ holds, then either there is a proper class of virtually $C^{(n)}$-extendible cardinals or there is a proper class of virtually rank-into-rank cardinals. So in each case, the converse direction achieves a disjunction with the target cardinal and the virtually rank-into-rank cardinals. But since the consistency strength of the virtually rank-into-rank cardinals is strictly stronger than the generic Vopěnka principle itself, one can conclude on consistency-strength grounds that it isn’t always relevant, and for this reason, it seemed natural to inquire whether this second possibility in the bifurcation could simply be removed. That is, it seemed natural to expect an affirmative answer to their question, even assuming only ${\rm gVP}(\Sigma_2)$, since such an answer would resolve the bifurcation issue and make a tighter analogy with the corresponding results in the non-generic/non-virtual case.
In this article, however, we shall answer the question negatively. The details of our argument seem to suggest that a robust analogy with the non-generic/non-virtual principles is achieved not with the virtual $C^{(n)}$-cardinals, but with a weakening of that property that drops the requirement that $\lambda<j(\kappa)$. This seems to offer an illuminating resolution of the bifurcation aspect of the results we mentioned from [3], because it provides outright virtual large-cardinal equivalents of the stratified generic Vopěnka principles. Because the resulting virtual large cardinals are not necessarily remarkable, however, our main theorem shows that it is relatively consistent with even the full generic Vopěnka principle that there are no $\Sigma_2$-reflecting cardinals and therefore no remarkable cardinals.
Main Theorem
@ARTICLE{Hamkins:VopenkaPrinciple,
author = {Joel David Hamkins},
title = {The Vop\v{e}nka principle is inequivalent to but conservative over the Vop\v{e}nka scheme},
note = {manuscript under review},
eprint = {1606.03778},
archivePrefix = {arXiv},
primaryClass = {math.LO},
url = {http://jdh.hamkins.org/vopenkaprinciplevopenkascheme},
pdf={http://boolesrings.org/victoriagitman/files/2016/07/Properclassgames.pdf},
}
@ARTICLE{GitmanSchindler:virtualCardinals,
AUTHOR= {Gitman, Victoria and Schindler, Ralf},
TITLE= {Virtual large cardinals},
Note ={To appear in the {P}roceedings of the {L}ogic {C}olloquium 2015},
pdf={https://boolesrings.org/victoriagitman/files/2018/02/virtualLargeCardinalsEdited.pdf},
}
@ARTICLE{BagariaGitmanSchindler:VopenkaPrinciple,
AUTHOR = {Bagaria, Joan and Gitman, Victoria and Schindler, Ralf},
TITLE = {Generic {V}op\v enka's {P}rinciple, remarkable cardinals, and the
weak {P}roper {F}orcing {A}xiom},
JOURNAL = {Arch. Math. Logic},
FJOURNAL = {Archive for Mathematical Logic},
VOLUME = {56},
YEAR = {2017},
NUMBER = {1-2},
PAGES = {1--20},
ISSN = {0933-5846},
MRCLASS = {03E35 (03E55 03E57)},
MRNUMBER = {3598793},
DOI = {10.1007/s00153-016-0511-x},
URL = {http://dx.doi.org/10.1007/s00153-016-0511-x},
pdf ={http://boolesrings.org/victoriagitman/files/2016/02/GenericVopenkaPrinciples.pdf},
}
Title: Euclidean Ramsey Theory 2 (of 3).
Lecturer: David Conlon.
Date: November 25, 2016.
Main Topics: Ramsey implies spherical, an algebraic condition for spherical, partition regular equations, an analogous result for edge Ramsey.
Definitions: Spherical, partition regular.
Lecture 1 – Lecture 2 – Lecture 3
Ramsey DocCourse Prague 2016 Index of lectures.
In the first lecture we defined the relevant terms and then established that all (nondegenerate) triangles are Ramsey. In this lecture we will compare the property of being spherical with being Ramsey; more precisely, we will show that Ramsey implies spherical (that is, that non-spherical sets cannot be Ramsey).
Definition. A set $X\subseteq\mathbb{R}^n$ is spherical if it lies on some sphere; that is, if there are a centre $w$ and a radius $r>0$ such that $\|x-w\|=r$ for all $x\in X$.
Typically the set will be finite, but this is not formally required.
The proofs are those of Erdős et al., and go by establishing a tight algebraic condition for a set being spherical.
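Since the displayed formulas in these notes were lost in transcription, it may help to state the condition being aimed at; the following is reconstructed from the standard treatment, so treat the exact normalization as an assumption rather than the lecture’s notation. A finite set $\{x_1,\dots,x_k\}\subseteq\mathbb{R}^n$ fails to be spherical if and only if there are constants $c_1,\dots,c_k$, not all zero, with

$$\sum_{i=1}^{k} c_i x_i = 0,\qquad \sum_{i=1}^{k} c_i = 0,\qquad \sum_{i=1}^{k} c_i\,\|x_i\|^2 \neq 0.$$

The easy direction is a short computation: if every $x_i$ lies on the sphere with centre $w$ and radius $r$, then $\|x_i\|^2 = 2\langle x_i, w\rangle + r^2 - \|w\|^2$, so any relation with $\sum_i c_i x_i = 0$ and $\sum_i c_i = 0$ forces $\sum_i c_i\|x_i\|^2 = 0$.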
Let where and ; it is a line segment with three points equally spaced.
“The reason is you can take a ‘spherical shell’ colouring.” These shell colourings are very important.
This doesn’t work for ‘cube colourings’ (i.e. using a different norm), since by Dvoretzky’s theorem, hyperplane slices of cubes basically look spherical.
Proof. Fix . Define the colouring by . (You’re taking spherical shells of radii .)
[Picture]
By the Cosine rule we get and . So we get .
Suppose that have the same colour. This means that there is an such that and and , where each .
Putting this into our cosine law info gives
which is a contradiction since the left is but the right is strictly between and .
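The colouring formulas in this proof were lost in transcription, so here is a hedged reconstruction of the kind of “spherical shell” colouring it uses (the parameter names and values are illustrative choices of mine, not the lecture’s):

```python
import math

def shell_colour(x, eps=0.1, num_colours=4):
    """Colour a point of R^n by which 'spherical shell' it lies in:
    partition [0, infinity) into intervals of length eps and colour by
    the index of the interval containing the squared norm, modulo the
    number of colours.  eps and num_colours are illustrative.
    """
    r2 = sum(t * t for t in x)  # squared Euclidean norm of x
    return math.floor(r2 / eps) % num_colours

# Points on a common sphere about the origin always share a colour,
# which is what makes shell colourings useful against candidate sets.
print(shell_colour((3, 4)), shell_colour((5, 0)))
```

The design choice is the one named in the lecture: the colour of a point depends only on its distance from a fixed centre, so a monochromatic configuration is constrained to a thin annulus.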
Eventually we will relate the condition of a set being spherical with a tight algebraic condition. With this in mind, we examine when algebraic conditions can yield Ramsey witnesses. We start with a general discussion of partition regular equations.
For example,
Exercise. If the equation is translation invariant then you get a corresponding density result.
Use this to show that you always get a nontrivial solution.
First an example.
Example. .
We can homogenize this equation by replacing the variables. Use and . This gives the equation .
Basically, these are the only types of partition regular equations.
The number of colours is equal to the number of variables.
This is a strong form of the equation failing to be partition regular: not only can you not have a monochromatic solution, you can’t even have all the paired variables agree!
The idea is to colour according to whether you are in a certain interval.
Proof. Fix . Colour with if for some integer .
If , then where .
So
Here the first sum is an even number, and the second is , a contradiction.
Now we increase the number of colours to deal with a more general equation.
Proof. Fix . By dividing by it suffices to consider .
Let be the () colouring from Lemma 1.
Define .
Now if , then .
So where .
If this happens for all , then we have a contradiction identical to the one in Lemma 1.
In the original paper there was a similar lemma but it had a worse bound on the number of colours. This improvement was observed by Strauss a little later.
Note that these equations are not susceptible to the “translation trick” since .
The following is the main technical lemma. The proof is purely algebraic.
For readability, we will write instead of . We will make use of the following useful fact:
Proof of . Assume that is spherical and satisfies the first equation. We will show the second equality fails.
Say has centre and radius .
For each we have:
So we must have for each . So by multiplying by and adding up we get
By using the special case of the useful identity, we get:
We know the first sum is by our above calculations, and by assumption we know
a contradiction.
Proof of . Assume is not spherical, and moreover that it is minimal (in the sense that removing any one point makes it spherical). In particular, is not a nondegenerate simplex. So there is a linear relation
Assume that . By minimality, is spherical, and is on a sphere with centre and radius .
Thus
So
here the second sum is , and the first, by minimality, is
which isn’t since the distances of and to are different.
We are now in a position to put everything together.
Proof. Assume is not spherical. So there are constants and a vector such that
and
Technical exercise. Any congruent copy of satisfies the same equations.
(Use the fact that congruence is formed by rotations and translations. The translations will spit out terms like .)
In every nonzero coordinate of use the colouring from Lemma 2, and set . This will give no monochromatic solution to
This is the end of this lecture’s material on pointRamsey. We shift gears a little now.
Instead of colouring points, we can colour pairs of points. This leads to the notion of edge Ramsey. We mention two results in this area.
Proof. Suppose the vertex set is not spherical. Colour the points, using , so that no copy of has a monochromatic vertex set.
Now colour the edge with .
Each edge has the same colour and must contain two distinct vertex colours. So the edge set is bipartite.
This gives us an analogue of the theorem that Ramsey implies spherical.
The proof is a variation on what we’ve seen.
See lecture 1 for references.
]]>When one is ascending a difficult path uphill, it is a good idea to keep your eyes on the path as you move forward. However, it is not a bad idea to stop sometimes, look back, and appreciate the beauty of the ground you have already covered. Continue reading...]]>
A few years ago Joel Hamkins, Thomas Johnstone, and I [1]
(see this post) showed that the class choice scheme is independent of Kelley-Morse (${\rm KM}$) set theory in the strongest possible sense. The class choice scheme is the scheme asserting, for every second-order formula $\varphi(x,X,A)$, that if for every set $x$, there is a class $X$ such that $\varphi(x,X,A)$ holds, then there is a collecting class $Y$ such that the $x$th slice $Y_x$ is some witness for $x$, namely $\varphi(x,Y_x,A)$ holds. We can also consider a weaker version of the choice scheme allowing only set many choices. We showed that it is relatively consistent to have a model of ${\rm KM}$ in which the choice scheme fails for a first-order assertion and only $\omega$-many choices. We also showed that ${\rm KM}$ together with the choice scheme for set many choices cannot prove the choice scheme even for first-order assertions. The choice scheme can be viewed either as a collection principle or as an analogue of the axiom of choice for classes. With the latter perspective, we can also consider the dependent choice scheme, which asserts, for every second-order formula $\varphi(X,Y,A)$, that if for every $\alpha$-sequence of classes (coded by a class) $X$, there is a class $Y$ such that $\varphi(X,Y,A)$ holds, then there is a single class $Z$ coding ${\rm ORD}$-many dependent choices, namely for all ordinals $\alpha$, $\varphi(Z\upharpoonright\alpha,Z_\alpha,A)$ holds. Again, we can consider a weaker version of the dependent choice scheme where we only allow ordinal many choices. We conjectured that the dependent choice scheme is independent of ${\rm KM}$ together with the choice scheme, but were not able to make further progress on the question.
Usually when trying to prove a result about models of second-order set theory, it helps, as the first step, to understand analogous results for models of second-order arithmetic. There are many affinities, but also some notable differences between the kinds of results you can obtain in the two fields. Both the choice scheme and the dependent choice scheme were first considered and extensively studied in the context of models of second-order arithmetic (see [2], Chapter VII.6). The analogue of ${\rm KM}$ in second-order arithmetic is the theory ${\rm Z}_2$, which consists of ${\rm PA}$ together with full second-order comprehension. The theory ${\rm Z}_2$ implies the choice scheme for $\Sigma^1_2$-formulas. This is true roughly because a model of ${\rm Z}_2$ can uniformly construct (a code for) $L_\alpha$ for every coded ordinal $\alpha$ and so it can pass to an $L_\alpha$ to select a unique witness for a $\Sigma^1_2$-assertion, such assertions being absolute by Shoenfield’s absoluteness. The classic Feferman-Lévy model was used to show that the very next level, the $\Pi^1_2$-choice scheme, can fail in a model of ${\rm Z}_2$. Consider a model $\mathcal M$ of ${\rm Z}_2$ whose sets are the reals of the Feferman-Lévy model of ${\rm ZF}$. The model $\mathcal M$ can construct (a code for) $L_{\aleph_n}$ for every natural number $n$, but it cannot collect the codes because $\aleph_\omega$ is uncountable. The dependent choice scheme for $\Sigma^1_2$-formulas also follows from ${\rm Z}_2$ (indeed from the choice scheme for $\Sigma^1_2$-formulas together with induction for $\Sigma^1_2$-formulas, see [2], Theorem VII.6.9.2). Simpson claimed in 1972 that he had a proof of the independence of the $\Pi^1_2$-dependent choice scheme from ${\rm Z}_2$ together with the choice scheme, but the proof was never published and has since been lost (I corresponded with Steve Simpson about it). So a few years ago Sy Friedman and I set out to find a proof of this result.
The standard strategy for obtaining a model of ${\rm Z}_2$ together with the choice scheme but with a $\Pi^1_2$-failure of the dependent choice scheme would be to construct a model of ${\rm ZF}$ in which ${\rm AC}_\omega$ holds, but ${\rm DC}$ fails for a $\Pi^1_2$-definable relation on the reals. We then take our model of ${\rm Z}_2$ whose sets are the reals of the ${\rm ZF}$-model, so that ${\rm AC}_\omega$ translates directly into the choice scheme holding. Such models of ${\rm ZF}$ are obtained as symmetric submodels of carefully chosen forcing extensions. The classical model of ${\rm AC}_\omega+\neg{\rm DC}$ (due to Jensen [3]) is a symmetric submodel of the forcing extension adding a collection of Cohen subsets of $\omega_1$ indexed by nodes of the tree ${}^{\lt\omega}\omega_1$ with countable conditions. By choosing the right collections of automorphisms to consider, we obtain a symmetric submodel, call it $N$, which has the tree of the Cohen subsets, but no branch through the tree, witnessing a violation of ${\rm DC}$. The countable closure of the poset allows us to prove that the model $N$ satisfies ${\rm AC}_\omega$. The obvious obstacle in using the classical model for our purposes was that the relation witnessing failure of ${\rm DC}$ is not on the reals. We were able to obtain a variation on the classical model where we forced to add a collection of Cohen reals indexed by nodes of the tree ${}^{\lt\omega}\omega_1$ with finite conditions. We use that the poset is ccc to again argue that ${\rm AC}_\omega$ holds in the desired symmetric submodel. The new model has a relation on the reals witnessing a failure of ${\rm DC}$, but it is not at all clear why even the domain of the relation, namely the Cohen reals making up the nodes of the generic tree, would be definable over the reals. The collection of all Cohen reals is of course $\Pi^1_2$-definable, but there does not appear to be a good way of picking out those coming from the nodes of our tree.
In our construction we force with a tree iteration of Jensen’s forcing along the tree ${}^{\lt\omega}\omega_1$. Conditions in the forcing are finite subtrees of ${}^{\lt\omega}\omega_1$ whose $n$-level nodes are conditions in the $n$-length iteration of Jensen’s forcing, so that nodes on the higher levels extend those from the lower levels. The tree iteration adds, for every node on level $n$ of ${}^{\lt\omega}\omega_1$, an $n$-length sequence of reals generic for the $n$-length iteration of Jensen’s forcing with the sequences ordered by extension. Jensen’s forcing is a subposet of Sacks forcing in $L$ which has the ccc and the property that it adds a unique generic real over $L$ (see this post). The poset is constructed using $\diamondsuit$ to seal maximal antichains. Kanovei and Lyubetsky recently showed that Jensen’s forcing has an even stronger uniqueness of generic filters property [4]. They showed that a forcing extension of $L$ by the finite-support $\omega$-length product of Jensen’s forcing has precisely the $L$-generic reals for Jensen’s forcing which appear on the coordinates of the generic filter for the product. We were able to show that a forcing extension of $L$ by the tree iteration of Jensen’s forcing has, for a fixed $n$, precisely the generic $n$-length sequences of reals for the $n$-length iteration of Jensen’s forcing which make up the nodes of the generic tree. Since the collection of the generic $n$-length sequences of reals for iterations of Jensen’s forcing is $\Pi^1_2$ and the ordering of the sequences is by extension, we succeed in producing a symmetric submodel whose associated model of second-order arithmetic witnesses a $\Pi^1_2$-failure of the dependent choice scheme. That our symmetric model satisfied ${\rm AC}_\omega$ followed because the tree iteration forcing has the ccc.
We are now working with Sy Friedman on transferring the arguments from second-order arithmetic to the context of second-order set theory. In particular, we hope to produce a subposet of $\kappa$-Sacks forcing for an inaccessible $\kappa$ in $L$ mimicking the properties of Jensen’s forcing.
Slides to come!
@ARTICLE{GitmanHamkinsJohnstone:KMplus,
AUTHOR= {Victoria Gitman and Joel David Hamkins and Thomas Johnstone},
TITLE= {Kelley-{M}orse set theory and choice principles for classes},
Note ={In preparation},
}
@book {simpson:secondorderArithmetic,
AUTHOR = {Simpson, Stephen G.},
TITLE = {Subsystems of second order arithmetic},
SERIES = {Perspectives in Logic},
EDITION = {Second},
PUBLISHER = {Cambridge University Press, Cambridge; Association for
Symbolic Logic, Poughkeepsie, NY},
YEAR = {2009},
PAGES = {xvi+444},
ISBN = {9780521884396},
MRCLASS = {03F35 (03-02 03B30)},
MRNUMBER = {2517689 (2010e:03073)},
DOI = {10.1017/CBO9780511581007},
URL = {http://dx.doi.org/10.1017/CBO9780511581007},
}
@article {jensen:ACplusNotDC,
AUTHOR = {Jensen, Ronald B.},
TITLE = {Consistency results for models of {ZF}},
JOURNAL = {Notices {A}m. {M}ath. {S}oc.},
FJOURNAL = {Notices of the American Mathematical Society},
VOLUME = {14},
YEAR = {1967},
PAGES = {137},
}
@ARTICLE {kanovei:productOfJensenReals,
AUTHOR = {Kanovei, Vladimir and Lyubetsky, Vassily},
TITLE = {A countable definable set of reals containing no definable elements},
EPRINT ={1408.3901}}
This morning I woke up to see that my paper about the Bristol model was announced on arXiv. But unbeknownst to the common arXiv follower, this also marks the end of my thesis. The Hebrew University is kind enough to allow you to just stitch a bunch of your papers (along with an added introduction) and call it a thesis. And by "stitch" I mean literally. If they were published, you're even allowed to use the published .pdf (on the condition that no copyright infringement occurs). Continue reading...
]]>In 1970 G. R. MacLane asked if it is possible for a locally univalent function in the class to have an arc tract, and this question remains open despite several partial results. Here we significantly strengthen these results by introducing new techniques associated with the Eremenko-Lyubich class for the disc. Also, we adapt a recent powerful technique of C. J. Bishop in order to show that there is a function in the Eremenko-Lyubich class for the disc that is not in the class .
]]>
Joint work with Ari Meir Brodsky.
Abstract. Ben-David and Shelah proved that if $\lambda$ is a singular strong-limit cardinal and $2^\lambda=\lambda^+$, then $\square^*_\lambda$ entails the existence of a $\lambda$-distributive $\lambda^+$-Aronszajn tree. Here, it is proved that the same conclusion remains valid after replacing the hypothesis $\square^*_\lambda$ by $\square(\lambda^+,{<\lambda})$.
As $\square(\lambda^+,{<\lambda})$ does not impose a bound on the order-type of the witnessing clubs, our construction is necessarily different from that of Ben-David and Shelah, and instead uses walks on ordinals augmented with club guessing.
A major component of this work is the study of postprocessing functions and their effect on square sequences. A byproduct of this study is the finding that for $\kappa$ regular uncountable, $\square(\kappa)$ entails the existence of a partition of $\kappa$ into $\kappa$ many fat sets. When contrasted with a classic model of Magidor, this shows that it is equiconsistent with the existence of a weakly compact cardinal that $\omega_2$ cannot be split into two fat sets.
Downloads:
I gave a plenary talk at the 2017 ASL North American Meeting in Boise, March 2017.
Talk Title: The current state of the Souslin problem.
Abstract: Recall that the real line is that unique separable, dense linear ordering with no endpoints in which every bounded set has a least upper bound.
A problem posed by Mikhail Souslin in 1920 asks whether the term separable in the above characterization may be weakened to ccc. (A linear order is said to be separable if it has a countable dense subset. It is said to be ccc if every pairwise-disjoint family of open intervals is countable.)
Amazingly enough, the resolution of this single problem led to key discoveries in Set Theory: the notions of Aronszajn, Souslin and Kurepa trees, forcing axioms and the method of iterated forcing, Jensen’s diamond and square principles, and the theory of iteration without adding reals.
A negative answer to the Souslin problem is equivalent to the existence of a Souslin tree, a certain partial order of size $\aleph_1$.
A generalization of this problem to the level of $\aleph_2$ was identified in the early 1970s, and has remained open ever since. In the last couple of years, considerable progress has been made on the generalized Souslin problem and its relatives. In this talk, I shall describe the current state of this research.
Downloads:
Theorem. Suppose that \(\kappa\) is regular and uncountable, and \(\pi\colon\kappa\to\kappa\) is a bijection mapping stationary sets to stationary sets. Then there is a club \(C\subseteq\kappa\) such that \(\pi\restriction C=\operatorname{id}\). Continue reading...
]]>
This talk will be a very condensed version of the talk with a similar title I gave last spring at MOPA Seminar in CUNY.
Abstract:
A total computable function will produce the same output on the standard natural numbers regardless of which model of arithmetic it is evaluated in. But a (partial) computable function can be the empty function in the standard model $\mathbb N$, while turning into a total function in some nonstandard model. I will discuss some extreme instances of this phenomenon investigated recently by Woodin and Hamkins showing that there are computable processes which can produce any desired output by going to the right nonstandard model. Hamkins showed that there is a single ${\rm TM}$ program $p$ (computing the empty function in $\mathbb N$) with the property that given any function $f:\mathbb N\to \mathbb N$, there is a nonstandard model $M_f\models{\rm PA}$ so that in $M_f$ $p$ computes $f$ on the standard part. Even more drastically, Woodin has shown that there is a single index $e$ (for the empty function in $\mathbb N$), for which ${\rm PA}$ proves that $W_e$ is finite, with the property that for any finite set $s$ of natural numbers, there is a model $M_s\models{\rm PA}$ in which $W_e=s$. It follows, for instance, by the MRDP theorem, that there is a single Diophantine equation $p(n,\bar x)=0$ having no solutions in $\mathbb N$, for which ${\rm PA}$ proves that there are finitely many $n$ with a solution, and given any finite set $s$, we can pass to a nonstandard model in which $p(n,\bar x)=0$ has a solution if and only if $n\in s$.
Here are links to blog posts by myself and others on this topic:
@ARTICLE{GitmanSchindler:virtualCardinals,
AUTHOR= {Gitman, Victoria and Schindler, Ralf},
TITLE= {Virtual large cardinals},
Note ={To appear in the {P}roceedings of the {L}ogic {C}olloquium 2015},
pdf={https://boolesrings.org/victoriagitman/files/2018/02/virtualLargeCardinalsEdited.pdf},
}
Suppose $\mathcal A$ is a large cardinal notion that can be characterized by the existence of one or many elementary embeddings $j:V_\alpha\to V_\beta$ satisfying some list of properties. For instance, both extendible cardinals and ${\rm I3}$ cardinals meet these requirements. Recall that $\kappa$ is extendible if for every $\alpha>\kappa$, there is an elementary embedding $j:V_\alpha\to V_\beta$ with critical point $\kappa$ and $j(\kappa)>\alpha$, and recall also that $\kappa$ is ${\rm I3}$ if there is an elementary embedding $j:V_\lambda\to V_\lambda$ with critical point $\kappa<\lambda$. Let us say that a cardinal $\kappa$ is virtually $\mathcal A$ if the embeddings $j:V_\alpha\to V_\beta$ needed to witness $\mathcal A$ can be found in set-generic extensions of the universe $V$; equivalently, we can say that the embeddings exist in the generic multiverse of $V$. Indeed, it is not difficult to see that it suffices to consider only the collapse extensions. So we now have that $\kappa$ is virtually extendible if for every $\alpha>\kappa$, some set-forcing extension has an elementary embedding $j:V^V_\alpha\to V^V_\beta$ with critical point $\kappa$ and $j(\kappa)>\alpha$, and we have that $\kappa$ is virtually ${\rm I3}$ if some set-forcing extension has an elementary embedding $j:V_\lambda^V\to V_\lambda^V$ with critical point $\kappa$. The template of virtual large cardinals can be applied to several large cardinal notions in the neighborhood of a supercompact cardinal. We can even apply it to inconsistent large cardinal principles to obtain virtual large cardinals that are compatible with $V=L$.
The concept of virtual large cardinals is close in spirit to generic large cardinals, but is technically very different. Suppose $\mathcal A$ is a large cardinal notion characterized by the existence of elementary embeddings $j:V\to M$ satisfying some list of properties. Then we say that a cardinal $\kappa$ is generically $\mathcal A$ if the embeddings needed to witness $\mathcal A$ exist in set-forcing extensions of $V$. More precisely, if the existence of $j:V\to M$ satisfying some properties witnesses $\mathcal A$, then we want a forcing extension $V[G]$ to have a definable $j:V\to M$ with these properties, where $M$ is an inner model of $V[G]$. So for example, $\kappa$ is generically supercompact if for every $\lambda>\kappa$, some set-forcing extension $V[G]$ has an elementary embedding $j:V\to M$ with critical point $\kappa$ and $j''\lambda\in M$. If $\kappa$ is not actually $\lambda$-supercompact, the model $M$ will not be contained in $V$. Generic large cardinals are either known to have the same consistency strength as their actual counterparts or are conjectured to have the same consistency strength based on currently available evidence. Most importantly, generic large cardinals need not be actually “large” since, for instance, $\omega_1$ can be generically supercompact.
In the case of virtual large cardinals, because we consider only set-sized embeddings, the source and target of the embedding are both from $V$, and because the embedding exists in a forcing extension, there is no a priori reason why the target model would have any closure at all. The combination of these gives that virtual large cardinals are actual large cardinals that fit into the large cardinal hierarchy between ineffable cardinals and $0^\#$. If $0^\#$ exists, the Silver indiscernibles have (nearly) all the virtual large cardinal properties we consider in this article, and all these notions will be downward absolute to $L$.
The first virtual large cardinal notion, the remarkable cardinal, was introduced by Schindler in [1]. A cardinal $\kappa$ is remarkable if for every $\lambda>\kappa$, there is $\bar\lambda<\kappa$ such that in some set-forcing extension there is an elementary embedding $j:V_{\bar\lambda}^V \to V_\lambda^V$ with $j(\text{crit}(j))=\kappa$. It turns out that remarkable cardinals are virtually supercompact because, as shown by Magidor [2], $\kappa$ is supercompact precisely when for every $\lambda>\kappa$, there is $\bar\lambda<\kappa$ and an elementary embedding $j:V_{\bar\lambda}\to V_\lambda$ with $j(\text{crit}(j))=\kappa$. Schindler showed that the existence of a remarkable cardinal is equiconsistent with the assertion that the theory of $L(\mathbb R)$ cannot be changed by proper forcing [1], and since then it has turned out that remarkable cardinals are equiconsistent with other natural assertions such as the third-order Harrington’s principle [3].
The idea behind the concept of virtual large cardinals, namely taking a property characterized by the existence of elementary embeddings of sets and defining a virtual version by positing that the embeddings exist in the generic multiverse, can be extended beyond large cardinals. In [4], together with Bagaria, we studied a virtual version of Vopěnka’s Principle (Generic Vopěnka’s Principle) and a virtual version of the Proper Forcing Axiom ${\rm PFA}$. Fuchs has generalized this approach to obtain virtual versions of other forcing axioms such as the forcing axiom for subcomplete forcing ${\rm SCFA}$ [5] and resurrection axioms [6]. Each of these virtual properties has turned out to be equiconsistent with some virtual large cardinal, which has so far been the main application of these ideas.
Our template for the definition of virtual large cardinals requires the large cardinal notion to be characterized by the existence of elementary embeddings $j:V_\alpha\to V_\beta$. This template is quite restrictive. Its main advantage is that it gives a hierarchy of large cardinal notions that mirrors the hierarchy of its actual counterparts, and the large cardinals have other desirable properties such as being downward absolute to $L$.
@article {schindler:remarkable1,
AUTHOR = {Schindler, Ralf-Dieter},
TITLE = {Proper forcing and remarkable cardinals},
JOURNAL = {Bull. Symbolic Logic},
FJOURNAL = {The Bulletin of Symbolic Logic},
VOLUME = {6},
YEAR = {2000},
NUMBER = {2},
PAGES = {176--184},
ISSN = {1079-8986},
MRCLASS = {03E40 (03E45 03E55)},
MRNUMBER = {1765054 (2001h:03096)},
MRREVIEWER = {A. Kanamori},
DOI = {10.2307/421205},
URL = {http://dx.doi.org/10.2307/421205},
}
@article {magidor:supercompact,
AUTHOR = {Magidor, M.},
TITLE = {On the role of supercompact and extendible cardinals in logic},
JOURNAL = {Israel J. Math.},
FJOURNAL = {Israel Journal of Mathematics},
VOLUME = {10},
YEAR = {1971},
PAGES = {147--157},
ISSN = {0021-2172},
MRCLASS = {02K35},
MRNUMBER = {0295904 (45 \#4966)},
MRREVIEWER = {J. L. Bell},
}
@article {ChengSchindler:Harrington,
AUTHOR = {Cheng, Yong and Schindler, Ralf},
TITLE = {Harrington's principle in higher order arithmetic},
JOURNAL = {J. Symb. Log.},
FJOURNAL = {Journal of Symbolic Logic},
VOLUME = {80},
YEAR = {2015},
NUMBER = {2},
PAGES = {477--489},
ISSN = {0022-4812},
MRCLASS = {03E30 (03E55)},
MRNUMBER = {3377352},
MRREVIEWER = {A. Kanamori},
DOI = {10.1017/jsl.2014.31},
URL = {http://dx.doi.org/10.1017/jsl.2014.31},
}
@ARTICLE{BagariaGitmanSchindler:VopenkaPrinciple,
AUTHOR = {Bagaria, Joan and Gitman, Victoria and Schindler, Ralf},
TITLE = {Generic {V}op\v enka's {P}rinciple, remarkable cardinals, and the
weak {P}roper {F}orcing {A}xiom},
JOURNAL = {Arch. Math. Logic},
FJOURNAL = {Archive for Mathematical Logic},
VOLUME = {56},
YEAR = {2017},
NUMBER = {1-2},
PAGES = {1--20},
ISSN = {0933-5846},
MRCLASS = {03E35 (03E55 03E57)},
MRNUMBER = {3598793},
DOI = {10.1007/s00153-016-0511-x},
URL = {http://dx.doi.org/10.1007/s00153-016-0511-x},
pdf ={http://boolesrings.org/victoriagitman/files/2016/02/GenericVopenkaPrinciples.pdf},
}
@ARTICLE{Fuchs:HierarchiesForcingAxiomsContinuumHypothesisSquarePrinciples,
AUTHOR= {Gunter Fuchs},
TITLE= {Hierarchies of forcing axioms, the continuum hypothesis and square principles},
Note ={Preprint},
}
@ARTICLE{Fuchs:HierarchiesVirtualResurrectionAxioms,
AUTHOR= {Gunter Fuchs},
TITLE= {Hierarchies of (virtual) resurrection axioms},
Note ={Preprint},
}
So the next order of business is finding a position for next year. So far nothing has come up. But I'm open to hearing from the few readers of my blog if they know about something, or have some offers that might be suitable for me. Continue reading...
]]>I gave an invited talk at the Set Theory workshop in Oberwolfach, February 2017.
Talk Title: Coloring vs. Chromatic.
Abstract: In a joint work with Chris Lambie-Hanson, we study the interaction between compactness for the chromatic number (of graphs) and compactness for the coloring number.
Downloads:
Registration for the 2017 Southwestern Undergraduate Mathematics Research Conference (aka SUnMaRC) is now open! Northern Arizona University is hosting this year’s conference on March 31 to April 2, 2017. We are excited to announce Kathryn Bryant (Colorado College), Henry Segerman (Oklahoma State University), and Steve Wilson (NAU, emeritus) as our invited speakers.
The goal of the conference is to welcome undergraduates to the wonderful world of mathematics research, to develop and foster a rich social network between the mathematics students and faculty throughout the great Southwest, and to celebrate the accomplishments of our undergraduate students. We encourage undergraduate students from all years of study to participate and give presentations in any area of mathematics, including applications to other disciplines. However, while we do recommend giving a talk, it is not a requirement for conference participation. To register for the conference and to submit a title and abstract for a student presentation, visit the 2017 SunMaRC Registration page.
The conference began in 2004 as the Arizona Mathematics Undergraduate Conference. In 2008, the conference changed to SUnMaRC to recognize the participation of institutions throughout the southwest.
If you have any questions about this year’s SUnMaRC, please contact one of the conference organizers:
]]>Matti was a kind teacher, even if sometimes overly pedantic. Continue reading...
]]>The following pictures are taken by Andrés Villaveces. Thank you Andrés!
]]>
One of my former students, Andrew Lebovitz, recently posted a link on Facebook to a Nature article that summarizes a paper, titled The classical origin of modern mathematics, which completed a comprehensive analysis of the MGP database. One of the interesting findings was that the individuals in the database fall into 84 distinct family trees with two-thirds of the world’s mathematicians concentrated in just 24 of them.
After reading the Nature article, I was motivated to see if I could figure out whether I belonged to one of the 24 families. It wasn’t obvious to me how I would do this without manually clicking on my advisor (Richard M. Green), then my advisor’s advisor, etc. This was slightly more complicated than I expected because there were quite a few ancestors with 2 advisors, so I had to navigate down multiple paths. As I clicked around, I drew out my family tree in a notebook.
Here is what I discovered. My longest branch goes back to Nicolo Fontana Tartaglia (currently 14,428 descendants). My tree includes Isaac Newton, Galileo Galilei, and Marin Mersenne (after whom Mersenne primes are named). Interestingly, no one on this path belongs to one of the 24 families mentioned in The classical origin of modern mathematics. Also, I was disappointed to find out that I wasn’t related to Leonhard Euler. However, I am a descendant of Henry Bracken, who is the head of one of the 24 families.
I posted some of this information on Facebook and asked if anyone knew how to automatically create a nice visualization of the directed graph corresponding to my family tree. Chris Drupieski replied and pointed out a program called Geneagrapher, which was built to do exactly what I was looking for. In particular, Geneagrapher gathers information for building math genealogy trees from the MGP, which is then stored in dot file format. This data can then be passed to Graphviz to generate a directed graph.
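Since the dot format is plain text, it is easy to see what sort of file is involved. Below is a minimal hand-written sketch of such a file, generated from Python; the names are hypothetical, and a real Geneagrapher file carries more detail (years, institutions), so this is only an illustration of the idea, not actual MGP output.

```python
# A tiny advisor-student tree in Graphviz dot format (hypothetical names).
# Each advisor-student pair becomes a directed edge; Graphviz then lays
# the result out as a tree.
tree = """digraph genealogy {
    node [shape=plaintext];
    "Advisor A" -> "Student B";
    "Advisor A" -> "Student C";
    "Student B" -> "Student D";
}
"""

# Write the file so it can be handed to Graphviz.
with open("tiny_tree.dot", "w") as f:
    f.write(tree)

# Render with: dot -Tpng tiny_tree.dot > tiny_tree.png
print(tree.count("->"))  # 3 advisor-student edges
```

Feeding the resulting file to Graphviz draws the three-node tree, exactly the workflow used for the full genealogy below.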
Here are the steps that I completed to get Geneagrapher up and running on my computer running MacOS 10.11. The Geneagrapher website suggests using easy_install
via Terminal, but this didn’t immediately work for me. It often seems that doing anything with Python on my Mac requires a few extra steps. After doing a little searching around, I found a post on Stack Overflow that solved my issue. At the command line, I typed the following:
sudo chown -R <your_user>:wheel /Library/Python/2.7/site-packages/
Of course, you should replace <your_user>
with your username. Note that using sudo
requires you to enter your password. Next, I installed Geneagrapher using the following:
easy_install http://www.davidalber.net/dist/geneagrapher/Geneagrapher-0.2.1-r2.tar.gz
In order to use Geneagrapher, you need to input a record number from MGP. Mine is 125763. At the command line, I typed:
ggrapher -f ernst.dot -a 125763
You can replace ernst
with whatever you’d like the output file to be called. The next step is to pass the dot file to Graphviz. If you don’t already have Graphviz installed, you can do so using Homebrew (which is also easy to install):
brew install graphviz
Following the Geneagrapher instructions, I typed the following to generate my family tree:
dot -Tpng ernst.dot > ernst.png
Maybe it is worth mentioning that unless you specify otherwise, the dot and png files will be stored in your home directory. Below is my mathematical family tree created using Geneagrapher. As you can see, it took a while for my ancestors to leave the University of Cambridge.
]]>Non-mathematicians often tend to be Platonists "by default", so they will assume that every question has an answer and sometimes it's just that we don't know that answer. But it's out there. It's a fine approach, but it can somewhat fly in the face of independence if you are not trained to think about the difference between true and provable. Continue reading...
]]>Title: Dual Ramsey, the Gurarij space and the Poulsen simplex 1 (of 3).
Lecturer: Dana Bartošová.
Date: December 12, 2016.
Main Topics: Comparison of various Fraïssé settings, metric Fraïssé definitions and properties, KPT of metric structures, Thick sets
Definitions: continuous logic, metric Fraïssé properties, NAP (near amalgamation property), PP (Polish Property), ARP (Approximate Ramsey Property), Thick, Thick partition regular.
Lecture 1 – Lecture 2 – Lecture 3
Ramsey DocCourse Prague 2016 Index of lectures.
Throughout the DocCourse we have primarily focused on Fraïssé limits of finite structures. As we saw in Solecki’s first lecture (not posted yet), it makes sense, and is useful, to consider Fraïssé limits in a broader context. Today we will discuss those other contexts.
Solecki’s first lecture discussed how to take projective Fraïssé limits. Panagiotopoulos’s lecture (not posted yet) looked at a specific application of these projective limits. We will see how to take metric (direct) Fraïssé limits.
| | Discrete | Compact | Metric Structure |
|---|---|---|---|
| Size | Countable | Separable | Separable, complete |
| Limit | Fraïssé limit | Quotient of the projective limit | (direct or projective) Metric Fraïssé limit |
| Homogeneity | (ultra)homogeneity | Projective approximate homogeneity | Approximate homogeneity (*) |
| Automorphism group | non-archimedean groups (closed subgroups of $S_\infty$) | homeomorphism groups | Polish groups |
| KPT, extremely amenable iff | RP | Dual Ramsey | Approximate RP (**) |
| Metrizability of UMF iff | finite Ramsey degree | (***) | (Open) Compact RP? |
| Where we’ve seen these | Classical | Solecki’s lectures | These lectures |
(*) – Exact homogeneity is often not possible.
(**) – In the projective setting this is fairly unexplored. These proofs are usually via direct (discrete) Ramsey, or through concentration of measure.
(***) – You have KPT before you take the quotient, but lose it after taking the quotient. e.g. UMF(pre-pseudo-arc) is not metrizable (through RP). A question of Uspenskij asks about the UMF(pseudo-arc).
In the context of Banach spaces, it makes sense to use continuous logic. This is where, instead of the usual $\{0,1\}$-valued logic, we allow sentences to take values in the interval $[0,1]$. We also suitably adjust the logical connectives.
| Classical logic | Continuous logic |
|---|---|
| True | 0 |
| False | 1 |
Now we define functions and relations. Let be a complete metric space. So will be given the sup metric.
Then functions and relations must satisfy the usual things that functions and relations satisfy in classical logic.
| | Finitely generated substructures | Limit | Maps | Language |
|---|---|---|---|---|
| Separable metric spaces | finite metric spaces | Separable Urysohn space | isometric embedding | just the distance |
| Separable Banach spaces | finite dimensional Banach spaces (**) | Gurarij space | isometric linear embedding | |
| Separable Choquet spaces | finite dimensional simplices | Poulsen simplex | affine homeomorphisms (*) | Something that captures the convex structure |
(*) – An affine homeomorphism sends and sends extreme points to extreme points, then is extended affinely to the rest of the simplex. The metric here is not canonical.
(**) – Similar to the discrete case, to take a limit you only need a cofinal sequence. In this case we take .
In continuous logic the maps between models are isometric embeddings that preserves functions and relations.
In the classical Fraïssé setting we looked at homogeneity, HP, JEP and AP. These notions have suitable generalizations in the metric Fraïssé setting.
We say that is approximately ultrahomogeneous (AUH) if and for every morphism, and for all , there is a such that .
is the collection of finitely generated substructures of .
We now explain NAP and PP. The NAP is a straightforward generalization of AP.
such that
The PP measures how closely you can embed two metric spaces.
We say satisfies the Polish Property (PP) if is separable for all .
This gives us the following Fraïssé theorem for metric structures.
Recall that is the separable Urysohn space. It is the (unique) complete, separable metric space, universal for separable metric spaces and (exactly) ultrahomogeneous with respect to finite metric spaces.
Its age is the collection of finite metric spaces. It is a metric Fraïssé class.
Its automorphism group has a similar universal property.
See these notes for more information.
Recall the following fact about (classical) Fraïssé structures.
The following observation of Melleray is the corresponding fact for metric structures. It has a similar proof to the classical fact.
For every orbit closure in of a point add a relational symbol called .
The first relevant result is the following:
This proof uses the finite Ramsey theorem and concentration of measure.
The KPT theorem for metric structures is given by the following.
We define the approximate Ramsey Property.
(ARP):
there is a such that
such that
Here , and the fattening is using the embedding distance (which we haven't defined).
Recall that in the infinite case, rigidity was needed to have the embedding RP. That is why in finite metric spaces we added linear orders to get the RP. However, metric spaces do satisfy the ARP (by Pestov, from extreme amenability of , without needing to add linear orders).
Also, by using the usual compactness arguments, we can assume that the witness to ARP is the full Fraïssé limit.
In the KPT correspondence, we saw a useful connection between the stabilizer of a set and collections of finite structures. See Lionel Nguyen Van Thé’s first DocCourse lecture.
Here we mention an analogous connection.
So we can reword the ARP for finite metric spaces, by transferring the colouring to a colouring .
Thickness is a group property that captures some Ramsey properties. This is desirable because we would like to be able to detect Ramsey type phenomena from the group itself, without having to know the underlying Fraïssé limit.
is thick partition regular iff there is a that is thick.
This is really just unwinding definitions. Then by general topological dynamics abstract nonsense we get:
Note that this is a theorem just about groups. This doesn’t use much of the structure of . Our goal is to prove extreme amenability without having to first prove Ramsey theorems.
In the next lectures we will examine the Gurarij space and prove the ARP for (i.e. Banach spaces).
(This is incomplete – Mike)
Abstract: Let $x$ be a real of sufficiently high Turing degree, let $\kappa_x$ be the least inaccessible cardinal in $L[x]$ and let $G$ be $Col(\omega, {<}\kappa_x)$-generic over $L[x]$. Then Woodin has shown that $\operatorname{HOD}^{L[x,G]}$ is a core model, together with a fragment of its own iteration strategy.
Our plan is to extend this result to mice which have finitely many Woodin cardinals. We will introduce a direct limit system of mice due to Grigor Sargsyan and sketch a scenario to show the following result. Let $n \geq 1$ and let $x$ again be a real of sufficiently high Turing degree. Let $\kappa_x$ be the least inaccessible strong cutpoint cardinal of $M_n(x)$ such that $\kappa_x$ is a limit of strong cutpoint cardinals in $M_n(x)$ and let $g$ be $Col(\omega, {<}\kappa_x)$-generic over $M_n(x)$. Then $\operatorname{HOD}^{M_n(x,g)}$ is again a core model, together with a fragment of its own iteration strategy.
This is joint work in progress with Grigor Sargsyan.
Many thanks to Richard again for the great pictures!
]]>Title: Bootcamp 1 – Informal meeting.
Lecturer: Jaroslav Nešetřil.
Date: September 20, 2016.
Main Topics: Overview over the topics of the DocCourse; classical result in Ramsey theory
Definitions: Arrow notation, Ramsey numbers, arithmetical progression
Bootcamp 1 – Bootcamp 2 – Bootcamp 3 – Bootcamp 4 – Bootcamp 5 – Bootcamp 6 – Bootcamp 7 – Bootcamp 8
The main purpose of this lecture was to give a historical overview of classical results in Ramsey theory, including Ramsey’s theorem itself. Further, the program of the DocCourse was presented, which can be found here.
The three books below are the main references for Ramsey theory in general and the Bootcamp lectures in particular. Jarik also passed around an original version of Ramsey’s paper, which is depicted on the conference poster.
In order to phrase Ramsey’s theorems we first introduce some standard notation:
Then Ramsey’s theorem states as follows: for all positive integers $p$, $r$, $k$ there exists $N$ such that $N\to(k)^p_r$, i.e., for every colouring of the $p$-element subsets of an $N$-element set with $r$ colours there is a $k$-element subset all of whose $p$-element subsets receive the same colour.
A proof of Ramsey’s theorem can be found in the notes to David Fox’s lectures (Mike: Coming soon!), including some estimates for the corresponding Ramsey numbers:
By the pigeonhole principle we have . However, already the situation for the Ramsey number is much more complex; only estimates are known for .
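For the smallest nontrivial case these numbers can be verified by exhaustive search. The following sketch (mine, not from the lecture) confirms that $R(3,3)=6$: some 2-colouring of the edges of $K_5$ avoids monochromatic triangles, while every 2-colouring of the edges of $K_6$ contains one.

```python
# Brute-force verification that R(3,3) = 6 (illustration only).
from itertools import combinations, product

def has_mono_triangle(n, colouring):
    """colouring maps each edge (i, j), i < j, of K_n to colour 0 or 1."""
    return any(colouring[(a, b)] == colouring[(a, c)] == colouring[(b, c)]
               for a, b, c in combinations(range(n), 3))

def every_colouring_has_mono_triangle(n):
    """True iff every 2-colouring of the edges of K_n has a mono triangle."""
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(edges, colours)))
               for colours in product((0, 1), repeat=len(edges)))

print(every_colouring_has_mono_triangle(5))  # False: e.g. the pentagon and its complement
print(every_colouring_has_mono_triangle(6))  # True: hence R(3,3) = 6
```

The search over $K_6$ inspects all $2^{15}$ edge colourings, which finishes instantly; this brute-force approach becomes hopeless already for $R(5,5)$, which is why only estimates are known there.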
Ramsey’s work did not result from pure interest in combinatorics, but was motivated by Hilbert’s Entscheidungsproblem, the problem of finding an algorithm that decides whether a given statement expressible in first-order logic is provable (from a given set of axioms). The finite Ramsey theorem was used only as an auxiliary result in “On a Problem of Formal Logic”, in order to prove that every formula of the form
is decidable.
We remark that, by the later results of Church and Turing, the Entscheidungsproblem in general is not decidable; by a result of Trakhtenbrot, already adding one additional quantifier alternation results in undecidable formulas.
In the same paper Ramsey also presented a proof for the following infinite version of his theorem:
The proof of the infinite Ramsey theorem requires the axiom of choice. There exists a slight strengthening of the finite Ramsey theorem, which we will denote by FRT*. In FRT*, we can additionally assume that the monochromatic set $H$ is relatively large, in the sense that its minimum is bounded by its size, i.e. $\min(H)\le|H|$.
We are going to show that the infinite Ramsey theorem implies the strengthened version of the finite Ramsey theorem:
Note that, since the above proof of FRT* goes through the infinite Ramsey theorem, it is not a finitary argument. It can be shown that some use of infinitary methods is indeed necessary: Paris and Harrington proved that FRT* is a true statement about the integers that can be stated in the language of arithmetic, but cannot be proved in Peano arithmetic. It was already known by Gödel’s first incompleteness theorem that such statements exist; however, no examples of “natural” such theorems were known.
Their proof also led to the notion of indiscernibles in mathematical logic, i.e., objects which cannot be distinguished by any property or relation defined by a formula.
As mentioned above, Ramsey himself used his result only as an auxiliary result to prove statements about decidability. The Happy ending theorem is often considered as the starting point for the development of Ramsey theory as a whole new branch of mathematics:
Ramsey’s theorem was preceded by several other results which we nowadays consider to be part of Ramsey theory, although they, too, were not studied from a combinatorial point of view when they were first proven. One example is Van der Waerden’s theorem:
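Van der Waerden’s theorem asserts that for all $k$ and $r$ there is an $N$ such that every $r$-colouring of $\{1,\dots,N\}$ contains a monochromatic arithmetic progression of length $k$. For $k=3$ and $r=2$ the least such $N$ is $9$, which the following brute-force sketch (mine, not from the lecture) confirms:

```python
# Brute-force verification of the van der Waerden number W(3,2) = 9.
from itertools import product

def has_mono_3ap(colouring):
    """colouring[i] is the colour of i+1; look for a monochromatic 3-term AP."""
    n = len(colouring)
    return any(colouring[a - 1] == colouring[a + d - 1] == colouring[a + 2 * d - 1]
               for a in range(1, n + 1)
               for d in range(1, (n - a) // 2 + 1))

def every_2colouring_has_3ap(n):
    return all(has_mono_3ap(c) for c in product((0, 1), repeat=n))

print(every_2colouring_has_3ap(8))  # False: e.g. 0,0,1,1,0,0,1,1 avoids mono 3-APs
print(every_2colouring_has_3ap(9))  # True: hence W(3,2) = 9
```

Only $2^9$ colourings need to be inspected, so the check is immediate; the general van der Waerden numbers grow far too fast for this approach.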
In reproving a theorem of Dickson on a modular version of Fermat’s conjecture, Schur showed the following:
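Schur’s theorem states that for every $r$ there is an $N$ such that every $r$-colouring of $\{1,\dots,N\}$ contains monochromatic $x$, $y$, $z$ with $x+y=z$. For two colours the least such $N$ is $5$, as the following small search (mine, not from the lecture) confirms:

```python
# Brute-force verification of the Schur number for two colours.
from itertools import product

def has_mono_schur_triple(colouring):
    """colouring[i] is the colour of i+1; look for x + y = z in one colour class."""
    n = len(colouring)
    return any(colouring[x - 1] == colouring[y - 1] == colouring[x + y - 1]
               for x in range(1, n + 1)
               for y in range(x, n + 1)
               if x + y <= n)

def every_2colouring_has_schur_triple(n):
    return all(has_mono_schur_triple(c) for c in product((0, 1), repeat=n))

print(every_2colouring_has_schur_triple(4))  # False: {1,4} / {2,3} is sum-free
print(every_2colouring_has_schur_triple(5))  # True
```

Note that $x=y$ is allowed here, so $2+2=4$ already rules out putting $2$ and $4$ in the same class.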
Hilbert’s cube lemma is probably the earliest result which can be viewed as a Ramsey-type theorem (besides, of course, the pigeonhole principle). It was established in connection with investigations on the irreducibility of rational functions with integer coefficients.
Title: Bootcamp 2 (of 8)
Lecturer: Jaroslav Nešetřil.
Date: September 21, 2016.
Main Topics: The Rado graph, homogeneous structures, universal graphs
Definitions: Language, structures, homomorphisms, embeddings, homogeneity, universality, Rado graph (Random graph),…
Bootcamp 1 – Bootcamp 2 – Bootcamp 3 – Bootcamp 4 – Bootcamp 5 – Bootcamp 6 – Bootcamp 7 – Bootcamp 8
In this lecture we discussed some standard notions from model theory that will be used in the rest of the Bootcamp lectures. Further we discussed the Rado graph (also known as Random graph) as an example of a homogeneous structure.
Then an $L$-structure is a triple $\mathbf{A}=(A,L,\cdot^{\mathbf{A}})$, where $A$ is called the domain of $\mathbf{A}$ and $\cdot^{\mathbf{A}}$ the interpretation function. For $\cdot^{\mathbf{A}}$ we require that $R^{\mathbf{A}}\subseteq A^n$ for every $n$-ary relational symbol $R\in L$ (i.e. $R^{\mathbf{A}}$ is an $n$-ary relation on $A$) and that $f^{\mathbf{A}}$ is a function from $A^m$ to $A$ for every $m$-ary function symbol $f\in L$.
For simplicity, we usually don’t talk about the interpretation function and write and . If it is clear from the context, we sometimes abuse notation and write for both the symbol and its interpretation in a structure.
Constants can be regarded as unary singleton relations, or as 0-ary functions. However, in the Bootcamp lectures, we are only going to discuss relational structures, i.e. structures whose language only consists of relational symbols.
Injective homomorphisms are called monomorphisms, injective strong homomorphisms are called embeddings, bijective embeddings are called isomorphisms. An isomorphism from a structure to itself is called an automorphism of .
We say is a substructure , if and the identity is an embedding of into . If there is an embedding of , we call the image a copy of in .
Erdős and Rényi showed the paradoxical result that there is a unique (and highly symmetric) countably infinite random graph. We are going to discuss this graph and some of its properties in this section.
Suppose we have already constructed an isomorphism from a finite subset to . Then let be the first element of ; it gives us a partition of into the set of its neighbors and nonneighbors . By the extension property of , there is also a vertex in such that has an edge with all elements of and has no edge with elements of . By setting we extended the given isomorphism to .
To ensure that every vertex of is in the image of we alternate in the next step, finding a suitable preimage of the first element of . This can be done symmetrically by the extension property of .
Since both graphs and are countable, the union of this ascending sequence of finite isomorphisms is an isomorphism from to .
The technique used in proof above is known as backandforth argument or zigzag argument. This proof techniques appears also in other talks of the course, in particular in the proof of Fraïssé’s theorem in Bootcamp 5.
It is not difficult to show that there are graphs with the extension property. An explicit description of such a graph was given by Rado in 1964. The vertex set of the Rado graph is the natural numbers, where for $i<j$ there is an edge between $i$ and $j$ if and only if the binary representation of $j$ has a $1$ in its $i$th position.
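Rado’s description makes the graph easy to compute with. The sketch below (an illustration, not from the lecture) implements the adjacency relation and, given disjoint finite sets $A$ and $B$ of vertices, produces a vertex joined to everything in $A$ and to nothing in $B$, which is exactly the extension property used in the back-and-forth argument above:

```python
# Rado's explicit presentation of the random graph: vertices are the
# natural numbers; for i < j, {i, j} is an edge iff bit i of j is 1.

def adjacent(i, j):
    i, j = min(i, j), max(i, j)
    return i != j and (j >> i) & 1 == 1

def extension_witness(A, B):
    """A vertex adjacent to everything in A and nothing in B (A, B disjoint, finite).

    Setting exactly the bits listed in A, plus one high bit to push the
    witness above max(A | B), forces the required (non-)adjacencies.
    """
    m = max(A | B, default=0) + 1
    return sum(1 << a for a in A) + (1 << m)

A, B = {0, 2, 5}, {1, 3}
z = extension_witness(A, B)
assert all(adjacent(z, a) for a in A)
assert not any(adjacent(z, b) for b in B)
print(z)  # 101 = 2^0 + 2^2 + 2^5 + 2^6
```

The high bit $2^m$ guarantees the witness is a fresh vertex larger than everything in $A\cup B$, so the adjacency test always reads off bits of the witness itself.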
There is also a probabilistic characterization of such graphs by Erdős and Rényi, which preceded Rado’s construction. Let us denote by a random graph a probability distribution over graphs in which the edges are chosen independently, each with probability $1/2$ (Note by Michael: In the literature the term “random graph” sometimes also refers to graphs generated by some other random process).
Then the following holds:
By the above theorem the Rado graph is often also called the Random graph. The Rado graph has several other nice features, making it a highly symmetric structure:
Examples:
In the case of graphs a full classification of the homogeneous graphs is known: in the finite case this classification is due to Gardiner, in the countable case due to Lachlan and Woodrow.
We will hear more about homogeneous structures and a way of constructing them in Bootcamp 5.
Examples:
In this section we are going to show that not every class of countable structures admits a universal structure. A counterexample for graphs was given by Füredi and Komjáth:
We remark that this result was proven in a more general setting (substitute by any finite, 2-connected, but not complete graph); but here we only present a proof for .
(Michael: My notes on the proof of this lemma are not complete…)
Now let us take the hypergraph given by the above Lemma. Further let , be graphs on a 7-element vertex set, such that in the first 4 elements form a path and the last 3 a cycle; and in also the last 3 elements form a cycle and there is an edge from the first to the fourth vertex.
For every function we then form a free graph on , by replacing every hyperedge in by if and by if . Note that the graph is well-defined and free by the properties of .
Now assume that there is a countable universal free graph . Then has to embed all the graphs of the form ; for every let be an embedding of into . Since is countable, there are two graphs , , such that and agree on the set . But since , there has to be a minimal integer , where they disagree. Then, by construction of the graphs and , the union has to contain a 4-cycle. But this is a contradiction.
Title: Introduction to the KPT correspondence 3 (of 3).
Lecturer: Lionel Nguyen Van Thé.
Date: November 18, 2016.
Main Topics:
Definitions: Expansion property,
Lecture 1 – Lecture 2 – Lecture 3
In the second lecture we saw that the Ramsey property of (a combinatorial property) ensures universality of a certain minimal flow (a dynamical property). Today we’ll look at going from a dynamical property (minimality) to a combinatorial property (the expansion property).
Recall that we proved the following in the second lecture:
Here
Last time we saw that precompactness of the expansion allows us to topologically identify
We also saw that is a subset of a large compact product
Our main question today will be “What combinatorial properties guarantee that is a minimal flow?” More precisely, what condition must an expansion satisfy so that is minimal?
We start by reminding you about the expansion property (which we looked at in Bootcamp 4 and Bootcamp 7).
We say that has the expansion property (EP) (relative to ) when such that (expansions of respectively), we have embeds in .
When has the Joint Embedding Property, then (EP) is equivalent to such that (an expansion of ), we have embeds in .
Here is the major theorem we will prove.
“You have to understand the purpose!” – Nešetřil.
“The difficulty is really translating into dynamical language what the combinatorics mean.” – Lionel
Before proving this theorem, we prove two propositions which will contain all the heavy lifting. For notational simplicity you may assume that is just a single relation .
“(2) is the correct finitization of (1).”
By (1), for all we have embeds into . So there is a finite such that
which is open in .
In this way forms an open cover of .
By compactness, there are finite such that .
Let be the finite substructure of supported by .
Claim: witnesses the (EP) for .
This is all that remains to finish the proof that .
This induces an embedding . By ultrahomogeneity (for ) we can extend to an automorphism .
Then, for and we have
So setting for all we get .
Now , so there is an such that .
So
Thus embeds into .
We now prove . Fix witnessing the (EP).
Take an . Then, by the (EP),
So .
We can now combine this with the result from the second lecture (which tells us about universality) to get the following method for computing universal minimal flows.
This gives an explicit, combinatorial way to compute a universal minimal flow. You only need to find a precompact expansion of with (EP) and (RP). Often (RP) is used to prove (EP).
All of the universal minimal flows constructed in this way will be metrizable.
The following captures the uniqueness of a precompact expansion.
We saw in lecture 2 that the “smallness” of the universal minimal flow is dictated partly by the homogeneity and Ramsey properties of the group. The following theorem captures that notion.
Why metrizability? It is a reasonable “smallness” condition.
This was expanded by Zucker, and he was able to drop the condition, while capturing the Ramsey degree.
One way to interpret this result is that if you have a combinatorial property (3), then you get a precompact expansion with the (EP) and the (RP). This suggests (or at least seems to suggest) that precompact expansions are the relevant ones to consider.
Natural question (Tsankov 2009). Which satisfy these theorems? (Just knowing and not assuming (RP).)
Conjecture (Nguyen Van Thé 2012). When is precompact.
This conjecture was shown to be false in 2015 by Evans using a Hrushovski construction. See his DocCourse lectures.
Conjecture (Bodirsky, Pinsker). This should be true for finite languages.
“What does the finite language mean topologically? Something about growth rate of number of structures of cardinality ? Related to amenability? Maybe the arity matters? This might require more examples of high arity.”
Research has gone in many directions from the original KPT paper.
Main references:
Other works cited (Mike: I have to fix some of these. This is obviously unfinished.)
Title: Introduction to the KPT correspondence 2 (of 3).
Lecturer: Lionel Nguyen Van Thé.
Date: November 16, 2016.
Main Topics: Computing universal minimal flows, , why precompactness is important.
Definitions: Minimal flow, universal flow, Logic action, equivariant.
Lecture 1 – Lecture 2 – Lecture 3
Last time we looked at how the Ramsey property of a structure ensures that is extremely amenable.
Today we will look at what can be said about the dynamics of when is not Ramsey.
Last lecture we did not provide many examples of extremely amenable groups, so let us fix that now.
The underlying Ramsey principle here is the classical Ramsey theorem. This was the first known example of an extremely amenable group. Note that it comes seven years before the 2005 KPT paper.
The following examples were shown to be extremely amenable using the 2005 KPT correspondence, although the underlying Ramsey principles were already known.
Theorem (KPT, 2005). The following groups are extremely amenable. The needed Ramsey principle is in brackets.
In order to analyze what happens to when is not Ramsey, we will introduce the notion of a universal minimal flow, which at its heart is a canonical compact object we can associate to a group. The size (both topologically and in terms of cardinality) of a group’s universal minimal flow will be determined by the “amount of Ramsey” that the group has.
Here are two exercises to play around with these concepts.
For a fixed , the object that is universal in the class of minimal flows will be a canonical object we can associate to , called the universal minimal flow of . To make sense of this, we introduce the concept of universality and flow homomorphism.
Definition. Given flows and , a flow homomorphism is a map that is continuous and invariant.
A map is invariant if we have
These universal objects always exist, although the proof is nonconstructive.
Theorem. Let be a topological group. There is a minimal flow that is universal in the sense that for all minimal there is an onto flow homomorphism .
In addition, is unique (up to flow isomorphism). So is called the universal minimal flow of .
Typically will be hard to describe. The following facts show cases where they are easily understood.
Exercise.
Two other examples where is known.
The first known example of a nontrivial metrizable universal minimal flow is the following.
We will compute the universal minimal flow of . The original proof is due to Glasner-Weiss in 2002, but we will present a proof that is easier to generalize. You should compare this with their original proof.
Proof. By an earlier exercise, is a minimal flow, so we need “only” show that it is universal. So let and let .
Step 1: Use extreme amenability of a smaller group.
Fix a linear ordering such that .
In this way we have that which is extremely amenable by Pestov’s theorem. Note that . So induces an action . By extreme amenability of , there is a fixed point .
Step 2: Use uniform spaces to extend the group action.
Now consider the map that sends . Since we have that only depends on . Thus
We also see that
So, in this way we can think of, .
Assume for the moment that can be continuously extended to a map on all of . In this case is a compact subspace of containing (the fixed point), hence . Since is minimal, . So we are done.
Claim. can be continuously extended to a map on all of .
Proof of claim. We would like to show first that is uniformly continuous. What does that even mean in the nonmetric setting? How do we capture the interplay between the topology of and the group ?
We can’t assume that has a metric, but it will always have a unique uniformity, which will act like a metric for the purposes of defining uniform continuity.
To extend continuously, if you are familiar with uniform spaces:
If you aren’t familiar with uniform spaces, then just pretend that has a metric and do the same as above.
This part shows why this type of argument doesn’t always work.
This proof works directly when you replace by and is replaced by a closed subgroup such that
Question: What does “ is precompact” mean combinatorially? Put another way, what do such look like?
Since we can think of as an expansion of where , where is possibly infinite.
If the parity of is denoted by , then
is compact.
Here are two exercises to help you understand the interplay of these objects.
A priori, gives the box topology, which could be different from the product topology. However, precompactness guarantees that these are the same.
Exercise. Show that is precompact iff generates the product topology on , and every element of has only finitely many expansions in .
That is, is a precompact expansion of , hence the name.
In this case, we write
Recall that is minimal iff there is a flow homomorphism . Now for any minimal flow we take and see that .
Corollary. Under the same assumptions, any minimal subflow of is the universal minimal flow.
In particular, is metrizable.
In practice, computing this requires understanding what the minimal subflows of look like. This amounts to understanding when is minimal.
These are our overarching references
Here are the references to specific theorems we mentioned. (Mike: I’m missing a couple.)
Yesterday, however, I spent most of my day thinking about how we, as a collective of set theorists, teach axiomatic set theory. About that usual course: axioms, ordinals, induction, well-founded sets, reflection, \(V=L\) and the consistency of \(\mathsf{GCH}\) and \(\mathsf{AC}\), some basic combinatorics (clubs, Fodor's lemma, maybe Solovay or even Silver's theorem). Up to some rudimentary permutation. Continue reading...
]]>I’ve been really enjoying my new job at Time Service in Toledo. I’m about to finish my third month here, and I expect I’ll be staying with this job for quite a while. I find that working in business gives me a variety of interesting problems to solve, and although they’re not deep and abstract in the same way as math research problems, they still require a lot of creative thinking and give me challenges to work on over time and puzzles to chew on as I drift off to sleep, in my morning shower, etc., just like math research did. The whole operation of helping to run a business feels like a big optimization problem — how do I figure out the best way to use all of our company’s resources to the greatest effect?
I hope all my friends in the New York Logic community are doing well. Please keep in touch!
]]>I was very happy when the professor, Matania Ben-Artzi, allowed me to write a final paper about the usage of the axiom of choice in the course, instead of taking an exam. Continue reading...
]]>Below are 15 problems from the course. Originally I was only going to list 5, but it was hard enough to narrow it down to 15. I attempted to showcase a variety of problems that utilize different ways of thinking. I’m intentionally not providing any solutions. Some of these problems are classics or variations on classics. Have fun playing!
If you want to see more problems from the course, go here.
Note: The #loveyourmath 5-day campaign is sponsored by the Mathematical Association of America. The goal of the campaign is to engage a general audience across a broad representation of mathematics, whether it is biology, patterns, textbooks, art, or puzzles.
]]>It turns out that up to isomorphism, there are exactly 5 groups of order 8. Below are representatives from each isomorphism class:
The first three groups listed above are abelian while the last two are not. It’s a fairly straightforward exercise to prove that none of these groups are isomorphic to each other. It’s a bit more work to prove that the list is complete. The Fundamental Theorem of Finitely Generated Abelian Groups guarantees that we haven’t omitted any abelian groups of order 8. Handling the nonabelian case is trickier. If you want to know more about how to prove that the classification above is correct, check out the Mathematics Stack Exchange post here, the GroupProps wiki page about groups of order 8, and the nice classification of all groups of order less than or equal to 8 that is located here.
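One mechanical way to see that no two of the five groups are isomorphic: isomorphic groups must have the same multiset of element orders, and for order 8 the five order profiles turn out to be pairwise distinct. The sketch below checks this; the particular encodings of the groups are my own choices, not taken from the text.

```python
from itertools import product
from collections import Counter

def element_orders(elements, op, identity):
    """Multiset of element orders in a finite group given by a binary op."""
    orders = []
    for g in elements:
        k, x = 1, g
        while x != identity:
            x, k = op(x, g), k + 1
        orders.append(k)
    return Counter(orders)

# The three abelian groups, as tuples of residues under componentwise addition.
def cyclic(*mods):
    elems = list(product(*[range(m) for m in mods]))
    op = lambda a, b: tuple((x + y) % m for x, y, m in zip(a, b, mods))
    return elems, op, tuple(0 for _ in mods)

# D4 as pairs (i, j) standing for r^i s^j, with r^4 = s^2 = 1 and s r = r^-1 s.
d4_elems = list(product(range(4), range(2)))
def d4_op(a, b):
    i, j = a
    k, l = b
    return ((i + (k if j == 0 else -k)) % 4, (j + l) % 2)

# Q8 = {+/-1, +/-i, +/-j, +/-k} via the quaternion multiplication rules.
q8_elems = [(s, u) for s in (1, -1) for u in '1ijk']
q8_table = {'11': (1, '1'), '1i': (1, 'i'), '1j': (1, 'j'), '1k': (1, 'k'),
            'i1': (1, 'i'), 'ii': (-1, '1'), 'ij': (1, 'k'), 'ik': (-1, 'j'),
            'j1': (1, 'j'), 'ji': (-1, 'k'), 'jj': (-1, '1'), 'jk': (1, 'i'),
            'k1': (1, 'k'), 'ki': (1, 'j'), 'kj': (-1, 'i'), 'kk': (-1, '1')}
def q8_op(a, b):
    s, u = q8_table[a[1] + b[1]]
    return (a[0] * b[0] * s, u)

groups = {'Z8': cyclic(8), 'Z4xZ2': cyclic(4, 2), 'Z2^3': cyclic(2, 2, 2),
          'D4': (d4_elems, d4_op, (0, 0)), 'Q8': (q8_elems, q8_op, (1, '1'))}
profiles = {name: element_orders(*g) for name, g in groups.items()}
# All five order profiles are distinct, so no two of the groups are isomorphic.
assert len(set(frozenset(p.items()) for p in profiles.values())) == 5
```

Note the caveat: distinct order profiles prove non-isomorphism, but matching profiles would not have proven isomorphism; here we are lucky that the cruder invariant already separates all five groups.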
Since groups have binary operations at their core, we can represent a finite group using a table, called a group table. In order to help our minds recognize patterns in the table, we can color the entries in the table according to which group element occurs. Of course, if we rearrange the column and row headings of the table, we have to rearrange or recolor the entries of the table accordingly. Doing so may make some patterns more or less visually recognizable. Similar to the book Visual Group Theory by Nathan Carter (Bentley University), I utilize colored group tables in several chapters of An Inquiry-Based Approach to Abstract Algebra, which is an open-source abstract algebra book that I wrote to be used with an IBL approach to the subject.
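As a tiny illustration of the colored-table idea (my own minimal example, not code from either book), here is the Cayley table of Z8; assigning one color per group element turns this grid into the corresponding quilt, and the Latin-square property of group tables is why no row or column of a quilt repeats a color.

```python
n = 8
elements = list(range(n))
# Cayley table of Z8: entry (row, col) is (row + col) mod 8.
table = [[(a + b) % n for b in elements] for a in elements]
for row in table:
    print(' '.join(str(x) for x in row))
# Every row and every column is a permutation of the elements (a Latin
# square), since x -> g*x and x -> x*g are bijections in any group.
assert all(sorted(row) == elements for row in table)
assert all(sorted(col) == elements for col in zip(*table))
```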
While I was teaching out of Carter’s book during the summer of 2009, one of my students (Michelle Reagan) made five quilts that correspond to colored group tables for the five groups of order 8. Here are pictures of the quilts.
It’s a fun exercise to figure out which quilt corresponds to which group. I’ll leave it to you to think about.
]]>This text presents the Eulerian numbers in the context of modern enumerative, algebraic, and geometric combinatorics. The book first studies Eulerian numbers from a purely combinatorial point of view, then embarks on a tour of how these numbers arise in the study of hyperplane arrangements, polytopes, and simplicial complexes. Some topics include a thorough discussion of gamma-nonnegativity and real-rootedness for Eulerian polynomials, as well as the weak order and the shard intersection order of the symmetric group.
The book also includes a parallel story of Catalan combinatorics, wherein the Eulerian numbers are replaced with Narayana numbers. Again there is a progression from combinatorics to geometry, including discussion of the associahedron and the lattice of noncrossing partitions.
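For readers who want to play with the two number triangles the book pairs up, both are easy to generate: Eulerian numbers via their standard recurrence, and Narayana numbers via the closed form N(n,k) = (1/n) C(n,k) C(n,k-1). A quick sketch (mine, not from the book):

```python
from math import comb

def eulerian_row(n):
    """Row n of the Eulerian triangle, A(n,0..n-1), via the recurrence
    A(n,k) = (k+1)*A(n-1,k) + (n-k)*A(n-1,k-1)."""
    row = [1]                      # row for n = 1
    for m in range(2, n + 1):
        row = [(k + 1) * (row[k] if k < len(row) else 0)
               + (m - k) * (row[k - 1] if k >= 1 else 0)
               for k in range(m)]
    return row

def narayana(n, k):
    """Narayana number N(n,k) = (1/n) C(n,k) C(n,k-1), for 1 <= k <= n."""
    return comb(n, k) * comb(n, k - 1) // n

assert eulerian_row(4) == [1, 11, 11, 1]
assert sum(eulerian_row(5)) == 120                  # rows sum to n!
assert [narayana(4, k) for k in range(1, 5)] == [1, 6, 6, 1]
# Narayana numbers refine the Catalan numbers: row sums are Catalan numbers.
assert sum(narayana(4, k) for k in range(1, 5)) == 14
```

The two final assertions make the "parallel story" concrete: Eulerian rows sum to factorials, Narayana rows to Catalan numbers.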
The final chapters discuss how both the Eulerian and Narayana numbers have analogues in any finite Coxeter group, with many of the same enumerative and geometric properties. There are four supplemental chapters throughout, which survey more advanced topics, including some open problems in combinatorial topology.
This textbook will serve as a resource for experts in the field as well as for graduate students and others hoping to learn about these topics for the first time.
Generally speaking, most of my research in pure mathematics falls in the category of algebraic combinatorics. However, I’ve had very little formal training in combinatorics. It turns out that I know quite a bit about Catalan combinatorics, but again, it’s not a subject that I’ve explicitly studied. Prior to opening the book, I knew next to nothing about Eulerian numbers, let alone Narayana numbers.
Right around the time I found out I would be teaching our graduate combinatorics class during the Fall 2016 semester, I learned about Kyle’s book. I was really looking forward to teaching the class because I figured that one of the best ways to fill in my lack of formal training in combinatorics was to teach a class about it. After thumbing through Kyle’s book (and thinking, “wow, I don’t really know any of this stuff!”), I decided that I could run the class as a sort of “topics course” focusing on Eulerian numbers and Catalan combinatorics while hitting many of the core ideas of enumerative combinatorics along the way. As a bonus, I would be forced to learn lots of cool things that relate to my research interests, many of which I probably should have know more about anyway.
I’m currently in week 5 of my Topics in Combinatorics graduate course in which we are closely following Kyle’s book. Despite the fact that we’ve barely covered two chapters, I’m absolutely in love with the book and the content. It’s so much fun! I have to admit that I don’t always know which specific topics are key ideas and which are just fun side stories, but I think that’s mostly true every time one teaches a course for the first time. One of the things I really like about the themes in the book is that they connect with cutting-edge research topics. We’re learning about “current events” in algebraic/enumerative combinatorics.
My only minor complaint is that I wish Kyle provided less detail in the hints/solutions for the exercises in the back of the book. On the other hand, there have been a couple times where I’ve thought, “geez, there’s no way I would have ever come up with that argument without significant guidance.”
]]>As a side project, I hope to find some time to do a bit of research for MIRI. I’ve discussed MIRI research in a couple of recent posts here. I plan to continue updating this blog with stuff on MIRI research and other updates on my life. I’ll miss my colleagues in New York, and I hope we keep in touch. My students are welcome to keep in touch as well.
]]>Quantilization is a form of mild optimization where you tell an AI to choose something at random from (for instance) the top 10% of best solutions, rather than taking the best solution. This helps to get around the problem of an agent whose values are mostly aligned with yours but that does pathological things when it takes its values to the extreme. In this paper, we examine a similar process, but involving two (or more) agents rather than one.
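The single-agent version described above is simple enough to sketch in a few lines. The toy below (my own illustration, not the paper's multi-agent construction) samples uniformly from the top q fraction of candidates instead of taking the argmax:

```python
import random

def quantilize(candidates, utility, q=0.10, rng=random):
    """Mild optimization: rather than returning the argmax, sample
    uniformly at random from the top q fraction of candidates as
    ranked by the utility function."""
    ranked = sorted(candidates, key=utility, reverse=True)
    cutoff = max(1, int(len(ranked) * q))
    return rng.choice(ranked[:cutoff])

rng = random.Random(0)
actions = range(100)
utility = lambda a: -(a - 70) ** 2   # peaked at 70; tails are "pathological"
choice = quantilize(actions, utility, q=0.10, rng=rng)
# The choice always lands near the optimum, but is not pinned to the single
# extreme-seeking action the way a pure argmax would be.
assert abs(choice - 70) <= 5
```

The point of the final assertion: with q = 0.10 the agent is guaranteed a near-optimal action while avoiding the brittleness of always taking the exact maximizer.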
For those of you who were also at the MSFP, you can read some additional discussion of the paper here. The main idea is that Connor is working on a simulation to help test the ideas in the paper. If you’re interested in helping with the simulation but don’t have access to the forum post linked above, get in touch with me.
]]>Their research has a fair amount of overlap with mathematical logic. I’d encourage any logicians who are interested in this sort of thing to get involved. It’s a very good and important cause; the future of humanity is at stake. Unaligned artificial intelligence could destroy us all in a way that makes nuclear war and global warming seem tame in comparison.
Their technical research agenda is a good place to start for a technical perspective. The book Superintelligence by Nick Bostrom is a good starting point for a less technical introduction and to help understand why MIRI’s agenda is important and nontrivial.
One area of MIRI research that I find particularly interesting has to do with a version of Prisoner’s Dilemma played by computer programs that are allowed to read each other’s source code. This work makes use of a bounded version of Löb’s theorem. Actually, a fair bit of MIRI research relates to Löb’s theorem. Here is a good introduction.
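The genuine results rest on bounded proof search and Löb's theorem, which is hard to condense into a snippet; the toy below only illustrates the simplest source-reading strategy, often called CliqueBot (cooperate exactly when the opponent's source matches your own). The source-as-string encoding and all names are my own simplification.

```python
# A "bot" is a pair (source, strategy), where the strategy maps the
# opponent's source text to a move 'C' or 'D'.  Pairing each strategy with
# its own source string sidesteps real source introspection.
def make_cliquebot():
    src = "cliquebot-v1"      # stands in for the bot's full source text
    return src, lambda opp_src: 'C' if opp_src == src else 'D'

def make_defectbot():
    return "defectbot-v1", lambda opp_src: 'D'

def play(bot1, bot2):
    """One round of open-source Prisoner's Dilemma: each bot reads the
    other's source before choosing its move."""
    (s1, f1), (s2, f2) = bot1, bot2
    return f1(s2), f2(s1)

assert play(make_cliquebot(), make_cliquebot()) == ('C', 'C')  # mutual trust
assert play(make_cliquebot(), make_defectbot()) == ('D', 'D')  # not exploited
```

CliqueBot cooperates with exact copies of itself and is never exploited, but it defects against trivially rewritten copies; the Löbian agents in the MIRI work fix exactly this brittleness by proof search rather than string comparison.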
Feel free to contact me if you’d like to know more about how to get involved with MIRI research. Or you can contact MIRI directly.
]]>The levels of the two courses will be a bit different, but much of the material will be similar.
The notes are here. The topics are as follows:
This material is more classical, so there are many possible references. If you have not seen group theory before, I recommend Fraleigh's book.
Most of these references are somewhat advanced. I have included two general references (by Tao and by Tao–Vu) that contain a lot of foundational material; unfortunately, the book by Tao and Vu is not freely available on the web.
My first recommendation is Helfgott's lecture notes, "Crecimiento y expansión en SL2". First of all, they are in Spanish(!), but they also start at a fairly easy level and quickly build up to a very important result of Helfgott himself on growth in the group SL(2,p).
I recently had a chat with James Cummings about teaching. He said something that I knew long before, that being a good teacher requires a bit of theatricality. My best teacher from undergrad, Uri Onn, had told me when I started teaching, that being a good teacher is the same as being a good storyteller: you need to be able and mesmerize your audience and keep them on the edge of their seats, wanting more. Continue reading...
]]>However, we have a proof, a constructive proof that large cardinals are consistent. And they exist in an inner model of our universe. Continue reading...
]]>So, I'm fashionably late to the party (with some good excuse, see my previous post), but after the recent 200 terabytes proof for the coloring of Pythagorean triples, the same old questions are raised about whether or not at some point computers will be better than us in finding new theorems, and proving them too. Continue reading...
]]>If you don't follow arXiv very closely, I have posted a paper titled "Iterating Symmetric Extensions". This is going to be the first part of my dissertation. The paper is concerned with developing a general framework for iterating symmetric extensions, which, oddly enough, is something that we didn't really know how to do until now. There is a refinement of the general framework to something I call "productive iterations" which impose some additional requirements, but allow greater freedom in the choice of filters used to interpret the names. There is an example of a class-length iteration, which effectively takes everything that was done in the paper and uses it to produce a class-length iteration—and thus a class-length sequence of models—where slowly, but surely, Kinna-Wagner Principles fail more and more. This means that we are forcing "diagonally" away from the ordinals. So the models produced there will not be defined by their set of ordinals, and sets of sets of ordinals, and so on. Continue reading...
]]>But those people forget that \(0=1\) is also very true in the ring with a single element; or you know, just in any structure for a language including the two constant symbols \(0\) and \(1\), where both constants are interpreted to be the same object. And hey, who even said that \(0\) and \(1\) have to denote constants? Why not ternary relations, or some other thing? Continue reading...
]]>Enjoy! Continue reading...
]]>I'm visiting David Asperó in Norwich at the moment, and on Sunday, the 12th, I will return home. It seems that the pattern is that you work most of the day, then head for a few drinks and dinner. Mathematics is eligible for the first two beers, philosophy of mathematics for the next two, and mathematical education for the fifth beer. Then it's probably a good idea to stop. Also it is usually last call, so you kinda have to stop. Continue reading...
]]>We consider the iteration of quasiregular maps of transcendental type from to . In particular we study quasi-Fatou components, which are defined as the connected components of the complement of the Julia set.
Many authors have studied the components of the Fatou set of a transcendental entire function, and our goal in this paper is to generalise some of these results to quasi-Fatou components. First, we study the number of complementary components of quasi-Fatou components, generalising, and slightly strengthening, a result of Kisaka and Shishikura. Second, we study the size of quasi-Fatou components that are bounded and have a bounded complementary component. We obtain results analogous to those of Zheng, and of Bergweiler, Rippon and Stallard. These are obtained using techniques which may be of interest even in the case of transcendental entire functions.
]]>Our objective is to determine which subsets of arise as escaping sets of continuous functions from to itself. We obtain partial answers to this problem, particularly in one dimension, and in the case of open sets. We give a number of examples to show that the situation in one dimension is quite different from the situation in higher dimensions. Our results demonstrate that this problem is both interesting and perhaps surprisingly complicated.
]]>We study the class of functions meromorphic outside a countable closed set of essential singularities. We show that if a function in , with at least one essential singularity, permutes with a nonconstant rational map , then is a Möbius map that is not conjugate to an irrational rotation. For a given function which is not a Möbius map, we show that the set of functions in that permute with is countably infinite. Finally, we show that there exist transcendental meromorphic functions such that, among functions meromorphic in the plane, permutes only with itself and with the identity map.
]]>You can find the video here: Continue reading...
]]>In particular a successful mathematical idea is polished with the dust of the many failed ideas that preceded it. Continue reading...
]]>You can find the article on the ESTS' website "Resources" page, or in the Papers section of my website. Continue reading...
]]>If you happen to be a student and a member of the Association for Symbolic Logic, you can apply for an ASL travel award. For more information as to how, please see here. There's just enough time to still submit your request! Continue reading...
]]>Some of you may have known him through MathOverflow as "Avshalom" where he often appeared in the comments with generally useful references, and some of you may have known him in real life as a teacher or a colleague, or a student. Some of you may even have known him as Eoin Coleman. Continue reading...
]]>You can find that video right here: Continue reading...
]]>Not assuming the axiom of choice the definition of cofinality remains the same, if we restrict ourselves to ordinals and \(\aleph\) numbers. But why should we? There is a rich world out there, new colors that were not on the choicey rainbow from before. So anything which is inherently based on the ordering properties of the ordinals should not be considered as the definition of an ordinal. So first let's recall the two ways we can order cardinals without choice. Continue reading...
]]>So I raised a question in the comments, and got replies from two other people who kept repeating the age-old silly arguments of what are the elements of \(\RR\times\RR\) or what are these or those elements. And supposedly the correct pedagogical answer is "It does not matter what are the elements of \(\RR\times\RR\)." With that I strongly agree, and when I taught my students about ordered pairs on the very first class of the semester, I made it very clear that there are other ways to define ordered pairs and that we only do that because we want to show that there is at least one way in which ordered pairs can be realized as sets; but ultimately we couldn't care less about what way they encode ordered pairs into sets, as long as it is a "legal" way. Continue reading...
]]>So here is how I read a paper, and I'd like to ask you to think about how you read a paper, and why you read it this way. Continue reading...
]]>We construct a quasiregular map of transcendental type from to with a periodic domain in which all iterates tend locally uniformly to infinity. This is the first example of such behaviour in a dimension greater than two.
Our construction uses a general result regarding the extension of bi-Lipschitz maps. In addition, we show that there is a quasiregular map of transcendental type from to which is equal to the identity map in a half-space.
]]>So I figured, why not use this for explaining mathematical theorems. Continue reading...
]]>The case for support document from my grant application gives details of this conjecture, its importance, and the strategies that I hope to employ to work on it.
Excitingly, the university has agreed to fund a PhD student as part of this research. I just drafted a short description of what the PhD would be about, and I’ll post this below. (Note that this description might be edited a little over the next few days. In any case, it should give an idea of what the project will be about.) If you are interested, please get in touch!
]]>This programme of research is within the study of finite group theory (although some investigation of linear algebraic groups may also be involved). The aim is to prove, or partially prove, the Product Decomposition Conjecture which concerns “conjugate-growth” of subsets of a finite simple group: roughly speaking, given a finite nonabelian simple group G and a subset A in G of size at least 2, we would like to show that one can always write G as a product of “not many” conjugates of A.
This notion of conjugate-growth has connections to many interesting areas of mathematics, including expander graphs, the product-growth results of Helfgott et al., bases of permutation groups, word problems and more.
In the process of working on this conjecture, the student can expect to learn a great deal about the structure of finite simple groups (especially the simple classical groups) and, in particular, will study and make use of one of the most famous theorems in mathematics, the Classification of Finite Simple Groups.
We find a similar concept in Zelda's poem "Every man has a name" (לכל איש יש שם), which in Israel is closely associated with the Holocaust and with assigning numbers to people. But alas, we are all numbers in some database. Our ID numbers, employer number, the index under which you appear in the database. You are your phone number, and your bank account number. You are the aggregation of all these numbers. And more. Continue reading...
Richard Feynman, who was this awesome guy who did a lot of cool things (and also some physics (but I won't hold it against him today)), has a famous three-step algorithm for solving any problem.
In general, while I do find it entertaining to think about god, afterlife, or a concrete mathematical universe, I find more comfort in the uncertainty of existence than I do in the likelihood that my belief is wrong, or in the terrifying conviction that comes along with believing in something (and everyone else is wrong). Continue reading...
]]>I always preferred to be the master of my domain. The king of my castle. But literally, not in the Seinfeld-euphemism sense. In any case, I've been thinking about a page where I can post short thoughts about math, life and otherwise. The blog is not suitable, since I'm not going to add a post each time I have a new thought. So instead I've started a blurbs page. Each blurb has a number, and an anchored link that you can use in case you want to share it. Continue reading...
]]>If you are not on this list, you better hurry up to this application form and register! Come on, what are you waiting for??? Continue reading...
]]>There is no registration fee, but please register your attendance or obtain any further details by contacting Nick Gill. All events are held in rooms G310 and G311. Morning tea, lunch and afternoon tea are included and complimentary. There are limited funds available for dinner — please let us know if you would like to join us.
A list of titles and abstracts for all talks is now available.
Day 1

09:30  coffee
10:00  Session 1: Combinatorics and cryptography
12:00  lunch
13:30  Session 2: Numerically modelling the atmosphere
15:30  coffee
18:00  dinner

Day 2

09:30  coffee
10:00  Session 3: Operational Research
12:00  lunch
13:30  Session 4: Group Theory
15:30  coffee
18:00  dinner
The meeting is supported by an LMS Conference grant celebrating new appointments and the University of South Wales.
]]>But without the axiom of choice the world is indeed a strange place. This was posted as an answer on math.SE earlier today. Continue reading...
]]>We have verified, in the meantime, that the same person impersonating me on Quora is the one who used Isa's name in those comments. Continue reading...
]]>But if I want to be sure that I can finish next year, I should probably omit one of the problems I originally wanted to solve, and keep that for later, unless it turns out to be particularly simple when I finish the rest. Continue reading...
]]>Mathematics will often dangle in front of you some ideas, and you will work them out, to find a mistake. Then you will go back to the beginning, find new ideas that she had in store, work those out and proceed only to find a mistake much later. Then you go back to the beginning, and you find yet another minor idea that was missing, and now when everything works you continue. But then you find another gap, and you have to go back to the beginning and hope to find yet another idea. And don't get me started on those ideas that you find not to work during all these searches. Continue reading...
]]>It occurred to me today that this is a very Kurtzian story, if we take the Brando interpretation of Mistah Kurtz (he dead) in Apocalypse Now! (the Redux version is one of my favorite movies, I guess). In the movie Harrison Ford plays a tape where Kurtz is describing a snail crawling along the straight edge of a razor, crawling, slithering; this is his dream, this is his nightmare. Continue reading...
]]>Yesterday was the first day you could say the weather was characteristically spring; and today (as well as tomorrow) we are expecting a daytime heatwave and nighttime cold (e.g. Beer-Sheva is expecting a whopping 31 degrees centigrade during the day, and 13 during the night). Continue reading...
]]>So I am happy that I have only one course each day this semester. I am teaching two courses: Precalculus (Math 200), which meets on Tuesdays and Thursdays at 8 AM, and Elementary Algebra (Math 96), which meets on Mondays and Wednesdays at 9:15 AM. (Each class meets with me for a total of five hours per week.) Then on Fridays I have the set theory seminar at 10 AM at the Graduate Center, or occasionally a faculty seminar at LaGuardia at 9 AM, where we will prepare to teach a seminar for first-year LaGuardia students. I think that will be cool, because I really enjoyed my first-year seminar as an undergraduate student at Grinnell.
This morning schedule is a big change for me; I have been a total night owl for at least the last seven years, rarely getting up much before noon. But I think it will be good for my health to wake up more with the sun. It might be a rough adjustment period, but it will be worthwhile. As a bonus, if all goes well, I can leave work by mid to late afternoon most days and be able to go out in the city some weekday evenings for dinner or a show. (If all doesn’t go well, I’ll be buried in grading, course preparation, administrative work, etc., and rarely get out of here until late anyway. But I am optimistic that it will be better than that.) Another nice benefit of the schedule is that I can conveniently make myself available for 45 minutes’ worth of office hours four days per week, so that students have a better opportunity to see me.
The elementary algebra students seem like a good group. They really seemed to appreciate the activity of sharing their feelings towards math and their expectations for the course. The videos didn’t seem to be as effective; only a few students commented on them, but the initial discussion before the videos was quite fruitful. A few students told me that they hate math, but many (I think a majority, though I didn’t count) came in with positive attitudes towards math. Now it is my responsibility to help them maintain these positive attitudes and to work hard and succeed in the class. I’m up for the challenge.
]]>
http://www.ctpost.com/news/article/Hereswhyyoushouldstudyalgebra4710461.php
]]>
Here are a few terminological ideas that I doubt are going to be developed by anyone. But if you plan on doing something similar (or if my terminology inspires some proof) feel free to use these terms, and please let me know! Continue reading...
]]>Definition. Let \(V\) be a model of \(\ZFC\), and \(\PP\in V\) be a notion of forcing. We say that a cardinal \(\kappa\) is "colloopsed" by \(\PP\) (to \(\mu\)) if every \(V\)-generic filter \(G\) adds a bijection from \(\mu\) onto \(\kappa\), but there is an intermediate model \(N\subseteq V[G]\) satisfying \(\ZF\) in which there is no such bijection, but there is one for each \(\lambda\lt\kappa\). Continue reading...
]]>In case you forgot, \(\kappa\) is a huge cardinal if there is an elementary embedding \(j\colon V\to M\), where \(M\) is a transitive class containing all the ordinals, with critical point \(\kappa\), and \(M\) is closed under sequences of length \(j(\kappa)\). Continue reading...
]]>People often like to cite the paradoxical decomposition of the unit sphere given by Banach–Tarski. "Yes, it doesn't make any sense, therefore the axiom of choice needs to be omitted". Continue reading...
]]>But we can clearly see some various degrees of largeness by how much structure the existence of the cardinal imposes. Inaccessible cardinals prove there is a model for second-order \(\ZFC\), and Ramsey cardinals imply \(V\neq L\). Strongly compact cardinals even imply that \(\forall A(V\neq L[A])\). Continue reading...
]]>Forcing is horrible. If you can think about it, you can encode it into generic objects. If you can't think about it, you can encode it into generic objects. If you think that you can't encode it into generic objects, then you are probably wrong, and you can still encode it into generic objects. Continue reading...
]]>Materials:
In addition to this page, there is another page on Moodle with supplementary material for this course. The page is called Ayuda Algebra Lineal and can be found in the Applied Mathematics section of the School of Mathematics. The enrollment key is Ayuda2014, and students should use it only the first time they enroll.
If you have any further questions, you can
Well, of course the answer is negative. If \(\cal U\) is a free ultrafilter on \(\omega\) then \(\{X\subseteq\mathcal P(\omega)\mid X\cap\omega\in\cal U\}\) is a free ultrafilter on \(\mathcal P(\omega)\). But that doesn't mean that the question should be trivialized. What Yair asked was actually slightly subtler than that: is it consistent that there are free ultrafilters on \(\omega\), but no uniform ultrafilters on the real numbers? Continue reading...
]]>Neil deGrasse Tyson pushed a lot on the point that we really push the planet to its limits, and we might be close to the point of no return from which there is only a terrible Venuslike fate to this planet. And that is an important issue, no doubt. Continue reading...
]]>But both these analogies would be wrong. They only take you so far, and no further. And if you wish to give a proper explanation to your listener, there will be no escape from the eventual logic and set theory of it all. I have stopped using these analogies, or at least I'm doing my best to. I do, however, use the analogy of "How many roots does \(x^{42}-2\) have?" as an example of everyday independence (none in \(\mathbb Q\), two in \(\mathbb R\) and many in \(\mathbb C\)). But this is to motivate a different part of the explanation: the use of models of set theory (e.g. "How can you add a real number??", well, how can you add a root to a polynomial?) and the fact that we don't consider the universe per se. Of course, in a model of \(\ZFC\) we can always construct the rest of mathematics internally, but this is not the issue now. Just like we have a model of one theory, we can have a model of another. Continue reading...
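For the record, the root count behind that analogy is a routine verification, spelled out here for completeness:

```latex
% Over $\mathbb{Q}$: no roots, since $x^{42}-2$ is irreducible by
% Eisenstein's criterion at $p=2$.
% Over $\mathbb{R}$: exactly two roots, because the exponent is even:
\[ x = \pm\sqrt[42]{2}. \]
% Over $\mathbb{C}$: all $42$ roots,
\[ x = \sqrt[42]{2}\, e^{2\pi i k/42}, \qquad k = 0, 1, \ldots, 41. \]
```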
]]>Course plan

| Week | Material | Assessment |
|---|---|---|
| 1 | Review of Linear Algebra I | Mon 11/8: Assignment 1 handed out |
| 2 | Linear operators, similar matrices, eigenvalues, characteristic polynomials | Mon 18/8: Assignment 1 returned; Thu 21/8: Assignment 1 discussed |
| 3 | Invariant subspaces | Mon 25/8: Assignment 2 handed out |
| 4 | Simultaneous triangulation; simultaneous diagonalization; two difficult proofs | Mon 01/9: Assignment 2 returned; Thu 04/9: Assignment 2 discussed |
| 5 | Invariant direct sums; primary decomposition | Mon 8/9: Assignment 3 handed out |
| 6 | No classes this week. Note that many people had trouble with question 8 of Assignment 3; for an interesting discussion of this topic, go here. | Mon 15/9: holiday; Thu 18/9: Assignment 3 returned and Midterm 1 |
| 7 | Cyclic subspaces; cyclic decomposition | Mon 22/9: Assignment 4 handed out |
| 8 | The rational form; new examples of fields | Mon 29/9: Assignment 4 returned; Thu 02/10: Assignments 3 and 4 discussed |
| 9 | Forms and matrices; inner product spaces | Mon 06/10: Assignment 5 handed out |
| 10 | Properties of inner products; the Gram–Schmidt process | Mon 13/10: Assignment 5 returned; Thu 16/10: Assignment 5 discussed |
| 11 | Projections; orthogonal complements | Mon 20/10: Assignment 6 handed out |
| 12 | Unitary operators; orthogonal operators | Mon 27/10: Assignment 6 returned; Thu 30/10: Midterm 2 |
| 13 | Normal operators; Sylvester's law of inertia | Mon 03/11: Assignment 7 handed out |
| 14 | The classification of sesquilinear forms | Mon 10/11: Assignment 7 returned; Thu 13/11: Assignment 7 discussed |
| 15 | Quadratic forms; isometry groups | Mon 17/11: Assignment 8 handed out |
| 16 | Conic sections; the special theory of relativity | Mon 24/11: Assignment 8 returned; Thu 27/11: Assignment 8 discussed |
| 17 | Midterm 3 | |

If you have any further questions, you can
It's a short little proof that the classic downward Löwenheim–Skolem theorem is equivalent to \(\DC\), and that for a well-ordered \(\kappa\), the downward Löwenheim–Skolem theorem asserting the existence of models of cardinality \(\leq\kappa\) is in fact equivalent to the conjunction of \(\DC\) and \(\AC_\kappa\). Continue reading...
]]>Clearly, the theme is different now. I also changed the content of the Papers page. I removed the abstracts (for some reason I thought this was going to be a cool thing to have, but with time it grew to annoy me greatly). I will definitely post a few things there in the coming time, some notes and eventually some nice papers, I hope! Continue reading...
]]>Practical matters
Lecture notes
Exercises
I will provide full answers for the first set; thereafter, answers will only be provided on request.
Background reading
No one text covers all of the material in this course. Principal texts are as follows:
Additional texts of interest:
I have e-copies of most of the texts listed above and can provide them on request.
]]>Background on expanders:
On the sum-product phenomenon. The basic text is Tao and Vu, “Additive Combinatorics”. Here are a few other links:
On growth in non-abelian groups:
Expanders from groups:
Sieving:
Property T: The first construction of expander graphs was by Margulis and used property T, a representation-theoretic property that holds for certain discrete groups (SL_d(Z) with d > 2, for instance).
I recently tried to figure out the consequence of some forcing in \(\ZF\). This has led me to the following statement: Continue reading...
]]>Finite objects can be characterized in full using first-order logic. The fact that you can write down how many elements a set has is a huge thing. For example, every finite structure in a first-order language has a categorical axiomatization. If the language is finite, then the axiomatization is finite as well. Continue reading...
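For instance, a two-element structure in the empty language is pinned down, up to isomorphism, by a single sentence (a standard example, added here for illustration):

```latex
\[ \exists x\,\exists y\,\bigl(x \neq y \;\wedge\; \forall z\,(z = x \vee z = y)\bigr). \]
% Every model of this sentence has exactly two elements, and any two
% two-element structures in the empty language are isomorphic, so this
% one-sentence axiomatization is categorical.
```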
]]>One of the nicer things I'd done was to work with Thomas Johnstone on some preservation theorems related to forcing and choice principles (see also this announcement by Victoria Gitman). In order to clean up the proof a bit, I'll introduce a new definition which slightly extends the ideas originally discussed in Italy. So without further jibber jabber, let's talk mathematics. Continue reading...
]]>In a recent question on math.SE, a user asked whether or not we always have a strict inequality. Everyone sufficiently familiar with the basics of independence results would know that it is consistent to have \(2^{\aleph_0}=2^{\aleph_1}=\aleph_2\), in which case taking \(\mathfrak{p=r}=\aleph_0,\ \mathfrak{q=s}=\aleph_1\) gives us equality. But it's also trivial to see that we can always pick cardinals whose difference is large enough to keep the inequality true. Continue reading...
]]>For a set theorist, at least a "classical" set theorist (working within the confines of \(\ZF\) and its extensions to \(\ZFC\) and so on), a choice principle can aptly be defined as "Sentence \(\varphi\) in the language of set theory which is provable from \(\ZFC\) but independent from \(\ZF\)". Indeed that is how I think of choice principles, and how I referred to them in my masters thesis (albeit I prefaced that definition by pointing out its naivety). Continue reading...
]]>The answer is positive. It does require the axiom of choice. The counterexample is due to Läuchli, who constructed a model in which there is a vector space that is not finitely generated, but every proper subspace of it is finitely generated. Given such a vector space, it is obvious that no infinite set can be linearly independent. Continue reading...
]]>There are several changes from the printed and submitted version, but those are minor. The Papers page lists them. Continue reading...
]]>But a mathematician knows that a number is basically a notion which represents a quantity. We have so many numbers that I don't even know where to begin if I wanted to list them. Luckily most of the readers (I suppose) are mathematicians and so I don't have to. Continue reading...
]]>The paper challenges the hegemony of \(\ZFC\) as the set theory of choice. It offers an alternative in the form of \(\newcommand{\ETCS}{\axiom{ETCS}}\newcommand{\ETCSR}{\axiom{ETCS+R}}\ETCS\), a category-based set theory. The problem with \(\ETCS\) is that it is slightly weaker than \(\ZFC\). But we also know how much weaker: it lacks the expressibility of the full replacement schema. In this case we can just add a replacement-like schema of axioms to obtain \(\ETCSR\). Continue reading...
]]>It is a well-known fact (in \(\ZFC\), at least) that if \(V\) is a vector space, and \(V^\ast\) is the algebraic dual of \(V\), then \(V\cong V^{\ast\ast}\) if and only if \(\dim V<\infty\). Continue reading...
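The map witnessing the finite-dimensional case is the canonical evaluation embedding; the standard argument is sketched here for completeness:

```latex
\[ \iota\colon V \to V^{\ast\ast}, \qquad \iota(v)(f) = f(v) \quad\text{for } f \in V^{\ast}. \]
% $\iota$ is always injective: given $v \neq 0$, extend $\{v\}$ to a basis
% and choose $f \in V^{\ast}$ with $f(v) = 1$.
% It is surjective if and only if $\dim V < \infty$; when $\dim V$ is
% infinite, $\dim V^{\ast\ast} > \dim V$, so $V \not\cong V^{\ast\ast}$.
```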
]]>Should I make it about myself? About my life? About my academic status? How about I tell cool stories from my life, perhaps inebriated adventures? Army experiences? Maybe I should write about mathematics. Perhaps some nice proof or some nice theorem? Continue reading...
]]>