When people speak about math content in the context of the web they usually mean equational content (or simply equations). That is, they don’t mean content in a mathematical field (which often enough does not qualify as equations), they simply mean something that looks like an equation.
Now you might argue that an equation in physics is still basically mathematical content, but in reality both mathematicians and physicists will frequently disagree with you (and each other, possibly explosively so). You quickly get to the edge when considering chemical equations, and if you want to classify the nonsense notations in the life sciences you might question your sanity.
It’s not hard to understand why this is. For example, most typesetting tools with support for equations will have some kind of math mode for them. But I think it’s worthwhile differentiating the two, so I’ll try my best to stick to equational content. On the one hand, the importance of math on the web is often exaggerated because it is really nonmathematical equational content that makes up the majority (and even that is a blip on the radar). On the other hand, it does not help to confuse a field of study with what effectively comes down to a layout tradition.
Also, sorrynotsorry for misleading you with the title here.
The fundamental problem of equational content is that, well, it’s simply pretty terrible all around. It’s convoluted, extremely compressed, archaic, and generally undecipherable. It destroys academic careers by the millions, and it can often only be understood when you can see it written live (i.e., animated). At their best, equations are like good abstract drawings; at worst (usually?) they’re deafening gibberish.
Stray thoughts.
One. I always thought Bret Victor’s (in)famous Kill Math was largely wrong in the specifics of its criticism (for one, he seems to dismiss the incredible power of compression that differential equations exhibit, along with the obvious problems that stem from that compression). But he is of course utterly right with his incredible work exploring how modern media like the web allow for a much richer expression of human thought, one that opens the content up to more people, often by adding means of interacting with it, especially means for untrained people (like tiny humans).
Two. Every once in a while I’ve wondered: what if Tim Berners-Lee had given the web some basic building blocks for equations? Just a fraction and a square root; maybe instead of image renditions of print equations we’d have immediately seen the same creativity applied to equations as there was with hacking general layout (1px GIF, anyone?). Of course, that’s hopelessly romanticizing the evolution of the web. Why can’t I stop wondering?
Three. On and off (and I’ve come full circle on this several times) I’ve wondered whether math is ahead of other sciences on the web. I mean, the <math> tag was proposed in fricking HTML 3. So is math ahead? Maybe. But then why is scientific content so much more vibrant and transformative on the web compared to math?
The most obvious flaw of equational content is that it’s deeply rooted in print. Given the limitations of print technology, equational content has needed to adopt bad practices for such a long time that many people consider them good.
I’m not (just) thinking about the problem of general comprehension, as it is too tainted by poorly trained practitioners on all levels. Sure, equational content is often more difficult to parse than necessary, but that’s no different from poorly phrased prose.
The main problem is the tradition of abusing print technology to squeeze more and more variations of notation into the medium. The constant abuse of sub- and superscripts is a great example; if you need a variant of an object you’ve already introduced in your notation, just slap some sub/superscripts around it, et voilà, a new object.
The abuse of letters with different fonts is another horror in equational content. If you have ever run into a paper where a dozen variations of G appear, denoting a convoluted set of somewhat related concepts, you’ll know this horror well. Unbelievably enough, Unicode has deemed this abuse of notation important enough that we now have such wonders as the code point MATHEMATICAL BOLD ITALIC CAPITAL G (U+1D46E) in the Mathematical Alphanumeric Symbols block.
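If you doubt this wonder really exists, Python’s unicodedata module will happily confirm it (a quick sketch using nothing beyond the standard library):

```python
import unicodedata

# U+1D46E sits in the Mathematical Alphanumeric Symbols block (U+1D400 to U+1D7FF)
g = "\U0001D46E"
print(unicodedata.name(g))      # MATHEMATICAL BOLD ITALIC CAPITAL G
print(unicodedata.category(g))  # Lu: an uppercase letter, officially
```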
Another historical accident is stylistic separation. For example, in print it is abhorred to make math content bold when the surrounding content is bold (e.g., in a heading), yet on the web people complain that an equation in a link doesn’t get the correct text decoration (what would that even be?).
Obviously, there’s little point in criticizing the historic development of equational content. Given that print was mostly limited to (at best) grayscale with a limited character set, naturally people had to be creative. It is amazing what this accomplished.
The real problem comes up when pretending that this tradition should do more than vaguely inform a medium such as the web. The web has so far developed without much influence from equational content. It has adopted a rather different approach to separating content and presentation, and the traditions of equational content are essentially incompatible with the web’s approach.
I can find no argument for why the web stack should bend over backwards to accommodate these mostly quite bad traditions of equational content for print. This is perhaps similar to the situation of CSS paged media.
Obviously, it’s not like you shouldn’t be able to put traditional equational content on the web; you should (and you can very well today). But I’ve come to think it’s perfectly fine, in fact appropriate, that this continues to be a difficult problem. For example, traditional equational content is almost always inaccessible (without heuristic algorithms, i.e., guessing around); it’s basically a bunch of glyphs placed in weird 2D patterns (like above and below a line, which in turn is magically centered on some baseline and may or may not correspond to the notion of a mathematical fraction). Pretending that this is a basis for accessible rendering on the web strikes me as foolish (or ridiculously zealous).
If you think that all equational content should be limited to the traditions of the print era, fine. I think humanity can do better on the web. Though I think we would need to acknowledge that the (print) traditions enshrined in equational content are flawed and should (and invariably will) be replaced with better concepts and narratives that are appropriate for this medium.
Abstract: We study the complexity of the classification problem for countable models of set theory (ZFC). We prove that the classification of arbitrary countable models of ZFC is Borel complete, meaning that it is as complex as it can conceivably be. We then give partial results concerning the classification of countable well-founded models of ZFC.
@ARTICLE{GitmanHamkinsHolySchlichtWilliams:ForcingTheorem,
AUTHOR = {Victoria Gitman and Joel David Hamkins and Peter Holy and Philipp Schlicht and Kameryn Williams},
TITLE = {The exact strength of the class forcing theorem},
PDF = {https://boolesrings.org/victoriagitman/files/2017/07/Forcingtheorem.pdf},
NOTE = {Submitted},
EPRINT = {1707.03700},
}
We shall characterize the exact strength of the class forcing theorem, which asserts that every class forcing notion $\mathbb P$ has a corresponding forcing relation $\Vdash_{\mathbb P}$ satisfying the relevant recursive definition. When there is such a forcing relation, then statements true in any corresponding forcing extension are forced and forced statements are true in those extensions.
Unlike the case of set-sized forcing, where one may prove in ${\rm ZFC}$ that every set forcing notion $\mathbb P$ has its corresponding forcing relations, in the case of class forcing it is consistent with Gödel-Bernays set theory ${\rm GBC}$ that there is a proper class forcing notion $\mathbb P$ lacking a corresponding forcing relation, even merely for the atomic formulas. For certain forcing notions, the existence of an atomic forcing relation implies ${\rm Con}({\rm ZFC})$ and much more (see [1]), and so the consistency strength of the class forcing theorem goes strictly beyond ${\rm GBC}$, if this theory is consistent. Nevertheless, the class forcing theorem is provable in stronger theories, such as Kelley-Morse set theory. What is the exact strength of the class forcing theorem?
Our project here is to identify the exact strength of the class forcing theorem by situating it in the rich hierarchy of theories between ${\rm GBC}$ and ${\rm KM}$, displayed in part in the above diagram, with the class forcing theorem highlighted in blue. It turns out that the class forcing theorem is equivalent over ${\rm GBC}$ to an attractive collection of several other natural settheoretic assertions. So it is a robust axiomatic principle.
The main theorem is naturally part of the emerging subject we call the reverse mathematics of second-order set theory, a higher analogue of the perhaps more familiar reverse mathematics of second-order arithmetic. In this new research area, we are concerned with the hierarchy of second-order set theories between ${\rm GBC}$ and ${\rm KM}$ and beyond, analyzing the strength of various assertions in second-order set theory, such as the principle ${\rm ETR}$ of elementary transfinite recursion, the principle of $\Pi^1_1$-comprehension or the principle of determinacy for clopen class games, and so on. We fit these set-theoretic principles into the hierarchy of theories over the base theory ${\rm GBC}$. The main theorem of this article does exactly this with the class forcing theorem, by finding its exact strength in relation to nearby related theories in this hierarchy.
Specifically, extending the analysis of [1] and [2], we show in our main theorem that the class forcing theorem is equivalent over ${\rm GBC}$ to the principle of elementary transfinite recursion ${\rm ETR}_{\rm Ord}$ for transfinite class recursions of length ${\rm Ord}$; it is equivalent to the existence of a truth predicate for the infinitary language of set theory $\mathcal{L}_{{\rm Ord},\omega}(\in,A)$, with any fixed class parameter $A$; to the existence of a truth predicate in the more generous infinitary language $\mathcal{L}_{{\rm Ord},{\rm Ord}}(\in,A)$; to the existence of ${\rm Ord}$-iterated truth predicates for the first-order language $\mathcal{L}_{\omega,\omega}(\in,A)$; to the existence of set-complete Boolean class completions of any separative class partial order; and to the principle of determinacy for clopen class games of rank at most ${\rm Ord}+1$. We shall prove several of the separations indicated in the figure above, such as the fact that the class forcing theorem is strictly stronger in consistency strength than having ${\rm ETR}_\alpha$ simultaneously for all ordinals $\alpha$ and strictly weaker than ${\rm ETR}_{{\rm Ord}\cdot\omega}$. The principle ${\rm ETR}_\omega$ is already sufficient to produce truth predicates for first-order truth, relative to any class parameter. Thus, our results locate the class forcing theorem rather finely in the hierarchy of second-order set theories.
Main Theorem: The following are equivalent over Gödel-Bernays set theory ${\rm GBC}$.
@ARTICLE{PeterHolyRegulaKrapfPhilippLuckeAnaNjegomirPhilippSchlicht:classforcing1,
AUTHOR = {Peter Holy and Regula Krapf and Philipp L\"{u}cke and Ana Njegomir and Philipp Schlicht},
TITLE = {Class Forcing, the Forcing Theorem and Boolean Completions},
NOTE = {To appear in the Journal of Symbolic Logic}
}
@ARTICLE{PeterHolyRegulaKrapfPhilippSchlicht:classforcing2,
AUTHOR = {Peter Holy and Regula Krapf and Philipp Schlicht},
TITLE = {Characterizations of Pretameness and the {O}rd-cc},
NOTE = {Preprint}
}
Abstract: We introduce alternative definitions of density points in Cantor space (or Baire space) which coincide with the usual definition of density points for the uniform measure on ${}^{\omega}2$ up to a set of measure $0$, and which depend only on the ideal of measure $0$ sets but not on the measure itself. This allows us to define the density property for the ideals associated to tree forcings analogous to the Lebesgue density theorem for the uniform measure on ${}^{\omega}2$. The main results show that among the ideals associated to well-known tree forcings, the density property holds for all such ccc forcings and fails for the remaining forcings. In fact we introduce the notion of being stem-linked and show that every stem-linked tree forcing has the density property.
This is joint work with Philipp Schlicht, David Schrittesser and Thilo Weinert.
Abstract: We consider the classification problem for several classes of countable structures which are “vertex-transitive”, meaning that the automorphism group acts transitively on the elements. (This is sometimes called homogeneous.) We show that the classifications of countable vertex-transitive digraphs and partial orders are Borel complete. We identify the complexity of the classification of countable vertex-transitive linear orders. Finally we show that the classification of vertex-transitive countable tournaments is properly above $E_0$ in complexity.
I gave a three-lecture tutorial at the 6th European Set Theory Conference in Budapest, July 2017.
Title: Strong colorings and their applications.
Abstract. Consider the following questions.
It turns out that all of the above questions can be decided (in one way), provided that there exists a certain “strong coloring” (or “wild partition”) of a corresponding uncountable graph.
In this tutorial, we shall present some of the techniques involved in constructing such strong colorings, and demonstrate how partial orders/topological spaces/algebraic structures may be derived from these colorings.
Lecture 1 ** Lecture 2 ** Lecture 3
I am going to give a talk about the applications of functional endomorphic Laver tables to public key cryptography. In essence, cryptosystems based on non-abelian groups extend to cryptosystems based on self-distributive algebras, and the functional endomorphic Laver tables are, as far as I can tell, a good platform for these cryptosystems.
ABSTRACT: We shall use the rank-into-rank cardinals to construct algebras which may be used as platforms for public key cryptosystems.
The well-known cryptosystems in group-based cryptography generalize to self-distributive algebra based cryptosystems. In 2013, Kalka and Teicher generalized the group-based Anshel-Anshel-Goldfeld key exchange to a self-distributive algebra based key exchange. Furthermore, the semigroup-based Ko-Lee key exchange extends in a trivial manner to a self-distributive algebra based key exchange. In 2006, Patrick Dehornoy established that self-distributive algebras may be used to build authentication systems.
The classical Laver tables are the unique algebras $A_{n}=(\{1,…,2^{n}-1,2^{n}\},*_{n})$ such that $x*_{n}(y*_{n}z)=(x*_{n}y)*_{n}(x*_{n}z)$ and $x*_{n}1=x+1\mod 2^{n}$ for all $x,y,z\in A_{n}$. The classical Laver tables are, up to isomorphism, the monogenerated subalgebras of the algebras of rank-into-rank embeddings modulo some ordinal. The classical Laver tables (and similar structures) may be used to recursively construct functional endomorphic Laver tables, which are self-distributive algebras of arbitrary arity. These functional endomorphic Laver tables appear to be secure platforms for self-distributive algebra based cryptosystems.
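The defining identities give a direct way to compute $A_{n}$: since $y*_{n}1=y+1\bmod 2^{n}$, self-distributivity yields $x*_{n}(y+1)=(x*_{n}y)*_{n}(x+1)$, and because $x*_{n}y>x$ whenever $x<2^{n}$, the rows can be filled in from $x=2^{n}$ downward. A sketch of this computation (my own illustration, not code from the talk):

```python
def laver_table(n):
    """Compute the classical Laver table A_n on {1, ..., 2^n}.

    Uses x*1 = x+1 (mod 2^n) together with the consequence of left
    self-distributivity x*(y+1) = (x*y)*(x+1).  Rows are filled from
    x = 2^n downward, so table[x*y] is always already available.
    """
    N = 2 ** n
    wrap = lambda z: z - N if z > N else z      # reduce x+1 into {1, ..., N}
    table = [None] * (N + 1)                    # table[x][y] will hold x *_n y
    for x in range(N, 0, -1):
        row = [None] * (N + 1)
        row[1] = wrap(x + 1)                    # x * 1 = x+1 (mod 2^n)
        for y in range(1, N):
            z, w = row[y], wrap(x + 1)          # x*(y+1) = (x*y) * (x+1)
            row[y + 1] = wrap(z + 1) if w == 1 else table[z][w]
        table[x] = row
    return table

# The first row of A_2 is the familiar periodic pattern 2, 4, 2, 4.
```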
The functional endomorphic Laver table based cryptosystems should be resistant to attacks from adversaries who have access to quantum computers. The functional endomorphic Laver table based cryptosystems will be the first real-world application of large cardinals!
If you have noticed, this post had very little content to it. Why does a content-free post get so much more attention than a post which actually has content? It seems like the people here are attracted to contentless clickbait posts much more than they are to actual content. This is really pissing me off. Your attraction to clickbait articles and false and hateful rumors is disgusting and deplorable. You should be interested in new ideas instead of immediately rejecting them out of hatred. People have been ruining my reputation since they are more interested in stupid rumors than in truth.
I realize that some of my posts are quite technical, but some of them are not. Some of them just announce some new program that I have written which you can launch in your browser where the only thing you have to do is read a couple directions and click a few buttons and produce a few pictures that you can hang on your wall. You do not even have to do anything.
Still, the things you can do well, you obviously should. And yet, every once in a while, somebody throws you a curveball and you just have to shout: this is why we can’t have good things!
The other day on a client project, the QA specialist pointed out that the content was consistently using <em> where it should be using <i>. Can we fix that?
The semantics of these and related HTML5 tags are a bit subtle, but there is a difference, and it should be easy to just replace one with the other, right? Right? Famous last words.
At first sight, this was easy. The HTML came out of some JATS-like XML, which was using <italic> elements. So map to <i>, right? But hold on, you’ll say, HTML5 reinterpreted <i> to no longer indicate layout but semantics; it now indicates a change of voice. Unfortunately, JATS’s <italic> is focused on the typographic aspects, so it does not really help. Then again, it could help a little bit more because <italic> allows for a toggle attribute to indicate emphasis. Sadly, the actual XML did not provide that information.
Since the piece of the tool chain that turned <italic> into <em> was actually my doing, I was clearly at fault. However, I had my reasons. Namely, all of this came from a LaTeX source, and in this real-world LaTeX content, \emph{} and its brethren were the dominant source for <italic>. So clearly that should be <em> in the end?
Now of course, almost all LaTeX authors don’t give a damn beyond getting that PDF to look how they want it, so while they mostly use \emph{}-like macros, they mix them freely (and inconsistently) with \textit{} and its brethren. So the conversion (written by an absolute expert) rightly says “screw it, all I can say is it wants italics here”, thus merging them both together.
It’s my job to dig deeper than that so I took the time to look through the actual content available. Not the TeX, not the XML but the actual writing.
Lo and behold, the actual text use is pretty different: by far, most occurrences of <em> happened in the context of quick, inline definitions. Invariably, you find these in introductions of mathematical research articles, where you include commonly known definitions from a field so as not to cause bloat (because publishers and editorial boards continue to care more about page numbers than well documented research results).
A definition does not really fit either <i> or <em>. The closest you get in the spec is an example of using <i> to reference a past definition:
<p>The term <i>prose content</i> is defined above.</p>
To make matters worse, there is of course an entirely different element that fits perfectly:
The <dfn> element represents the defining instance of a term.
A perfect match for the vast majority of the content in question. So we should switch everything over, right?
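Following the spec’s own style, a defining instance would be marked up along these lines (a hypothetical sentence of my own, mirroring the spec example above):

```html
<p>A <dfn>prime number</dfn> is a natural number greater than 1 whose only divisors are 1 and itself.</p>
```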
The answer is, of course, no. Not because some content would end up with the wrong semantics (scroll to top) but because that was not the only use I found: almost without exception, the samples include the use as a definition alongside the use as <em> or <i>.
And that is why we can’t have good things.
All of this is about as surprising as finding a handwritten table of contents in a Word document. TeX is for print layout and font styles are used for all manners of cruelty. The question I had to answer with my client was: can we do anything about it?
In the end, beauty lies in the eye of the beholder and semantics in the eyes of the reader. We did, in fact, switch to <i>, with the plan to expose more information from the original source regarding emphasis so we can gather more data on its usage. Fundamentally, this won’t help because it doesn’t solve the problem of inline definitions. Still, some analysis might reveal pragmatic improvements down the line.
Then again, it’s not hard to argue that a definition that is well known in the field and that is done inline in the introduction of an article is more like the kind of reference to a definition as in the above example from the spec (in fact, often enough it is done in the vicinity of a bibliographic reference). Of course, we’re still conflating \emph and \textit.
Now zealots (sorry, idealists) will argue that authors “just” have to learn to use semantic macros in TeX. After all, there are plenty of “semantic” LaTeX packages out there; just start writing good markup already!
Besides the lack of pragmatism, the only viable solution I can see would be a LaTeX package matching specifically HTML5 markup. After all, we have the tags and they have established definitions; any “semantics” beyond that will only cause issues down the line (what if a tag is introduced to HTML but with a slightly different meaning?). Even then, it doesn’t solve the social problem at the heart of so many publishing technology issues: who would make the effort and use it? It’s extra work and does nothing for print; why would an author do extra work when they think print rules?
I think only someone interested in creating HTML output would make the effort. And at that point you have to ask: why would those authors bother with an archaic programming language like TeX to write HTML? They will find it invariably easier to just write HTML or their favorite lightweight markup for creating HTML (especially given the speed at which HTML-to-PDF solutions are improving). Building tools for LaTeX to solve this would just create extra work but help nobody. Just build better tools for writing HTML.
But that is another story and shall be told another time.
This past semester I taught the course for the second time. You can find the syllabus, list of problems, etc. for the Spring 2017 semester by going here. On the students’ final exam, I asked them which problem was their favorite from the semester. Below is the list of problems that they mentioned, including the number of votes that each received. The level of difficulty of the problems covers the spectrum. Some of these are not easy. Have fun playing!
A while back I wrote a similar post that highlighted 15 fun problems from the first time I taught the course. You’ll notice that there is some overlap between the two lists.
I believe that unless most of the world’s governments wage a cyber war against cryptocurrencies, in the somewhat near future cryptocurrencies will for the most part replace fiat currencies or at least compete with government fiat currencies. Governments today have an incentive to produce more fiat currency since they are able to fund their programs by printing more of their own money (producing more money is a bit more subtle than simply taxing people). However, there is no motivation for anyone to exponentially inflate any particular cryptocurrency (I would never own a cryptocurrency that continues to exponentially inflate its supply). For example, there will never be more than 21 million bitcoins. Since cryptocurrencies will not lose their value through exponential inflation, people will gain more confidence in cryptocurrencies than they have in fiat currencies. Furthermore, cryptocurrencies have the advantage that they are not connected to any one government and are supposed to be decentralized.
Hopefully the world will transition from fiat currencies to digital currencies smoothly, though. Cryptocurrencies are still very new and quite experimental, but they have the potential to disrupt the global economy. There are currently many problems and questions about cryptocurrencies which people need to solve. One of the main issues is that the proof-of-work problem for cryptocurrencies costs a lot of energy and resources, and these proof-of-work problems produce nothing of value other than securing the cryptocurrency. If instead the proof-of-work problems produced something of value beyond security, the public image of cryptocurrencies would improve, and as a consequence, governments and other organizations would be less willing to attack or hinder cryptocurrencies. In this post, I will give another attempt at producing useful proof-of-work problems for cryptocurrencies.
In my previous post on cryptocurrencies, I suggested that one could achieve both security and useful proofs-of-work by employing many different kinds of problems in the proof-of-work scheme instead of a single kind of problem. However, while I think such a proposal is possible, it will be difficult for the cryptocurrency community to accept and implement for a couple of reasons. First of all, in order for my proposal to work, one needs to find many different kinds of proof-of-work problems which are suitable for securing cryptocurrency blockchains (this is not an easy task). Second of all, even if all the proof-of-work problems are selected, the implementation of the proposal will be quite complex, since one will have to produce protocols to remove broken problems along with a system that can work together with all the different kinds of problems. Let me therefore propose a type of proof-of-work problem which satisfies all of the nice properties that hash-based proof-of-work problems satisfy but which will spur the development of reversible computers.
I am seriously considering creating a cryptocurrency whose proof-of-work problem will spur the development of reversible computers, and this post should be considered a preliminary outline of how such a proof-of-work currency would work. The next time I post about reversible computers spurred by cryptocurrencies, I will likely announce my new cryptocurrency, a whitepaper, and other pertinent information.
What are reversible computers?
Reversible computers are theoretical super-efficient classical computers which use very little energy because they can in some sense recycle the energy used in a computation. As analogies: an electric motor can recover the kinetic energy used to power a vehicle by running in reverse (regenerative braking), and recycled aluminum is used to make new cans. In a similar manner, a reversible computer can theoretically regenerate the energy used in a computation by running the computation in reverse.
A reversible computer is a computer whose logic gates are all bijective and hence have the same number of input bits as output bits. For example, the AND and OR gates are irreversible since the Boolean operations AND and OR are not bijective. The NOT gate is reversible since the NOT function is bijective. The Toffoli and Fredkin gates are the functions $T$ and $F$ respectively defined by
$$T:\{0,1\}^{3}\rightarrow\{0,1\}^{3},T(x,y,z)=(x,y,(x\wedge y)\oplus z)$$
and
$$F:\{0,1\}^{3}\rightarrow\{0,1\}^{3},F(0,y,z)=(0,y,z),F(1,y,z)=(1,z,y).$$
The Toffoli gate and the Fredkin gate are both universal reversible gates in the sense that circuits built from Toffoli gates alone, or from Fredkin gates alone, can simulate any Boolean circuit.
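These definitions are small enough to check directly. The sketch below (my own illustration) implements both gates and verifies that each is a bijection on $\{0,1\}^{3}$, and that fixing $z=0$ makes the Toffoli gate compute AND, which is the heart of its universality:

```python
from itertools import product

def toffoli(x, y, z):
    # T(x, y, z) = (x, y, (x AND y) XOR z)
    return (x, y, (x & y) ^ z)

def fredkin(c, y, z):
    # F swaps the last two bits exactly when the control bit c is 1
    return (c, y, z) if c == 0 else (c, z, y)

bits = list(product((0, 1), repeat=3))

# Reversibility: each gate permutes {0,1}^3
assert sorted(toffoli(*b) for b in bits) == bits
assert sorted(fredkin(*b) for b in bits) == bits

# With the third input fixed to 0, Toffoli computes AND in its last output
assert [toffoli(x, y, 0)[2] for x, y in product((0, 1), repeat=2)] == [0, 0, 0, 1]
```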
While reversible computation is a special case of irreversible computation, all forms of classical computation can be somewhat efficiently simulated by reversible circuits. Furthermore, when reversible computers are finally constructed, one should be able to employ a mix of reversibility and irreversibility to optimize efficiency in a partially reversible circuit.
Reversible computation is an improvement over classical computation since reversible computation is potentially many times more efficient. Landauer’s principle states that erasing a bit always costs at least $k\cdot T\cdot\ln(2)$ energy, where $T$ is the temperature and $k$ is the Boltzmann constant. Here the Boltzmann constant is $1.38064852\cdot 10^{-23}$ joules per kelvin. At a room temperature of 300 K, Landauer’s principle requires $2.8\cdot 10^{-21}$ joules for every bit erased. The efficiency of irreversible computation is limited by Landauer’s principle since irreversible computation requires one to erase many bits. On the other hand, there is no such limit to the efficiency of reversible computation since reversible computation does not require anyone to erase any bit of information. Since reversible computers are potentially more efficient than classical computers, they do not generate as much heat, and hence reversible computers can potentially run at much faster speeds than ordinary classical computers.
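The figure above is easy to reproduce (a quick sketch; I use the exact SI value of the Boltzmann constant fixed in 2019, which differs from the one quoted above only in the final digits):

```python
import math

k = 1.380649e-23                   # Boltzmann constant in joules per kelvin
T = 300.0                          # room temperature in kelvin
landauer = k * T * math.log(2)     # minimum energy to erase one bit

print(f"{landauer:.2e} J per erased bit")   # about 2.87e-21 J
```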
While the hardware for reversible computation has not been developed, we already have software that could run on these reversible computers. For example, the programming language Janus is a reversible programming language. One can therefore produce, test, and run reversible software even though super-efficient reversible computers do not yet exist.
The proof-of-work problem
Let me now give a description of the proof-of-work problem. Suppose that
The function $f$ can be computed by a reversible circuit, which we shall denote by $C$, with 132 different layers. The 128 layers in the circuit $C$ which are used to compute the functions $f_{1},…,f_{128}$ shall be called rounds. The circuit $C$ consists of 10880 Toffoli-like gates along with 576 CNOT gates (the circuit $C$ has a total of 11456 gates).
Since the randomizing function $f$ is randomly generated, one will be able to replace the function $f$ with a new randomly generated function $f$ periodically if one wants to.
Let $\alpha$ be an adjustable 256-bit number. Let $k$ be a 128-bit hash of the current block in the blockchain. Then the proof-of-work problem is to find a 128-bit nonce $\mathbf{x}$ such that $f(k\#\mathbf{x})\leq\alpha$. The number $\alpha$ will be periodically adjusted so that the expected amount of time it takes to obtain a new block in the blockchain remains constant.
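Mechanically, this is the familiar hash-below-target search; only the randomizing function is unusual. A sketch of the mining loop, with SHA-256 standing in for the circuit $C$ (a placeholder only: a real implementation would use the reversible circuit, and the modeling of $\#$ as concatenation with a 16-byte nonce is my own guess):

```python
import hashlib
from itertools import count

def pow_search(k: bytes, target: int) -> int:
    """Find a nonce x such that f(k # x) <= target.

    SHA-256 stands in for the randomizing permutation f; k # x is
    modeled as plain concatenation of the block hash k and a
    16-byte (128-bit) big-endian nonce.
    """
    for x in count():
        digest = hashlib.sha256(k + x.to_bytes(16, "big")).digest()
        if int.from_bytes(digest, "big") <= target:
            return x

# Lowering target (the alpha above) makes blocks harder to find,
# while verifying a claimed nonce remains a single evaluation of f.
```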
Efficiency considerations
It is very easy to check whether a solution to our proof-of-work problem is correct, but it is difficult to obtain a correct solution. Furthermore, the randomizing function $f$ in our proof-of-work problem can be computed just as easily on a reversible computer as on an irreversible computer. If reversible gates are only slightly more efficient than irreversible gates, then using a reversible computer to solve a proof-of-work problem will only be profitable if the randomizing function is specifically designed to be computed by a reversible circuit with no ancilla bits, no garbage bits, and which cannot be computed any faster by an irreversible circuit instead. Standard cryptographic hash functions are not built to be run on reversible computers since they require one to either uncompute or to stockpile garbage bits that will have to be erased anyway.
Security considerations
The security of the randomizing permutation $f$ described above has not yet been thoroughly analyzed. I do not know of any established cryptographic randomizing permutation $f:\{0,1\}^{n}\rightarrow\{0,1\}^{n}$ that is written simply as a composition of reversible gates. I therefore had to construct a cryptographic randomizing permutation specifically as a proof-of-work problem for cryptocurrencies.
There are several reasons why I have chosen to construct the function $f$ mostly by randomness instead of intentionally designing each gate. The circuit $C$ has depth 132, which is quite low. Perhaps I could slightly increase the security or the efficiency of the proof-of-work problem by designing each particular gate without any randomness, but I do not believe that I could increase the security or efficiency by much. If the function $f$ is constructed once using random or pseudorandom information, then one can set up a system to automatically replace $f$ periodically with a new function generated in the same manner. A randomly constructed function $f$ may even provide better security than one designed by hand, since a randomly constructed function appears harder to analyze. Another reason why I chose a randomly constructed circuit is that it has a much simpler design than a function such as the SHA-256 hash.
In the worst case, if the randomizing function is suspected to be insecure, one would have to increase the number of rounds in $f$ from 128 to a larger number such as 192 or 256 so that it becomes secure. It is far better if 128 rounds is found wanting before the cryptocurrency launches: increasing the number of rounds afterwards would require a hard fork, which would annoy users and miners and devalue the currency.
The security requirement for the randomizing permutation $f$ as a proof-of-work problem is weaker than the requirement for a randomizing permutation used to construct a cryptographic hash function or a symmetric encryption/decryption system. A cryptographic hash function or symmetric cipher must remain secure until the end of time, so an adversary may potentially spend unlimited resources attacking a single instance; in the symmetric case, the adversary will also have access to a large quantity of data encrypted with a particular key. By contrast, the current blockchain data $k$ changes every few seconds, so breaking the proof-of-work problem is a much harder target.
Reversible computers are halfway in between classical computers and quantum computers
I believe that the large-scale quantum computer will be by far the greatest technological advancement that has ever happened or ever will. When people invent the large-scale quantum computer, I will have to say, “we have made it.” Perhaps the greatest motivation for constructing reversible computers is that they will facilitate the construction of large-scale quantum computers. There are many similarities between the two: the unitary operations that make up quantum logic gates are reversible operations, and a large portion of most quantum algorithms consists of plain reversible computation. Both quantum and reversible computation must be adiabatic (no heat goes in or out) and isentropic (entropy remains constant). However, quantum computers are more complex and more difficult to construct than reversible computers, since quantum gates are unitary operators and the states inside a quantum computer must remain in a globally coherent superposition. A more modest goal, then, is to construct reversible computers first; the technology and knowledge used to build super-efficient reversible computers will likely carry over to the construction of quantum computers.
An anti-scientific attitude among the mathematics community
I have seen much unwarranted skepticism against Landauer’s principle and against the possibility that reversible computation could ever be more efficient than classical irreversible computation. The only thing I have to say to such skeptics is that you are wrong, you are science deniers, you are trolls, and you are just like the old Catholic church that persecuted Galileo.
I was able to obtain pictures from the endomorphic Laver tables simply by giving the coordinate $(i,j)$ the temperature and elevation $t^{\sharp}(\mathfrak{l}_{1},\mathfrak{l}_{2},\mathfrak{l}_{3})(\mathbf{0}^{i}\mathbf{1}^{j})$. Obviously, the images produced here show only a small portion of the functional endomorphic Laver tables, and the functional endomorphic Laver table operations are too complicated to be represented completely in visual form.
@ARTICLE{GitmanHamkins:GVP,
AUTHOR= {Victoria Gitman and Joel David Hamkins},
TITLE= {A model of the generic Vop\v enka principle in which the ordinals are not $\Delta_2$-Mahlo},
PDF={https://boolesrings.org/victoriagitman/files/2017/06/GenericVopenkawithOrdnotMahlo.pdf},
Note ={Submitted},
EPRINT ={1706.00843},
}
The Vopěnka principle is the assertion that for every proper class of first-order structures in a fixed language, one of the structures embeds elementarily into another. This principle can be formalized as a single second-order statement in Gödel-Bernays set theory ${\rm GBC}$, and it has a variety of useful equivalent characterizations. For example, the Vopěnka principle holds precisely when for every class $A$, the universe has an $A$-extendible cardinal, and it is also equivalent to the assertion that for every class $A$, there is a stationary proper class of $A$-extendible cardinals [1]. In particular, the Vopěnka principle implies that ${\rm ORD}$ is Mahlo: every class club contains a regular cardinal and indeed, an extendible cardinal and more.
To define these terms, recall that a cardinal $\kappa$ is extendible if for every $\lambda>\kappa$, there is an ordinal $\theta$ and an elementary embedding $j:V_\lambda\to V_\theta$ with critical point $\kappa$. It turns out that, in light of the Kunen inconsistency, this weak form of extendibility is equivalent to a stronger form, where one insists also that $\lambda<j(\kappa)$; but there is a subtle issue about this that will come up later in our treatment of the virtual forms of these axioms, where the virtual weak and virtual strong forms are no longer equivalent. Relativizing to a class parameter, a cardinal $\kappa$ is $A$-extendible for a class $A$, if for every $\lambda>\kappa$, there is an elementary embedding
$$j:\langle V_\lambda, \in, A\cap V_\lambda\rangle\to \langle V_\theta,\in,A\cap V_\theta\rangle$$
with critical point $\kappa$, and again one may equivalently insist also that $\lambda<j(\kappa)$. Every such $A$-extendible cardinal is therefore extendible and hence inaccessible, measurable, supercompact and more. These are amongst the largest large cardinals.
In the first-order ${\rm ZFC}$ context, set theorists commonly consider a first-order version of the Vopěnka principle, which we call the Vopěnka scheme: the scheme making the Vopěnka assertion of each definable class separately, allowing parameters. That is, the Vopěnka scheme asserts, of every formula $\varphi$, that for any parameter $p$, if $\{\,x\mid \varphi(x,p)\,\}$ is a proper class of first-order structures in a common language, then one of those structures elementarily embeds into another.
The Vopěnka scheme is naturally stratified by the assertions ${\rm VP}(\Sigma_n)$, for the particular natural numbers $n$ in the metatheory, where ${\rm VP}(\Sigma_n)$ makes the Vopěnka assertion for all $\Sigma_n$-definable classes. Using the definable $\Sigma_n$-truth predicate, each assertion ${\rm VP}(\Sigma_n)$ can be expressed as a single first-order statement in the language of set theory.
Hamkins [1] proved that the Vopěnka principle is not provably equivalent to the Vopěnka scheme, if consistent, although they are equiconsistent over ${\rm GBC}$; furthermore, the Vopěnka principle is conservative over the Vopěnka scheme for first-order assertions. That is, over ${\rm GBC}$ the two versions of the Vopěnka principle have exactly the same consequences in the first-order language of set theory.
In this article, we are concerned with the virtual forms of the Vopěnka principles. The main idea of virtualization, due to Schindler, is to weaken elementary-embedding existence assertions to the assertion that such embeddings can be found in a forcing extension of the universe. Gitman and Schindler [2] emphasized that the remarkable cardinals, for example, instantiate the virtualized form of supercompactness via the Magidor characterization of supercompactness. This virtualization program has now been undertaken with various large cardinals, leading to fruitful new insights (see [2], [3]).
Carrying out the virtualization idea with the Vopěnka principles, we define the generic Vopěnka principle to be the second-order assertion in ${\rm GBC}$ that for every proper class of first-order structures in a common first-order language, one of the structures admits, in some forcing extension of the universe, an elementary embedding into another. That is, the structures themselves are in the class in the ground model, but you may have to go to a forcing extension in order to find the elementary embedding.
Similarly, the generic Vopěnka scheme, introduced in [3], is the assertion (in ${\rm ZFC}$ or ${\rm GBC}$) that for every first-order definable proper class of first-order structures in a common first-order language, one of the structures admits, in some forcing extension, an elementary embedding into another.
On the basis of their work in [3], Bagaria, Gitman and Schindler had asked the following question:
Question: If the generic Vopěnka scheme holds, then must there be a proper class of remarkable cardinals?
There seemed good reason to expect an affirmative answer, even assuming only ${\rm gVP}(\Sigma_2)$, based on strong analogies with the non-generic case. Specifically, in the non-generic context Bagaria had proved that ${\rm VP}(\Sigma_2)$ is equivalent to the existence of a proper class of supercompact cardinals, while in the virtual context, Bagaria, Gitman and Schindler proved that the generic form ${\rm gVP}(\Sigma_2)$ is equiconsistent with a proper class of remarkable cardinals, the virtual form of supercompactness. Similarly, higher up, in the non-generic context Bagaria had proved that ${\rm VP}(\Sigma_{n+2})$ is equivalent to the existence of a proper class of $C^{(n)}$-extendible cardinals, while in the virtual context, Bagaria, Gitman and Schindler proved that the generic form ${\rm gVP}(\Sigma_{n+2})$ is equiconsistent with a proper class of virtually $C^{(n)}$-extendible cardinals.
But further, they achieved direct implications, with an interesting bifurcation feature that specifically suggested an affirmative answer to the above question. Namely, what they showed at the $\Sigma_2$-level is that if there is a proper class of remarkable cardinals, then ${\rm gVP}(\Sigma_2)$ holds, and conversely if ${\rm gVP}(\Sigma_2)$ holds, then there is either a proper class of remarkable cardinals or a proper class of virtually rank-into-rank cardinals. And similarly, higher up, if there is a proper class of virtually $C^{(n)}$-extendible cardinals, then ${\rm gVP}(\Sigma_{n+2})$ holds, and conversely, if ${\rm gVP}(\Sigma_{n+2})$ holds, then either there is a proper class of virtually $C^{(n)}$-extendible cardinals or there is a proper class of virtually rank-into-rank cardinals. So in each case, the converse direction achieves a disjunction with the target cardinal and the virtually rank-into-rank cardinals. But since the consistency strength of the virtually rank-into-rank cardinals is strictly stronger than the generic Vopěnka principle itself, one can conclude on consistency-strength grounds that it isn’t always relevant, and for this reason, it seemed natural to inquire whether this second possibility in the bifurcation could simply be removed. That is, it seemed natural to expect an affirmative answer to their question, even assuming only ${\rm gVP}(\Sigma_2)$, since such an answer would resolve the bifurcation issue and make a tighter analogy with the corresponding results in the non-generic/non-virtual case.
In this article, however, we shall answer the question negatively. The details of our argument seem to suggest that a robust analogy with the non-generic/non-virtual principles is achieved not with the virtual $C^{(n)}$-cardinals, but with a weakening of that property that drops the requirement that $\lambda<j(\kappa)$. This seems to offer an illuminating resolution of the bifurcation aspect of the results we mentioned from [3], because it provides outright virtual large-cardinal equivalents of the stratified generic Vopěnka principles. Because the resulting virtual large cardinals are not necessarily remarkable, however, our main theorem shows that it is relatively consistent with even the full generic Vopěnka principle that there are no $\Sigma_2$-reflecting cardinals and therefore no remarkable cardinals.
Main Theorem
@ARTICLE{Hamkins:VopenkaPrinciple,
AUTHOR= {Joel David Hamkins},
TITLE= {The Vop\v{e}nka principle is inequivalent to but conservative over the Vop\v{e}nka scheme},
Note ={manuscript under review},
EPRINT ={1606.03778},
archivePrefix = {arXiv},
primaryClass = {math.LO},
url = {http://jdh.hamkins.org/vopenkaprinciplevopenkascheme},
pdf ={http://boolesrings.org/victoriagitman/files/2016/07/Properclassgames.pdf},
}
@ARTICLE{GitmanSchindler:virtualCardinals,
AUTHOR= {Gitman, Victoria and Schindler, Ralf},
TITLE= {Virtual large cardinals},
Note ={Submitted},
pdf={https://boolesrings.org/victoriagitman/files/2017/03/virtualLargeCardinals.pdf},
}
@ARTICLE{BagariaGitmanSchindler:VopenkaPrinciple,
AUTHOR = {Bagaria, Joan and Gitman, Victoria and Schindler, Ralf},
TITLE = {Generic {V}op\v enka's {P}rinciple, remarkable cardinals, and the
weak {P}roper {F}orcing {A}xiom},
JOURNAL = {Arch. Math. Logic},
FJOURNAL = {Archive for Mathematical Logic},
VOLUME = {56},
YEAR = {2017},
NUMBER = {1-2},
PAGES = {1--20},
ISSN = {0933-5846},
MRCLASS = {03E35 (03E55 03E57)},
MRNUMBER = {3598793},
DOI = {10.1007/s00153-016-0511-x},
URL = {http://dx.doi.org/10.1007/s00153-016-0511-x},
pdf ={http://boolesrings.org/victoriagitman/files/2016/02/GenericVopenkaPrinciples.pdf},
}
As tradition decrees, we shall begin our show by taking a closer look at our number.
146 is an octahedral number (and thus a figurate number).
Even more amazingly 146 is an untouchable number which means it cannot be expressed as the sum of all the proper divisors of any positive integer (including itself). Can you guess how many untouchable numbers there are? Of course, infinitely many and, of course, this was first proved by Paul Erdős. But did you know that the only known proof that 5 is the only odd untouchable number depends on a stronger version of the Goldbach conjecture? Amazing!
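To make "untouchable" concrete, here is a short brute-force check (my own sketch, not from the carnival post). The bound is exhaustive because every composite $n$ has a proper divisor of size at least $\sqrt{n}$, so $s(n) = 146$ forces $n < 146^2$, while primes have aliquot sum $1$.

```python
def aliquot_sums_up_to(limit):
    """s[n] = sum of the proper divisors of n, computed sieve-style."""
    s = [0] * (limit + 1)
    for d in range(1, limit // 2 + 1):
        for m in range(2 * d, limit + 1, d):
            s[m] += d
    return s

# Composite n has a proper divisor >= sqrt(n), so s(n) = 146 forces
# n < 146^2, and primes give s(p) = 1; the search below is therefore exhaustive.
sums = set(aliquot_sums_up_to(146 * 146))
assert 146 not in sums   # 146 is untouchable
assert 5 not in sums     # 5, the only known odd untouchable number
assert 3 in sums         # but 3 = s(4) = 1 + 2 is touchable
```

The same sieve, run with a per-target bound, reproduces the start of the untouchable sequence: 2, 5, 52, 88, 96, 120, 124, 146, …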
Now that you’ve warmed up, let us enter the magnificent, magnetic madness of the mathematical blogging carnival.
If you have any affinity to football (the real kind, not the funny American stuff), then start off with Nira Chamberlain, who reviews the mathematical simulation model he built for his favorite team: you know, like any normal awesome football fan would do.
Next, follow Sean and Jamidi to the depths of the chalkdust magazine where they spoke with one of the great mathematical storytellers, Marcus du Sautoy.
Beware now, lest you be pulled into the enchanted world of The Mathemactivist who can draw a Hilbert Curve by hand.
See if you can spot the two mistakes!
Come now, and follow us to the trickster’s lair, where Tom rocks math takes a closer look at three fun numbers to tell you things you didn’t realize you ever wanted to know. From here, follow us to the depths of the Math Vault and let Scott Hartshorn lure you with an introduction to statistical significance, after which all your paper-nerd needs will be met by Nick Higham, who looks at the benefits of dot grid paper (including, of course, a LaTeX template).
Before you leave, be sure to witness the spectacle of John Cook taming the Weibull distribution and connecting it with Benford’s law. And as an encore, John will take you far from the equation systems you solved in algebra when you were a kid to the “simple” generalization that can be solved using a Gröbner basis (which, as so many things in mathematics, were not actually discovered by Gröbner).
And if you still can’t get enough, be sure to check out the many fabulous results of Christian Lawson-Perfect’s call for proof-in-a-toot.
That’s it for the beautiful month of May!
Be sure to stop by next month’s Carnival, hosted by Lucy at Cambridge Mathematics. You should submit your favorite blog posts/videos/content from the month of June. If you’d like to host an upcoming show, please get in touch with Katie.
A proof-of-work problem is a computational problem that one must solve in order to produce new coins in a cryptocurrency and to maintain the security of the cryptocurrency. For example, in Bitcoin, the proof-of-work problem consists of finding suitable strings which produce exceptionally low SHA-256 hashes (the hashes must be low enough so that only one person produces such a hash every 10 minutes or so). In this post, I am going to outline how employing many different kinds of more mathematical proof-of-work problems will provide much better security for cryptocurrencies than employing only one type of proof-of-work problem, and how some of the issues that arise with useful proof-of-work problems can be resolved.
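For readers unfamiliar with the mechanism, a stripped-down version of a Bitcoin-style hash puzzle can be sketched as follows; the 16-bit difficulty and the nonce layout are illustrative choices of mine, not Bitcoin's actual block format.

```python
import hashlib

def solve(block_data: bytes, difficulty_bits: int) -> int:
    """Find a nonce whose SHA-256 hash of (data + nonce) starts with
    `difficulty_bits` zero bits -- the hard direction (brute force)."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """A single hash evaluation -- the easy direction."""
    h = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < (1 << (256 - difficulty_bits))

nonce = solve(b"example block", 16)   # roughly 2**16 hashes on average
assert verify(b"example block", nonce, 16)
```

Raising `difficulty_bits` by one doubles the expected work for `solve` while `verify` stays a single hash, which is exactly the asymmetry the rest of the post relies on.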
I believe that hash problems are popular proof-of-work problems for cryptocurrencies not because they are the best, but because they are simple and because hash-based proof-of-work problems served well in the settings where a proof of work was needed before cryptocurrencies came into being. For example, hash-based proof-of-work problems are used to thwart denial-of-service attacks and spam, since the problems are not too difficult for a normal user to solve but are too expensive for an attacker to compute at scale. While hash-based proof-of-work problems are ideal for thwarting denial-of-service attacks and spammers, they are not ideal as proof-of-work schemes for cryptocurrencies.
Cryptocurrencies with many kinds of proof-of-work problems can satisfy much laxer requirements than cryptocurrencies with only one type of proof-of-work problem.
The security of cryptocurrencies depends on the fact that no single entity can solve more than 50 percent of the proof-of-work problems. If a malicious entity solves more than 50 percent of the proof-of-work problems, then that entity can undermine the security of the entire cryptocurrency (such an attack is called a 51 percent attack). If a cryptocurrency has only one kind of proof-of-work problem, then that problem must satisfy very strict requirements. But if the same cryptocurrency has many different kinds of proof-of-work problems, then those problems are only required to satisfy relatively mild conditions, and it is therefore feasible for many useful proof-of-work problems to satisfy them (assume, for the moment, that there is a good protocol for removing broken proof-of-work problems). Let me list those conditions right here.
The conditions colored orange are always absolutely essential for any proof-of-work problem in a cryptocurrency. The conditions colored green are essential when there is only one kind of proof-of-work problem, but they are not necessary when the cryptocurrency has many different kinds of proof-of-work problems.
These problems need to be difficult to solve, but it must be easy to verify a solution.
The solution to the problem should have uses beyond simply securing the cryptocurrency. It should have practical applications as well, or at the very least, solutions should advance mathematics or science. Right now, the proof-of-work problem for Bitcoin amounts to simply finding data whose SHA-256 hash is exceptionally low.
For Bitcoin, the proof-of-work problems need to be solved approximately every 10 minutes (with an exponential distribution), and in any cryptocurrency, the proof-of-work problem must consistently be solved in about the same amount of time. As time goes on, the proof-of-work problems need to get more difficult to accommodate faster computers, quicker algorithms, more people working on the problem, and more specialized computer chips solving the problem. If $X$ is the distribution of the amount of time it takes to solve the proof-of-work problem, then $X$ should have a consistent mean, and both $X$ and $1/X$ should have low variance (we do not want to solve the problem for one block in one second and for the next block in one hour).
The proof-of-work problems need to be generated automatically based on the previous blocks in the blockchain. Cryptocurrencies are decentralized, so there cannot be a central agency which assigns each individual proof-of-work problem.
Progress-freeness for search problems means that the probability that the first solution is found at time $t$ follows an exponential distribution. Progress-freeness for problems whose objective is to optimize $f(x)$ means that for all $\alpha$, the probability of finding a solution with $f(x)\geq\alpha$ by time $t$ follows an exponential distribution. In other words, solutions are found randomly, without regard to how long one has been working. Said differently, if Alice spends 30 minutes searching without finding a solution, she has no advantage over someone who has not spent any time searching at all. If there is only one kind of proof-of-work problem for a cryptocurrency, and that problem is far from progress-free, then the entity with the fastest computer will always win, and such an entity could launch a 51 percent attack. As an extreme example of why progress-freeness is needed, suppose that a problem always takes 1000 steps to solve. If Alice has a computer that can do 500 steps a second and Bob has a computer that can do 1000 steps a second, then Bob will solve the problem in 1 second, Alice will solve it in 2 seconds, and Bob will always win. However, a lack of progress-freeness is no longer fatal if the cryptocurrency runs many proof-of-work problems simultaneously: even if Bob can solve one particular problem better than everyone else, he will have to spend much of his computational power on it, he will not have enough resources left for the other proof-of-work problems, and he will therefore be unable to launch a 51 percent attack.
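The Alice-and-Bob contrast can be simulated directly. This is my own toy model: a fixed-step problem versus a memoryless (Bernoulli-per-attempt) search, with the success probability and speeds chosen only for illustration.

```python
import random

def fixed_work_race(steps, speed_alice, speed_bob):
    """A problem that always takes a fixed number of steps: the faster
    computer finishes first every single time."""
    return "Bob" if steps / speed_bob < steps / speed_alice else "Alice"

def memoryless_race(p, speed_alice, speed_bob, rng):
    """A progress-free search: every attempt independently succeeds with
    probability p, so past work confers no advantage."""
    while True:
        if any(rng.random() < p for _ in range(speed_alice)):
            return "Alice"
        if any(rng.random() < p for _ in range(speed_bob)):
            return "Bob"

rng = random.Random(1)
# With fixed work, Bob's machine (twice as fast) wins every race.
assert all(fixed_work_race(1000, 50, 100) == "Bob" for _ in range(100))
# In the progress-free race, Alice still wins roughly her share of races,
# even though Bob makes twice as many attempts per unit of time.
alice_wins = sum(memoryless_race(1e-3, 50, 100, rng) == "Alice"
                 for _ in range(2000))
assert 0 < alice_wins < 2000   # Alice wins a nontrivial fraction
```

In the memoryless race Alice's win rate tracks her share of the total attempt rate (about a third here), which is exactly the behaviour the exponential distribution guarantees.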
By optimization-freeness, I mean that people need to be confident that no entity will obtain a substantially faster algorithm in the future than we have today. The motivation is that an entity with a secret algorithm much faster than all others could launch a 51 percent attack. However, if a cryptocurrency employs many different kinds of proof-of-work problems, then such an entity may be able to solve one particular problem better than everyone else, but it will not be able to launch a 51 percent attack. In fact, with many kinds of proof-of-work problems, requiring optimization-freeness would make the cryptocurrency less secure rather than more secure.
Precomputation refers to an entity solving a proof-of-work problem before the problem for a particular block has been determined.
There needs to be an unlimited supply of proof-of-work problems to solve. This is only an issue if there is only one kind of proof-of-work problem for the cryptocurrency. If there are many kinds of proof-of-work problems, then a particular type of problem can simply be removed, either by an automatic procedure or when all of its particular instances have been solved.
The problem-removal protocol
For a cryptocurrency which employs many different kinds of proof-of-work problems, the list of all problems currently in use shall be called the problem schedule. If the schedule contains many different types of proof-of-work problems, there is a good chance that a few of them will eventually be broken and will therefore need to be removed. Since cryptocurrencies are meant to be as decentralized as possible, the removal of broken proof-of-work problems from the problem schedule needs to happen automatically, without a change to the programming of the cryptocurrency. Let me therefore list three different ways that a proof-of-work problem could be automatically removed from the problem schedule.
The proof-of-work problem will be considered broken and removed from the problem schedule if an entity submits a formal proof that it can be solved in polynomial time. The entity who submits the formal proof will win a predetermined number of blocks in the blockchain, and hence new coins.
The proof-of-work problem will be considered broken and removed from the problem schedule if an entity, say Alice, submits an algorithm that breaks it. For search problems, the algorithm is expected to consistently find a fast and accurate solution even when the difficulty of the problem is raised to very high levels. For optimization problems, the breaking algorithm must quickly and consistently give solutions at least as good as those obtained by any other algorithm. In the case of optimization problems broken by a heuristic algorithm, the removal protocol is more involved. Alice must first openly submit her algorithm. She is then required to follow her own algorithm and obtain a solution at least as optimized as the solutions produced by all other entities (keep in mind that other entities may use Alice’s algorithm or attempt to improve upon it, and the optimization process will last long enough for them to try). If Alice satisfies these requirements, then the problem is removed from the problem schedule, and Alice is awarded a predetermined number of blocks in the blockchain, and hence plenty of coins.
For search problems, if the proof-of-work problem is repeatedly solved quickly (possibly with multiple solutions) even as the difficulty increases without bound, then the problem will be automatically removed from the problem schedule. For optimization problems, if the best solution or solutions are consistently obtained early within the timespan for solving the problem, then the problem will be automatically removed, and no entity will be awarded any blocks as a prize.
The automatic removal system should preferably be implemented so that an entity with a secret algorithm can keep providing the best solutions to the proof-of-work problem, producing solutions which always win but which fall below the threshold of removal. However, the system should also be implemented so that if the secret algorithm becomes publicly available, then the problem is easily removed. In other words, an entity can cross the threshold of removal if it wants to, but it can just as easily remain below the threshold in order to keep collecting coins.
The benefits of having many kinds of proof-of-work problems
The most obvious benefit of our composite proof-of-work system is that it allows for a wide variety of problems, and hence for proof-of-work problems whose solutions have useful benefits besides securing the cryptocurrency.
Useful proof-of-work problems will obviously provide benefits, since their solutions will have practical applications or at least advance mathematics. They will also benefit the cryptocurrencies themselves, since useful proof-of-work problems will greatly help the public image of cryptocurrencies.
The public image of cryptocurrencies is one of the most important battles that advocates of cryptocurrencies must face. After all, cryptocurrencies directly threaten many of the powers that governments possess, and governments have the power to ban cryptocurrencies. Not only can governments ban cryptocurrencies, but powerful governments also have the power to undermine the security of cryptocurrencies by launching 51 percent attacks against them since governments have nearly unlimited resources. Cryptocurrencies therefore need to obtain and maintain a strong public image in order to avoid governmental backlash as much as possible.
Another reason cryptocurrencies need to obtain a strong public image is that they are still new, and many people do not consider them legitimate currencies (those dinosaurs are wrong). People therefore need to come to accept cryptocurrencies as legitimate currency.
Today, many people view the process of mining cryptocurrencies and maintaining the blockchain as incredibly wasteful and bad for the environment, and these criticisms are justified. Cryptocurrency advocates respond that mining is useful for securing and decentralizing the blockchain and hence not wasteful, but in any case, it is much better and much more efficient for these proof-of-work problems to have applications besides securing the blockchain (many cryptocurrencies have attempted to do away with proof-of-work entirely because it is seen as wasteful). I personally will not give cryptocurrencies my full endorsement until they are used to solve useful problems with mathematical, scientific, or practical merit.
Today, miners receive about 4 million dollars worth of bitcoins every day. Unfortunately, mining these bitcoins requires a great deal of energy to power the computers performing the calculations. Imagine how much cryptocurrencies could advance mathematics and science if the proof-of-work problems were selected with scientific advancement in mind, and how much those advancements could boost the public image of cryptocurrencies.
It is less obvious that our composite proof-of-work system also strengthens the security of the cryptocurrency against 51 percent attacks. It is much more straightforward to launch a 51 percent attack against a cryptocurrency with only one type of proof-of-work problem than against one with many. After all, to attack a single hash-based proof-of-work problem, one must simply spend a lot of money producing ASICs for that problem, and at any time the attacker can completely dominate it. To attack many proof-of-work problems, however, one must obtain many different types of ASICs. Furthermore, an attacker may be unable to solve some of these problems at all, since the attacker may lack both a secret algorithm and the specialized knowledge needed to solve them quickly.
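The arithmetic behind this claim can be sketched in a toy model of my own, under the assumption that the blockchain splits its blocks evenly across the problems in the schedule:

```python
def attacker_block_share(attacker_fractions):
    """If blocks are assigned evenly across the problems, the attacker's
    overall share of blocks is the average of its per-problem shares."""
    return sum(attacker_fractions) / len(attacker_fractions)

# Even completely dominating one of twelve problems, with a modest 10%
# of the hash rate on each of the rest, leaves the attacker far below 50%.
share = attacker_block_share([1.0] + [0.1] * 11)
assert share < 0.5
print(round(share, 3))   # prints 0.175
```

To reach 51 percent overall, the attacker must average a majority across all the problems at once, which is exactly why many heterogeneous problems (each needing different ASICs or expertise) raise the cost of the attack.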
How our system could be implemented on established cryptocurrencies
Cryptocurrencies are supposed to be decentralized currencies, so there should be as few changes to their protocols as possible. This poses a problem for cryptocurrencies attempting to switch from useless proof-of-work problems to useful ones. My solution is to make a change to the cryptocurrency protocol only once and, in order to minimize backlash and other chaos, to implement the changes gradually. Let me give an example of how the change could take place. Let $t$ be the number of years relative to the moment the new proof-of-work protocol takes effect. At $t=-4$, the cryptocurrency will begin selecting 144 different proof-of-work problems. At $t=-1$, all 144 proof-of-work problems will have been selected and the changes to the protocol will be finalized. At $t=0$, the cryptocurrency protocol will be modified, but the modifications will take effect over a process of 6 years. Every day an average of 144 blocks will be generated. Every month from $t=0$ to $t=6$, two of the 144 proof-of-work problems will be selected at random and added to the problem schedule. In the $m$-th month after $t=0$, approximately $144-2m$ of each day's blocks will use the hash-based proof-of-work problem, while the other $2m$ blocks will use the proof-of-work problems already added to the problem schedule. At $t=10$, a collection of new proof-of-work problems will be selected to replace the problems which have already been broken or which will be broken in the future.
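The month-by-month rollout can be restated as a tiny schedule function (my own restatement of the numbers in the text; six years is taken as 72 months, so all 144 problems are phased in by the end):

```python
def daily_block_split(month):
    """In month m of the six-year transition, 2*m of the ~144 daily blocks
    use the new proof-of-work problems; the rest stay hash-based."""
    new = min(2 * month, 144)
    return {"hash_based": 144 - new, "new_problems": new}

assert daily_block_split(1) == {"hash_based": 142, "new_problems": 2}
assert daily_block_split(36) == {"hash_based": 72, "new_problems": 72}   # halfway
assert daily_block_split(72) == {"hash_based": 0, "new_problems": 144}   # done
```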
Title: Euclidean Ramsey Theory 2 (of 3).
Lecturer: David Conlon.
Date: November 25, 2016.
Main Topics: Ramsey implies spherical, an algebraic condition for spherical, partition regular equations, an analogous result for edge Ramsey.
Definitions: Spherical, partition regular.
Lecture 1 – Lecture 2 – Lecture 3
Ramsey DocCourse Prague 2016 Index of lectures.
In the first lecture we defined the relevant terms and then established that all (non-degenerate) triangles are Ramsey. In this lecture we compare the property of being spherical with that of being Ramsey: we will show that Ramsey implies spherical (or more precisely, that non-spherical sets cannot be Ramsey).
Definition. A set $X$ is spherical if it lies on the surface of some sphere $S$ in some $\mathbb{R}^n$.
Typically $X$ will be finite, but this is not formally required.
The proofs are those of Erdős et al., and go by establishing a tight algebraic condition for a set to be spherical.
Let $L = \{x,y,z\}$ where $d(x,y) = d(y,z) = 1$ and $d(x,z) = 2$; that is, $L$ consists of three equally spaced collinear points.
“The reason is you can take a ‘spherical shell’ colouring.” These shell colourings are very important.
This doesn’t work for ‘cube colourings’ (i.e. using a different norm) since, by Dvoretzky’s theorem, hyperplane slices of cubes basically look spherical.
Proof. Fix $n$. Define the colouring $\chi : \mathbb{R}^n \rightarrow \{0,1,2,3\}$ by $\chi(x) = \lfloor x \cdot x \rfloor \mod 4$. (You’re colouring by spherical shells of radii $\sqrt{m}$ for integers $m$.)
[Picture]
By the cosine rule we get $a^2 = b^2 + 1 - 2b\cos(\theta)$ and $c^2 = b^2 + 1 + 2b\cos(\theta)$. So we get $a^2 + c^2 = 2b^2 + 2$.
Suppose that $x,y,z$ have the same colour. This means that there is an $i \in \{0,1,2,3\}$ such that $a^2 = 4k_1 + i + \epsilon_1$ and $b^2 = 4k_2 + i + \epsilon_2$ and $c^2 = 4k_3 + i + \epsilon_3$, where each $0 \leq \epsilon_j < 1$.
Putting this into our cosine rule information gives $$4(k_1 + k_3 - 2k_2) - 2 = 2\epsilon_2 - \epsilon_1 - \epsilon_3,$$ which is a contradiction since the left side is $\equiv 2 \pmod 4$ but the right side is strictly between $-2$ and $2$.
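A quick numerical sanity check of this proof (the sampling scheme is mine; the colouring is the shell colouring $\chi(x) = \lfloor x \cdot x \rfloor \bmod 4$ from above):

```python
# Check that the shell colouring admits no monochromatic copy of the
# three equally spaced collinear points at distances 1, 1, 2.
import math
import random

def chi(p):
    """Colour a point of R^n by which 'spherical shell' it lies in."""
    return math.floor(sum(c * c for c in p)) % 4

rng = random.Random(1)
for _ in range(10_000):
    y = [rng.uniform(-10, 10) for _ in range(5)]    # midpoint
    u = [rng.gauss(0, 1) for _ in range(5)]         # random direction
    norm = math.sqrt(sum(c * c for c in u))
    u = [c / norm for c in u]                       # unit vector
    x = [a - b for a, b in zip(y, u)]               # d(x, y) = 1
    z = [a + b for a, b in zip(y, u)]               # d(y, z) = 1, d(x, z) = 2
    assert not (chi(x) == chi(y) == chi(z)), "monochromatic copy found"
```

This is of course only evidence, not a proof; the argument above shows the colouring works for every such triple.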
Eventually we will relate the condition of a set being spherical with a tight algebraic condition. With this in mind, we examine when algebraic conditions can yield Ramsey witnesses. We start with a general discussion of partition regular equations.
For example,
Exercise. If the equation is translation invariant then you get a corresponding density result.
Use this to show that you always get a nontrivial solution.
First an example.
Example. $x + y = z + 1$.
We can homogenize this equation by replacing the variables. Use $x = x^\prime+1, y = y^\prime +1$ and $z = z^\prime+1$. This gives the equation $x^\prime + y^\prime = z^\prime$.
Basically, these are the only types of partition regular equations.
The number of colours is equal to the number of variables.
This is a strong failure of partition regularity: not only can you not have a monochromatic solution, you can’t even have all the paired variables agree!
The idea is to colour whether you are in a certain interval.
Proof. Fix $n$. Colour $x \in \mathbb{R}$ with $j \in \{0,1,\ldots,2n-1\}$ if $x \in [2m + \frac{j}{n}, 2m + \frac{j+1}{n})$ for some integer $m$.
If $\chi(x_i) = \chi(x^\prime_i)$, then $x_i - x^\prime_i = 2m_i + \epsilon_i$ where $\vert \epsilon_i \vert < \frac{1}{n}$.
So $$1 = \sum_{i=1}^n (x_i - x^\prime_i) = \sum_{i=1}^n 2m_i + \sum_{i=1}^n \epsilon_i.$$
Here the first sum is an even integer, and the second has absolute value $< 1$, a contradiction.
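The colouring in Lemma 1 is easy to implement; here is a sketch (the tolerance in the check is mine, to guard against floating-point rounding) verifying the key property that equal colours force the difference to be near an even integer:

```python
# The 2n-colouring of Lemma 1: chi(x) = j exactly when
# x lies in [2m + j/n, 2m + (j+1)/n) for some integer m.
import math
import random

def chi(x, n):
    """Colour of the real x; the value j lies in {0, ..., 2n - 1}."""
    return math.floor(n * (x % 2.0))

# If chi(x) == chi(x'), then x - x' = 2m + eps with |eps| < 1/n.
n = 7
rng = random.Random(2)
for _ in range(100_000):
    x, xp = rng.uniform(-50, 50), rng.uniform(-50, 50)
    if chi(x, n) == chi(xp, n):
        d = (x - xp) % 2.0
        assert d < 1 / n + 1e-9 or d > 2 - 1 / n - 1e-9
```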
Now we increase the number of colours to deal with a more general equation.
Proof. Fix $n$. By dividing by $b$ it suffices to consider $b = 1$.
Let $\chi$ be the $2n$-colouring from Lemma 1.
Define $\chi^\prime(x) = (\chi(c_1 x), \chi(c_2 x), \ldots, \chi(c_n x))$.
Now if $\chi^\prime(x_i) = \chi^\prime(x^\prime_i)$, then $\chi(c_i x_i) = \chi(c_i x^\prime_i)$.
So $c_i(x_i - x_i^\prime) = 2m_i + \epsilon_i$ where $\vert \epsilon_i \vert < \frac{1}{n}$.
If this happens for all $i$, then we have a contradiction identical to the one in Lemma 1.
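The product-colouring trick in Lemma 2 can be sketched as follows (assuming, as in the proof, that we have rescaled so that $b=1$; the coefficient values are mine, for illustration):

```python
# Lemma 2's colouring: chi'(x) = (chi(c_1 x), ..., chi(c_n x)),
# so agreement of chi' forces agreement of every chi(c_i x) at once.
import math

def chi(x, n):
    """The 2n-colouring of Lemma 1."""
    return math.floor(n * (x % 2.0))

def chi_prime(x, cs):
    """(2n)^n colours: one Lemma-1 colour per coefficient c_i."""
    n = len(cs)
    return tuple(chi(c * x, n) for c in cs)

cs = [3.0, -2.0, 5.0]   # hypothetical coefficients c_1, ..., c_n
# chi'(x) == chi'(x') implies chi(c_i x) == chi(c_i x') for every i,
# which is exactly what the proof of Lemma 2 uses.
```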
In the original paper there was a similar lemma with a worse bound on the number of colours; this improvement was observed by Straus a little later.
Note that these equations are not susceptible to the “translation trick” since $(y_i + 1) - (y_i^\prime + 1) = y_i - y_i^\prime$.
The following is the main technical lemma. The proof is purely algebraic.
For readability, we will write $x$ instead of $\vec{x}$. We will make use of the following useful fact:
Proof of $\Leftarrow$. Assume that $X$ is spherical and satisfies the first equation. We will show that the second equation must then fail.
Say $X$ lies on a sphere with centre $w \in \mathbb{R}^n$ and radius $r$.
For each $i$ we have:
So we must have $(x_i - x_0)^2 = -2(x_i - x_0)\cdot(x_0 - w)$ for each $i$. So by multiplying by $c_i$ and adding up we get $$\sum_{i=1}^t c_i (x_i - x_0)^2 = -2(x_0 - w)\cdot\sum_{i=1}^t c_i (x_i - x_0) = 0.$$
By using the special case of the useful identity, we get: $$\sum_{i=1}^t c_i (x_i^2 - x_0^2) = \sum_{i=1}^t c_i (x_i - x_0)^2 + 2x_0 \cdot \sum_{i=1}^t c_i (x_i - x_0).$$
We know the first sum is $0$ by the calculation above, and $$2x_0 \cdot \sum_{i=1}^t c_i (x_i - x_0) = 0$$ by assumption, so $\sum_{i=1}^t c_i (x_i^2 - x_0^2) = 0$, contradicting the second equation.
Proof of $\Rightarrow$. Assume $X$ is not spherical, and moreover that it is minimal (in the sense that removing any one point makes it spherical). In particular, $X$ is not a non-degenerate simplex, so there is a non-trivial linear relation $$\sum_{i=1}^t c_i (x_i - x_0) = 0.$$
Assume that $c_t \neq 0$. By minimality, $\{x_0, \ldots, x_{t-1}\}$ is spherical, lying on a sphere with centre $w$ and radius $r$.
Thus $$x_i^2 - x_0^2 = (x_i - w)^2 - (x_0 - w)^2 + 2x_i \cdot w - 2 x_0 \cdot w.$$
So $$\sum c_i (x_i^2 - x_0^2) = \sum c_i ((x_i - w)^2 - (x_0 - w)^2) + 2w \cdot \sum c_i (x_i - x_0),$$
here the second sum is $0$, and the first, by minimality, reduces to $$c_t ((x_t - w)^2 - (x_0 - w)^2),$$ which is nonzero since the distances of $x_t$ and $x_0$ to $w$ differ.
We are now in a position to put everything together.
Proof. Assume $X$ is not spherical. So there are constants $c_1, \ldots, c_t$ and a vector $\vec{b} \neq \vec{0}$ such that $$\sum c_i (\vec{x}_i - \vec{x}_0) = 0$$ and $$\sum c_i (\vec{x}_i^2 - \vec{x}_0^2) = \vec{b}.$$
Technical exercise. Any congruent copy of $X$ satisfies the same equations.
(Use the fact that congruence is formed by rotations and translations. The translations will spit out terms like $\star$.)
In every nonzero coordinate of $\vec{b}$ use the colouring $\chi$ from Lemma 2, and set $\chi^\prime(x) = \chi(x^2)$. This will give no monochromatic solution to $$\sum c_i (\vec{x}_i^2 - \vec{x}_0^2) = \vec{b}.$$
This is the end of this lecture’s material on point-Ramsey. We now shift gears a little.
Instead of colouring points, we can colour pairs of points. This leads to the notion of edge Ramsey. We mention two results in this area.
Proof. Suppose the vertex set is not spherical. Colour the points, using $\chi$, so that no copy of $X$ has a monochromatic vertex set.
Now colour the edge $(x,y)$ with $\chi^\prime (x,y) = (\chi(x), \chi(y))$.
In a monochromatic copy, every edge carries the same pair of vertex colours, and this pair must consist of two distinct colours (otherwise the vertex set would be monochromatic). So the edge set is bipartite.
This gives us an analogous theorem to the theorem that Ramsey implies spherical.
The proof is a variation on what we’ve seen.
See lecture 1 for references.
Abstract: When one thinks of objects with a significant level of symmetry it is natural to expect there to be a simple classification. However, this leads to an interesting problem in that research has revealed the existence of highly symmetric objects which are very complex when considered within the framework of Borel complexity. The tension between these two seemingly contradictory notions leads to a wealth of natural questions which have yet to be answered.
Borel complexity theory is an area of logic where the relative complexities of classification problems are studied. Within this theory, we regard a classification problem as an equivalence relation on a Polish space. An example of such is the isomorphism relation on the class of countable groups. The notion of a Borel reduction allows one to compare complexities of various classification problems.
The central aim of this research is to determine the Borel complexities of various classes of vertex-transitive structures, that is, structures for which every pair of elements is equivalent under some element of the automorphism group. John Clemens has shown that the class of vertex-transitive graphs has the maximum possible complexity, namely Borel completeness. On the other hand, we show that the class of vertex-transitive linear orderings does not.
We explore this phenomenon further by considering other natural classes of vertex-transitive structures such as tournaments and partial orderings. In doing so, we discover that several other complexities arise for classes of vertex-transitive structures.
Abstract: Models of set theory are ubiquitous in modern set-theoretic research. There are many different constructions that produce countable models of ZFC via techniques such as forcing, ultraproducts, and compactness. The models these techniques produce have many different characteristics; thus it is natural to ask whether models of ZFC are classifiable. We will answer this question by showing that models of ZFC are unclassifiable and have maximal complexity. The notions of complexity used in this thesis will be phrased in the language of Borel complexity theory.
In particular, we will show that the class of countable models of ZFC is Borel complete. As it turns out, most of the models in the construction are ill-founded. Thus, we also investigate the subproblem of identifying the complexity of the class of well-founded models. We give partial results for the well-founded case by identifying lower bounds on the complexity of these models in the Borel complexity hierarchy.
A few years ago Joel Hamkins, Thomas Johnstone, and I [1]
(see this post) showed that the class choice scheme is independent of Kelley-Morse (${\rm KM}$) set theory in the strongest possible sense. The class choice scheme is the scheme asserting, for every second-order formula $\varphi(x,X,A)$, that if for every set $x$ there is a class $X$ such that $\varphi(x,X,A)$ holds, then there is a collecting class $Y$ such that the $x$-th slice $Y_x$ is some witness for $x$, namely $\varphi(x,Y_x,A)$ holds. We can also consider a weaker version of the choice scheme allowing only set many choices. We showed that it is relatively consistent to have a model of ${\rm KM}$ in which the choice scheme fails for a first-order assertion with only $\omega$-many choices. We also showed that ${\rm KM}$ together with the choice scheme for set many choices cannot prove the choice scheme even for first-order assertions. The choice scheme can be viewed either as a collection principle or as an analogue of the axiom of choice for classes. With the latter perspective, we can also consider the dependent choice scheme, which asserts, for every second-order formula $\varphi(X,Y,A)$, that if for every $\alpha$-sequence of classes (coded by a class) $X$ there is a class $Y$ such that $\varphi(X,Y,A)$ holds, then there is a single class $Z$ coding ${\rm ORD}$-many dependent choices, namely for all ordinals $\alpha$, $\varphi(Z\upharpoonright\alpha,Z_\alpha,A)$ holds. Again, we can consider a weaker version of the dependent choice scheme where we only allow ordinal many choices. We conjectured that the dependent choice scheme is independent of ${\rm KM}$ together with the choice scheme, but were not able to make further progress on the question.
Usually when trying to prove a result about models of second-order set theory, it helps, as a first step, to understand the analogous results for models of second-order arithmetic. There are many affinities, but also some notable differences between the kinds of results you can obtain in the two fields. Both the choice scheme and the dependent choice scheme were first considered and extensively studied in the context of models of second-order arithmetic (see [2], Chapter VII.6). The analogue of ${\rm KM}$ in second-order arithmetic is the theory ${\rm Z}_2$, which consists of ${\rm PA}$ together with full second-order comprehension. The theory ${\rm Z}_2$ implies the choice scheme for $\Sigma^1_2$-formulas. This is true roughly because a model of ${\rm Z}_2$ can uniformly construct (a code for) $L_\alpha$ for every coded ordinal $\alpha$, and so it can pass to an $L_\alpha$ to select a unique witness for a $\Sigma^1_2$-assertion, such assertions being absolute by Shoenfield's absoluteness theorem. The classic Feferman-Lévy model was used to show that the very next level, the $\Pi^1_2$-choice scheme, can fail in a model of ${\rm Z}_2$. Consider a model $\mathcal M$ of ${\rm Z}_2$ whose sets are the reals of the Feferman-Lévy model of ${\rm ZF}$. The model $\mathcal M$ can construct (a code for) $L_{\aleph_n}$ for every natural number $n$, but it cannot collect the codes because $\aleph_\omega$ is uncountable. The dependent choice scheme for $\Sigma^1_2$-formulas also follows from ${\rm Z}_2$ (indeed from the choice scheme for $\Sigma^1_2$-formulas together with induction for $\Sigma^1_2$-formulas, see [2], Theorem VII.6.9.2). Simpson claimed in 1972 that he had a proof of the independence of the $\Pi^1_2$-dependent choice scheme from ${\rm Z}_2$ together with the choice scheme, but the proof was never published and has since been lost (I corresponded with Steve Simpson about it). So a few years ago Sy Friedman and I set out to find a proof of this result.
The standard strategy for obtaining a model of ${\rm Z}_2$ together with the choice scheme but with a $\Pi^1_2$-failure of the dependent choice scheme is to construct a model of ${\rm ZF}$ in which ${\rm AC}_\omega$ holds but ${\rm DC}$ fails for a $\Pi^1_2$-definable relation on the reals. We then take the model of ${\rm Z}_2$ whose sets are the reals of the ${\rm ZF}$-model, so that ${\rm AC}_\omega$ translates directly into the choice scheme holding. Such models of ${\rm ZF}$ are obtained as symmetric submodels of carefully chosen forcing extensions. The classical model of ${\rm AC}_\omega+\neg{\rm DC}$ (due to Jensen [3]) is a symmetric submodel of the forcing extension adding a collection of Cohen subsets of $\omega_1$, indexed by nodes of the tree ${}^{\lt\omega}\omega_1$, with countable conditions. By choosing the right collection of automorphisms, we obtain a symmetric submodel, call it $N$, which has the tree of Cohen subsets but no branch through the tree, witnessing a violation of ${\rm DC}$. The countable closure of the poset allows us to prove that the model $N$ satisfies ${\rm AC}_\omega$. The obvious obstacle to using the classical model for our purposes is that the relation witnessing the failure of ${\rm DC}$ is not a relation on the reals. We were able to obtain a variation on the classical model by forcing to add a collection of Cohen reals indexed by nodes of the tree ${}^{\lt\omega}\omega_1$, with finite conditions. We use the fact that the poset is ccc to again argue that ${\rm AC}_\omega$ holds in the desired symmetric submodel. The new model has a relation on the reals witnessing a failure of ${\rm DC}$, but it is not at all clear why even the domain of the relation, namely the Cohen reals making up the nodes of the generic tree, would be definable over the reals. The collection of all Cohen reals is of course $\Pi^1_2$-definable, but there does not appear to be a good way of picking out those coming from the nodes of our tree.
In our construction we force with a tree iteration of Jensen's forcing along the tree ${}^{\lt\omega}\omega_1$. Conditions in the forcing are finite subtrees of ${}^{\lt\omega}\omega_1$ whose level-$n$ nodes are conditions in the $n$-length iteration of Jensen's forcing, so that nodes on higher levels extend those from lower levels. The tree iteration adds, for every node on level $n$ of ${}^{\lt\omega}\omega_1$, an $n$-length sequence of reals generic for the $n$-length iteration of Jensen's forcing, with the sequences ordered by extension. Jensen's forcing is a subposet of Sacks forcing in $L$ which has the ccc and the property that it adds a unique generic real over $L$ (see this post). The poset is constructed using $\diamondsuit$ to seal maximal antichains. Kanovei and Lyubetsky recently showed that Jensen's forcing has an even stronger uniqueness-of-generic-filters property [4]. They showed that a forcing extension of $L$ by the finite-support $\omega$-length product of Jensen's forcing has precisely the $L$-generic reals for Jensen's forcing which appear on the coordinates of the generic filter for the product. We were able to show that a forcing extension of $L$ by the tree iteration of Jensen's forcing has, for each fixed $n$, precisely the generic $n$-length sequences of reals for the $n$-length iteration of Jensen's forcing which make up the nodes of the generic tree. Since the collection of generic $n$-length sequences of reals for iterations of Jensen's forcing is $\Pi^1_2$ and the sequences are ordered by extension, we succeed in producing a symmetric submodel whose associated model of second-order arithmetic witnesses a $\Pi^1_2$-failure of the dependent choice scheme. That our symmetric model satisfies ${\rm AC}_\omega$ follows because the tree iteration forcing has the ccc.
We are now working with Sy Friedman on transferring the arguments from second-order arithmetic to the context of second-order set theory. In particular, we hope to produce a subposet of $\kappa$-Sacks forcing for an inaccessible $\kappa$ in $L$ mimicking the properties of Jensen's forcing.
Slides to come!
@ARTICLE{GitmanHamkinsJohnstone:KMplus,
AUTHOR= {Victoria Gitman and Joel David Hamkins and Thomas Johnstone},
TITLE= {Kelley--{M}orse set theory and choice principles for classes},
Note ={In preparation},
}
@book {simpson:secondorderArithmetic,
AUTHOR = {Simpson, Stephen G.},
TITLE = {Subsystems of second order arithmetic},
SERIES = {Perspectives in Logic},
EDITION = {Second},
PUBLISHER = {Cambridge University Press, Cambridge; Association for
Symbolic Logic, Poughkeepsie, NY},
YEAR = {2009},
PAGES = {xvi+444},
ISBN = {978-0-521-88439-6},
MRCLASS = {03F35 (03-02 03B30)},
MRNUMBER = {2517689 (2010e:03073)},
DOI = {10.1017/CBO9780511581007},
URL = {http://dx.doi.org/10.1017/CBO9780511581007},
}
@article {jensen:ACplusNotDC,
AUTHOR = {Jensen, Ronald B.},
TITLE = {Consistency results for models of ${\rm ZF}$},
JOURNAL = {Notices {A}m. {M}ath. {S}oc.},
FJOURNAL = {Notices of the American Mathematical Society},
VOLUME = {14},
YEAR = {1967},
PAGES = {137},
}
@ARTICLE {kanovei:productOfJensenReals,
AUTHOR = {Kanovei, Vladimir and Lyubetsky, Vassily},
TITLE = {A countable definable set of reals containing no definable elements},
EPRINT = {1408.3901},
}
Critical points
Suppose that $(X,*)$ is a Laver-like algebra. For $x,y\in X$, define $\mathrm{crit}(x)\leq\mathrm{crit}(y)$ if and only if $x^{n}*y\in\mathrm{Li}(X)$ for some $n$, and $\mathrm{crit}(x)=\mathrm{crit}(y)$ if $\mathrm{crit}(x)\leq\mathrm{crit}(y)$ and $\mathrm{crit}(y)\leq\mathrm{crit}(x)$. We shall call $\mathrm{crit}(x)$ the critical point of the element $x$. Let $\mathrm{crit}[X]=\{\mathrm{crit}(x) : x\in X\}$. Then $\mathrm{crit}[X]$ is a linearly ordered set.
If $\alpha\in\mathrm{crit}[X]$, then there is some $r\in X$ with $\mathrm{crit}(r)=\alpha$ and $r*r\in\mathrm{Li}(X)$. Suppose now that $\alpha\in\mathrm{crit}[X]$, $\mathrm{crit}(r)=\alpha$, and $r*r\in\mathrm{Li}(X)$. Then define a congruence $\equiv^{\alpha}$ on $X$ by letting $x\equiv^{\alpha}y$ if and only if $r*x=r*y$; the congruence $\equiv^{\alpha}$ does not depend on the choice of $r$. The congruence $\equiv^{\alpha}$ is the smallest congruence on $X$ such that whenever $\mathrm{crit}(x)\geq\alpha$, we have $y\in\mathrm{Li}(X)$ for some $y$ with $y\equiv^{\alpha}x$. If $x\in X$, then define the mapping $x^{\sharp}:\mathrm{crit}[X]\rightarrow\mathrm{crit}[X]$ by letting $x^{\sharp}(\mathrm{crit}(y))=\mathrm{crit}(x*y)$. The mapping $x^{\sharp}$ is monotone: if $\alpha\leq\beta$, then $x^{\sharp}(\alpha)\leq x^{\sharp}(\beta)$. Suppose now that $(X,E,F)$ is a Laver-like endomorphic algebra and $\alpha\in\mathrm{crit}[\Gamma(X,E)]$. Then define the congruence $\equiv^{\alpha}$ on $(X,E,F)$ by letting $x\equiv^{\alpha}y$ if and only if $f(x_{1},\dots,x_{n_{f}},x)=f(x_{1},\dots,x_{n_{f}},y)$ whenever $x_{1},\dots,x_{n_{f}}\in X$, $\mathrm{crit}(f,x_{1},\dots,x_{n_{f}})=\alpha$, and $(f,x_{1},\dots,x_{n_{f}})*(f,x_{1},\dots,x_{n_{f}})\in\mathrm{Li}(\Gamma(X,E))$.
In the Laver-like LD-systems we have looked at which are suitable for cryptographic purposes, $\mathrm{crit}[X]$ tends to be rather small (for cryptographic purposes, $X$ should have at least about 9 critical points; for comparison, the largest classical Laver table ever computed, $A_{48}$, has 49 critical points).
Therefore if $\mathbf{y}\in X/\equiv^{\alpha}$ is the equivalence class of $y\in X$, then define $x*\mathbf{y}$ to be the equivalence class of $x*y$ in the quotient algebra $X/\equiv^{x^{\sharp}(\alpha)}$ and define $x\circ\mathbf{y}$ to be the equivalence class of $x\circ y$ in $X/\equiv^{x^{\sharp}(\alpha)}$.
We may therefore define $\mathrm{crit}(\mathfrak{u}_{1},\dots,\mathfrak{u}_{n})=\mathrm{crit}(\mathfrak{u}_{1}(\varepsilon),\dots,\mathfrak{u}_{n}(\varepsilon))$.
Analysis of the Ko-Lee key exchange for Laver-like LD-systems
The security of the Ko-Lee key exchange for Laver-like LD-systems depends on the difficulty of the following problem.
INPUT: $x,r,s,\alpha,\beta$ are known.
OBJECTIVE: Find $a'$ such that $r=a'\circ x$ (Problem A1), or find $\mathbf{b}\in X/\equiv^{\beta}$ such that $x\circ\mathbf{b}\equiv^{\alpha}s$ (Problem A2).
Note: In the above problem, we assume that $\alpha$ is known, since $X$ usually has a limited number of critical points, so $\alpha$ can either be solved for directly or there are only a limited number of possibilities for $\alpha$ to try.
Note that Problem A is asymmetric in the roles of $a$ and $b$. In particular, Problem A2 seems to be an easier problem than Problem A1, since $X/\equiv^{\beta}$ is simpler than $X$.
Constructing endomorphic Laver tables
Suppose that $(X,*)$ is a Laver-like LD-system. If $t:X^{n}\rightarrow X$ is a mapping that satisfies the identity $x*t(x_{1},\dots,x_{n})=t(x*x_{1},\dots,x*x_{n})$, then define an operation $T:X^{n+1}\rightarrow X$ by letting $T(x_{1},\dots,x_{n},x)=t(x_{1},\dots,x_{n})*x$. Then the algebra $(X,T)$ is a Laver-like endomorphic algebra.
By the above proposition, we adopt the convention that $\mathrm{crit}(T,x_{1},\dots,x_{n})=\mathrm{crit}(t(x_{1},\dots,x_{n}))$.
If $x>0$, then define $(x)_{r}$ to be the unique integer with $1\leq(x)_{r}\leq r$ and $(x)_{r}\equiv x\mod r$. If $n\geq m$, then define a mapping $\pi_{n,m}:A_{n}\rightarrow A_{m}$ by letting $\pi_{n,m}(x)=(x)_{2^{m}}$. Define a linear ordering $\leq^{L}_{n}$ on the classical Laver table $A_{n}$ by induction on $n$, by letting $x<^{L}_{n+1}y$ if and only if either $\pi_{n+1,n}(x)<^{L}_{n}\pi_{n+1,n}(y)$, or $\pi_{n+1,n}(x)=\pi_{n+1,n}(y)$ and $x < y$. For simplicity, we shall simply write $\leq^{L}$ for $\leq^{L}_{n}$. Then $y\leq^{L}z\rightarrow x*_{n}y\leq^{L}x*_{n}z$.
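This ordering is easy to compute directly from the projections; here is a sketch (the hardcoded $A_2$ rows are ones I computed from the defining recursion of the classical Laver tables, used to spot-check the monotonicity claim):

```python
# The ordering <=^L via pi_{n+1,n}, checked on the toy table A_2.
from functools import cmp_to_key

def proj(x, m):
    """pi_{n,m}(x) = (x)_{2^m}: the representative of x in {1, ..., 2^m}."""
    return (x - 1) % 2**m + 1

def cmpL(x, y, n):
    """-1, 0, or 1 comparing x and y under <=^L_n on A_n = {1, ..., 2^n}."""
    if n == 0 or x == y:
        return (x > y) - (x < y)
    px, py = proj(x, n - 1), proj(y, n - 1)
    if px != py:
        return cmpL(px, py, n - 1)
    return (x > y) - (x < y)

# A_2 = ({1, 2, 3, 4}, *_2); row x lists x *_2 1, ..., x *_2 4.
A2 = {1: [2, 4, 2, 4], 2: [3, 4, 3, 4], 3: [4, 4, 4, 4], 4: [1, 2, 3, 4]}

order = sorted(A2, key=cmp_to_key(lambda x, y: cmpL(x, y, 2)))
# The L-order on A_2 is the bit-reversal order 1, 3, 2, 4.

# Monotonicity: y <=^L z implies x *_2 y <=^L x *_2 z.
for x in A2:
    for y in A2:
        for z in A2:
            if cmpL(y, z, 2) <= 0:
                assert cmpL(A2[x][y - 1], A2[x][z - 1], 2) <= 0
```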
Suppose that $\mathbf{S}$ is a function such that
Then define a mapping $\mathbf{S}_{n}:A_{n}^{r}\rightarrow A_{n}$ for all $n$ by induction on $n$. Suppose that $\mathbf{S}_{n}$ has been defined already and that $x_{1},\dots,x_{r}\in A_{n+1}$. Let $v=\mathbf{S}_{n}(\pi_{n+1,n}(x_{1}),\dots,\pi_{n+1,n}(x_{r}))$. If $|\{v,v+2^{n}\}\cap\langle x_{1},\dots,x_{r}\rangle|=1$, then let $\mathbf{S}_{n+1}(x_{1},\dots,x_{r})$ be the unique element of $\{v,v+2^{n}\}\cap\langle x_{1},\dots,x_{r}\rangle$. If $|\{v,v+2^{n}\}\cap\langle x_{1},\dots,x_{r}\rangle|=2$, then let $\mathbf{S}_{n+1}(x_{1},\dots,x_{r})=v+2^{n}\cdot\mathbf{S}(n;x_{1},\dots,x_{r})$.
Then for all $n$, the operation $\mathbf{S}_{n}$ satisfies the identity $x*_{n}\mathbf{S}_{n}(x_{1},…,x_{r})=\mathbf{S}_{n}(x*_{n}x_{1},…,x*_{n}x_{r})$.
Analysis of the Ko-Lee key exchange for functional endomorphic Laver tables
For our analysis of the Ko-Lee key exchange for functional endomorphic Laver tables, assume that $t:A_{n}^{2}\rightarrow A_{n}$ is a mapping that satisfies the identity $x*_{n}t(y,z)=t(x*_{n}y,x*_{n}z)$, and that $T^{\bullet}:A_{n}^{3}\rightarrow A_{n}$ is defined by $T^{\bullet}(x,y,z)=t(x,y)*_{n}z$.
In the classical Laver table $A_{n}$, we have $\mathrm{crit}(x)\leq\mathrm{crit}(y)$ if and only if $\gcd(x,2^{n})\leq\gcd(y,2^{n})$. Therefore, for the classical Laver table $A_{n}$, we define $\mathrm{crit}(x)=\log_{2}(\gcd(x,2^{n}))$. In the classical Laver table $A_{n}$, we have $x\equiv^{m}y$ if and only if $x\equiv y\mod 2^{m}$. If $x\in A_{n}$, then let $o_{n}(x)$ be the least natural number $m$ such that $x*_{n}2^{m}=2^{n}$. In other words, $o_{n}(x)$ is the least critical point $\alpha\in\mathrm{crit}[A_{n}]$ such that, in $A_{n}$, we have $x^{\sharp}(\alpha)=\max(\mathrm{crit}[A_{n}])$.
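The quantity $o_{n}(x)$ can be computed directly from the table for small $n$; a sketch (using the standard recursion for classical Laver tables, feasible only for tiny $n$):

```python
# Computing o_n(x), the least m with x *_n 2^m = 2^n.
def laver_table(n):
    """Rows table[x][y] = x *_n y for x, y in {1, ..., 2^n}."""
    N = 2**n
    table = [[0] * (N + 1) for _ in range(N + 1)]
    table[N] = [0] + list(range(1, N + 1))   # row 2^n is a left identity
    for x in range(N - 1, 0, -1):
        table[x][1] = x + 1                  # x *_n 1 = x + 1
        for y in range(2, N + 1):
            # x *_n y = (x *_n (y - 1)) *_n (x + 1), and x *_n (y - 1) > x
            table[x][y] = table[table[x][y - 1]][x + 1]
    return table

def o(n, x):
    """Least m such that x *_n 2^m = 2^n."""
    table = laver_table(n)
    return min(m for m in range(n + 1) if table[x][2**m] == 2**n)

# In A_2: o(2, x) for x = 1, 2, 3, 4 gives 1, 1, 0, 2.
```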
Let $\mathfrak{u}_{1},\mathfrak{u}_{2},\mathfrak{l}_{1},\mathfrak{l}_{2},\mathfrak{v}_{1},\mathfrak{v}_{2}\in\Diamond(X,T^{\bullet})$. In order for Problem A for $(\mathfrak{u}_{1},\mathfrak{u}_{2}),(\mathfrak{l}_{1},\mathfrak{l}_{2}),(\mathfrak{v}_{1},\mathfrak{v}_{2})$ to be difficult, one needs $o_{n}(t(\mathfrak{u}_{1}(\varepsilon),\mathfrak{u}_{2}(\varepsilon))\circ_{n}t(\mathfrak{l}_{1}(\varepsilon),\mathfrak{l}_{2}(\varepsilon)))$ to be as large as possible. We therefore recommend $o_{n}(t(\mathfrak{u}_{1}(\varepsilon),\mathfrak{u}_{2}(\varepsilon))\circ_{n}t(\mathfrak{l}_{1}(\varepsilon),\mathfrak{l}_{2}(\varepsilon)))\geq 4$ for the functional endomorphic Laver table based Ko-Lee key exchange to be secure. Unfortunately, $o_{n}(x\circ y)$ is usually very small ($\leq 4$) except for highly specialized values of $x,y$. If $x,y\in A_{n}$, then $o_{n}(x\circ_{n}y)\leq\min(o_{n}(x),o_{n}(y))$, so $o_{n}(x\circ_{n}y)$ is typically very small.
The following function shows that $o_{n}(x)$ is usually small (usually 2 or 4).
While $o_{n+1}(x)\in\{o_{n}(x),o_{n}(x)+1\}$ and $o_{n+1}(x+2^{n})=o_{n}(x)$, the following data indicates that for $n\geq 10$, increasing $n$ has little effect on $o_{n}(x)$, since it is very rare for $o_{n+1}(x)=o_{n}(x)+1$ as $n$ gets large.
While one can show that $o_{n}(1)\rightarrow\infty$ using large cardinals, there is no known proof in ZFC that $o_{n}(1)\rightarrow\infty$ and it is known that if $o_{n}(1)\rightarrow\infty$, then $o_{n}(1)$ must diverge very slowly.
Increasing $n$ past about 10 or so does not seem to increase the security of the functional endomorphic Laver table based cryptosystems. If $n$ is much larger than $10$, $f$ is a $k$-ary term in $\Diamond(X,T^{\bullet})$, and $\mathfrak{l}_{1},\dots,\mathfrak{l}_{k}\in\Diamond(X,T^{\bullet})$, then $f(\mathfrak{l}_{1},\dots,\mathfrak{l}_{k})(\mathbf{x})$ is either very close to $2^{n}$, very close to $1$, or very close to $\mathfrak{l}_{i}(\mathbf{y})$ for some $i$ and some string $\mathbf{y}$. Said differently, increasing $n$ past about 10 or so does not seem to make Problem A (or any other cryptographic problem) much harder to solve.
If you are interested in evaluating the security of endomorphic Laver table based cryptosystems, then please post a comment to this post, send me an email, or otherwise contact me.
As a general principle, non-abelian group based cryptosystems extend to self-distributive algebra based cryptosystems. In particular, the Ko-Lee key exchange, the Anshel-Anshel-Goldfeld key exchange, and conjugacy based authentication systems, which are some of the most important non-abelian group based cryptosystems, extend from group based cryptosystems to self-distributive algebra based cryptosystems, and the endomorphic Laver tables seem to be good platforms for them.
It seems like endomorphic Laver table based cryptosystems would remain secure even against adversaries who have access to quantum computers. I do not believe that quantum computers pose any more of a threat to these cryptosystems than classical computers do, since none of the techniques for constructing quantum algorithms (besides Grover's algorithm, which applies in general) seem applicable to anything remotely related to endomorphic Laver tables. Unless there is a fairly obvious attack, I do not foresee that we will be able to mathematically prove that there is a polynomial time classical or quantum algorithm that breaks these cryptosystems. Nevertheless, I am concerned that these cryptosystems may be susceptible to heuristic attacks.
I have only been able to efficiently compute the endomorphic Laver table operations for two months now (see my previous post for more information about the difficulties in computing the endomorphic Laver table operations and how it was overcome), so these algebras and cryptosystems are fairly unknown.
Endomorphic algebras
An LD-system is an algebra $(X,*)$ that satisfies the self-distributivity identity $x*(y*z)=(x*y)*(x*z)$. LD-systems arise naturally in set theory and in the theory of knots and braids. One can generalize the notion of self-distributivity to algebras with multiple operations, operations of higher arity, and algebras where only some of the fundamental operations are self-distributive.
Suppose that $(X,E,F)$ is an algebraic structure where each $f\in E$ has arity $n_{f}+1$. If $f\in E$ and $a_{1},\dots,a_{n_{f}}\in X$, then define the mapping $L_{f,a_{1},\dots,a_{n_{f}}}$ by letting $L_{f,a_{1},\dots,a_{n_{f}}}(x)=f(a_{1},\dots,a_{n_{f}},x)$. We say that $(X,E,F)$ is a partially endomorphic algebra if the mapping $L_{f,a_{1},\dots,a_{n_{f}}}$ is an endomorphism of $(X,E,F)$ whenever $f\in E$ and $a_{1},\dots,a_{n_{f}}\in X$. If $F=\emptyset$, then we shall call $(X,E,F)$ an endomorphic algebra (and we may write $(X,E)$ to denote the endomorphic algebra $(X,E,\emptyset)$). If $(X,t)$ is an endomorphic algebra and $t$ is an $(n+1)$-ary operation, then we shall call $(X,t)$ an $(n+1)$-ary self-distributive algebra. The partially endomorphic algebras are the universal algebraic analogues of the LD-systems.
If $(X,E)$ is an endomorphic algebra, then let
$$\Gamma(X,E)=(\bigcup_{f\in E}\{f\}\times X^{n_{f}},*)$$
where the operation $*$ is defined by
$$(f,x_{1},\dots,x_{n_{f}})*(g,y_{1},\dots,y_{n_{g}})=(g,f(x_{1},\dots,x_{n_{f}},y_{1}),\dots,f(x_{1},\dots,x_{n_{f}},y_{n_{g}})).$$
Then $\Gamma(X,E)$ is an LDsystem.
Endomorphic algebra based cryptosystems
The following key exchange is a simplified version of the Ko-Lee key exchange for semigroups.
Let $K=a\circ x\circ b$.
An eavesdropper listening in on the communications between Alice and Bob cannot compute $K$. The full-fledged Ko-Lee key exchange would not work for Laver-like algebras, since the associative operation on Laver-like algebras has very little commutativity.
In this paper, Kalka and Teicher have extended the Anshel-Anshel-Goldfeld key exchange to LD-systems. The following key exchange extends this cryptosystem by Kalka and Teicher to partially endomorphic algebras.
Let $K=f(\mathbf{a},g(\mathbf{b},a))$.
$$K=f(\mathbf{a},g(\mathbf{b},a))=f(\mathbf{a},g(b_{1},\dots,b_{n_{g}},a))$$
$$=g(f(\mathbf{a},b_{1}),\dots,f(\mathbf{a},b_{n_{g}}),f(\mathbf{a},a))$$
$$=g(f(\mathbf{a},t_{1}(y_{1},\dots,y_{n})),\dots,f(\mathbf{a},t_{n_{g}}(y_{1},\dots,y_{n})),p_{0})$$
$$=g(t_{1}(f(\mathbf{a},y_{1}),\dots,f(\mathbf{a},y_{n})),\dots,t_{n_{g}}(f(\mathbf{a},y_{1}),\dots,f(\mathbf{a},y_{n})),p_{0})$$
$$=g(t_{1}(u_{1},\dots,u_{n}),\dots,t_{n_{g}}(u_{1},\dots,u_{n}),p_{0}).$$
No third party will be able to compute the common key $K$.
Laver tables
The classical Laver table $A_{n}$ is the unique algebraic structure $(\{1,\dots,2^{n}\},*_{n})$ that satisfies the self-distributivity identity $x*_{n}(y*_{n}z)=(x*_{n}y)*_{n}(x*_{n}z)$ and where $x*_{n}1=x+1\,\mod 2^{n}$ for $x\in A_{n}$. The operation $*_{n}$ can easily be computed from a 2.5 MB file of precomputed data as long as $n\leq 48$, though we believe it is feasible with current technology to compute $*_{n}$ when $n\leq 96$. The classical Laver tables alone are unsuitable as platforms for self-distributive algebra based cryptosystems (they would offer at most 96 bits of security, and in practice such cryptosystems based on the classical Laver tables are effortlessly and instantly broken). The multigenic Laver tables are not suitable as platforms for such cryptosystems either, since elements in multigenic Laver tables can easily be factored.
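For very small $n$, the table $A_n$ can be computed directly from the defining conditions: $x *_n 1 = x+1 \bmod 2^n$ together with self-distributivity, which forces $x *_n (y+1) = (x *_n y) *_n (x+1)$. A sketch (workable only for tiny $n$, nothing like the $n \leq 48$ precomputed data mentioned above):

```python
# Compute the classical Laver table A_n by filling rows top-down.
def laver_table(n):
    """table[x][y] = x *_n y on {1, ..., 2^n} (index 0 unused)."""
    N = 2**n
    table = [[0] * (N + 1) for _ in range(N + 1)]
    table[N] = [0] + list(range(1, N + 1))   # row 2^n: left identity
    for x in range(N - 1, 0, -1):            # fill rows from the top down
        table[x][1] = x + 1                  # x *_n 1 = x + 1
        for y in range(2, N + 1):
            # x *_n y > x always, so the needed row is already filled
            table[x][y] = table[table[x][y - 1]][x + 1]
    return table

t = laver_table(2)
# Self-distributivity check: x * (y * z) == (x * y) * (x * z).
N = 4
for x in range(1, N + 1):
    for y in range(1, N + 1):
        for z in range(1, N + 1):
            assert t[x][t[y][z]] == t[t[x][y]][t[x][z]]
```

The recursion works because every entry in row $x$ is strictly greater than $x$, so each lookup lands in a row that has already been computed.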
If $(X,*)$ is an LD-system, then a subset $L\subseteq X$ is said to be a left-ideal if $x*y\in L$ whenever $y\in L$. An element $x\in X$ is said to be a left-identity if $x*y=y$ whenever $y\in X$. Let $\mathrm{Li}(X)$ denote the set of all left-identities of $(X,*)$. We say that an LD-system $(X,*)$ is Laver-like if
For example, the classical Laver tables $A_{n}$ are always Laver-like.
If $(X,*)$ is an LD-system, then define the Fibonacci terms $t_{n}$ for $n\geq 1$ by letting $t_{1}(x,y)=y,t_{2}(x,y)=x$ and $t_{n+2}(x,y)=t_{n+1}(x,y)*t_{n}(x,y)$. If $(X,*)$ is a Laver-like LD-system, then define an operation $\circ$ on $X\setminus\mathrm{Li}(X)$ by letting $x\circ y=t_{n+1}(x,y)$ whenever $t_{n}(x,y)\in\mathrm{Li}(X)$ (the operation $\circ$ does not depend on the choice of $n$). The operation $\circ$ is associative and hence suitable for the Ko-Lee key exchange. The operation $\circ$ also satisfies the identity $(x\circ y)*z=x*(y*z)$.
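To make this concrete, here is a small sketch (mine, with $A_{2}$ hard-coded; in $A_{n}$ the unique left-identity is $2^{n}$) that computes $x\circ y$ by iterating the Fibonacci terms until one of them lands in $\mathrm{Li}(X)$:

```javascript
// A2[x][y] = x *_2 y for x, y in {1,2,3,4}; the element 4 is the left-identity.
const A2 = [null, [null, 2, 4, 2, 4], [null, 3, 4, 3, 4], [null, 4, 4, 4, 4], [null, 1, 2, 3, 4]];

// x ∘ y = t_{k+1}(x, y) for the first k with t_k(x, y) = 4; defined for x, y ≠ 4.
function circ(x, y) {
  let a = y, b = x;           // the pair (t_k, t_{k+1}), starting at k = 1
  while (a !== 4) {
    [a, b] = [b, A2[b][a]];   // advance: t_{k+2} = t_{k+1} * t_k
  }
  return b;
}
// e.g. circ(1, 1) === 3: the terms run 1, 1, 2, 3, 4, and then t_6 = 4 * 3 = 3.
```

One can brute-force check on $A_{2}$ that `circ` is associative on $\{1,2,3\}$ and satisfies $(x\circ y)*z=x*(y*z)$.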
A partially endomorphic algebra $(X,E,F)$ is said to be Laver-like if the hull $\Gamma(X,E)$ is Laver-like. Any Laver-like algebra may be used to induce a partially endomorphic Laver table, but for simplicity, we shall only define the functional endomorphic Laver tables induced by the endomorphic algebras with only one fundamental operation.
Suppose now that $(X,t^{\bullet})$ is an $(n+1)$-ary Laver-like algebra. Let $\Diamond(X,t^{\bullet})$ be the algebra with underlying set consisting of all functions $\mathfrak{l}:\{1,\dots,n\}^{*}\rightarrow X\cup\{\#\}$ such that
If $\mathfrak{l}_{1},\dots,\mathfrak{l}_{n}\in\Diamond(X,t^{\bullet})$, $(\mathfrak{l}_{1}(\varepsilon),\dots,\mathfrak{l}_{n}(\varepsilon))\not\in\mathrm{Li}(\Gamma(X,t^{\bullet}))$, and $x\in X$, then define $t_{x}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n})\in\Diamond(X,t^{\bullet})$ by $t_{x}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n})(\varepsilon)=t^{\bullet}(\mathfrak{l}_{1}(\varepsilon),\dots,\mathfrak{l}_{n}(\varepsilon),x)$ and $t_{x}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n})(\mathbf{x}i)=\mathfrak{l}_{i}(\mathbf{x})$.
Now define an operation $t^{\sharp}:\Diamond(X,t^{\bullet})^{n+1}\rightarrow\Diamond(X,t^{\bullet})$ by letting
$$t^{\sharp}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n},t_{x}(\mathfrak{u}_{1},\dots,\mathfrak{u}_{n}))$$
$$=t^{\sharp}(t^{\sharp}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n},\mathfrak{u}_{1}),\dots,t^{\sharp}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n},\mathfrak{u}_{n}),t_{x}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n})).$$
Then the algebra $(\Diamond(X,t^{\bullet}),t^{\sharp})$ is an $(n+1)$-ary self-distributive algebra which we shall refer to as a functional endomorphic Laver table. Every functional endomorphic Laver table is a Laver-like endomorphic algebra. If one has an efficient algorithm for computing $\mathfrak{l}_{1},\dots,\mathfrak{l}_{n},\mathfrak{l}$, then one also has a reasonably efficient algorithm for computing $t^{\sharp}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n},\mathfrak{l})$.
The Ko-Lee key exchange for endomorphic Laver tables
One could apply the Ko-Lee key exchange to the operation $\circ$ in $\Gamma(\Diamond(X,t^{\bullet}),t^{\sharp})$. However, since we are only able to compute $t^{\sharp}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n},\mathfrak{l})(\mathbf{x})$ instead of the entire function $t^{\sharp}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n},\mathfrak{l})$, we will need to modify the Ko-Lee key exchange so that the endomorphic Laver tables can be used as a platform for this cryptosystem. The self-distributive Anshel-Anshel-Goldfeld key exchange can be modified in the same way.
Let
$$(\mathfrak{w}_{1},\dots,\mathfrak{w}_{n})=(\mathfrak{u}_{1},\dots,\mathfrak{u}_{n})\circ(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n})\circ(\mathfrak{v}_{1},\dots,\mathfrak{v}_{n}).$$
Let $i\in\{1,\dots,n\},\mathbf{x}\in\{1,\dots,n\}^{*}$, and let $K=\mathfrak{w}_{i}(\mathbf{x})$.
Remark: The common key $K$ may have only a couple of bits of entropy, so an eavesdropping party could find $K$ by guessing the most common possibilities. The most obvious way to mitigate this problem is to perform the above key exchange multiple times in order to obtain a large enough key. Another, probably more efficient, way to mitigate this problem is to use $(\mathfrak{w}_{i}(\mathbf{x}_{0}),\dots,\mathfrak{w}_{i}(\mathbf{x}_{r}))$ as the common key instead, where $(\mathbf{x}_{0},\dots,\mathbf{x}_{r})$ is an enumeration of the suffixes of $\mathbf{x}$.
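The suffix-based variant is easy to state in code. In this sketch (my illustration; `wi` is a placeholder for the function $\mathbf{y}\mapsto\mathfrak{w}_{i}(\mathbf{y})$ that each party can already compute), the expanded key collects $\mathfrak{w}_{i}$ evaluated on every suffix of $\mathbf{x}$:

```javascript
// Sketch: expand the common key by evaluating wi on every suffix of the string xs,
// from xs itself down to the empty string. `wi` is a stand-in for the map 𝐲 ↦ 𝔴_i(𝐲).
function expandedKey(wi, xs) {
  const key = [];
  for (let j = 0; j <= xs.length; j++) key.push(wi(xs.slice(j)));
  return key;
}
```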
To be continued
In the second post, I will give a detailed analysis of the Ko-Lee key exchange for endomorphic Laver tables. At the same time, I will release an online JavaScript program for analyzing the Ko-Lee key exchange for endomorphic Laver tables.
Since we are able to compute the endomorphic Laver tables, it is plausible that the endomorphic Laver tables could be used as platforms for cryptosystems which are secure against classical attacks and even quantum attacks, but much more research still needs to be done on this topic.
The definition of Laver-like partially endomorphic algebras and partially endomorphic Laver tables
The definition of the partially endomorphic Laver tables is somewhat complicated. For simplicity, we shall only define the well-founded partially endomorphic Laver tables induced by known Laver-like partially endomorphic algebras.
Suppose that $X$ is a set and $E,F$ are sets of operations on $X$. Suppose that each $\mathfrak{g}\in F$ has arity $n_{\mathfrak{g}}$ and each $f\in E$ has arity $n_{f}+1$. If $f\in E$ and $a_{1},\dots,a_{n_{f}}\in X$, then let $L_{f,a_{1},\dots,a_{n_{f}}}:X\rightarrow X$ be the mapping defined by $L_{f,a_{1},\dots,a_{n_{f}}}(x)=f(a_{1},\dots,a_{n_{f}},x)$. Then we say that $(X,E,F)$ is a partially endomorphic algebra if each mapping $L_{f,a_{1},\dots,a_{n_{f}}}$ is an endomorphism of $(X,E,F)$. In other words, the partially endomorphic algebras are precisely the algebras where one has a notion of an inner endomorphism. If $F=\emptyset$ and $(X,E,F)$ is a partially endomorphic algebra, then we say that $(X,E)$ is an endomorphic algebra. We say that an algebra $(X,*)$ is an LD-system (or left-distributive algebra) if $(X,*)$ satisfies the identity $x*(y*z)=(x*y)*(x*z)$.
Suppose that $(X,*)$ is a left-distributive algebra. Then we say that a subset $L\subseteq X$ is a left-ideal if $x*y\in L$ whenever $y\in L$. We say that an element $x\in X$ is a left-identity if $x*y=y$ for all $y\in X$. Let $\mathrm{Li}(X)$ denote the set of all left-identities in $X$. A left-distributive algebra $(X,*)$ is said to be Laver-like if $\mathrm{Li}(X)$ is a left-ideal and whenever $x_{n}\in X$ for all $n\in\omega$ there is some $N$ where $x_{0}*\dots*x_{N}\in\mathrm{Li}(X)$.
If $(X,*)$ is Laver-like, then define $\preceq$ to be the smallest partial ordering on $X$ where $x\preceq x*y$ whenever $x\in X\setminus\mathrm{Li}(X)$. Then $(X,\preceq)$ is a dual well-founded partially ordered set, and the dual well-foundedness of $(X,\preceq)$ is necessary for all inductive proofs involving the generalizations of Laver tables.
If $(X,E)$ is an endomorphic algebra, then define the hull $\Gamma(X,E)$ of $(X,E)$ to be the algebra with underlying set
$\bigcup_{f\in E}\{f\}\times X^{n_{f}}$ and binary operation $*$ defined by
$$(f,x_{1},\dots,x_{n_{f}})*(g,y_{1},\dots,y_{n_{g}})$$
$$=(g,f(x_{1},\dots,x_{n_{f}},y_{1}),\dots,f(x_{1},\dots,x_{n_{f}},y_{n_{g}})).$$ Then the hull $\Gamma(X,E)$ of an endomorphic algebra is always a left-distributive algebra. We say that a partially endomorphic algebra $(X,E,F)$ is Laver-like if $\Gamma(X,E)$ is Laver-like.
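The hull construction can be sanity-checked on a small example. The sketch below (my illustration) takes the single fundamental operation $f(x_{1},x_{2},x)=(x_{1}*x_{2})*x$ on $A_{2}$ (hard-coded), which makes $(A_{2},\{f\})$ an endomorphic algebra, and builds the hull operation from the display above; brute-forcing all $16^{3}$ triples of hull elements then confirms left-distributivity, as the theorem asserts.

```javascript
// A2[x][y] = x *_2 y; the classical Laver table A_2, hard-coded.
const A2 = [null, [null, 2, 4, 2, 4], [null, 3, 4, 3, 4], [null, 4, 4, 4, 4], [null, 1, 2, 3, 4]];
const star = (x, y) => A2[x][y];

// One fundamental operation f of arity n_f + 1 = 3:
const f = (x1, x2, x) => star(star(x1, x2), x);

// Hull elements are tuples (f, x1, x2); with a single f we can drop the symbol
// component and represent them as [x1, x2]. The hull operation from the display:
// (f,x1,x2) * (f,y1,y2) = (f, f(x1,x2,y1), f(x1,x2,y2)).
const hullStar = ([x1, x2], [y1, y2]) => [f(x1, x2, y1), f(x1, x2, y2)];
```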
Suppose that $\mathcal{V}=(V,(f^{\mathcal{V}})_{f\in E},(\mathfrak{g}^{\mathcal{V}})_{\mathfrak{g}\in F})$ is a Laver-like partially endomorphic algebra. Suppose furthermore that $X$ is a set of variables and $v_{x}\in V$ for all $x\in X$. For each $x\in X$ and $f\in E$, let $f_{x}$ be an $n_{f}$-ary function symbol. Let $\mathcal{G}=\{f_{x}\mid f\in E,x\in X\}\cup F$. Let $\mathbf{T}_{\mathcal{G}}[X]$ be the set of all terms in the language $\mathcal{G}$ with variables in $X$. Let $L$ be the subset of $\mathbf{T}_{\mathcal{G}}[X]$ and $\phi:L\rightarrow V$ the mapping defined by induction on the complexity of the term $\ell\in\mathbf{T}_{\mathcal{G}}[X]$ according to the following rules:
For each $f\in E$, define an operation $f^{\sharp}$ on $L$ by $f^{\sharp}(\ell_{1},\dots,\ell_{n},\ell)=\ell$ whenever $f_{x}(\ell_{1},\dots,\ell_{n})\in L$, and if $f_{x}(\ell_{1},\dots,\ell_{n})\not\in L$ then
Then we shall write $\mathbf{L}((v_{x})_{x\in X},\mathcal{V})$ for the algebra $(L,(f^{\sharp})_{f\in E},F)$. The algebra $(L,(f^{\sharp})_{f\in E},F)$ is a Laver-like partially endomorphic algebra.
Endomorphic Laver tables are usually infinite
The classical Laver tables and multigenic Laver tables are always locally finite. Furthermore, every Laver-like LD-system is locally finite. In particular, the quotient algebra of elementary embeddings $\mathcal{E}_{\lambda}/\equiv^{\gamma}$ is always locally finite. On the other hand, the partially endomorphic Laver tables with at least one fundamental operation of arity at least 3 that we have looked at are all infinite. Since these endomorphic Laver tables are all infinite, it is unclear how they could arise naturally from the algebras of rank-into-rank embeddings.
Endomorphic Laver tables are abundant
There are many ways to construct $N$-ary operations $t:A_{n}^{N}\rightarrow A_{n}$ with $x*t(x_{1},\dots,x_{N})=t(x*x_{1},\dots,x*x_{N})$. If $t$ satisfies the identity $x*t(x_{1},\dots,x_{N})=t(x*x_{1},\dots,x*x_{N})$, then define a mapping $T:A_{n}^{N+1}\rightarrow A_{n}$ by letting $T(x_{1},\dots,x_{N},x)=t(x_{1},\dots,x_{N})*x$. Then $(A_{n},T)$ is a Laver-like $(N+1)$-ary algebra.
Every term $t$ in the language of LD-systems satisfies the identity $x*t(x_{1},\dots,x_{N})=t(x*x_{1},\dots,x*x_{N})$.
Let $(x)_{r}$ be the unique natural number such that $1\leq(x)_{r}\leq r$ and $(x)_{r}\equiv x\pmod{r}$. For $n\geq m$, define a mapping $\pi_{n,m}:A_{n}\rightarrow A_{m}$ by $\pi_{n,m}(x)=(x)_{2^{m}}$. If $x,y\in A_{n}$, then say that $x\leq^{L}y$ if $x=y$, or if $\pi_{n,m}(x)<\pi_{n,m}(y)$ where $m$ is the least natural number with $\pi_{n,m}(x)\neq\pi_{n,m}(y)$. Then $\leq^{L}$ is a linear ordering with $y\leq^{L}z\rightarrow x*y\leq^{L}x*z$ whenever $x,y,z\in A_{n}$. Define an operation $T_{k,r}:A_{n}^{r}\rightarrow A_{n}$ by letting $T_{k,r}(x_{1},\dots,x_{r})=x_{\sigma(k)}$ where $\sigma$ is a permutation of $\{1,\dots,r\}$ with $x_{\sigma(1)}\leq^{L}\dots\leq^{L}x_{\sigma(r)}$. Then $A_{n}$ satisfies the identity $x*T_{k,r}(x_{1},\dots,x_{r})=T_{k,r}(x*x_{1},\dots,x*x_{r})$ as well. There are more complex techniques for constructing mappings $t:A_{n}^{N}\rightarrow A_{n}$ that satisfy the identity $x*t(x_{1},\dots,x_{N})=t(x*x_{1},\dots,x*x_{N})$.
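Here is a sketch (mine, with $A_{2}$ hard-coded) of $\leq^{L}$ and $T_{k,r}$; on $A_{2}$ the ordering works out to $1\leq^{L}3\leq^{L}2\leq^{L}4$, and both the monotonicity of the rows and the identity $x*T_{k,r}(x_{1},\dots,x_{r})=T_{k,r}(x*x_{1},\dots,x*x_{r})$ can be brute-force checked:

```javascript
// A2[x][y] = x *_2 y; the classical Laver table A_2, hard-coded; here n = 2.
const A2 = [null, [null, 2, 4, 2, 4], [null, 3, 4, 3, 4], [null, 4, 4, 4, 4], [null, 1, 2, 3, 4]];
const n = 2;

// pi(x, m) is the representative of x in {1,…,2^m}, i.e. (x)_{2^m}.
const pi = (x, m) => ((x - 1) % (2 ** m)) + 1;

// x ≤^L y: compare pi(x, m) and pi(y, m) at the least m where they differ.
function leqL(x, y) {
  if (x === y) return true;
  for (let m = 1; m <= n; m++)
    if (pi(x, m) !== pi(y, m)) return pi(x, m) < pi(y, m);
  return false; // unreachable for distinct x, y in {1,…,2^n}
}

// T_{k,r}(x_1,…,x_r): the k-th smallest argument with respect to ≤^L.
const T = (k, xs) => [...xs].sort((a, b) => (leqL(a, b) ? -1 : 1))[k - 1];
```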
At least exponential growth in output length
Let $e$ be a variable, and let $\ell_{1}=e$. Let $\mathcal{V}=(A_{n},t^{\mathcal{V}})$ where $t^{\mathcal{V}}$ is the operation defined by $t^{\mathcal{V}}(x,y,z)=y*z$. Let $L=\mathbf{L}((1)_{x\in\{e\}},\mathcal{V})$. For $1\leq i<2^{n}$, let $\ell_{i+1}=t(\ell_{i},\ell_{i})$. Then the term $\ell_{i}$ has $2^{i-1}$ instances of the variable $e$, and by induction one can show that in $L$ we have $t^{\sharp}(\ell_{r},\ell_{r},\ell_{s})=\ell_{r*_{n}s}$. Such large output is typical for endomorphic Laver table operations; the output of an endomorphic Laver table operation is therefore often too large to be stored completely by a computer. Some computer experiments suggest that the output size of the endomorphic Laver table operations may be super-exponential.
The trick to computing endomorphic Laver tables
In one sense, the endomorphic Laver table operations are not efficiently computable since their output grows at least exponentially, but we are able to find a sense in which they are efficiently computable. The first idea for computing endomorphic Laver tables is not to compute the entire term $t^{\sharp}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n},\mathfrak{l})$ but to recursively compute just a piece of information from it, or an approximation of it. We shall see that these approximations of the term $t^{\sharp}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n},\mathfrak{l})$ are analogous to how floating-point numbers approximate the real numbers. The second idea is to use the original endomorphic Laver-like algebra to extract the necessary information about the terms.
For simplicity of notation, assume that $(X,t^{\bullet})$ is an $(n+1)$-ary Laver-like algebra.
Let $\Diamond(X,t^{\bullet})$ be the set of all functions $\mathfrak{l}:\{1,\dots,n\}^{*}\rightarrow X\cup\{\#\}$ that satisfy the following conditions:
Now if $(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n})\in\Diamond(X,t^{\bullet})^{n}$ with $(\mathfrak{l}_{1}(\varepsilon),\dots,\mathfrak{l}_{n}(\varepsilon))\not\in\mathrm{Li}(\Gamma(X,t^{\bullet}))$, and $x\in X$, then define $t_{x}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n}):\{1,\dots,n\}^{*}\rightarrow X\cup\{\#\}$ by letting $t_{x}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n})(\varepsilon)=t^{\bullet}(\mathfrak{l}_{1}(\varepsilon),\dots,\mathfrak{l}_{n}(\varepsilon),x)$ and $t_{x}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n})(\mathbf{x}i)=\mathfrak{l}_{i}(\mathbf{x})$ whenever $\mathbf{x}\in\{1,\dots,n\}^{*}$ and $1\leq i\leq n$.
Then there is a unique operation $t^{\sharp}:\Diamond(X,t^{\bullet})^{n+1}\rightarrow\Diamond(X,t^{\bullet})$ such that
Then the algebra $\Diamond(X,t^{\bullet})$ is isomorphic to a quotient of the endomorphic Laver table $\mathbf{L}((x)_{x\in X},(X,t^{\bullet}))$ by a very small congruence. Furthermore, it is easy to embed any desired endomorphic Laver table into an algebra of the form $\Diamond(X,t^{\bullet})$. Instead of computing the entire output $t^{\sharp}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n},\mathfrak{l})$, one could just compute $t^{\sharp}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n},\mathfrak{l})(\mathbf{x})$ for some string $\mathbf{x}$.
Endomorphic Laver tables can be computed quickly
Suppose that $(X,t^{\bullet})$ is an $(r+1)$-ary Laver-like algebra. In our experiments with JavaScript in a web browser, we have concluded that if $f(x_{1},\dots,x_{r})$ is a term whose only function symbol is $t^{\sharp}$, then in $\Diamond(X,t^{\bullet})$ the value of $f(\mathfrak{l}_{1},\dots,\mathfrak{l}_{r})(\mathbf{x})$ is usually computable in a few seconds as long as $|\mathbf{x}|<500$ or so. When $\mathbf{x}$ gets too long, I usually run out of memory or the computation fails with recursion-depth errors.
A limitless source of combinatorial complexity
While the classical and multigenic Laver tables can collectively be thought of as an unlimited source of combinatorial complexity, each individual classical or multigenic Laver table is locally finite and hence contains only a limited amount of combinatorial complexity. Each endomorphic Laver table, however, seems to contain an unlimited amount of combinatorial complexity, and there does not appear to be a formula for computing $t^{\sharp}(\mathfrak{l}_{1},\dots,\mathfrak{l}_{n},\mathfrak{l})(\mathbf{x})$ even for endomorphic Laver tables originally obtained from $A_{5}$. The output of an endomorphic Laver table operation is usually unpredictable, yet there is some order amidst the complexity: the outputs seem to exhibit a periodicity similar to that found in the classical and multigenic Laver tables.
Possible applications to public key cryptography
The endomorphic Laver tables have the following attributes, which are necessary for public key cryptography.
I will soon put up a post about the various cryptosystems based on endomorphic Laver tables, along with a preliminary analysis of one of them. It is currently unclear whether such cryptosystems are secure, and much more investigation of endomorphic Laver tables is necessary before I can gain any confidence in their security.
In 1970 G. R. MacLane asked if it is possible for a locally univalent function in the class $\mathcal{A}$ to have an arc tract, and this question remains open despite several partial results. Here we significantly strengthen these results by introducing new techniques associated with the Eremenko-Lyubich class for the disc. Also, we adapt a recent powerful technique of C. J. Bishop in order to show that there is a function in the Eremenko-Lyubich class for the disc that is not in the class $\mathcal{A}$.
Joint work with Ari Meir Brodsky.
Abstract. Ben-David and Shelah proved that if $\lambda$ is a singular strong-limit cardinal and $2^\lambda=\lambda^+$, then $\square^*_\lambda$ entails the existence of a $\lambda$-distributive $\lambda^+$-Aronszajn tree. Here, it is proved that the same conclusion remains valid after replacing the hypothesis $\square^*_\lambda$ by $\square(\lambda^+,{<\lambda})$.
As $\square(\lambda^+,{<\lambda})$ does not impose a bound on the order-type of the witnessing clubs, our construction is necessarily different from that of Ben-David and Shelah, and instead uses walks on ordinals augmented with club guessing.
A major component of this work is the study of postprocessing functions and their effect on square sequences. A byproduct of this study is the finding that for $\kappa$ regular uncountable, $\square(\kappa)$ entails the existence of a partition of $\kappa$ into $\kappa$ many fat sets. When contrasted with a classic model of Magidor, this shows that it is equiconsistent with the existence of a weakly compact cardinal that $\omega_2$ cannot be split into two fat sets.
Downloads:
A surprisingly large number of open-source software (OSS) projects are run by volunteers. And I don’t mean that “hello world” code you pushed to GitHub (which probably makes up 99% of all OSS repositories), I mean the many successful open-source projects that provide the fertile soil other (small and large) software projects are built on.
In other words, the majority of OSS is run by people privileged enough to spend hours on end to produce something that they then give away for free. Whether or not OSS developers do it out of conviction, it’s often a problem when people end up using privilege-based OSS without realizing it.
The most obvious problem is that privilege-based OSS can essentially go away at any moment. You don’t have to look to extreme cases (left-pad, anyone?) to see this happen; projects simply slowly die. You might praise OSS for the fact that anyone can pick up the code and fork it if need be, but in reality dead, privilege-based OSS is more like an unfinished construction site; it’s easier to start from scratch, and thus the cycle repeats.
However, this is so obvious, it’s not really a problem, I think. In any case it’s not what I mean.
There’s a lot to be said in favor of developing OSS out of conviction. It frequently helps people and adds diversity to the ecosystem. The trouble is that privilege-based OSS can be highly toxic.
One toxic variant is “Silicon Valley style OSS”, where developers do not act out of conviction but more out of necessity to get ahead in a questionable job market (the “GitHub is your resume” kind of thing). If your hipster company hires people only due to their volunteer OSS credentials, then you are effectively hiring them by their privilege, creating a toxic environment and reducing diversity.
Conversely, there’s the toxicity of people who rely on OSS software while being unwilling to contribute to its development, because privileged people make it work. Just the other day I was talking with a potential client who described how they use pandoc in production. If you do this at scale, then you’re basing the integrity of your production workflow on how much John MacFarlane could procrastinate over the years.
For OSS developers, this can turn into a toxic reality because users often think they deserve access to the developer’s privilege. That is, they can become highly aggressive when they find a bug in the OSS software they’re using, especially when it impacts them. This gets extreme when we’re talking about companies and the use of privilege-based OSS in production. Company employees quickly try to exert pressure on OSS projects to fix things – yet refuse to contribute to development in any way or even acknowledge the work that went into a piece of software that they themselves chose to build upon.
Obviously, there are other ways of doing OSS software development. There’s transparency-driven OSS (e.g., security-related tools, browsers), there’s shared-burden OSS (e.g., joining forces to lower costs), there’s donation-based, crowdsourced, and bounty-driven OSS and many others – Nadia Eghbal lists a few in her lemonade-stand on GitHub. Also ask about governance models.
Long story short, if you’re using opensource software, especially in a professional context, make sure to check what model it’s based on. Also, don’t be toxic.
These thoughts were far from original.
This is where I have an issue with the "hire people for their side projects" mentality.
— Stewart ScottCurran (@stewartsc) May 25 2016
Wider scope
Overall, the mathematical community does not value open source mathematical software in proportion to its value, and doesn’t understand its importance to mathematical research and education. I would like to say that things have got a lot better over the last decade, but I don’t think they have. My personal experience is that much of the “next generation” of mathematicians who would have changed how the math community approaches open source software are now in industry, or soon will be, and hence they have no impact on academic mathematical culture. Every one of my Ph.D. students are now at Google/Facebook/etc.
Organisations in “the open space” are often community driven. Groups come together to solve a problem, and in a few cases they succeed. Most fail, and most fail pretty early. Those that survive the initial phase often experience massive growth, sometimes beyond the wildest dreams of those who started them. This brings some challenges. Sustainability is a big one: too many of these organisations lurch from grant to grant, depending on the largesse of philanthropists or government funders. Most of these eventually fail or stagnate. Some negotiate this transition by turning private and obtaining VC or Angel funds. Eventually most of these are sold off to incumbent players, and gradually lose the central thread of openness and just become part of the service background in their space. Nothing wrong with that but they’re no longer really part of the open community at the end of this process. But some organisations succeed and find a model: donations, memberships, advertising, fee for service have all been successful in different spaces. These can grow to be sizeable companies, ones that need professional staff and business discipline to manage complex operations, significant infrastructures, and substantial financial flows and reporting. No multi-million-dollar-a-year organisation is going to run for very long on volunteer labour, at least not where those volunteers need to work for a living. Passion can also be a problem, as well as being a driver. Without that passion and without that community nothing gets done. Indeed without the passion many not-for-profit organisations wouldn’t be able to attract staff at the rates that they can reasonably pay. The community is a core asset.
Still, there’s now a small but clear core within the CG together with a useful group of “lurkers”. I think this year we’re entering the productive stage for this community group.
The dominant interest of the core group (i.e., the people actually doing work) is accessibility. What surprised me somewhat was that the core group seems to be in agreement that MathML is not suitable for accessibility, not just because it is effectively deprecated on the web but also because of its inherent limitations. (If you care for nuance and read on, this doesn’t mean MathML isn’t a decent intermediary for creating accessible web content.)
My own focus has been on “deep labels” which will now tie nicely into our work at MathJax for our recent grant from the Simons Foundation. The idea is quite simple, really.
Thus I’ve been building and testing demos that work with what we’ve got – HTML and SVG enriched via ARIA.
While I’m currently building manual prototypes, obviously one eye is on our work on the speech-rule-engine, i.e., keeping automation of the process in mind. Similarly, I’ve been trying to think about potential improvements to standards that might give us much larger improvements / simplifications (but that’s for another post).
At the same time, while automated analysis of content will only improve, I think manual overrides will continue to be critical, whether it’s to fix a poor result from the heuristics or to customize content (e.g., to match author preferences).
Obviously, I didn’t want to enrich the output but the input. Given that these demos work with MathJax, the natural starting point is MathML (since that’s MathJax’s internal format). But MathML isn’t really special or better than any other format; whatever input format your favorite tool uses, the same methods should be applicable (though some things will undoubtedly be harder/easier to do in other formats).
MathML in itself lacks the means to provide meaningful information to the accessibility tree; at most, it can present (pretty vague) layout information, combined with some misleading information on semantics (e.g., thinking that <mfrac> always indicates some kind of fraction). But MathML has the benefit of being XML so we can easily add ARIA attributes without running into practical issues.
Here’s a very simple but typical example: a common notation for the derivative of a function is a dot above it. In MathML, this is usually realized as an <mover>.
<math>
<mover>
<mi>x</mi>
<mo>˙<!-- ˙ --></mo>
</mover>
</math>
You might be tempted to think that the “real” solution would be some kind of semantic markup (e.g., using <diff>) but in the real world, the content is what it is and you want to enhance it.
Now even the simplest MathML accessibility tool should have the sense to voice the Unicode content (“x, dot above”) but it might also try to convey the layout information of an mover (“x with dot over it”). But it shouldn’t try anything beyond that because the markup does not provide more information than that. In reality, those few tools with decent heuristics will easily cause issues, e.g., any superscripted 2 is read as “squared”.
Unfortunately, a dot above can mean other things besides “derivative of”, depending on the context and content – if you ever run into a dot above an equal sign or a digit you’ll probably guess that the dot does not represent the concept of a derivative of (then again someone probably used it that way so have fun figuring that one out).
So it’s a mess.
Let’s use what ARIA has given us to make it less of a mess: a simple and efficient means of providing meaningful textual alternatives for visual presentation:
<math>
<mover aria-label="derivative of x">
<mi>x</mi>
<mo>˙<!-- ˙ --></mo>
</mover>
</math>
This is obviously a very simple example. The most immediate questions are probably:
I believe the answer to both is yes.
The main demo I built is work in progress. It is available on Codepen and I recently started versioning it as a gist.
The demo covers several examples that hopefully already cover many common situations and I’ll continue to work on them.
A lot of tweaking happened once I started to test this in screenreaders in earnest.
One of the first problems I ran into is what James Teh described in Woe-ARIA: it’s not always clear what AT should expose when we muck about by aria-labeling things like this.
Inevitably, I also needed a common accessibility hack: “offscreen” rendering of content. As a simple but extremely important example, you need this when facing the fact that in MathML’s <mfrac> the fraction bar is only implicit and thus lacks a node we could attach a label to (arguably the biggest WTF collision between traditional math rendering aka print and web markup).
I currently favor a somewhat convoluted solution:
<mrow aria-label="screenreader only"><mpadded width="1em"><mphantom><mtext>M</mtext></mphantom></mpadded></mrow>
The main advantage is backward compatibility and reusability because this should render in any MathML renderer without (many) side-effects. It also (in part) gets us around the “ARIA woe”, or the fact that an empty <span> with aria-label should be ignored.
So far I’ve tested NVDA, JAWS, VoiceOver, Orca, and ChromeVox in several browsers. Some recordings are already available in a dedicated playlist on MathJax’s YouTube channel. Since I didn’t want to add commentary, they are a bit difficult to follow, so the summary below should be helpful.
aria-labels completely
OS X El Capitan
Orca 3.20, Ubuntu 16.10
JAWS 17, Windows 7
ChromeVox v53
As you can see, the results are mixed. For each combination of AT + browser + OS, there’s some combination that works roughly as expected, but that’s about it. SVG seems a clear winner despite VO’s reluctance; I need to explore title/desc a bit further (which have different support levels).
Still, I think the situation is already better than what MathML can give you today, in particular because the few significant issues are nothing particular to MathML or math; they’re just annoying SVG or HTML accessibility issues, many of which can be easily fixed (as opposed to implementing good math support based on MathML). The fact that MathML accessibility tools fail to support aria-labels is not surprising, of course, and a typical example of how MathML support (as little as it is) continues to fall further and further behind HTML and SVG. And that’s a good thing.
Now some might see this “fixed” enrichment as a step back compared to MathJax’s Accessibility Extensions (which use the speech-rule-engine on the client) because the extensions can provide numerous speech rules and verbosity settings as well as summary information. I would disagree. I’ve never been a fan of varying speech rules (just like I wouldn’t be a fan of AT rearranging a sentence). Also, speech rules mostly differ by newer ones being more refined than older ones.
Verbosity is simply a general accessibility problem, and it should be dealt with in generality (as it already is, e.g., for punctuation). Summary information is a great problem, but really a limitation of current web technology and something that’s just as needed for infographics or data visualization as it is for mathematics. We do not need isolated solutions here either.
Simple: more testing.
On the one hand, testing more AT combinations and evaluating other approaches. On the other hand, creating more and complex samples.
Others on the MathOnWeb CG have tried different approaches and so we will also work on getting feedback from the accessibility community in general, in particular figuring out how improved standards might help us.
For me personally, the goal is to develop a strategy for next year’s work at MathJax, where we want the speech-rule-engine to add deep labels directly. I think that would solve the last major piece of the puzzle for math on the web in its current form. Then we can finally leave the legacy approaches with isolated standards and tools behind to focus on moving the web forward as a whole.
I gave a plenary talk at the 2017 ASL North American Meeting in Boise, March 2017.
Talk Title: The current state of the Souslin problem.
Abstract: Recall that the real line is that unique separable, dense linear ordering with no endpoints in which every bounded set has a least upper bound.
A problem posed by Mikhail Souslin in 1920 asks whether the term separable in the above characterization may be weakened to ccc. (A linear order is said to be separable if it has a countable dense subset. It is said to be ccc if every pairwise-disjoint family of open intervals is countable.)
Amazingly enough, the resolution of this single problem led to key discoveries in Set Theory: the notions of Aronszajn, Souslin and Kurepa trees, forcing axioms and the method of iterated forcing, Jensen’s diamond and square principles, and the theory of iteration without adding reals.
A negative answer to the Souslin problem is equivalent to the existence of a Souslin tree, a particular kind of partial order of size $\aleph_1$.
A generalization of this problem to the level of $\aleph_2$ was identified in the early 1970s and has been open ever since. In the last couple of years, considerable progress has been made on the generalized Souslin problem and its relatives. In this talk, I shall describe the current state of this research.
Downloads:
From the mid-1990s to about 2012, no results were published on Laver tables or the quotient algebras of elementary embeddings. Nevertheless, set theorists have considered the algebras of elementary embeddings important enough to devote Chapter 11 of the 24-chapter Handbook of Set Theory to them.
Since I am the only one researching generalizations of Laver tables, and only 3 other people have published on Laver tables since the 1990s, I have attempted to make the paper self-contained and readable for a general mathematician.
Any comments or criticism about the paper, either by email or on this post, would be appreciated, including reports of typos and other errors.
Let me now summarize some of the results from the paper.
I was reminded of this old note yesterday. This snippet goes back to JMM 2016 when I had coffee with Izabella Łaba. Of course, Izabella is one of my favorite bloggers (starting all the way back when procrastination made us launch mathblogging.org – shout out to Felix, Fred, and Sam!) but she is also a kickass researcher who amongst the many great things she does happens to sit on the editorial board of the (then newly fandangled) arXiv overlay journal Discrete Analysis otherwise known as “that Tim Gowers journal thing”.
Discrete Analysis is probably the most relevant arXiv overlay journal in mathematics (ok, I admit I didn’t search around much for other noteworthy ones) and the gut reaction when it comes to arXiv overlay journals (and Discrete Analysis in particular) seems to be: “What if it fails?”. But like jumping in the Matrix, failure really wouldn’t mean anything.
Instead, I’ve been wondering more about “What if it succeeds?”. Of course that’s because I expect it to succeed, but either way I don’t think people think a lot about that. Arguably, I’m not awfully qualified, but then again anyone can go through Kent Anderson’s list of 96 things Publishers Do. Most of these, I’m guessing, you don’t care about as an arXiv overlay journal, so perhaps Cameron Neylon’s shorter list is more on point. Ultimately, I think, it is simple: what does a journal need to succeed? High-quality papers.
Quality comes in many forms but basically there are two areas: scientific quality and production quality. Scientific quality includes, at least, attracting papers the community will approve of, attracting authors that impress the community, and an editorial board that can spot the former and attract the latter. Of course, those are not at all separate but papers make journals influential, journals make authors influential etc pp. (And no, merit does not come into play, don’t be silly.) I can’t really judge it (not being a research mathematician anymore, let alone a discrete analysis person) but the editorial board looks to be full of influential, high-profile people and the first paper was Terry Tao’s solution to Erdős’s discrepancy problem; so it seems likely that part will work.
Production quality includes, at least, typography, copyediting, archiving, and marketing. Discrete Analysis can probably make that work as well as they care to because, as Gowers pointed out, they expect they won’t have to. That might seem arrogant to anyone with even a bit of knowledge from the trenches of academic publishing, but I think they’re probably right in expecting they won’t have to. I admit that is in part speculation, but I would expect that a high-profile math journal can expect both their authors to have spent more time on their manuscript (more pre-submission review from peers, more iterations from the authors themselves as the result is “big”, etc.) and their editors to work harder (they actually give a damn about the paper they read because the result is interesting, they themselves have higher expectations and thus provide more detailed reports, they simply have more experience and relevant skills, etc.). And marketing, well, it’s that Gowers journal thing, remember?
So this all looks great. Got the goods, can compete.
Except there are a few things that I think are terrible flaws; in no way fatal flaws (quite possibly the opposite) but ones with negative side effects that worry me.
To start with, overlay journals do the silly extreme libertarian thing of pretending the infrastructure they use doesn’t cost anything. Even if the costs of the current technology might be very small, overlay journals will have to stick to the cheapest available tech, ignoring (let alone helping) the transformation of scientific communication.
A more important problem is: can this scale? I don’t think it can (not much anyway). Research quality obviously doesn’t scale well – if everyone is a top journal, nobody is. Regarding production quality in “lesser” journals, I don’t think authors will invest much in their manuscripts and reviewers will be less likely to have the skills or invest extra time. It still might work if journals started to rely on a more iterative process where post-publication feedback leads to revisions. (I mean, traditionally published journal articles can be awful piles of unedited crap, why expect more from an overlay journal, amiright?) But on the one hand, the community would have to accept that, i.e., it would require a much more significant change in scientific culture, and on the other hand people would have to, well, read papers and give feedback – where the average number of readers for a math research paper is probably less than 1. Seems unlikely. So we might get elite journals that can get away with this model commercially but anyone else is screwed; not a fan.
The third problem I see is more severe as it relates to the structure of scientific communities: who watches the watchers? Years ago I wrote that my biggest problem with academic communities (and the greatest strength of its publishing system) lies in its power structure: the key to power lies with editorial boards which are predominantly aristocratic. Society-driven journals actually have democratic oversight for their editorial boards (as mild as its effect might be) and even commercial publishers have shareholder oversight, as “unscientific” as their interest may be. But overlay journals have nobody watching them. You might argue the free market will take care of it but it might just be that journals are clubs and that scholarly communication is more like general taxation.
And that combination worries me. The unique ability of elite overlay journals to succeed commercially (as in: providing a valuable product) combined with a lack of checks and balances might lead to an imbalance that cannot be corrected.
But what do I know. Maybe such journals will realize the risk associated with their success and take responsibility for their actions and their effect on the community at large. And then maybe they will focus on innovation and on reproducibility of their model for average (“mediocre”) journals that the majority of researchers publish in. I’ve seen crazier things.
This talk will be a very condensed version of the talk with a similar title I gave last spring at MOPA Seminar in CUNY.
Abstract:
A total computable function will produce the same output on the standard natural numbers regardless of which model of arithmetic it is evaluated in. But a (partial) computable function can be the empty function in the standard model $\mathbb N$, while turning into a total function in some nonstandard model. I will discuss some extreme instances of this phenomenon investigated recently by Woodin and Hamkins, showing that there are computable processes which can produce any desired output by going to the right nonstandard model. Hamkins showed that there is a single ${\rm TM}$ program $p$ (computing the empty function in $\mathbb N$) with the property that given any function $f:\mathbb N\to \mathbb N$, there is a nonstandard model $M_f\models{\rm PA}$ so that in $M_f$, $p$ computes $f$ on the standard part. Even more drastically, Woodin has shown that there is a single index $e$ (for the empty function in $\mathbb N$), for which ${\rm PA}$ proves that $W_e$ is finite, with the property that for any finite set $s$ of natural numbers, there is a model $M_s\models{\rm PA}$ in which $W_e=s$. It follows, for instance, by the MRDP theorem, that there is a single Diophantine equation $p(n,\bar x)=0$ having no solutions in $\mathbb N$, for which ${\rm PA}$ proves that there are finitely many $n$ with a solution, and given any finite set $s$, we can pass to a nonstandard model in which $p(n,\bar x)=0$ has a solution if and only if $n\in s$.
Here are links to blog posts by myself and others on this topic:
@ARTICLE{GitmanSchindler:virtualCardinals,
AUTHOR= {Gitman, Victoria and Schindler, Ralf},
TITLE= {Virtual large cardinals},
Note ={Submitted},
pdf={https://boolesrings.org/victoriagitman/files/2017/03/virtualLargeCardinals.pdf},
}
Suppose $\mathcal A$ is a large cardinal notion that can be characterized by the existence of one or many elementary embeddings $j:V_\alpha\to V_\beta$ satisfying some list of properties. For instance, both extendible cardinals and ${\rm I3}$ cardinals meet these requirements. Recall that $\kappa$ is extendible if for every $\alpha>\kappa$, there is an elementary embedding $j:V_\alpha\to V_\beta$ with critical point $\kappa$ and $j(\kappa)>\alpha$, and recall also that $\kappa$ is ${\rm I3}$ if there is an elementary embedding $j:V_\lambda\to V_\lambda$ with critical point $\kappa<\lambda$. Let us say that a cardinal $\kappa$ is virtually $\mathcal A$ if the embeddings $j:V_\alpha\to V_\beta$ needed to witness $\mathcal A$ can be found in set-generic extensions of the universe $V$; equivalently, we can say that the embeddings exist in the generic multiverse of $V$. Indeed, it is not difficult to see that it suffices to consider only the collapse extensions. So we now have that $\kappa$ is virtually extendible if for every $\alpha>\kappa$, some set-forcing extension has an elementary embedding $j:V^V_\alpha\to V^V_\beta$ with critical point $\kappa$ and $j(\kappa)>\alpha$, and we have that $\kappa$ is virtually ${\rm I3}$ if some set-forcing extension has an elementary embedding $j:V_\lambda^V\to V_\lambda^V$ with critical point $\kappa$. The template of virtual large cardinals can be applied to several large cardinal notions in the neighborhood of a supercompact cardinal. We can even apply it to inconsistent large cardinal principles to obtain virtual large cardinals that are compatible with $V=L$.
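To make the quantifier pattern of the template explicit, the virtual version of extendibility described above can be displayed as:

```latex
% kappa is virtually extendible: the extendibility embeddings between
% rank-initial segments of V exist in some set-forcing extension V[G].
\[
\kappa \text{ is virtually extendible} \iff
\forall \alpha>\kappa \;\; \exists V[G] \;\; \exists j\in V[G]\;
\big( j\colon V_\alpha^V \to V_\beta^V \text{ elementary},\;
\operatorname{crit}(j)=\kappa,\; j(\kappa)>\alpha \big).
\]
```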
The concept of virtual large cardinals is close in spirit to generic large cardinals, but is technically very different. Suppose $\mathcal A$ is a large cardinal notion characterized by the existence of elementary embeddings $j:V\to M$ satisfying some list of properties. Then we say that a cardinal $\kappa$ is generically $\mathcal A$ if the embeddings needed to witness $\mathcal A$ exist in set-forcing extensions of $V$. More precisely, if the existence of $j:V\to M$ satisfying some properties witnesses $\mathcal A$, then we want a forcing extension $V[G]$ to have a definable $j:V\to M$ with these properties, where $M$ is an inner model of $V[G]$. So for example, $\kappa$ is generically supercompact if for every $\lambda>\kappa$, some set-forcing extension $V[G]$ has an elementary embedding $j:V\to M$ with critical point $\kappa$ and $j''\lambda\in M$. If $\kappa$ is not actually $\lambda$-supercompact, the model $M$ will not be contained in $V$. Generic large cardinals are either known to have the same consistency strength as their actual counterparts or are conjectured to have the same consistency strength based on currently available evidence. Most importantly, generic large cardinals need not be actually “large” since, for instance, $\omega_1$ can be generically supercompact.
In the case of virtual large cardinals, because we consider only set-sized embeddings, the source and target of the embedding are both from $V$, and because the embedding exists in a forcing extension, there is no a priori reason why the target model would have any closure at all. The combination of these gives that virtual large cardinals are actual large cardinals that fit into the large cardinal hierarchy between ineffable cardinals and $0^\#$. If $0^\#$ exists, the Silver indiscernibles have (nearly) all the virtual large cardinal properties we consider in this article, and all these notions will be downward absolute to $L$.
The first virtual large cardinal notion, the remarkable cardinal, was introduced by Schindler in [1]. A cardinal $\kappa$ is remarkable if for every $\lambda>\kappa$, there is $\bar\lambda<\kappa$ such that in some set-forcing extension there is an elementary embedding $j:V_{\bar\lambda}^V \to V_\lambda^V$ with $j(\text{crit}(j))=\kappa$. It turns out that remarkable cardinals are virtually supercompact because, as shown by Magidor [2], $\kappa$ is supercompact precisely when for every $\lambda>\kappa$, there is $\bar\lambda<\kappa$ and an elementary embedding $j:V_{\bar\lambda}\to V_\lambda$ with $j(\text{crit}(j))=\kappa$. Schindler showed that the existence of a remarkable cardinal is equiconsistent with the assertion that the theory of $L(\mathbb R)$ cannot be changed by proper forcing [1], and since then it has turned out that remarkable cardinals are equiconsistent with other natural assertions such as the third-order Harrington’s principle [3].
The idea behind the concept of virtual large cardinals of taking a property characterized by the existence of elementary embeddings of sets and defining a virtual version of the property by positing that the embeddings exist in the generic multiverse can be extended beyond large cardinals. In [4], together with Bagaria, we studied a virtual version of Vopěnka’s Principle (Generic Vopěnka’s Principle) and a virtual version of the Proper Forcing Axiom ${\rm PFA}$. Fuchs has generalized this approach to obtain virtual versions of other forcing axioms such as the forcing axiom for subcomplete forcing ${\rm SCFA}$ [5] and resurrection axioms [6]. Each of these virtual properties has turned out to be equiconsistent with some virtual large cardinal, which has so far been the main application of these ideas.
Our template for the definition of virtual large cardinals requires the large cardinal notion to be characterized by the existence of elementary embeddings $j:V_\alpha\to V_\beta$. This template is quite restrictive. Its main advantage is that it gives a hierarchy of large cardinal notions that mirrors the hierarchy of its actual counterparts, and the large cardinals have other desirable properties such as being downward absolute to $L$.
@article {schindler:remarkable1,
AUTHOR = {Schindler, Ralf-Dieter},
TITLE = {Proper forcing and remarkable cardinals},
JOURNAL = {Bull. Symbolic Logic},
FJOURNAL = {The Bulletin of Symbolic Logic},
VOLUME = {6},
YEAR = {2000},
NUMBER = {2},
PAGES = {176--184},
ISSN = {1079-8986},
MRCLASS = {03E40 (03E45 03E55)},
MRNUMBER = {1765054 (2001h:03096)},
MRREVIEWER = {A. Kanamori},
DOI = {10.2307/421205},
URL = {http://dx.doi.org/10.2307/421205},
}
@article {magidor:supercompact,
AUTHOR = {Magidor, M.},
TITLE = {On the role of supercompact and extendible cardinals in logic},
JOURNAL = {Israel J. Math.},
FJOURNAL = {Israel Journal of Mathematics},
VOLUME = {10},
YEAR = {1971},
PAGES = {147--157},
ISSN = {0021-2172},
MRCLASS = {02K35},
MRNUMBER = {0295904 (45 \#4966)},
MRREVIEWER = {J. L. Bell},
}
@article {ChengSchindler:Harrington,
AUTHOR = {Cheng, Yong and Schindler, Ralf},
TITLE = {Harrington's principle in higher order arithmetic},
JOURNAL = {J. Symb. Log.},
FJOURNAL = {Journal of Symbolic Logic},
VOLUME = {80},
YEAR = {2015},
NUMBER = {2},
PAGES = {477--489},
ISSN = {0022-4812},
MRCLASS = {03E30 (03E55)},
MRNUMBER = {3377352},
MRREVIEWER = {A. Kanamori},
DOI = {10.1017/jsl.2014.31},
URL = {http://dx.doi.org/10.1017/jsl.2014.31},
}
@ARTICLE{BagariaGitmanSchindler:VopenkaPrinciple,
AUTHOR = {Bagaria, Joan and Gitman, Victoria and Schindler, Ralf},
TITLE = {Generic {V}op\v enka's {P}rinciple, remarkable cardinals, and the
weak {P}roper {F}orcing {A}xiom},
JOURNAL = {Arch. Math. Logic},
FJOURNAL = {Archive for Mathematical Logic},
VOLUME = {56},
YEAR = {2017},
NUMBER = {1-2},
PAGES = {1--20},
ISSN = {0933-5846},
MRCLASS = {03E35 (03E55 03E57)},
MRNUMBER = {3598793},
DOI = {10.1007/s00153-016-0511-x},
URL = {http://dx.doi.org/10.1007/s00153-016-0511-x},
pdf ={http://boolesrings.org/victoriagitman/files/2016/02/GenericVopenkaPrinciples.pdf},
}
@ARTICLE{Fuchs:HierarchiesForcingAxiomsContinuumHypothesisSquarePrinciples,
AUTHOR= {Gunter Fuchs},
TITLE= {Hierarchies of forcing axioms, the continuum hypothesis and square principles},
Note ={Preprint},
}
@ARTICLE{Fuchs:HierarchiesVirtualResurrectionAxioms,
AUTHOR= {Gunter Fuchs},
TITLE= {Hierarchies of (virtual) resurrection axioms},
Note ={Preprint},
}
The idea of considering virtual set-theoretic assertions was introduced by Schindler, arising out of his work on remarkable cardinals. Suppose $\mathcal P$ is a set-theoretic property asserting the existence of elementary embeddings between some first-order structures. We will say that $\mathcal P$ holds virtually if embeddings of structures from $V$ characterizing $\mathcal P$ exist in the generic multiverse of $V$ (in its set-forcing extensions). Large cardinals are primary candidates for virtualization. Recall, for instance, that a cardinal $\kappa$ is extendible if for every $\alpha>\kappa$, there is $j:V_\alpha\to V_\beta$ with $\text{crit}(j)=\kappa$ and $j(\kappa)>\alpha$. So we can say that $\kappa$ is virtually extendible if for every $\alpha>\kappa$ some set-forcing extension has an extendibility embedding $j:V_\alpha^V\to V_\beta^V$. We can do the same with an appropriately chosen characterization of supercompact cardinals based on the existence of embeddings of set-sized structures, as well as with several other large cardinals in the neighborhood of a supercompact. Other properties which seem to naturally lend themselves to virtualization are forcing axioms. Virtual versions of ${\rm PFA}$, ${\rm SCFA}$ (the forcing axiom for subcomplete forcing) and resurrection axioms have been studied by Schindler and Fuchs [1], [2], [3]. Together with Bagaria and Schindler, we studied a virtual version of Vopěnka’s Principle [1].
We can even have (consistent) virtual versions of inconsistent set-theoretic assertions. Observe for example that there can be a virtual elementary embedding from the reals to the rationals. To achieve this we simply force to collapse the cardinality of $\mathbb R$ to become countable so that in the collapse extension $\mathbb R^V$ is a countable dense linear order without endpoints and hence isomorphic to the rationals. Of course the reals of the forcing extension still cannot be embedded into $\mathbb Q$, but virtual properties are about $V$-structures de re and not de dicto. It also turns out that Kunen’s Inconsistency does not hold for virtual embeddings. In a set-forcing extension there can be an elementary $j:V_\lambda^V\to V_\lambda^V$ with $\lambda$ much larger than the supremum of the critical sequence of $j$.
Schindler introduced remarkable cardinals when he discovered that a remarkable cardinal is equiconsistent with the assertion that the theory of $L(\mathbb R)$ cannot be changed by proper forcing [4]. He defined that $\kappa$ is remarkable if for every $\lambda>\kappa$, there is $\bar\lambda<\kappa$ such that in a set-forcing extension there is an elementary $j:V_{\bar\lambda}^V\to V_\lambda^V$ with $j(\text{crit}(j))=\kappa$. By a theorem of Magidor [5], a cardinal $\kappa$ is supercompact precisely when the embeddings $j$ as above exist in $V$ itself. So remarkable cardinals are virtually supercompact. Although it was conjectured that absoluteness of the theory of $L(\mathbb R)$ by proper forcing would have strength in the neighborhood of a strong cardinal, Schindler showed that remarkable cardinals are consistent with $V=L$ [6].
Calling remarkable cardinals virtually supercompact can seem like cheating because we chose a very peculiar characterization of supercompact cardinals to virtualize. We recently observed with Schindler that, equivalently, $\kappa$ is remarkable if for every $\lambda>\kappa$, there is $\alpha>\lambda$ and a transitive $M$ with $M^\lambda\subseteq M$ such that in a set-forcing extension there is $j:V_\alpha^V\to M$ with $\text{crit}(j)=\kappa$ and $j(\kappa)>\lambda$. More surprising is another equivalent characterization: for every $\lambda>\kappa$, there is $\alpha>\lambda$ and a transitive $M$ with $V_\lambda\subseteq M$ such that in a set-forcing extension there is $j:V_\alpha^V\to M$ with $\text{crit}(j)=\kappa$ and $j(\kappa)>\lambda$, making remarkables also look like virtually strong cardinals. A deeper reason for this appears to be that closure (in $V$) of the target model does not calibrate the strength of virtual large cardinals. Only large cardinals with a characterization involving $j:V_\alpha\to V_\beta$ have robust virtual versions [7]. So we have robust virtual versions of supercompact, $C^{(n)}$-extendible, and rank-into-rank cardinals. The $n$-huge cardinals do not appear to have a robust characterization for virtualizing, so we instead virtualized a related hierarchy of $n$-huge* cardinals, where $\kappa$ is $n$-huge* if there is $\alpha>\kappa$ and $j:V_\alpha\to V_\beta$ with $\text{crit}(j)=\kappa$ and $j^n(\kappa)<\alpha$ [7]. Schindler and Wilson recently defined a virtual Shelah for supercompactness cardinal and showed that it is equiconsistent with the assertion that every universally Baire set has a perfect subset [8]. The hierarchy of virtual large cardinals mirrors that of their actual counterparts. If $0^{\#}$ exists, then the Silver indiscernibles have all the virtual large cardinal properties. The virtual large cardinals fit between 1-iterable and $\omega+1$-iterable cardinals and they are downward absolute to $L$ [7].
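For reference, the three equivalent virtual characterizations just described can be set side by side; in each case the embedding is required to exist in some set-forcing extension:

```latex
\begin{align*}
&\text{(Magidor form):}
  &&\forall\lambda>\kappa\;\exists\bar\lambda<\kappa\;\;
    j\colon V_{\bar\lambda}^V\to V_\lambda^V,\quad
    j(\operatorname{crit}(j))=\kappa;\\
&\text{(supercompact-like target):}
  &&\forall\lambda>\kappa\;\exists\alpha>\lambda\;
    \exists M\text{ transitive},\ M^\lambda\subseteq M,\;\;
    j\colon V_\alpha^V\to M,\quad
    \operatorname{crit}(j)=\kappa,\ j(\kappa)>\lambda;\\
&\text{(strong-like target):}
  &&\forall\lambda>\kappa\;\exists\alpha>\lambda\;
    \exists M\text{ transitive},\ V_\lambda\subseteq M,\;\;
    j\colon V_\alpha^V\to M,\quad
    \operatorname{crit}(j)=\kappa,\ j(\kappa)>\lambda.
\end{align*}
```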
With Bagaria and Schindler we introduced Generic Vopěnka’s Principle, a virtual version of Vopěnka’s Principle [1]. Vopěnka’s Principle asserts that every proper class of first-order structures has a pair of distinct structures that elementarily embed. Generic Vopěnka’s Principle asserts that the embedding exists in a set-forcing extension. Vopěnka’s Principle as well as its virtual version are second-order assertions formalizable in Gödel–Bernays set theory. The first-order version of Vopěnka’s Principle, which I will call here Vopěnka’s Scheme, is the scheme of assertions ${\rm VP}(\Sigma_n)$ for every $n\in\omega$, which state that Vopěnka’s Principle holds for $\Sigma_n$-definable (with parameters) classes. Generic Vopěnka’s Scheme is the scheme of analogous assertions ${\rm gVP}(\Sigma_n)$. Bagaria showed that ${\rm VP}(\Sigma_2)$ holds precisely when there is a proper class of supercompact cardinals and ${\rm VP}(\Sigma_{n+2})$ holds precisely when there is a proper class of $C^{(n)}$-extendible cardinals [9]. Recall that $C^{(n)}$ is the class of all $\delta$ such that $V_\delta\prec_{\Sigma_n}V$. A cardinal $\kappa$ is $C^{(n)}$-extendible if for every $\alpha>\kappa$ there is an extendibility embedding $j:V_\alpha\to V_\beta$ with $j(\kappa)\in C^{(n)}$.
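Bagaria’s level-by-level analysis mentioned above can be summarized in a display:

```latex
\begin{align*}
{\rm VP}(\Sigma_2) &\iff \text{there is a proper class of supercompact cardinals};\\
{\rm VP}(\Sigma_{n+2}) &\iff \text{there is a proper class of }
  C^{(n)}\text{-extendible cardinals}.
\end{align*}
```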
With Bagaria and Schindler we showed that ${\rm gVP}(\Sigma_2)$ is equiconsistent with a proper class of remarkable cardinals and ${\rm gVP}(\Sigma_{n+2})$ is equiconsistent with a proper class of virtually $C^{(n)}$-extendible cardinals [1]. If there is a proper class of remarkable or virtually $C^{(n)}$-extendible cardinals, then ${\rm gVP}(\Sigma_2)$ or ${\rm gVP}(\Sigma_{n+2})$ respectively holds. If ${\rm gVP}(\Sigma_2)$ holds, then there is a proper class of cardinals each of which is either remarkable or virtually rank-into-rank, and the analogous result holds for ${\rm gVP}(\Sigma_{n+2})$ with remarkable replaced by virtually $C^{(n)}$-extendible. In Bagaria’s argument, one assumes that, say, there is no proper class of supercompacts and derives a contradiction by obtaining an embedding $j:V_{\lambda+2}\to V_{\lambda+2}$. But in the virtual case, such an embedding simply indicates the presence of a virtually rank-into-rank cardinal. Was it possible to eliminate the pesky case of a virtually rank-into-rank cardinal with a cleverer argument? I tried unsuccessfully for months. Then last summer with Joel Hamkins we showed that Kunen’s Inconsistency is fundamental to Bagaria’s proof. There is a model of Generic Vopěnka’s Scheme with no remarkable cardinals but a proper class of virtually rank-into-rank cardinals [10].
Slides to come!
@ARTICLE{BagariaGitmanSchindler:VopenkaPrinciple,
AUTHOR = {Bagaria, Joan and Gitman, Victoria and Schindler, Ralf},
TITLE = {Generic {V}op\v enka's {P}rinciple, remarkable cardinals, and the
weak {P}roper {F}orcing {A}xiom},
JOURNAL = {Arch. Math. Logic},
FJOURNAL = {Archive for Mathematical Logic},
VOLUME = {56},
YEAR = {2017},
NUMBER = {1-2},
PAGES = {1--20},
ISSN = {0933-5846},
MRCLASS = {03E35 (03E55 03E57)},
MRNUMBER = {3598793},
DOI = {10.1007/s00153-016-0511-x},
URL = {http://dx.doi.org/10.1007/s00153-016-0511-x},
pdf ={http://boolesrings.org/victoriagitman/files/2016/02/GenericVopenkaPrinciples.pdf},
}
@ARTICLE{Fuchs:HierarchiesVirtualResurrectionAxioms,
AUTHOR= {Gunter Fuchs},
TITLE= {Hierarchies of (virtual) resurrection axioms},
Note ={Preprint},
}
@ARTICLE{Fuchs:HierarchiesForcingAxiomsContinuumHypothesisSquarePrinciples,
AUTHOR= {Gunter Fuchs},
TITLE= {Hierarchies of forcing axioms, the continuum hypothesis and square principles},
Note ={Preprint},
}
@article {schindler:remarkable2,
AUTHOR = {Schindler, Ralf-Dieter},
TITLE = {Proper forcing and remarkable cardinals. {II}},
JOURNAL = {J. Symbolic Logic},
FJOURNAL = {The Journal of Symbolic Logic},
VOLUME = {66},
YEAR = {2001},
NUMBER = {3},
PAGES = {1481--1492},
ISSN = {0022-4812},
CODEN = {JSYLA6},
MRCLASS = {03E55 (03E15 03E35)},
MRNUMBER = {1856755 (2002g:03111)},
MRREVIEWER = {A. Kanamori},
DOI = {10.2307/2695120},
URL = {http://dx.doi.org/10.2307/2695120},
}
@article {magidor:supercompact,
AUTHOR = {Magidor, M.},
TITLE = {On the role of supercompact and extendible cardinals in logic},
JOURNAL = {Israel J. Math.},
FJOURNAL = {Israel Journal of Mathematics},
VOLUME = {10},
YEAR = {1971},
PAGES = {147--157},
ISSN = {0021-2172},
MRCLASS = {02K35},
MRNUMBER = {0295904 (45 \#4966)},
MRREVIEWER = {J. L. Bell},
}
@article {schindler:remarkable1,
AUTHOR = {Schindler, Ralf-Dieter},
TITLE = {Proper forcing and remarkable cardinals},
JOURNAL = {Bull. Symbolic Logic},
FJOURNAL = {The Bulletin of Symbolic Logic},
VOLUME = {6},
YEAR = {2000},
NUMBER = {2},
PAGES = {176--184},
ISSN = {1079-8986},
MRCLASS = {03E40 (03E45 03E55)},
MRNUMBER = {1765054 (2001h:03096)},
MRREVIEWER = {A. Kanamori},
DOI = {10.2307/421205},
URL = {http://dx.doi.org/10.2307/421205},
}
@ARTICLE{GitmanSchindler:virtualCardinals,
AUTHOR= {Gitman, Victoria and Schindler, Ralf},
TITLE= {Virtual large cardinals},
Note ={Submitted},
pdf={https://boolesrings.org/victoriagitman/files/2017/03/virtualLargeCardinals.pdf},
}
@ARTICLE{SchindlerWilson:UniversallyBaireSetsOfRealsPerfectSetProperty,
AUTHOR= {Ralf Schindler and Trevor Wilson},
TITLE= {Universally {B}aire sets of reals and the perfect set property},
Note ={In preparation},
}
@article {Bagaria:CnCardinals,
AUTHOR = {Bagaria, Joan},
TITLE = {{$C^{(n)}$}-cardinals},
JOURNAL = {Arch. Math. Logic},
FJOURNAL = {Archive for Mathematical Logic},
VOLUME = {51},
YEAR = {2012},
NUMBER = {3-4},
PAGES = {213--240},
ISSN = {0933-5846},
CODEN = {AMLOEH},
MRCLASS = {03E55 (03C55)},
MRNUMBER = {2899689},
MRREVIEWER = {Bernhard A. K{\"o}nig},
DOI = {10.1007/s00153-011-0261-8},
URL = {http://dx.doi.org/10.1007/s00153-011-0261-8},
}
@ARTICLE{GitmanHamkins:GVP,
AUTHOR= {Victoria Gitman and Joel David Hamkins},
TITLE= {A model of the generic Vop\v enka principle in which the ordinals are not $\Delta_2$-Mahlo},
PDF={https://boolesrings.org/victoriagitman/files/2017/06/GenericVopenkawithOrdnotMahlo.pdf},
Note ={Submitted},
EPRINT ={1706.00843},
}
I gave an invited talk at the Set Theory workshop in Oberwolfach, February 2017.
Talk Title: Coloring vs. Chromatic.
Abstract: In a joint work with Chris Lambie-Hanson, we study the interaction between compactness for the chromatic number (of graphs) and compactness for the coloring number.
Downloads:
Registration for the 2017 Southwestern Undergraduate Mathematics Research Conference (aka SUnMaRC) is now open! Northern Arizona University is hosting this year’s conference on March 31–April 2, 2017. We are excited to announce Kathryn Bryant (Colorado College), Henry Segerman (Oklahoma State University), and Steve Wilson (NAU, emeritus) as our invited speakers.
The goal of the conference is to welcome undergraduates to the wonderful world of mathematics research, to develop and foster a rich social network between the mathematics students and faculty throughout the great Southwest, and to celebrate the accomplishments of our undergraduate students. We encourage undergraduate students from all years of study to participate and give presentations in any area of mathematics, including applications to other disciplines. However, while we do recommend giving a talk, it is not a requirement for conference participation. To register for the conference and to submit a title and abstract for a student presentation, visit the 2017 SUnMaRC Registration page.
The conference began in 2004 as the Arizona Mathematics Undergraduate Conference. In 2008, the conference changed to SUnMaRC to recognize the participation of institutions throughout the southwest.
If you have any questions about this year’s SUnMaRC, please contact one of the conference organizers:
The following pictures were taken by Andrés Villaveces. Thank you Andrés!
Back in 2010, Garabed Gulbenkian asked a question on MathOverflow: is it possible that a countable ordinal definable set of reals has elements that are not ordinal definable? For those who need to be reminded, a set is ordinal definable if it is definable with ordinal parameters. Let’s start with some motivation for the question.
It is easy to see that every element of a finite ordinal definable set of reals $S$ is itself ordinal definable because it is the $m$th real of $S$ in the lexicographical order for some finite $m$. Note that this observation uses a fundamental property of reals, namely that there is such a lexicographical order, and indeed, it is consistent to have a finite ordinal definable set (of sets of reals) without ordinal definable members. In a forcing extension of $L$ by two mutually generic Sacks reals $r$ and $s$, there is a definable set of two elements, namely the $L$-degrees of $r$ and $s$, neither of which is ordinal definable [1].
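The lexicographic-order argument is concrete enough to mimic computationally: in a finite linearly ordered set, each element is pinned down by its rank, so a definition of the set together with a natural number defines the element. A minimal Python sketch (exact rationals stand in for definable reals; the function name is illustrative, not from the original post):

```python
from fractions import Fraction

def element_by_rank(finite_set, m):
    """Return the m-th element of a finite set of reals in increasing order.

    This mirrors the argument in the text: once the set S is (ordinal)
    definable, "the m-th real of S" defines each member of S, using only
    S and the natural number m as parameters.
    """
    return sorted(finite_set)[m]

S = {Fraction(1, 3), Fraction(-2, 7), Fraction(5, 2)}
# Every element of S is recovered from S together with its rank.
assert [element_by_rank(S, m) for m in range(3)] == sorted(S)
```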
On the other hand, it is consistent that there is an uncountable ordinal definable set of reals without any ordinal definable elements. Let $L[G]$ be a Cohen forcing extension of $L$ and consider the set $S$ of all nonconstructible reals in $L[G]$. The set $S$ is obviously definable. The set $S$ cannot have any ordinal definable elements because, by an automorphism argument, since Cohen forcing is almost homogeneous, every ordinal definable real of $L[G]$ is in $L$. (A forcing notion $\mathbb P$ is almost homogeneous if for any two conditions $p,q\in\mathbb P$, there is an automorphism $\pi$ such that $\pi(p)$ is compatible with $q$. A key property of almost homogeneous forcing is that if a condition forces a statement with ground-model parameters, then this statement is forced by every condition.) Finally, $S$ is uncountable because it contains uncountably many Cohen reals: every constructible real gives rise to an automorphism of the Cohen poset via bitwise addition.
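The bitwise-addition automorphisms are easy to visualize in code. Treating a Cohen condition as a finite partial binary function and a ground-model real $x$ as an infinite binary sequence, translation by $x$ (bitwise addition mod 2) permutes conditions while preserving the extension order. A hedged Python sketch (conditions modeled as dicts from positions to bits; purely illustrative, not from the original post):

```python
def translate(p, x):
    """Translate a Cohen condition p (dict: position -> bit) by the
    real x (function: position -> bit) via bitwise addition mod 2."""
    return {n: b ^ x(n) for n, b in p.items()}

def extends(q, p):
    """q extends p iff q agrees with p on p's domain."""
    return all(n in q and q[n] == p[n] for n in p)

x = lambda n: n % 2          # a sample "ground-model real" 0101...
p = {0: 1, 1: 1}             # a condition
q = {0: 1, 1: 1, 2: 0}       # an extension of p

# Translation by x is an involution and preserves the extension order,
# so it is an automorphism of the Cohen poset.
assert translate(translate(p, x), x) == p
assert extends(q, p) and extends(translate(q, x), translate(p, x))
```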
So what about countable ordinal definable sets of reals? It turned out that the answer to Gulbenkian's question was not known. Several set theorists, including myself together with Joel Hamkins, tried to solve it. The question was finally settled by Kanovei and Lyubetsky in 2014. They showed that it is consistent to have a countable ordinal definable set of reals without ANY ordinal definable elements.
The story of their proof starts with the question of determining the least projective complexity of a non-constructible real. By Shoenfield's absoluteness theorem, every $\Sigma_2^1$ or $\Pi_2^1$ real is constructible. In 1970, Jensen constructed in $L$ a ccc subposet $\mathbb P$ of Sacks forcing, using $\diamondsuit$ to seal maximal antichains, with the following properties [2]. In any model of set theory, the set of all $L$-generic reals for $\mathbb P$ is $\Pi^1_2$-definable, a property which is also true of Cohen forcing. But unlike Cohen extensions of $L$, which have uncountably many $L$-generic Cohen reals (see above), an $L$-generic extension by Jensen's forcing $\mathbb P$ adds a unique $L$-generic real, which is therefore $\Delta^1_3$-definable. This is a good moment to recall that although a generic filter for a poset of perfect trees technically consists of a collection of perfect trees, it is determined by a generic real, which is the intersection of all trees in the generic filter. So it is consistent that there are non-constructible $\Delta_3^1$ reals.
Now let's consider the $\omega$-length finite-support product $\mathbb P^{\lt\omega}$ of Jensen's forcing $\mathbb P$. How many $L$-generic reals for $\mathbb P$ does $\mathbb P^{\lt\omega}$ add? Suppose for a moment that the only $L$-generic reals for $\mathbb P$ added by $\mathbb P^{\lt\omega}$ are those that appear on the coordinates of the generic filter for the product; in particular, there are countably many of them. Considering the uniqueness-of-generic-reals property of $\mathbb P$, this is very plausible, and it was conjectured to be true by Ali Enayat. If true, this would solve Gulbenkian's question because, by a coordinate-switching automorphism argument for finite-support products, no real appearing on a coordinate of an $L$-generic filter for $\mathbb P^{\lt\omega}$ can be ordinal definable. Kanovei and Lyubetsky proved that $\mathbb P^{\lt\omega}$ indeed has this property, finishing our story [3].
In the talk, I will give full details of their argument from [3], and if there is interest I will post my detailed notes on their argument. Here are the notes!
@article {GroszekLaver:leastDegrees,
AUTHOR = {Groszek, M. and Laver, R.},
TITLE = {Finite groups of {OD}-conjugates},
JOURNAL = {Period. Math. Hungar.},
FJOURNAL = {Periodica Mathematica Hungarica. Journal of the J\'anos Bolyai Mathematical Society},
VOLUME = {18},
YEAR = {1987},
NUMBER = {2},
PAGES = {87--97},
ISSN = {0031-5303},
MRCLASS = {03E45 (03E10 03E35 03E40 20B05)},
MRNUMBER = {895774},
MRREVIEWER = {Thomas J. Jech},
DOI = {10.1007/BF01896284},
URL = {http://dx.doi.org/10.1007/BF01896284},
}
@incollection {jensen:real,
AUTHOR = {Jensen, Ronald},
TITLE = {Definable sets of minimal degree},
BOOKTITLE = {Mathematical logic and foundations of set theory ({P}roc. {I}nternat. {C}olloq., {J}erusalem, 1968)},
PAGES = {122--128},
PUBLISHER = {North-Holland, Amsterdam},
YEAR = {1970},
MRCLASS = {02K05},
MRNUMBER = {0306002 (46 \#5130)},
MRREVIEWER = {D. A. Martin},
}
@ARTICLE {kanovei:productOfJensenReals,
AUTHOR = {Kanovei, Vladimir and Lyubetsky, Vassily},
TITLE = {A countable definable set of reals containing no definable elements},
EPRINT ={1408.3901}}
One of my former students, Andrew Lebovitz, recently posted a link on Facebook to a Nature article that summarizes a paper, titled The classical origin of modern mathematics, which completed a comprehensive analysis of the Mathematics Genealogy Project (MGP) database. One of the interesting findings was that the individuals in the database fall into 84 distinct family trees, with two-thirds of the world's mathematicians concentrated in just 24 of them.
After reading the Nature article, I was motivated to see if I could figure out whether I belonged to one of the 24 families. It wasn't obvious to me how I would do this without manually clicking on my advisor (Richard M. Green), then my advisor's advisor, and so on. This was slightly more complicated than I expected because quite a few ancestors had two advisors, so I had to navigate down multiple paths. As I clicked around, I drew out my family tree in a notebook.
Here is what I discovered. My longest branch goes back to Nicolo Fontana Tartaglia (currently 14,428 descendants). My tree includes Isaac Newton, Galileo Galilei, and Marin Mersenne (after whom Mersenne primes are named). Interestingly, no one on this path belongs to one of the 24 families mentioned in The classical origin of modern mathematics. Also, I was disappointed to find out that I wasn't related to Leonhard Euler. However, I am a descendant of Henry Bracken, who is the head of one of the 24 families.
I posted some of this information on Facebook and asked if anyone knew how to automatically create a nice visualization of the directed graph corresponding to my family tree. Chris Drupieski replied and pointed out a program called Geneagrapher, which was built to do exactly what I was looking for. In particular, Geneagrapher gathers information for building math genealogy trees from the MGP, which is then stored in dot file format. This data can then be passed to Graphviz to generate a directed graph.
Here are the steps that I completed to get Geneagrapher up and running on my computer running MacOS 10.11. The Geneagrapher website suggests using easy_install
via Terminal, but this didn’t immediately work for me. It often seems that doing anything with Python on my Mac requires a few extra steps. After doing a little searching around, I found a post on Stack Overflow that solved my issue. At the command line, I typed the following:
sudo chown -R <your_user>:wheel /Library/Python/2.7/site-packages/
Of course, you should replace <your_user>
with your username. Note that using sudo
requires you to enter your password. Next, I installed Geneagrapher using the following:
easy_install http://www.davidalber.net/dist/geneagrapher/Geneagrapher-0.2.1-r2.tar.gz
In order to use Geneagrapher, you need to input a record number from MGP. Mine is 125763. At the command line, I typed:
ggrapher -f ernst.dot -a 125763
You can replace ernst
with whatever you’d like the output file to be called. The next step is to pass the dot file to Graphviz. If you don’t already have Graphviz installed, you can do so using Homebrew (which is also easy to install):
brew install graphviz
Following the Geneagrapher instructions, I typed the following to generate my family tree:
dot -Tpng ernst.dot > ernst.png
Maybe it is worth mentioning that unless you specify otherwise, the dot and png files will be stored in your home directory. Below is my mathematical family tree created using Geneagrapher. As you can see, it took a while for my ancestors to leave the University of Cambridge.
Catalog description: Linear algebra from a matrix perspective with applications from the applied sciences. Topics include the algebra of matrices, methods for solving linear systems of equations, eigenvalues and eigenvectors, matrix decompositions, vector spaces, linear transformations, least squares, and numerical techniques.
Catalog description: Definitions of limit, derivative, and integral. Computation of the derivative, including logarithmic, exponential and trigonometric functions. Applications of the derivative, approximations, optimization, mean value theorem. Fundamental theorem of calculus, brief introduction to the applications of the integral and to computations of antiderivatives. Intended for students in engineering, mathematics and the sciences.
Introduction: The Euclidean algorithm was first published around 300 B.C. and still remains widely useful for computing the greatest common divisor of two computationally large natural numbers. The algorithm provides a step-by-step process that, using the division theorem, reduces a pair of natural numbers to successively smaller remainders sharing the same common divisors. While the algorithm itself is rather simple, it has several unique behaviors that make it fascinating to study. Mathematicians continue to rely on the Euclidean algorithm to be well-conditioned and to provide accurate computational results.
Summary: The thesis defines and illustrates the algorithm. It uses experimental methods to investigate the likelihood of each outcome of the algorithm. It then uses both experimental and rigorous methods to examine the case when the outcome is 1, that is, the two inputs are relatively prime.
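The reduction process described in the introduction can be sketched in a few lines (an illustrative implementation, not taken from the thesis):

```python
def gcd(a, b):
    """Greatest common divisor via the Euclidean algorithm.

    Repeatedly replaces (a, b) with (b, a mod b); by the division
    theorem, each remainder shares the same common divisors as the
    original pair, so the last nonzero value is the gcd.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21
print(gcd(35, 64))     # 1, i.e. 35 and 64 are relatively prime
```

The second call illustrates the outcome studied in the thesis: the inputs are relatively prime exactly when the algorithm terminates at 1.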
Abstract: This article will show the derivation of closed-form radical expressions for polynomials of degree $n\leq 4$. For degree one and two polynomials, it is straightforward to show solvability by radicals. For degree three and four polynomials, however, these derivations can be quite complex. For this reason, much greater detail is shown throughout those sections. We will also introduce the reader to aspects of group and field theory which will serve as a stepping stone to Galois theory. We will use Galois theory to show that for polynomials of degree $n\geq 5$, no closed-form radical expression for the roots exists.
Joint work with James Cummings, Sy-David Friedman, Menachem Magidor, and Dima Sinapova.
Abstract. Three central combinatorial properties in set theory are the tree property, the approachability property and stationary reflection. We prove the mutual independence of these properties by showing that any of their eight Boolean combinations can be forced to hold at $\kappa^{++}$, assuming that $\kappa=\kappa^{<\kappa}$ and there is a weakly compact cardinal above $\kappa$.
If in addition $\kappa$ is supercompact, then we can force $\kappa$ to be $\aleph_\omega$ in the extension. The proofs combine the techniques of adding and then destroying a non-reflecting stationary set or a $\kappa^{++}$-Souslin tree, variants of Mitchell's forcing to obtain the tree property, together with the Prikry-collapse poset for turning a large cardinal into $\aleph_\omega$.
Downloads:
Joint work with Chris LambieHanson.
Abstract. We prove that reflection of the coloring number of graphs is consistent with non-reflection of the chromatic number. Moreover, it is proved that incompactness for the chromatic number of graphs (with arbitrarily large gaps) is compatible with each of the following compactness principles: Rado's conjecture, Fodor-type reflection, $\Delta$-reflection, stationary-sets reflection, Martin's Maximum, and a generalized Chang's conjecture.
This is accomplished by showing that, under GCH-type assumptions, instances of incompactness for the chromatic number can be derived from square-like principles that are compatible with large amounts of compactness.
Downloads:
Title: Dual Ramsey, the Gurarij space and the Poulsen simplex 1 (of 3).
Lecturer: Dana Bartošová.
Date: December 12, 2016.
Main Topics: Comparison of various Fraïssé settings, metric Fraïssé definitions and properties, KPT of metric structures, Thick sets
Definitions: continuous logic, metric Fraïssé properties, NAP (near amalgamation property), PP (Polish Property), ARP (Approximate Ramsey Property), Thick, Thick partition regular.
Lecture 1 – Lecture 2 – Lecture 3
Ramsey DocCourse Prague 2016 Index of lectures.
Throughout the DocCourse we have primarily focused on Fraïssé limits of finite structures. As we saw in Solecki’s first lecture (not posted yet), it makes sense, and is useful, to consider Fraïssé limits in a broader context. Today we will discuss those other contexts.
Solecki's first lecture discussed how to take projective Fraïssé limits. Panagiotopoulos' lecture (not posted yet) looked at a specific application of these projective limits. We will see how to take metric (direct) Fraïssé limits.
| | Discrete | Compact | Metric Structure |
|---|---|---|---|
| Size | Countable | Separable | Separable, complete |
| Limit | Fraïssé limit | Quotient of the projective limit | (direct or projective) Metric Fraïssé limit |
| Homogeneity | (ultra)homogeneity | Projective approximate homogeneity | Approximate homogeneity (*) |
| Automorphism group | non-archimedean groups (closed subgroups of $S^\infty$) | homeomorphism groups | Polish groups |
| KPT, extremely amenable iff | RP | Dual Ramsey | Approximate RP (**) |
| Metrizability of UMF iff | finite Ramsey degree | (***) | (Open) Compact RP? |
| Where we've seen these | Classical | Solecki's lectures | These lectures |
(*) – Exact homogeneity is often not possible.
(**) – In the projective setting this is fairly unexplored. These proofs are usually via direct (discrete) Ramsey, or through concentration of measure.
(***) – You have KPT before you take the quotient, but lose it after taking the quotient. E.g. UMF(pre-pseudo-arc) is not metrizable (through RP). A question of Uspenskij asks about UMF(pseudo-arc).
In the context of Banach spaces, it makes sense to use continuous logic. Here, instead of the usual $\{0,1\}$-valued logic, we allow sentences to take values in the interval $[0,1]$. We also suitably adjust the logical connectives.
| Classical logic | Continuous logic |
|---|---|
| True | 0 |
| False | 1 |
| $=$ | $d$ |
| $x \vee y$ | $\min\{x,y\}$ |
| $x \wedge y$ | $\max\{x,y\}$ |
| $\neg x$ | $1-x$ |
| $x \Rightarrow y$ | $(y-x) \vee 0$ |
| $\forall$ | $\sup$ |
| $\exists$ | $\inf$ |
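As a quick sanity check of this dictionary, the connectives can be evaluated numerically (an illustrative sketch with made-up function names; note that with the convention $0 = $ true, the implication $x \Rightarrow y$ comes out as $\max\{y-x, 0\}$, measuring how much falser the consequent is than the antecedent):

```python
# Continuous ([0,1]-valued) logic connectives, following the table:
# 0 stands for "true", 1 for "false".  Function names are illustrative.

def lor(x, y):
    """x or y: take the 'truer' (smaller) value."""
    return min(x, y)

def land(x, y):
    """x and y: take the 'falser' (larger) value."""
    return max(x, y)

def lnot(x):
    """not x."""
    return 1 - x

def implies(x, y):
    """x implies y: how much falser y is than x."""
    return max(y - x, 0)

# At the endpoints the classical truth table is recovered:
print(lor(0, 1))      # 0 ("true or false" is true)
print(implies(1, 0))  # 0 ("false implies true" is true)
print(implies(0, 1))  # 1 ("true implies false" is false)
```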
Now we define functions and relations. Let $(A,d)$ be a complete metric space, where $(A^n, d)$ is given the sup metric.
Then functions and relations must satisfy the usual things that functions and relations satisfy in classical logic.
| | Finitely generated substructures | Limit | Maps | Language |
|---|---|---|---|---|
| Separable metric spaces | finite metric spaces | Separable Urysohn space | isometric embedding | just the distance |
| Separable Banach spaces | finite dimensional Banach spaces (**) | Gurarij space | isometric linear embedding | $\{ \cdot , +, (\cdot \lambda)_{\lambda \in \mathbb{Q}}\}$ |
| Separable Choquet spaces | finite dimensional simplices | Poulsen simplex | affine homeomorphisms (*) | Something that captures the convex structure |
(*) – An affine homeomorphism sends $S_0 \rightarrow S_1$ and sends extreme points to extreme points, then is extended affinely to the rest of the simplex. The metric here is not canonical.
(**) – Similar to the discrete case, to take a limit you only need a cofinal sequence. In this case we take $\ell^n_\infty$.
In continuous logic, the maps between models are isometric embeddings that preserve functions and relations.
In the classical Fraïssé setting we looked at homogeneity, HP, JEP and AP. These notions have suitable generalizations in the metric Fraïssé setting.
We say that $(A,d)$ is approximately ultrahomogeneous (AUH) if for all $n$, all $\vec{a} \in A^n$, every morphism $\phi: \langle \vec{a} \rangle \rightarrow A$, and all $\epsilon > 0$, there is a $\hat{\phi} \in \text{Aut}(A)$ such that $d(\phi(\vec{a}), \hat{\phi}(\vec{a})) < \epsilon$.
$\text{Age}(A)$ is the collection of finitely generated substructures of $A$.
We now explain NAP and PP. The NAP is a straightforward generalization of AP: given morphisms $f_1: \langle \vec{a} \rangle \rightarrow B_1$ and $f_2: \langle \vec{a} \rangle \rightarrow B_2$ in $\mathcal{K}$,
$$\forall \epsilon > 0, \forall \vec{a} \in A^n, (\forall n), \exists C \in \mathcal{K}, \exists g_i : B_i \rightarrow C$$
such that
$$d_C (g_1 f_1 (\vec{a}), g_2 f_2 (\vec{a})) < \epsilon.$$
The PP measures how closely you can embed two metric spaces.
We say $\mathcal{K}$ satisfies the Polish Property (PP) if $(K_n, d_n)$ is separable for all $n$.
This gives us the following Fraïssé theorem for metric structures.
Recall that $(\mathbb{U}, d)$ is the separable Urysohn space. It is the (unique) complete, separable metric space, universal for separable metric spaces and (exactly) ultrahomogeneous with respect to finite metric spaces.
Its age is the collection of finite metric spaces. It is a metric Fraïssé class.
Its automorphism group has a similar universal property.
See these notes for more information.
Recall the following fact about (classical) Fraïssé structures.
The following observation of Melleray is the corresponding fact for metric structures. It has a similar proof to the classical fact.
For every orbit closure $C = \overline{G \cdot x}$ of a point $x \in \mathbb{U}^n$, add a relational symbol $R_C$.
The first relevant result is the following:
This proof uses the finite Ramsey theorem and concentration of measure.
The KPT theorem for metric structures is given by the following.
We define the approximate Ramsey Property.
(ARP):
$$\forall A,B \in \mathcal{K}, \forall r \geq 2, \forall \epsilon >0, \forall F \in [\text{Emb}(A,B)]^{<\omega},$$
there is a $C \in \mathcal{K}$ such that
$$\forall c: \text{Emb}(A,C) \rightarrow [r], \exists \phi \in \text{Emb}(B,C), \exists i \in [r]$$
such that
$$\{\phi \circ f : f \in F\} \subseteq (c^{-1}(i))_\epsilon.$$
Here $(X)_\epsilon \subset \text{Emb}(A,C)$ denotes the $\epsilon$-fattening of $X$, taken with respect to the embedding distance (which we haven't defined).
Recall that in the infinite case, rigidity was needed to have the embedding RP. That is why for finite metric spaces we added linear orders to get the RP. However, metric spaces do satisfy the ARP (by Pestov, from extreme amenability of $\text{Iso}(\mathbb{U},d)$), without needing to add linear orders.
Also, by using the usual compactness arguments, we can assume that the witness $C$ to ARP is the full Fraïssé limit.
In the KPT correspondence, we saw a useful connection between the stabilizer of a set and collections of finite structures. See Lionel Nguyen Van Thé's first DocCourse lecture.
Here we mention an analogous connection.
So we can reword the ARP for finite metric spaces by transferring the colouring $c: \text{Emb}(A,\mathbb{U}) \rightarrow [r]$ to a colouring $\hat{c} : G / \text{Stab}(A) \rightarrow [r]$.
Thickness is a group property that captures some Ramsey properties. This is desirable because we would like to be able to detect Ramsey type phenomena from the group itself, without having to know the underlying Fraïssé limit.
$G$ is thick partition regular iff for every $V_X^\epsilon$ and every partition $G / \text{Stab}(x) = \bigcup_{i=1}^n P_i$, there is a $P_{i_0}$ that is thick.
This is really just unwinding definitions. Then by general topological dynamics abstract nonsense we get:
Note that this is a theorem just about groups. This doesn’t use much of the structure of $\mathbb{U}$. Our goal is to prove extreme amenability without having to first prove Ramsey theorems.
In the next lectures we will examine the Gurarij space and prove the ARP for $\ell_\infty^n$ (i.e. Banach spaces).
(This is incomplete – Mike)
There are many philosophical debates (e.g., multiverse view versus universe view) about the metamathematical interpretation of set forcing. The two standard ways of resolving the difficulty that generic filters exist outside the set-theoretic universe are the countable transitive models approach and the Boolean-valued models approach. In the countable transitive models approach, we force over countable transitive models that live in some larger ambient ${\rm ZFC}$ universe which has all the required generic filters. In the Boolean-valued models approach, we find a definable, over $V$, class model of ${\rm ZFC}$ into which our universe $V$ elementarily embeds and for which we have a generic filter in $V$, so that we can form its generic extension. Thus, we have classes in our universe which look like a model of ${\rm ZFC}$ and its forcing extension, and moreover this model behaves essentially like $V$ because $V$ elementarily embeds into it. In order to define these models, we pass from a fixed forcing $\mathbb P$ to its Boolean completion $\mathbb B$ (a unique up to isomorphism complete Boolean algebra of which $\mathbb P$ is a dense subset). With $\mathbb B$, we can build the Boolean-valued model $V^{\mathbb B}$ in the language with $\in$ and the predicate $\check V$ for the ground model (defined by $[[\tau\in\check V]]=\bigvee_{x\in V}[[\tau=x]]$). Let $U$ be any ultrafilter on $\mathbb B$ (no genericity is required). Let $V^{\mathbb B}/U$ consist of all $[\tau]_U$ with $\tau\in V^{\mathbb B}$ and $\check V_U$ consist of all $[\tau]_U$ with $[[\tau\in \check V]]\in U$. Then it will be the case that $\check V_U[[\dot G]_U]=V^{\mathbb B}/U$ and there is an elementary embedding $i:V\to \check V_U$, so that we can use $\check V_U$ and $V^{\mathbb B}/U$ as surrogates for $V$ and $V[G]$.
For class forcing over models of ${\rm GBC}$, we are forced to adapt the countable transitive models approach because class partial orders need not have Boolean completions (sometimes not even for set-sized suprema). To be more precise, a model $\mathbb M=(M,\in,\mathcal C)$ of second-order set theory is a countable transitive model if $M$ is countable and transitive, and $\mathcal C$ is a countable collection of subsets of $M$. All results below are about countable transitive models of some second-order set theory. In the general setup for class forcing, the transitivity requirement can be relaxed to allow for countable non-standard models, but it has to be carefully checked (by someone) which of the results below transfer to this more general context.
Forcing Theorem
Suppose $\mathbb P$ is a notion of forcing. The Forcing Theorem for $\mathbb P$ has two parts. The Definability Lemma states that for a fixed formula $\varphi(\vec x)$, the set of all $(p,\vec\tau)$ with $p\in\mathbb P$ and $\mathbb P$-names $\vec\tau$ such that $p\Vdash\varphi(\vec\tau)$ is definable. The Truth Lemma states that if a forcing extension $V[G]$ by $\mathbb P$ satisfies $\varphi(\vec\tau)$, then there is some condition $p\in G$ with $p\Vdash\varphi(\vec\tau)$. The Forcing Theorem is the most basic fact about set forcing, and it can fail for class forcing. Since it is shown in [2] that the full Forcing Theorem follows from the Definability Lemma for atomic formulas, the failure of the Forcing Theorem for class forcing lies already in the definability of atomic formulas. There are two ways to approach this problem. The Forcing Theorem holds for all pretame forcing notions (see [1] or [3]), and so restricting to this class eliminates the worry that the Forcing Theorem fails. It should be noted that there are non-pretame forcing notions for which the Forcing Theorem holds (for instance ${\rm Coll}(\omega,{\rm ORD})$, see [1]). We can also eliminate all such problems by moving to a stronger base theory, for example, ${\rm KM}$. Over ${\rm KM}$, and in fact already over the much weaker theory ${\rm GBC}+{\rm ETR}$ (${\rm ETR}$ is the principle of transfinite recursion over well-founded class relations, which states that every such first-order definable recursion has a solution, see [4] for more details), every class forcing satisfies the Forcing Theorem. The reason is that to prove the Definability Lemma for atomic formulas we need to perform a recursion over a well-founded class relation.
Boolean completions
Every separative set partial order $\mathbb P$ is a dense subset of its regular open algebra $\mathbb B$, which is a complete Boolean algebra. This algebra is unique up to isomorphism in the sense that if $\mathbb P$ is a dense subset of any other complete Boolean algebra $\bar {\mathbb B}$, then there is an isomorphism between $\mathbb B$ and $\bar {\mathbb B}$ fixing $\mathbb P$. A class Boolean algebra that has a class-sized antichain can never be complete [3]. So in the context of class forcing, the most we can ask for is that every (separative) class partial order is a dense subset of a set-complete Boolean algebra. Even this fails [2]. There are separative class partial orders that don't have set-complete Boolean completions, and if such a completion does exist, it need not be unique [2]. If the Forcing Theorem holds for a forcing notion $\mathbb P$, then $\mathbb P$ has a set-complete Boolean completion [2]. So every pretame notion of forcing has a set-complete Boolean completion. Again, this difficulty can also be eliminated by moving to a stronger theory such as ${\rm KM}$. In models of ${\rm KM}$, every (separative) class partial order $\mathbb P$ is a dense subset of a set-complete Boolean algebra and the regular open algebra of $\mathbb P$ is a definable hyperclass. So we have at least some access to the full Boolean completion of $\mathbb P$, which means that in principle we can attempt the Boolean-valued model construction from set forcing.
Dense embeddings
If $\mathbb P$ and $\mathbb Q$ are set partial orders such that $\mathbb P$ densely embeds into $\mathbb Q$, then $\mathbb P$ and $\mathbb Q$ have the same forcing extensions. This can fail for class forcing, and the property holding is precisely equivalent to pretameness [3]. As an example of the failure of this property consider again the partial order ${\rm Coll}(\omega,{\rm ORD})$ and the variant ${\rm Coll}_*(\omega,{\rm ORD})$ whose conditions are functions $p:n\to{\rm ORD}$ for some $n\in\omega$. Clearly ${\rm Coll}_*(\omega,{\rm ORD})$ densely embeds into ${\rm Coll}(\omega,{\rm ORD})$, but with a little bit of work it can be shown that ${\rm Coll}(\omega,{\rm ORD})$ adds sets, while ${\rm Coll}_*(\omega,{\rm ORD})$ does not [2].
Fullness
Suppose $\mathbb P$ is a set partial order. If $p\in\mathbb P$ and $p\Vdash \exists x\,\varphi(x)$, then there is a $\mathbb P$-name $\tau$ such that $p\Vdash\varphi(\tau)$. This property is called fullness. For class notions of forcing, fullness is equivalent to the ${\rm ORD}$-cc.
Nice names
If $\mathbb P$ is a set forcing, then every subset of ordinals (say of $\alpha$) in a forcing extension has a nice name, meaning a $\mathbb P$-name of the form $\bigcup_{\gamma<\alpha}\{\check\gamma\}\times A_\gamma$, where the $A_\gamma$ are antichains of $\mathbb P$. Every forcing extension by ${\rm Coll}(\omega,{\rm ORD})$ has a subset of $\omega$ for which there is no nice name [3]. Pretameness is equivalent to the property that every subset of ordinals in the extension has a nice name. The ${\rm ORD}$-cc is equivalent to the property that for every $\mathbb P$-name $\sigma$ such that $1\Vdash\sigma\subseteq\check\alpha$ for some ordinal $\alpha$, there is a nice name $\tau$ such that $1 \Vdash\sigma=\tau$ [3].
Separation
We already know that class forcing need not preserve ${\rm GBC}$. Replacement clearly fails in any forcing extension by ${\rm Coll}_*(\omega,{\rm ORD})$. It is a bit trickier to see that Separation fails as well. Let $F:\omega\to{\rm ORD}$ be the surjection added by ${\rm Coll}_*(\omega,{\rm ORD})$. The set $\{n\in\omega\mid F(n)\text{ is odd}\}$ is not in the extension [5]. The failure of this instance of Separation uses a class parameter, namely $F$. It is also true that there is a cardinal and cofinality preserving partial order $\mathbb P$ (satisfying the Forcing Theorem) all of whose extensions have a failure of Separation that does not use class parameters [5].
@book {friedman:classforcing,
AUTHOR = {Friedman, Sy D.},
TITLE = {Fine structure and class forcing},
SERIES = {de Gruyter Series in Logic and its Applications},
VOLUME = {3},
PUBLISHER = {Walter de Gruyter \& Co., Berlin},
YEAR = {2000},
PAGES = {x+221},
ISBN = {3-11-016777-8},
MRCLASS = {03-02 (03E15 03E35 03E45 03E55)},
MRNUMBER = {1780138},
MRREVIEWER = {A. Kanamori},
DOI = {10.1515/9783110809114},
URL = {http://dx.doi.org/10.1515/9783110809114},
}
@article {PeterHolyRegulaKrapfPhilippLuckeAnaNjegomirPhilippSchlicht:classforcing1,
AUTHOR = {Peter Holy and Regula Krapf and Philipp L\"{u}cke and Ana Njegomir and Philipp Schlicht},
TITLE = {Class Forcing, the Forcing Theorem and Boolean Completions},
NOTE ={To appear in the Journal of Symbolic Logic}
}
@article {PeterHolyRegulaKrapfPhilippSchlicht:classforcing2,
AUTHOR = {Peter Holy and Regula Krapf and Philipp Schlicht},
TITLE = {Characterizations of Pretameness and the {O}rd-cc},
NOTE ={Preprint}
}
@INCOLLECTION{GitmanHamkins:OpenDeterminacyForClassGames,
author = {Victoria Gitman and Joel David Hamkins},
title = {Open determinacy for class games},
booktitle = {Foundations of Mathematics, Logic at Harvard, Essays in Honor of Hugh Woodin's 60th Birthday},
publisher = {American Mathematical Society},
year = {(expected) 2016},
editor = {Andr\'es E. Caicedo and James Cummings and Peter Koellner and Paul Larson},
series = {Contemporary Mathematics},
note = {Newton Institute preprint ni15064},
url = {http://jdh.hamkins.org/open-determinacy-for-class-games},
eprint = {1509.01099},
archivePrefix = {arXiv},
primaryClass = {math.LO},
pdf = {http://boolesrings.org/victoriagitman/files/2016/09/Properclassgames.pdf},
}
@article {PeterHolyRegulaKrapfPhilippSchlicht:classforcing3,
AUTHOR = {Peter Holy and Regula Krapf and Philipp Schlicht},
TITLE = {Separation in Class Forcing Extensions},
NOTE ={Preprint}
}
Title: Bootcamp 1 – Informal meeting.
Lecturer: Jaroslav Nešetřil.
Date: September 20, 2016.
Main Topics: Overview of the topics of the DocCourse; classical results in Ramsey theory
Definitions: Arrow notation, Ramsey numbers, arithmetical progression
Bootcamp 1 – Bootcamp 2 – Bootcamp 3 – Bootcamp 4 – Bootcamp 5 – Bootcamp 6 – Bootcamp 7 – Bootcamp 8
The main scope of this lecture was to give a historical overview of classical results in Ramsey theory, including Ramsey's theorem itself. Furthermore, the program of the DocCourse was presented, which can be found here.
The three books below are the main references for Ramsey theory in general and the Bootcamp lectures in particular. Jarik also passed around an original version of Ramsey's paper, which is depicted on the conference poster.
In order to phrase Ramsey’s theorems we first introduce some standard notation:
Ramsey's theorem is then stated as follows:
A proof of Ramsey's theorem can be found in the notes to David Fox's lectures (Mike: Coming soon!), including some estimates for the corresponding Ramsey numbers:
By the pigeonhole principle we have $r(1,k,n) = k(n-1) + 1$. However, already the situation for the Ramsey numbers $r(2,2,n)$ is much more complex; only estimates are known for $n \geq 5$.
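As a concrete illustration (not from the lecture), the smallest nontrivial case $r(2,2,3) = 6$ can be verified by brute force: every 2-colouring of the edges of $K_6$ contains a monochromatic triangle, while $K_5$ admits a colouring with none.

```python
from itertools import combinations, product

def has_mono_triangle(n, colouring):
    """colouring maps each edge (i, j), i < j, to colour 0 or 1."""
    return any(
        colouring[(a, b)] == colouring[(a, c)] == colouring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_colouring_forced(n):
    """True iff every 2-colouring of the edges of K_n has a
    monochromatic triangle."""
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(edges, colours)))
        for colours in product((0, 1), repeat=len(edges))
    )

print(every_colouring_forced(5))  # False: K_5 has a triangle-free 2-colouring
print(every_colouring_forced(6))  # True: r(2,2,3) = 6
```

The $K_5$ witness is the familiar pentagon/pentagram colouring; for $K_6$ the search exhausts all $2^{15}$ colourings.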
Ramsey's work did not result from pure interest in combinatorics, but was motivated by Hilbert's Entscheidungsproblem, the problem of finding an algorithm that decides whether a given statement expressible in first-order logic is provable (from a given set of axioms). The finite Ramsey theorem was only used as an auxiliary result in "On a Problem of Formal Logic", in order to prove that every formula of the form
$$\exists x_1 \exists x_2 \cdots \exists x_n \forall y_1 \forall y_2 \cdots \forall y_n \phi(x_1, \ldots, x_n, y_1, \ldots, y_n)$$
is decidable.
We remark that the Entscheidungsproblem in general is not decidable (a result of Church and Turing); by a result of Trakhtenbrot, already adding one additional quantifier alternation results in undecidable formulas.
In the same paper Ramsey also presented a proof for the following infinite version of his theorem:
The proof of the infinite Ramsey theorem requires the axiom of choice. There exists a slight strengthening of the finite Ramsey theorem, which we will denote by FRT*. In FRT*, we additionally require that the minimum element of the monochromatic set $Y$ is bounded by the size of $Y$:
We are going to show that the infinite Ramsey theorem implies the strengthened version of the finite Ramsey theorem:
Note that, since the above proof of FRT* uses the infinite Ramsey theorem, it also requires the axiom of choice. It can be shown that this detour through the infinite is, in a sense, necessary: Paris and Harrington proved that FRT* is a true statement about the integers that can be stated in the language of arithmetic but not proved in Peano arithmetic. It was already known by Gödel's first incompleteness theorem that such statements existed; however, no examples of "natural" such theorems were known.
Their proof also led to the notion of indiscernibles in mathematical logic, i.e., objects which cannot be distinguished by any property or relation defined by a formula.
As mentioned above, Ramsey himself used his result only as an auxiliary step in proving statements about decidability. The Happy Ending theorem is often considered the starting point for the development of Ramsey theory as a whole new branch of mathematics:
Ramsey’s theorem was preceded by several other results which we nowadays consider part of Ramsey theory, although they, too, were not studied from a combinatorial point of view when first proven. One example is Van der Waerden’s theorem:
In reproving a theorem of Dickson on a modular version of Fermat’s conjecture, Schur showed the following:
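Schur’s result states that for every number of colours $k$ there is an $n$ such that any $k$-colouring of $\{1, \ldots, n\}$ contains a monochromatic solution of $x + y = z$. For two colours the threshold is $n = 5$, which is small enough to verify exhaustively (a sketch; the helper name is ours):

```python
from itertools import product

def has_mono_schur_triple(colouring):
    """colouring[i] is the colour of i+1; look for x+y=z within one colour class."""
    n = len(colouring)
    return any(
        colouring[x - 1] == colouring[y - 1] == colouring[x + y - 1]
        for x in range(1, n + 1)
        for y in range(x, n + 1)
        if x + y <= n
    )

# Every 2-colouring of {1,...,5} contains a monochromatic solution of x+y=z,
# while {1,...,4} admits the colouring {1,4} / {2,3} avoiding one.
assert all(has_mono_schur_triple(c) for c in product(range(2), repeat=5))
assert not has_mono_schur_triple((0, 1, 1, 0))
```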
Hilbert’s cube lemma is probably the earliest result which can be viewed as a Ramsey-type theorem (besides, of course, the pigeonhole principle). It was established in connection with investigations on the irreducibility of rational functions with integer coefficients.
Title: Bootcamp 2 (of 8)
Lecturer: Jaroslav Nešetřil.
Date: September 21, 2016.
Main Topics: The Rado graph, homogeneous structures, universal graphs
Definitions: Language, structures, homomorphisms, embeddings, homogeneity, universality, Rado graph (Random graph),…
Bootcamp 1 – Bootcamp 2 – Bootcamp 3 – Bootcamp 4 – Bootcamp 5 – Bootcamp 6 – Bootcamp 7 – Bootcamp 8
In this lecture we discussed some standard notions from model theory that will be used in the rest of the Bootcamp lectures. Further we discussed the Rado graph (also known as Random graph) as an example of a homogeneous structure.
Then an $L$-structure $\mathcal{A}$ is a triple $\mathcal{A} = (A,L,I)$, where $A$ is called the domain of $\mathcal{A}$ and $I$ the interpretation function. For $I$ we require that $I(R) \subseteq A^{\text{ar}(R)}$ for every relational symbol $R$ (i.e. $I(R)$ is an $\text{ar}(R)$-ary relation on $A$) and that $I(f)$ is a function from $A^{\text{ar}(f)}$ to $A$.
For simplicity, we usually don’t talk about the interpretation function and write $R^\mathcal{A} = I(R)$ and $f^\mathcal{A} = I(f)$. If it is clear from the context, we sometimes abuse notation and write $R$ for both the symbol and its interpretation in a structure.
Constants can be regarded as unary singleton relations, or as $0$-ary functions. However, in the Bootcamp lectures we are only going to discuss relational structures, i.e. structures whose language consists only of relational symbols.
Injective homomorphisms are called monomorphisms, injective strong homomorphisms are called embeddings, bijective embeddings are called isomorphisms. An isomorphism from a structure $\mathcal A$ to itself is called an automorphism of $\mathcal A$.
We say $\mathcal A$ is a substructure of $\mathcal B$ if $A \subseteq B$ and the identity is an embedding of $\mathcal A$ into $\mathcal B$. If there is an embedding $e: \mathcal A \to \mathcal B$, we call the image $e(\mathcal A)$ a copy of $\mathcal A$ in $\mathcal B$.
Erdős and Rényi showed the paradoxical result that there is a unique (and highly symmetric) countably infinite random graph. We are going to discuss this graph and some of its properties in this section.
Suppose we have already constructed an isomorphism $I$ from a finite subset $A \subseteq V$ to $I(A) \subseteq V'$. Then let $a$ be the first element of $V \setminus A$; it gives us a partition of $A$ into the set of its neighbors $A_E = \{x \in A: E(x,a)\}$ and non-neighbors $A_{\bot} = \{x \in A: \neg E(x,a)\}$. By the extension property of $G'$, there is a vertex $a'$ in $V'$ such that $a'$ has an edge to every element of $I(A_E)$ and no edge to any element of $I(A_{\bot})$. By setting $I(a) = a'$ we have extended the given isomorphism to $A \cup \{a\}$.
To ensure that every vertex of $G’$ is in the image of $I$ we alternate in the next step, finding a suitable preimage of the first element of $G’ \setminus I(A)$. This can be done symmetrically by the extension property of $G$.
Since both graphs $G$ and $G’$ are countable, the union of this ascending sequence of finite isomorphisms is an isomorphism from $G$ to $G’$.
The technique used in the proof above is known as a back-and-forth (or zigzag) argument. This proof technique also appears in other talks of the course, in particular in the proof of Fraïssé’s theorem in Bootcamp 5.
It is not difficult to show that there are graphs with the extension property. An explicit description of such a graph was given by Rado in 1964. The vertex set of the Rado graph $\mathcal R$ is the set of natural numbers, where for $a < b$ there is an edge between $a$ and $b$ if and only if the binary representation of $b$ has a $1$ at its $a$-th position.
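Rado’s edge rule, together with the witness it provides for the extension property, can be sketched in a few lines (function names are ours; the witness formula, a high bit to push the witness past both sets plus low bits encoding the required adjacencies, is the standard one):

```python
def rado_edge(a, b):
    """a ~ b in the Rado graph iff bit min(a,b) of max(a,b) is set."""
    a, b = sorted((a, b))
    return (b >> a) & 1 == 1

def extension_witness(neighbours, non_neighbours):
    """Return a vertex adjacent to every element of `neighbours` and to no
    element of `non_neighbours` (the two finite sets must be disjoint)."""
    m = max(neighbours | non_neighbours, default=0) + 1
    # 2**m pushes the witness past both sets; the low bits encode adjacency.
    return (1 << m) + sum(1 << a for a in neighbours)

A_E, A_bot = {0, 2, 5}, {1, 3, 4}
w = extension_witness(A_E, A_bot)
assert all(rado_edge(w, a) for a in A_E)
assert not any(rado_edge(w, a) for a in A_bot)
```

Iterating `extension_witness` on both sides is exactly what drives the back-and-forth argument above.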
There is also a probabilistic characterization of such graphs by Erdős and Rényi, which preceded Rado’s construction. Let us denote by a random graph a probability distribution over graphs in which the edges are distributed independently, each with probability $\frac{1}{2}$. (Note by Michael: In the literature the term “random graph” sometimes also refers to graphs generated by some other random process.)
Then the following holds:
By the above theorem the Rado graph is often also called the Random graph. The Rado graph $\mathcal R$ has several other nice features, making it a highly symmetric structure:
Examples:
In the case of graphs a full classification of the homogeneous graphs is known: in the finite case this classification is due to Gardiner, in the countable case due to Lachlan and Woodrow.
We will hear more about homogeneous structures and a way of constructing them in Bootcamp 5.
Examples:
In this section we are going to show that not every class of countable structures $\mathcal C$ admits a $\mathcal C$-universal structure. A counterexample for graphs was given by Füredi and Komjáth:
We remark that this result was proven in a more general setting (substitute $C_4$ by any finite, 2-connected, but not complete graph); here we only present a proof for $C_4$.
(Michael: My notes on the proof of this lemma are not complete…)
Now let us take the hypergraph $H(7,5)$ given by the above lemma. Further, let $G_0$, $G_1$ be graphs on a 7-element vertex set such that in $G_0$ the first 4 elements form a path and the last 3 a cycle; in $G_1$ the last 3 elements also form a cycle and there is an edge from the first to the fourth vertex.
For every function $f: \mathbb N \to \{0,1\}$ we then form a $C_4$-free graph $G_f$ on $\omega$ by replacing every hyperedge $E_i$ of $H(7,5)$ by $G_0$ if $f(i)=0$ and by $G_1$ if $f(i)=1$. Note that the graph is well-defined and $C_4$-free by the properties of $H(7,5)$.
Now assume that there is a countable universal $C_4$-free graph $U$. Then $U$ has to embed all graphs of the form $G_f$; for every $G_f$ let $e_f$ be an embedding of $G_f$ into $U$. Since $U$ is countable, there are two distinct graphs $G_{f}$, $G_{h}$ such that $e_f$ and $e_h$ agree on the set $\{1,2,\ldots,N\}$. But since $f \neq h$, there has to be a minimal integer $j$ where $f$ and $h$ disagree. Then, by construction of the graphs $G_f$ and $G_h$, the union $e_f(E_j) \cup e_h(E_j)$ has to contain a 4-cycle. But this is a contradiction.
Title: Introduction to the KPT correspondence 3 (of 3).
Lecturer: Lionel Nguyen Van Thé.
Date: November 18, 2016.
Main Topics:
Definitions: Expansion property,
Lecture 1 – Lecture 2 – Lecture 3
In the second lecture we saw that the Ramsey property of $\mathbb{K}^\star$ (a combinatorial property) ensures universality of a certain minimal flow (a dynamical property). Today we’ll look at going from a dynamical property (minimality) to a combinatorial property (the expansion property).
Recall that we proved the following in the second lecture:
Here $$X^\star := \overline{\text{Aut}(\mathbb{K}) \cdot \vec{R^\star}}.$$
Last time we saw that precompactness of the expansion allows us to topologically identify $$\text{Aut}(\mathbb{K}) / \text{Aut}(\mathbb{K}^\star)\cong X^\star.$$ We also saw that $X^\star$ is a subset of a large compact product $$P^\star := \prod_{i \in I} \{0,1\}^{\mathbb{N}^{a(i)}}.$$
Our main question today will be “What combinatorial properties guarantee that $X^\star$ is a minimal flow?” More precisely, what condition must an expansion $\vec{R^\star} \in P^\star$ satisfy so that $X^\star$ is minimal?
We start by reminding you about the expansion property (which we looked at in Bootcamp 4 and Bootcamp 7).
We say that $\mathcal{K}^\star$ has the expansion property (EP) (relative to $\mathcal{K}$) when $\forall A \in \mathcal{K}, \exists B \in \mathcal{K}$ such that $\forall A^\star, B^\star \in \mathcal{K}^\star$ (expansions of $A,B$ respectively), we have $A^\star$ embeds in $B^\star$.
When $\mathcal{K}$ has the Joint Embedding Property, then (EP) is equivalent to $\forall A \in \mathcal{K}, \forall A^\star \in \mathcal{K}^\star, \exists B \in \mathcal{K}$ such that $\forall B^\star \in \mathcal{K}^\star$ (an expansion of $B$), we have $A^\star$ embeds in $B^\star$.
Here is the major theorem we will prove.
“You have to understand the purpose!” – Nešetřil.
“The difficulty is really translating into dynamical language what the combinatorics mean.” – Lionel
Before proving this theorem, we prove two propositions which will contain all the heavy lifting. For notational simplicity you may assume that $R^\star$ is just a single relation $R$.
“(2) is the correct finitization of (1).”
By (1), for all $\vec{S} \in X(R)$ we have $A^\star$ embeds into $(\mathbb{K}, \vec{S})$. So there is a finite $C \subset \text{dom}(\mathbb{K})$ such that $$\vec{S} \in X_C := \{\vec{T} \in X(R) : A^\star \cong (\mathbb{K}, \vec{T}) \upharpoonright C\}$$ which is open in $P^\star$.
In this way $\{X_C : C \in [\text{dom}(\mathbb{K})]^{< \omega}\}$ forms an open cover of $X(R)$.
By compactness, there are $C_1, \ldots, C_n$ finite such that $X(R) = \bigcup_{i \leq n} X_{C_i}$.
Let $B$ be the finite substructure of $\mathbb{K}$ supported by $C = \bigcup_{i \leq n} C_i$.
Claim: $B$ witnesses the (EP) for $A^\star$.
This is all that remains to finish the proof that $(1) \Rightarrow (2)$.
This induces an embedding $\phi^\prime : B \rightarrow \mathbb{K}$. By ultrahomogeneity (of $\mathbb{K}$) we can extend $\phi^\prime$ to an automorphism $g: \mathbb{K} \rightarrow \mathbb{K}$.
Then, for $i \in I$ and $y_1, \ldots, y_{a(i)} \in B$ we have
So setting $S_i = g^{-1} \cdot R_i^\star$ for all $i \in I$ we get $B^\star \cong (\mathbb{K}, \vec{S}) \upharpoonright C$.
Now $\vec{S} \in \bigcup_{i \leq n} X_{C_i}$, so there is an $l \leq n$ such that $\vec{S} \in X_{C_l}$.
So $$A^\star \cong (\mathbb{K}, \vec{S}) \upharpoonright C_l \leq (\mathbb{K}, \vec{S}) \upharpoonright C \cong B^\star.$$ Thus $A^\star$ embeds into $B^\star$.
We now prove $(2) \Rightarrow (1)$. Fix $A^\star \in \text{Age}(\mathbb{K}^\star), B \in \text{Age}(\mathbb{K})$ witnessing the (EP).
Take an $\vec{S} \in X(R)$. Then, by the (EP), $$A^\star \leq (\mathbb{K}, \vec{S}) \upharpoonright B \in \text{Age}(\mathbb{K}, \vec{S}).$$ So $\text{Age}(\mathbb{K}^\star) \subseteq \text{Age}(\mathbb{K}, \vec{S})$.
We can now combine this with the result from the second lecture (which tells us about universality) to get the following method for computing universal minimal flows.
This gives an explicit, combinatorial way to compute a universal minimal flow. You only need to find a precompact expansion of $\mathbb{K}$ with (EP) and (RP). Often (RP) is used to prove (EP).
All of the universal minimal flows constructed in this way will be metrizable.
The following captures the uniqueness of a precompact expansion.
We saw in lecture 2 that the “smallness” of the universal minimal flow is dictated partly by the homogeneity and Ramsey properties of the group. The following theorem captures that notion.
Why metrizability? It is a reasonable “smallness” condition.
This was expanded by Zucker, and he was able to drop the $G_\delta$ condition, while capturing the Ramsey degree.
One way to interpret this result is that if you have a combinatorial property (3), then you get a precompact expansion with the (EP) and the (RP). This suggests (or at least seems to suggest) that precompact expansions are the relevant ones to consider.
Natural question (Tsankov 2009). Which $\mathbb{K}$ satisfy these theorems? (Just knowing $\mathbb{K}$ and not assuming (RP).)
Conjecture (Nguyen Van Thé 2012). When $\mathbb{K}$ is precompact.
This conjecture was shown to be false in 2015 by Evans using a Hrushovski construction. See his DocCourse lectures.
Conjecture (Bodirsky, Pinsker). This should be true for finite languages.
“What does the finite language mean topologically? Something about growth rate of number of structures of cardinality $n$? Related to amenability? Maybe the arity matters? This might require more examples of high arity.”
Research has gone in many directions from the original KPT paper.
Main references:
Other works cited (Mike: I have to fix some of these. This is obviously unfinished.)
Title: Introduction to the KPT correspondence 2 (of 3).
Lecturer: Lionel Nguyen Van Thé.
Date: November 16, 2016.
Main Topics: Computing universal minimal flows, $M(S_\infty)$, why precompactness is important.
Definitions: Minimal flow, universal flow, Logic action, $G$-equivariant.
Lecture 1 – Lecture 2 – Lecture 3
Last time we looked at how the Ramsey property of a structure $\mathbb{K}$ ensures that $\text{Aut}(\mathbb{K})$ is extremely amenable.
Today we will look at what can be said about the dynamics of $\text{Aut}(\mathbb{K})$ when $\text{Age}(\mathbb{K})$ is not Ramsey.
Last lecture we did not provide many examples of extremely amenable groups, so let us fix that now.
The underlying Ramsey principle here is the classical Ramsey theorem. This was the first known example of an extremely amenable group. Note that it comes seven years before the 2005 KPT paper.
The following examples were shown to be extremely amenable using the 2005 KPT correspondence, although the underlying Ramsey principles were already known.
Theorem (KPT, 2005). The following groups are extremely amenable. The needed Ramsey principle is given in brackets.
In order to analyze what happens to $\text{Aut}(\mathbb{K})$ when $\mathbb{K}$ is not Ramsey, we will introduce the notion of a universal minimal flow, which at its heart is a canonical compact object we can associate to a group. The size (both topologically and in terms of cardinality) of a group’s universal minimal flow will be determined by the “amount of Ramsey” that the group has.
Here are two exercises to play around with these concepts.
For a fixed $G$, the object that is universal in the class of minimal $G$flows will be a canonical object we can associate to $G$, called the universal minimal flow of $G$. To make sense of this, we introduce the concept of universality and flow homomorphism.
Definition. Given $G$-flows $G \curvearrowright X$ and $G \curvearrowright Y$, a flow homomorphism is a map $\pi: X \rightarrow Y$ that is continuous and $G$-equivariant.
A map $\pi: X \rightarrow Y$ is $G$-equivariant if $\forall g \in G, \forall x \in X$ we have $$\pi(g \cdot x) = g \cdot \pi(x).$$
These universal objects always exist, although the proof is nonconstructive.
Theorem. Let $G$ be a topological group. There is a minimal $G$-flow $G \curvearrowright M(G)$ that is universal in the sense that for every minimal $G$-flow $G \curvearrowright Y$ there is an onto flow homomorphism $\pi: M(G) \rightarrow Y$.
In addition, $M(G)$ is unique (up to flow isomorphism). So $M(G)$ is called the universal minimal flow of $G$.
Typically $M(G)$ will be hard to describe. The following facts show cases where they are easily understood.
Exercise.
Two other examples where $M(G)$ is known.
The first known example of a nontrivial metrizable universal minimal flow is the following.
We will compute the universal minimal flow of $S_\infty$. The original proof is due to Glasner-Weiss in 2002, but we will present a proof that is easier to generalize. You should compare this with their original proof.
Proof. By an earlier exercise, $\text{LO}(\mathbb{N})$ is a minimal flow, so we need “only” show that it is universal. So let $G = S_\infty$ and let $G \curvearrowright X$ be a minimal flow.
Step 1: Use extreme amenability of a smaller group.
Fix a linear ordering $<^\mathbb{Q} \in \text{LO}(\mathbb{N})$ such that $(\mathbb{N}, <^\mathbb{Q}) \cong (\mathbb{Q}, <)$.
In this way we have that $G^\star = \text{Aut}(\mathbb{N}, <^\mathbb{Q}) \cong \text{Aut}(\mathbb{Q}, <)$ which is extremely amenable by Pestov’s theorem. Note that $G^\star \leq G$. So $G \curvearrowright X$ induces an action $G^\star \curvearrowright X$. By extreme amenability of $G^\star$, there is a $G^\star$fixed point $x \in X$.
Step 2: Use uniform spaces to extend the group action.
Now consider the map $\pi: G \rightarrow X$ that sends $g \mapsto g \cdot x$. Since $G^\star = \text{stab}(<^\mathbb{Q})$ we have that $\pi(g)$ only depends on $[g] \in G / G^\star$. Thus $$G / G^\star \cong G \cdot <^\mathbb{Q}.$$
We also see that $$G \cdot <^\mathbb{Q} = \{\preceq \in \text{LO}(\mathbb{N}) : (\mathbb{N}, \preceq) \cong (\mathbb{N}, <^\mathbb{Q}) \cong (\mathbb{Q}, <)\}.$$
So, in this way, we can think of $\pi: G \cdot <^\mathbb{Q} \rightarrow X$.
Assume for the moment that $\pi$ can be continuously extended to a map $\tilde{\pi}$ on all of $\text{LO}(\mathbb{N})$. In this case $\tilde{\pi}[\text{LO}(\mathbb{N})]$ is a compact subspace of $X$ containing $x$ (the $G^\star$ fixed point), hence $G \cdot x$. Since $X$ is minimal, $X = \overline{G \cdot x} \subseteq \tilde{\pi}[\text{LO}(\mathbb{N})] \subseteq X$. So we are done.
Claim. $\pi$ can be continuously extended to a map $\tilde{\pi}$ on all of $\text{LO}(\mathbb{N})$.
Proof of claim. We would like to show first that $\pi$ is uniformly continuous. What does that even mean in the nonmetric setting? How do we capture the interplay between the topology of $\text{LO}(\mathbb{N})$ and the group $G$?
We can’t assume that $X$ has a metric, but it will always have a unique uniformization, which will act like a metric for the purposes of defining uniform continuity.
To extend $\pi$ continuously, if you are familiar with uniform spaces:
If you aren’t familiar with uniform spaces, then just pretend that $X$ has a metric and do the same as above.
This part shows why this type of argument doesn’t always work.
This proof works directly when you replace $S_\infty$ by $\text{Aut}(\mathbb{K})$ and $\text{Aut}(\mathbb{N}, <^\mathbb{Q})$ by a closed subgroup $G^\star \leq G$ such that
Question: What does “$G/G^\star$ is precompact” mean combinatorially? Put another way, what do such $G^\star$ look like?
Since $G^\star \leq G = \text{Aut}(\mathbb{K})$ we can think of $G^\star = \text{Aut}(\mathbb{K}^\star)$ as an expansion of $\mathbb{K}$ where $\mathbb{K}^\star = (\mathbb{K}, (R_i^\star)_{i \in I}) = (\mathbb{K}, \vec{R^\star})$, where $I$ is possibly infinite.
If the arity of $R_i^\star$ is denoted by $a(i)$, then $$\vec{R^\star} \in \prod_{i \in I} \{0,1\}^{\mathbb{N}^{a(i)}} =: P^\star,$$ and $P^\star$ is compact.
Here are two exercises to help you understand the interplay of these objects.
A priori, $d^\star$ gives the box topology, which could be different from the product topology. However, precompactness guarantees that these are the same.
Exercise. Show that $(G / G^\star, \text{proj}_R)$ is precompact iff $d^\star$ generates the product topology on $P^\star$, and every element of $\text{Age}(\mathbb{K})$ has only finitely many expansions in $\text{Age}(\mathbb{K}^\star)$.
That is, $\mathbb{K}^\star$ is a precompact expansion of $\mathbb{K}$, hence the name.
In this case, we write $$X^\star:= \overline{G \cdot \vec{R^\star}} \subset P^\star.$$
Recall that if $G \curvearrowright X$ is minimal then there is a flow homomorphism $\pi: X^\star \rightarrow X$. Now for any minimal flow $Y \subseteq X^\star$ we take $y \in Y$ and see that $\pi(Y) \supseteq \overline{G \cdot \pi(y)} = X$.
Corollary. Under the same assumptions, any minimal subflow of $\text{Aut}(\mathbb{K}) \curvearrowright X^\star$ is the universal minimal flow.
In particular, $M(\text{Aut}(\mathbb{K}))$ is metrizable.
In practice, computing this requires understanding what the minimal subflows of $\text{Aut}(\mathbb{K}) \curvearrowright X^\star$ look like. This amounts to understanding when $\text{Aut}(\mathbb{K}) \curvearrowright \overline{G \cdot \vec{R^\star}}$ is minimal.
These are our overarching references
Here are the references to specific theorems we mentioned. (Mike: I’m missing a couple.)
Title: Topological dynamics and Ramsey classes.
Lecturer: Lionel Nguyen Van Thé.
Date: November 14, 2016.
Main Topics: Proof of the KPT correspondence between extreme amenability and Ramsey classes.
Definitions: Topological group, $S_\infty$, $d_R, d_L$, Polish group, ultrametric, $G$flow, extreme amenability.
Our main goal is to introduce the KPT correspondence and provide proofs of two main results. The flavour is combinatorial, but the techniques are topological. The KPT correspondence is a powerful bridge between Structural Ramsey Theory and Topological Dynamics.
Here are the main references for these lectures. We will provide other secondary references with each lecture.
A disclaimer: all spaces discussed will be Hausdorff, so we will not mention this again.
Typically we will be looking at automorphisms, or isomorphisms, or some other collection of bijections.
Example. Let $S_\infty :=$ the collection of all bijections on $\mathbb{N}$, together with the topology of pointwise convergence. That is, basic open sets are of the form $A(g,F) = \{h \in S_\infty : h \upharpoonright F = g \upharpoonright F\}$, where $F \subset \mathbb{N}$ is finite and $g \in S_\infty$.
This has some compatible metrics:
A metric space $(X, \rho)$ is an ultrametric space if
$$\forall x,y,z \in X \text{ we have } \rho(x,z) \leq \max\{\rho(x,y), \rho(y,z)\}.$$ This is a strong form of the triangle inequality.
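The compatible metrics on $S_\infty$ are typically of the form $d_L(g,h) = 2^{-n}$, where $n$ is least with $g(n) \neq h(n)$, with $d_R(g,h) = d_L(g^{-1}, h^{-1})$. As a sanity check, a sketch restricted to permutations of a finite set (so it is actually runnable) verifies that $d_L$ satisfies the ultrametric inequality:

```python
from itertools import permutations

def d_left(g, h):
    """d_L(g, h) = 2^{-n}, n least with g(n) != h(n); 0 if g == h.
    Here g, h are tuples representing permutations of {0, ..., len-1}."""
    for n, (gn, hn) in enumerate(zip(g, h)):
        if gn != hn:
            return 2.0 ** (-n)
    return 0.0

# d_L satisfies the ultrametric inequality d(x,z) <= max(d(x,y), d(y,z)):
perms = list(permutations(range(4)))
assert all(
    d_left(x, z) <= max(d_left(x, y), d_left(y, z))
    for x in perms for y in perms for z in perms
)
```

The key observation is that if $x$ agrees with $y$ and $y$ agrees with $z$ below position $n$, then $x$ agrees with $z$ below $n$, which is exactly the ultrametric inequality for $d_L$.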
“What is happening today is really about completions; specifically $d_R$.”
The last exercise is partly why closed subgroups have nice interactions with respect to combinatorics.
The KPT machinery can be transposed into the Polish group setting, but requires continuous Fraïssé theory (which we will learn about in later talks).
A $G$-flow $G \curvearrowright X$ is a continuous action of $G$ on a compact space $X$.
A topological group $G$ is extremely amenable when every $G$-flow has a fixed point. That is, there is an $x \in X$ such that $\forall g \in G$ we have $g \cdot x = x$.
We will not use the notion of amenability here, but to mention it: a group $G$ is amenable when every $G$flow admits an invariant Borel probability measure. So in this way we see that extreme amenability implies amenability.
Flows of this form are very important, and we will investigate them in more detail in the second lecture.
Here is the major correspondence between Ramsey properties and extreme amenability.
The proof will be selfcontained. The right way to think about this might be to use more sophisticated topological notions from functional analysis. We will hint at these at the end of the lecture, then go into more detail in the following lectures.
We use extreme amenability to prove the Ramsey property. We do this by constructing a compact $G$flow, and then correctly interpreting what a fixed point is.
Let $G = \text{Aut}(\mathbb{K})$. Fix $k \in \mathbb{N}$ and $A,B \in \text{Age}(\mathbb{K})$. It suffices to show that $\mathbb{K} \longrightarrow (B)_k^A$. So fix a colouring $\chi: \binom{\mathbb{K}}{A} \longrightarrow [k]$. (You will probably forget about all these things, because we are going to leave them to the side for now. We’ll come back to them though!)
In order to use extreme amenability, we construct a compact space that $G$ acts on. Let $X$ be the collection of all $k$-colourings of $\binom{\mathbb{K}}{A}$. Specifically, $$X = [k]^\binom{\mathbb{K}}{A}$$ which is compact when given the product topology. $G$ acts on $X$ by permuting the copies of $A$; specifically, $g \cdot \gamma (\tilde{A}) = \gamma(g^{-1}(\tilde{A}))$. The inverse is only there to ensure that this is an action; it is not mysterious.
Now applying extreme amenability to $X$ itself would be useless: we can already identify fixed points, namely the constant colourings. Also, $X$ does not know anything about $\chi$. (Where $\chi$ was our original colouring. Did you forget about it?) So we go to a place that knows about $\chi$: we instead consider the $G$-flow $\overline{G \cdot \chi}$.
By extreme amenability, this has a $G$-fixed point. So there is a $\chi_0 \in \overline{G \cdot \chi}$ such that $\forall g \in G$ we have $g \cdot \chi_0 = \chi_0$.
By ultrahomogeneity of $\mathbb{K}$, $\chi_0$ is a constant colouring on $\binom{\mathbb{K}}{A}$. We can see this because for all $g \in G$ and all $\tilde{A} \in \binom{\mathbb{K}}{A}$ we have $\chi_0 (\tilde{A}) = \chi_0 (g^{-1} (\tilde{A}))$. Since for any two copies of $A$ there is an automorphism of $\mathbb{K}$ mapping one to the other, $\chi_0$ is constant.
Now we are going to transfer this to knowledge about $\chi$. Note that $\binom{B}{A}$ is a finite subset of $\binom{\mathbb{K}}{A}$, so the values $\chi_0$ takes on this set specify a basic open set $U$ in $X$. Since $\chi_0 \in \overline{G \cdot \chi}$, we have $U \cap (G \cdot \chi) \neq \emptyset$; take $g$ to witness this.
Therefore $$\chi_0 \upharpoonright \binom{B}{A} = (g \cdot \chi) \upharpoonright \binom{B}{A} = \chi \upharpoonright \binom{g^{-1}(B)}{A}.$$ Setting $\tilde{B} = g^{-1}(B)$ we have that $\chi \upharpoonright \binom{\tilde{B}}{A}$ is constant, as desired.
To prove that a group is extremely amenable from the Ramsey property we will discretize $G$. We will prove a (discrete) Ramsey-type property in our setting, and a continuous, approximate version (using the discrete version). The continuous Ramsey version will allow us to approximate a fixed point arbitrarily well. By taking a limit, we will get a true fixed point.
First we may assume that the domain of $\mathbb{K}$ is $\mathbb{N}$. Then let $A_m$ be the substructure of $\mathbb{K}$ supported by the domain $[m]$. (We used this same trick in Bootcamp 5, but there it was for compactness reasons.)
Since $A_m$ is rigid, the setwise stabilizer is the same as the pointwise stabilizer on $A_m$. That is $$\{g \in G : g(A_m) = A_m\} = \text{stab}(A_m).$$ Note that $\text{stab}(A_m)$ is a closed subgroup of $G$.
The last step used rigidity in the reverse implication.
Observe that $[g]$ is the $d_R$-ball of radius $2^{-m}$ around $g$. Recall that these balls give a finite partition of $G$.
We are now ready to state a discrete Ramsey-type result in this setting.
THEN there is a $g \in G$ such that $f$ is constant on $Fg = \{hg : h \in F\}$.
By ultrahomogeneity, there is a $g \in G$ such that $\tilde{B} = g^{-1}(B)$. (We’ll use this in a moment.)
Now,
Since $f$ is constant on $\binom{\tilde{B}}{A}$, it must also be constant on $[Fg]$. Since $f$ was constant on each equivalence class, this means that $f$ is constant on $Fg$, as desired.
We will now establish a continuous version of this Ramsey property.
There is a $g \in G$ such that $\forall f \in \mathcal{F}$, $f$ is constant up to $\epsilon$ on $Fg$. That is, $$\forall h, h^\prime \in F, \quad \vert f(hg) - f(h^\prime g) \vert < \epsilon.$$
Use uniform continuity to make sure that $f$ is constant on each equivalence class (use the fact about how $d_R$ creates partitions of $G$.)
Then apply the discrete Ramsey to the step function version of $f$. Unwinding what that means about the true $f$ will give the desired conclusion.
Now we are in a position to finish the original proof. We wish to show that $G$ is extremely amenable. So let $G \curvearrowright X$ be a $G$-flow.
Fix $F \in [G]^{<\omega}$ and $\mathcal{F}$ a finite family of functions $f_i : X \rightarrow \mathbb{C}$ that are uniformly continuous and bounded. (Note that the domain of these functions is different from the one in the hypothesis of the continuous Ramsey fact. You might also wonder what uniform continuity means in this context. Don’t worry for now; we’ll fix that later.) Let $\epsilon >0$. Define $$E(F, \mathcal{F}, \epsilon) := \{x \in X : \forall h \in F, \forall f \in \mathcal{F}, \vert f(hx) - f(x) \vert < \epsilon \}.$$ This is the collection of all approximate fixed points.
This is a closed subset of $X$, and hence compact.
In this way, for a $x \in X$, $\mathcal{F}$ transfers to $\mathcal{F}_x = \{f_x : f \in \mathcal{F}\}$, a collection of uniformly continuous, bounded functions from $(G, d_R)$ to $\mathbb{C}$.
Applying the continuous Ramsey fact we see that every $E(F, \mathcal{F}, \epsilon)$ is nonempty, and these sets have the finite intersection property (finitely many such $E$ have nonempty intersection).
Since they are compact, we know that the full infinite intersection is nonempty. That is, there is an $$x_0 \in \bigcap_{F, \mathcal{F}, \epsilon} E(F, \mathcal{F}, \epsilon).$$
Claim. $x_0$ is a fixed point of $G$.
Once we have this, the proof is finished.
This contradicts the fact that $x_0 \in E(\{f_0\}, \{g_0\}, \frac{1}{3})$.
This proof is not technically difficult, but the picture is hard to see. We’ll give a broader picture in later lectures.
Let us play around with the use of rigidity. It was only used in one part of the proof (find it!).
The Ramsey property should be thought of as a natural notion of separation. It says that some functions cannot be separated.
We introduce the concept of uniform structures. Broadly, a uniform structure is weaker than a metric structure, and is the weakest place where the notion of “uniform continuity” still makes sense. This will fix the issue that was present in the proof of $2 \Rightarrow 1$ where we used uniformly continuous functions from $X$ to $\mathbb{C}$. We made no assumption about the metrizability of the compact space $X$, but it will turn out that compact spaces always have a unique uniform structure (that agrees with its topology).
These nLab notes provide a good introduction to uniform spaces. (Mike: These notes are better written than I could do without a lot of work. It isn’t essential to understand uniform spaces to understand the arguments being used in these lectures.)
Abstract: Let $x$ be a real of sufficiently high Turing degree, let $\kappa_x$ be the least inaccessible cardinal in $L[x]$ and let $G$ be $Col(\omega, {<}\kappa_x)$-generic over $L[x]$. Then Woodin has shown that $\operatorname{HOD}^{L[x,G]}$ is a core model, together with a fragment of its own iteration strategy.
Our plan is to extend this result to mice which have finitely many Woodin cardinals. We will introduce a direct limit system of mice due to Grigor Sargsyan and sketch a scenario to show the following result. Let $n \geq 1$ and let $x$ again be a real of sufficiently high Turing degree. Let $\kappa_x$ be the least inaccessible strong cutpoint cardinal of $M_n(x)$ such that $\kappa_x$ is a limit of strong cutpoint cardinals in $M_n(x)$, and let $g$ be $Col(\omega, {<}\kappa_x)$-generic over $M_n(x)$. Then $\operatorname{HOD}^{M_n(x,g)}$ is again a core model, together with a fragment of its own iteration strategy.
This is joint work in progress with Grigor Sargsyan.
Many thanks to Richard again for the great pictures!
Title: Fractional Hedetniemi’s conjecture and Chromatic Ramsey number
Lecturer: Xuding Zhu
Date: November 9, 2016
Main Topics: Chromatic Ramsey numbers, lower bound for them, Hedetniemi’s conjecture, fractional Hedetniemi’s conjecture.
Definitions: $\rho$Ramsey number, $\chi$Ramsey number, wreath product, product graph, graph homomorphism, fractional chromatic number
We introduce a natural generalization of the Ramsey number for graphs, first investigated by Burr, Erdős and Lovász in the 1970s. We look for Ramsey witnesses of minimal chromatic number, rather than of minimal number of vertices. We look at bounds for this quantity and show that a conjectured lower bound of Burr-Erdős-Lovász is tight.
At the heart of these discussions is Hedetniemi’s product conjecture, which asserts that the graph product preserves chromatic number. In one construction we would like to use this conjecture, but instead we work around it and use a weaker version of the product conjecture that is known to hold.
Warning: Unlike most of the rest of the DocCourse, subgraphs are not induced, they are subcollections of edges.
Equivalently, $\chi(G)$ is the least number of colours $n$ such that $V$ can be partitioned into $n$ independent sets: for any partition of $V$ into $n-1$ sets, one colour class contains an edge.
We’ve looked at chromatic number in Bootcamp 6.
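As a concrete (if naive) illustration of the definition, the chromatic number of a tiny graph can be computed by exhaustive search. The helper below is my own sketch, not from the lecture:

```python
from itertools import product

def chromatic_number(vertices, edges):
    """Least n for which some assignment of n colours to the vertices
    leaves no monochromatic edge (exhaustive search; tiny graphs only)."""
    vertices = list(vertices)
    for n in range(1, len(vertices) + 1):
        for colouring in product(range(n), repeat=len(vertices)):
            colour = dict(zip(vertices, colouring))
            if all(colour[u] != colour[v] for u, v in edges):
                return n

# The 5-cycle is an odd cycle, so it is not 2-colourable.
c5_edges = [(i, (i + 1) % 5) for i in range(5)]
print(chromatic_number(range(5), c5_edges))  # 3
```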
We now define (weak) Ramsey for two classes.
We define
$$H \longrightarrow (\mathcal{G}) :\equiv H \longrightarrow (\mathcal{G}, \mathcal{G}).$$
Again, note that these are weak subgraphs, not necessarily induced subgraphs.
Ramsey’s theorem for graphs states that for all $\mathcal{G}, \mathcal{F}$ there is an $H$ such that $H \longrightarrow (\mathcal{F}, \mathcal{G})$. This leads to the question of “What is the minimum such $H$?”. Of course we need to specify what “minimum” means. We could use any of the following scales:
Traditional Ramsey numbers are measured using $\rho_1$. We introduce Ramsey numbers subject to the other scales.
In particular, $R_\chi (G) = \min\{\chi(H) : H \longrightarrow (G, G)\}$.
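To make the arrow relation concrete, here is a brute-force check (a hypothetical helper, not from the lecture) that $K_6 \longrightarrow (K_3, K_3)$ while $K_5 \not\longrightarrow (K_3, K_3)$, i.e., the classical Ramsey number $R(3,3)=6$:

```python
from itertools import combinations, product

def arrows(h_size, clique_size):
    """Check K_{h_size} -> (K_s, K_s): every red/blue colouring of the
    edges of the complete graph contains a monochromatic K_s."""
    edges = list(combinations(range(h_size), 2))
    cliques = [list(combinations(S, 2))
               for S in combinations(range(h_size), clique_size)]
    for colouring in product((0, 1), repeat=len(edges)):
        colour = dict(zip(edges, colouring))
        # a clique is monochromatic when its edges use only one colour
        if not any(len({colour[e] for e in C}) == 1 for C in cliques):
            return False
    return True

print(arrows(5, 3), arrows(6, 3))  # False True: R(3,3) = 6
```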
The quantity $R_\chi(G)$ was first studied by Burr, Erdős, and Lovász in 1976. On the surface it seems more difficult, but in reality it’s just different. We have many techniques for constructing graphs of a specific chromatic number.
One approach to understanding $R_\chi(G)$ is to fix $\chi(G)=n$ and ask about upper and lower bounds for $R_\chi(G)$ (as a function of $n$).
One way to investigate the quantity $R_\chi(G)$ is through a type of “maximal” equivalence. Before we give it, we give some relevant definitions.
The class of all homomorphisms $f$ for which there is a $G \in \mathcal{G}$ such that $f$ is onto $V(G)$ is denoted $\text{Hom}(\mathcal{G})$.
When $\mathcal{G}$ has a single element $G$, we denote $\text{Hom}(G) := \text{Hom}(\mathcal{G})$.
We now give the equivalence.
This allows us to relate to classical Ramsey numbers, and that large body of work. We can also relate to $n$-partite graphs in the following way.
More generally, we could replace each vertex of $V$ with an independent set of possibly different cardinality. Denote this by $G[\mathcal{I}_\omega]$.
Even more generally, if $\mathcal{G}, \mathcal{H}$ are families of graphs, then $\mathcal{G}[\mathcal{H}]$ is the class of all graphs obtained by replacing each vertex $v \in V(G)$ of some $G \in \mathcal{G}$ with a copy of $H_v \in \mathcal{H}$, extending the edge relation as before.
This wreath product plays very well with homomorphisms.
For the second part, collapsing all vertices of the same colour is a homomorphism.
We are now in a position to relate the BEL characterization, and chromatic Ramsey numbers, to wreath products.
Putting this all together, the question about finding the chromatic Ramsey number can be framed as follows (using the example of $C_5$):
In the case that $G=K_n$, the $\leq$ becomes an equality.
Put another way we have the following:
Now we give a lower bound. This will involve constructing an interesting graph and colouring.
Let $B = K_{n-1}$ be a complete graph on $n-1$ vertices with all of its edges blue. Let $R = K_{n-1}$ be the same, but with red edges.
The graph $R[B]$, obtained by replacing each vertex in $R$ with a copy of $B$ and extending the red edges between copies of $B$, is the desired graph. It is straightforward to show it does not contain a monochromatic copy of $K_n$ (and so no monochromatic copy of $G$).
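The construction can be checked mechanically for small $n$. The sketch below (helper names are mine, not from the lecture) builds the two-colouring of $K_{(n-1)^2}$ as $R[B]$ and confirms there is no monochromatic $K_n$ when $n=4$:

```python
from itertools import combinations

def bel_colouring(n):
    """Vertices are pairs (copy, position); edges inside a copy of
    B = K_{n-1} are blue, edges between copies (coming from R) are red."""
    verts = [(i, j) for i in range(n - 1) for j in range(n - 1)]
    colour = {frozenset((u, v)): 'blue' if u[0] == v[0] else 'red'
              for u, v in combinations(verts, 2)}
    return verts, colour

def has_mono_clique(verts, colour, size):
    """Does some size-element subset have all its edges in one colour?"""
    return any(
        len({colour[frozenset((u, v))] for u, v in combinations(S, 2)}) == 1
        for S in combinations(verts, size))

verts, colour = bel_colouring(4)          # K_9, two-coloured
print(has_mono_clique(verts, colour, 4))  # False: no monochromatic K_4
```

A blue clique lives inside one copy of $B$ (at most $n-1$ vertices), and a red clique meets each copy at most once (at most $n-1$ copies), which is exactly the argument in the text.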
This lower bound made BEL conjecture that it was tight.
This conjecture was proved by Zhu, and we will see a partial proof. Before that we introduce a conjecture that would greatly simplify the proof.
Recall the following product construction we introduced in Bootcamp 6.
This conjecture is natural, and the $\geq$ direction is immediate. (In this case check that a vertex partition of $G$ pushes up to a vertex partition of $G \times H$. However, a vertex partition of $G \times H$ need not project onto $G$ or $H$.)
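The easy direction can be seen in code. The sketch below (my helper names, not from the lecture) builds the categorical product and checks that a proper colouring of $G$, pulled back along the projection, is a proper colouring of $G \times H$, so $\chi(G \times H) \leq \chi(G)$:

```python
def tensor_product(edges_g, edges_h):
    """Categorical product: (g,h) ~ (g',h') iff g ~ g' in G and h ~ h' in H."""
    sym_g = set(edges_g) | {(v, u) for u, v in edges_g}
    sym_h = set(edges_h) | {(v, u) for u, v in edges_h}
    return {((g1, h1), (g2, h2)) for g1, g2 in sym_g for h1, h2 in sym_h}

def is_proper(colour, edges):
    """Is the colouring proper, i.e. is no edge monochromatic?"""
    return all(colour(u) != colour(v) for u, v in edges)

k3 = {(0, 1), (1, 2), (0, 2)}
c5 = {(i, (i + 1) % 5) for i in range(5)}
prod = tensor_product(k3, c5)

# Colour each product vertex by its K3-coordinate: the identity is a
# proper colouring of K3, and it stays proper on K3 x C5.
print(is_proper(lambda v: v[0], prod))  # True
```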
This conjecture was vigorously debated at the Workshop on Colourings and Homomorphisms in Vancouver, BC, in July 2000, and remains an important open problem in chromatic graph theory. (Mike: I’ve included a link to the original conference schedule, but it appears the links are all broken. It still contains the speakers and their talk titles.)
See the references below for surveys about this conjecture.
We give a proof that relies on the Hedetniemi conjecture. After this proof we discuss how to fix this. Interestingly, this construction appears in the 1976 BEL paper, but they did not see how to overcome the use of Hedetniemi’s conjecture.
For each $c_i$ there is a monochromatic subgraph $G_i$ with $\chi(G_i)=n$.
Let $G = G_1 \times \ldots \times G_N$. (“A quite natural candidate.”)
Assuming Hedetniemi’s conjecture, we know $\chi(G) = n$. So $R_\chi(G) = (n-1)^2+1$ as desired.
It will turn out that we can use a slightly weaker (and true!) form of Hedetniemi’s conjecture. This will require that we find slightly more sophisticated graphs $G_i$. More on that in a moment.
We introduce the fractional chromatic number.
In this case, the fractional chromatic number of $G$ (with respect to $f$) is
$$\chi_f(G) := \min \sum_{I \in \mathcal{I}} f(I).$$
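For example, in $C_5$ the five maximum independent sets $\{i, i+2 \bmod 5\}$, each given weight $\tfrac12$, cover every vertex with total weight $1$, witnessing $\chi_f(C_5) \leq \tfrac52$ (and this is sharp: $C_5$ is vertex-transitive, so $\chi_f(C_5) = 5/\alpha(C_5) = \tfrac52$). A quick sanity check in Python (my sketch, not from the lecture):

```python
from fractions import Fraction

# The five maximum independent sets of the 5-cycle C5 on vertices 0..4.
ind_sets = [frozenset({i, (i + 2) % 5}) for i in range(5)]
weights = {I: Fraction(1, 2) for I in ind_sets}

# Fractional-cover constraint: each vertex must receive total weight >= 1.
cover = {v: sum(w for I, w in weights.items() if v in I) for v in range(5)}
total = sum(weights.values())
print(cover[0], total)  # 1 5/2
```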
The corresponding fractional Hedetniemi’s conjecture is true. (Again, the $\geq$ direction is an easy exercise.)
Tardif observed that the fractional Hedetniemi’s conjecture would be enough to prove the BEL conjecture.
If $\chi_f(\text{Red}) \leq n-1$, then $\chi(G) \leq n-1$, which implies $\omega(\text{Blue}) \geq n$, which implies $\chi_f (\text{Blue}) \geq n$. Here $\omega(G)$ is the largest size of a complete subgraph of $G$, called the clique number of $G$.
Use this observation to construct the $G_i$, and then the result follows from the fractional Hedetniemi conjecture.
Mike’s comment. In lecture Zhu provided a proof of the fractional conjecture. I have not included it here, but it can be found in his 2011 paper (reference below).
Title: The first dynamical system; and Random Number Theory
Lecturer: Carl Pomerance
Date: November 8, 2016
Main Topics: Chains with $\sigma$, distribution of primes, randomness in math
Definitions: Amicable, Perfect, Abundant, Deficient
There were two talks given on November 8, 2016. The first (“the first dynamical system”) was about the natural numbers and the function which sums its divisors. The second (“Random number theory”) discusses the value of using randomness in number theory and mathematics.
The slides for both talks are included as links. The second talk was recorded and will be linked to as soon as it is published.
My notes are sparse because there were slides and the second talk was recorded. Instead of including detailed notes, I’ve included some extra problems about these topics.
Here are the slides from the talk [PDF].
Carl Pomerance has many other talks on his website.
The talk primarily concerns the function $\sigma(n)$ which sums the proper divisors of a natural number $n$. For example, $\sigma(12) = 1+2+3+4+6 = 16$.
A pair of natural numbers $n,m$ are amicable if $\sigma(n) = m$ and $\sigma(m)=n$.
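Using the talk’s convention that $\sigma(n)$ sums the proper divisors, a few lines of Python (my sketch, not from the slides) verify the classical amicable pair $(220, 284)$ and the perfect/abundant/deficient classification:

```python
def sigma(n):
    """Sum of the proper divisors of n (the talk's convention)."""
    return sum(d for d in range(1, n // 2 + 1) if n % d == 0)

def classify(n):
    """n is perfect, abundant, or deficient according as sigma(n)
    is equal to, greater than, or less than n."""
    t = sigma(n)
    return 'perfect' if t == n else ('abundant' if t > n else 'deficient')

print(sigma(220), sigma(284))  # 284 220 -- the classical amicable pair
print(classify(6), classify(12), classify(8))  # perfect abundant deficient
```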
Project Euler (an online collection of math-related programming problems) has many problems related to $\sigma$, abundant numbers and amicable pairs. Here are some of them to give you a feel for these objects.
Here are the slides [PDF].
Carl Pomerance has many other talks on his website.
Carl Pomerance described the origin of the quote misattributed to Paul Erdős:
Einstein: “God does not play dice with the universe.”
Erdős-Kac: Maybe so, but something is going on with the primes.
The intention was that the Erdős-Kac theorem says something about the distribution of the primes, not that Erdős and Kac themselves had said this (note the lack of quotation marks!). Wikiquote has a good description of the story.
I’ve been really enjoying my new job at Time Service in Toledo. I’m about to finish my third month here, and I expect I’ll be staying with this job for quite a while. I find that working in business gives me a variety of interesting problems to solve, and although they’re not deep and abstract in the same way as math research problems, they still require a lot of creative thinking and give me challenges to work on over time and puzzles to chew on as I drift off to sleep, in my morning shower, etc., just like math research did. The whole operation of helping to run a business feels like a big optimization problem — how do I figure out the best way to use all of our company’s resources to the greatest effect?
I hope all my friends in the New York Logic community are doing well. Please keep in touch!
Abstract: The set splittability problem is the following: given a finite collection of finite sets, does there exist a single set that selects half the elements from each set in the collection? (If a set has odd size, we allow the floor or ceiling.) It is natural to study the set splittability problem in the context of combinatorial discrepancy theory and its applications, since a collection is splittable if and only if it has discrepancy $\leq 1$.
After introducing the concepts and their background, we show that the set splittability problem is NP-complete. We in fact establish this for the generalized version called the $p$-splittability problem, in which one seeks to select the fraction $p$ from each set instead of half. Next we investigate several criteria for splittability and $p$-splittability, giving a complete characterization of $p$-splittability for three sets and of splittability for four sets. Finally we show that when there are sufficiently many elements, unsplittability is asymptotically much more rare than splittability.
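The definition can be tested by brute force on tiny instances (only tiny ones, since the problem is NP-complete); the helper below is my own sketch, not from the paper. Note that the triangle system is unsplittable: each two-element set must be hit exactly once, which would amount to a proper 2-colouring of an odd cycle.

```python
from itertools import chain, combinations

def splittable(sets):
    """Is there a single set S selecting half the elements of each set
    in the collection (floor or ceiling for odd sizes)?  Brute force."""
    universe = sorted(set(chain.from_iterable(sets)))
    for r in range(len(universe) + 1):
        for S in combinations(universe, r):
            S = set(S)
            if all(len(S & A) in (len(A) // 2, (len(A) + 1) // 2)
                   for A in sets):
                return True
    return False

print(splittable([{1, 2}, {2, 3}]))          # True, e.g. S = {2}
print(splittable([{1, 2}, {2, 3}, {1, 3}]))  # False: the triangle
```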
@article{SUW,
author = {R. Schindler and S. Uhlenbrock and W. H. Woodin},
title = {{Mice with Finitely many Woodin Cardinals from Optimal Determinacy Hypotheses}},
note = {Submitted},
year = 2016
}
We prove the following result which is due to the third author.
Let $n \geq 1$. If $\boldsymbol\Pi^1_n$ determinacy and $\Pi^1_{n+1}$ determinacy both
hold true and there is no $\boldsymbol\Sigma^1_{n+2}$-definable $\omega_1$-sequence of
pairwise distinct reals, then $M_n^\#$ exists and is $\omega_1$-iterable.
The proof yields that $\boldsymbol\Pi^1_{n+1}$ determinacy implies that $M_n^\#(x)$
exists and is $\omega_1$-iterable for all reals $x$.
A consequence is the Determinacy Transfer Theorem for arbitrary
$n \geq 1$, namely the statement that $\boldsymbol\Pi^1_{n+1}$ determinacy implies
$\Game^{(n)}({<}\omega^2\text{-}\boldsymbol\Pi^1_1)$ determinacy.
@phdthesis{Uh16,
author = {S. Uhlenbrock},
title = {{Pure and Hybrid Mice with Finitely Many Woodin Cardinals from Levels of Determinacy}},
school = {WWU Münster},
year = 2016
}
Mice are sufficiently iterable canonical models of set theory. Martin and
Steel showed in the 1980s that for every natural number $n$ the existence of
$n$ Woodin cardinals with a measurable cardinal above them all implies that
boldface $\boldsymbol\Pi^1_{n+1}$ determinacy holds, where $\boldsymbol\Pi^1_{n+1}$ is a pointclass in the
projective hierarchy. Neeman and Woodin later proved an exact correspondence
between mice and projective determinacy. They showed that boldface $\boldsymbol\Pi^1_{n+1}$
determinacy is equivalent to the fact that the mouse $M_n^\#(x)$ exists and is
$\omega_1$-iterable for all reals $x$.
We prove one implication of this result, that is, boldface $\boldsymbol\Pi^1_{n+1}$ determinacy
implies that $M_n^\#(x)$ exists and is $\omega_1$-iterable for all reals $x$, which is an old,
so far unpublished result by W. Hugh Woodin. As a consequence, we can
obtain the determinacy transfer theorem for all levels $n$.
Following this, we will consider pointclasses in the $L(\mathbb{R})$-hierarchy and show
that determinacy for them implies the existence and $\omega_1$-iterability of
certain hybrid mice with finitely many Woodin cardinals, which we call $M_k^{\Sigma, \#}$.
These hybrid mice are like ordinary mice, but equipped with an iteration
strategy for a mouse they contain, and they naturally appear in the
core model induction technique.
This master’s thesis deals with W. Hugh Woodin’s $HOD$ Conjecture about the class of hereditarily ordinal definable sets. After a short thematic introduction in Chapter 2, the main part of this thesis is presented in Chapter 3.
First, the influence of the $HOD$ Conjecture on the relationship between the class $HOD$ and the set-theoretic universe $V$ is examined. To this end, Section 3.1 shows that if the $HOD$ Conjecture fails, then many successor cardinals are computed incorrectly in $HOD$. Moreover, it is shown that if the $HOD$ Conjecture holds and a $HOD$-supercompact cardinal exists, then many successor cardinals are determined correctly in $HOD$.
Building on this, Section 3.2 then proves equivalent formulations of the $HOD$ Conjecture. In particular, extenders and the statement that $HOD$ is a suitable extender model are treated there.
Chapter 3 of this thesis is based on the paper “Suitable Extender Models I” by W. Hugh Woodin (Journal of Mathematical Logic, Volume 10, Nos. 1 & 2, 2010).
Below are 15 problems from the course. Originally I was only going to list 5, but it was hard enough to pick only 15. I attempted to showcase a variety of problems that utilize different ways of thinking. I’m intentionally not providing any solutions. Some of these problems are classics or variations on classics. Have fun playing!
If you want to see more problems from the course, go here.
Note: The #loveyourmath 5day campaign is sponsored by the Mathematical Association of America. The goal of the campaign is to engage a general audience across a broad representation of mathematics, whether it is biology, patterns, textbooks, art, or puzzles.
It turns out that up to isomorphism, there are exactly 5 groups of order 8. Below are representatives from each isomorphism class:
The first three groups listed above are abelian while the last two are not. It’s a fairly straightforward exercise to prove that none of these groups are isomorphic to each other. It’s a bit more work to prove that the list is complete. The Fundamental Theorem of Finitely Generated Abelian Groups guarantees that we haven’t omitted any abelian groups of order 8. Handling the nonabelian case is trickier. If you want to know more about how to prove that the classification above is correct, check out the Mathematics Stack Exchange post here, the GroupProps wiki page about groups of order 8, and the nice classification of all groups of order less than or equal to 8 that is located here.
Since groups have binary operations at their core, we can represent a finite group using a table, called a group table. In order to help our minds recognize patterns in the table, we can color the entries in the table according to which group element occurs. Of course, if we rearrange the column and row headings of the table, we have to rearrange or recolor the entries of the table accordingly. Doing so may make some patterns more or less visually recognizable. Similar to the book Visual Group Theory by Nathan Carter (Bentley University), I utilize colored group tables in several chapters of An Inquiry-Based Approach to Abstract Algebra, which is an open-source abstract algebra book that I wrote to be used with an IBL approach to the subject.
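As a sketch of what such a table looks like in code (my own example, not from either book), here are Cayley tables for two of the five groups of order 8. Each row and column is a permutation of the elements (a Latin square), which is exactly the patterning the colored tables make visible:

```python
def cayley_z8():
    """Cayley table of the cyclic group Z_8: entry (i, j) is i + j mod 8."""
    return [[(i + j) % 8 for j in range(8)] for i in range(8)]

def cayley_z4xz2():
    """Cayley table of Z_4 x Z_2, with elements numbered 0..7."""
    elems = [(a, b) for a in range(4) for b in range(2)]
    idx = {e: k for k, e in enumerate(elems)}
    return [[idx[((a1 + a2) % 4, (b1 + b2) % 2)] for (a2, b2) in elems]
            for (a1, b1) in elems]

for row in cayley_z8():
    print(row)
```

Replacing each number by a colour patch in such a table is precisely how a colored group table (or quilt) is produced.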
While I was teaching out of Carter’s book during the summer of 2009, one of my students (Michelle Reagan) made five quilts that correspond to colored group tables for the five groups of order 8. Here are pictures of the quilts.
It’s a fun exercise to figure out which quilt corresponds to which group. I’ll leave it to you to think about.
Note: The #loveyourmath 5day campaign is sponsored by the Mathematical Association of America. The goal of the campaign is to engage a general audience across a broad representation of mathematics, whether it is biology, patterns, textbooks, art, or puzzles.
This text presents the Eulerian numbers in the context of modern enumerative, algebraic, and geometric combinatorics. The book first studies Eulerian numbers from a purely combinatorial point of view, then embarks on a tour of how these numbers arise in the study of hyperplane arrangements, polytopes, and simplicial complexes. Some topics include a thorough discussion of gamma-nonnegativity and real-rootedness for Eulerian polynomials, as well as the weak order and the shard intersection order of the symmetric group.
The book also includes a parallel story of Catalan combinatorics, wherein the Eulerian numbers are replaced with Narayana numbers. Again there is a progression from combinatorics to geometry, including discussion of the associahedron and the lattice of noncrossing partitions.
The final chapters discuss how both the Eulerian and Narayana numbers have analogues in any finite Coxeter group, with many of the same enumerative and geometric properties. There are four supplemental chapters throughout, which survey more advanced topics, including some open problems in combinatorial topology.
This textbook will serve as a resource for experts in the field as well as for graduate students and others hoping to learn about these topics for the first time.
Generally speaking, most of my research in pure mathematics falls in the category of algebraic combinatorics. However, I’ve had very little formal training in combinatorics. It turns out that I know quite a bit about Catalan combinatorics, but again, it’s not a subject that I’ve explicitly studied. Prior to opening the book, I knew next to nothing about Eulerian numbers, let alone Narayana numbers.
Right around the time I found out I would be teaching our graduate combinatorics class during the Fall 2016 semester, I learned about Kyle’s book. I was really looking forward to teaching the class because I figured that one of the best ways to fill in my lack of formal training in combinatorics was to teach a class about it. After thumbing through Kyle’s book (and thinking, “wow, I don’t really know any of this stuff!”), I decided that I could run the class as a sort of “topics course” focusing on Eulerian numbers and Catalan combinatorics while hitting many of the core ideas of enumerative combinatorics along the way. As a bonus, I would be forced to learn lots of cool things that relate to my research interests, many of which I probably should have known more about anyway.
I’m currently in week 5 of my Topics in Combinatorics graduate course in which we are closely following Kyle’s book. Despite the fact that we’ve barely covered two chapters, I’m absolutely in love with the book and the content. It’s so much fun! I have to admit that I don’t always know which specific topics are key ideas and which are just fun side stories, but I think that’s mostly true every time one teaches a course for the first time. One of the things I really like about the themes in the book is that it connects with cutting-edge research topics. We’re learning about “current events” in algebraic/enumerative combinatorics.
My only minor complaint is that I wish Kyle provided less detail in the hints/solutions for the exercises in the back of the book. On the other hand, there have been a couple times where I’ve thought, “geez, there’s no way I would have ever come up with that argument without significant guidance.”
Note: The #loveyourmath 5day campaign is sponsored by the Mathematical Association of America. The goal of the campaign is to engage a general audience across a broad representation of mathematics, whether it is biology, patterns, textbooks, art, or puzzles.
I love mathematics. However, I have not always felt this way. As a child, I was okay at mathematics but far from exceptional (and this is still true!). At some point during my youth, I developed a distaste for the subject. Perhaps more than most young children, the one question I obsessively asked over and over again was “why?” I could not stop my mind from inquiring into the nature of things. The one place that my incessant questioning was met with resistance was in math class, where the response was usually something like, “don’t ask why,” “that’s just the way it is,” or “just accept it.” After hearing this a few hundred times, I started to accept that mathematics was just a bunch of rules that needed to be memorized. This attitude lasted throughout high school and into my freshman year in college.
After graduating high school in 1993, I accepted an academic scholarship to George Mason University in Fairfax, Virginia. Like most freshmen, I had no idea what I wanted to do with my life, and I certainly did not think I would major in mathematics, let alone pursue a career in mathematics. Not being fond of writing papers at that time, I began contemplating majors that would require the least amount of writing. I figured that I would major in one of the sciences and upon thumbing through the academic catalog, I soon realized that regardless of which one I chose, I would have to take calculus. So, in the fall semester of my sophomore year, I registered for Calculus I, which I had not taken in high school. My plan was to “just get it out of the way.” As it turns out, this class changed my life.
On this first day of class, I walked into a rather large lecture hall. Much to my disliking, my class had well over a hundred students in it. I promptly sat in the very back of the room. Soon thereafter, in walked Dr. Robert Sachs. I remember thinking to myself that if you looked up “math geek” in the dictionary, you might see a picture of my new math professor. I’m not sure when it happened, but at some point during the semester, I went from going through the motions (while doing quite well) to being completely captivated during each lecture. This math class was unlike any I had ever had. For the first time, I had a teacher, who not only understood mathematics, but attempted to explain why it works the way that it does. Someone was finally answering my “why” questions! Even more importantly, Dr. Sachs was teaching me that I could discover the answers to all of my questions independently, and that something wonderful can be gained in the process. As a student in the class, I no longer felt like the sole purpose of being there was to quickly jot down a recipe for solving a few meaningless problems. Collectively, we were on a journey of discovery and along the way I was encouraged to turn over as many rocks as I could to see what lived beneath. Just about anyone can write facts on the blackboard, but Dr. Sachs has a gift that allows him to teach effectively while conveying the beauty of mathematics.
The following semester, I made sure that I registered for Calculus II with Dr. Sachs. Yet, despite my budding interest in mathematics, I still had not even remotely considered majoring in the subject. I was a jock, not a math geek. Sometime around the middle of the semester, Dr. Sachs was returning exams and after personally handing me my exam at the back of the room, he asked me what my major was, and I probably just shrugged my shoulders. I can say with absolute certainty that I would not be where I am today if he hadn’t responded with, “you should major in mathematics.” This was not the last time he made this suggestion, as I took some convincing, but eventually I was won over. He saw in me, as I am sure he has for many students, a potential that I did not know I possessed. Eventually, I declared mathematics as my major, which I don’t think anyone would have predicted a few years earlier. Dr. Sachs permanently changed the trajectory of my life!
After a short-lived start to a master’s degree in education, I went on to receive a master’s degree in mathematics from Northern Arizona University, and after a couple years of teaching at Front Range Community College in Colorado, I returned to graduate school and completed my PhD at the University of Colorado Boulder. I currently cannot imagine myself being anything other than a mathematician and a teacher, but unlike many of my colleagues, I was a late bloomer, so to speak. I certainly didn’t aspire to be a professor of mathematics when I was a child. I do have vague memories of wanting to be a photographer for National Geographic or a deep ocean explorer. In fact, I think I lucked out as being a mathematician has more in common with these two than the average person might suspect.
While I readily admit that I am peculiar, my path from despising mathematics to loving the subject has given me a perspective that many mathematicians likely do not have as most of them either excelled at mathematics, had an interest in the subject from a very early age, or both. This perspective has played a fundamental role in my teaching and has helped me relate to my students.
By the way, I don’t think Bob looks like a math geek anymore. It’s been a lot of fun to hang out with him at conferences and get to know each other as teachers and mathematicians. Thanks Bob!
I gave an invited talk at the Set Theory and its Applications in Topology meeting, Oaxaca, September 11-16, 2016.
The talk was on the $\aleph_2$-Souslin problem.
If you are interested in seeing the effects of jet lag, the video is available here.
Downloads:
Abstract: Given a mathematical problem it is natural to wonder how complicated it is, but it is hard to imagine how to make this question rigorous. Borel complexity theory is an area of set theory which provides a framework to measure the complexity of classification problems in mathematics. We will introduce this theory, and show how it has been applied to classification problems in group theory, graph theory, and functional analysis.
Joint work with David J. Fernández Bretón.
Abstract. We show that various analogs of Hindman’s Theorem fail in a strong sense
when one attempts to obtain uncountable monochromatic sets:
Downloads:
Every nonstandard model of Peano Arithmetic (${\rm PA}$) has a Boolean algebra of subsets of the natural numbers associated to it, called its standard system. The standard system of a model $M\models{\rm PA}$, denoted ${\rm SSy}(M)$, consists of the intersections of the definable subsets of $M$ with the natural numbers. Standard systems play a very important role in the study of nonstandard models of arithmetic. For instance, they are used to determine when models are isomorphic and when models elementarily embed. Two countable computably saturated models of ${\rm PA}$ are isomorphic precisely when they have the same theory and the same standard system (this folklore result has been variously attributed to Ehrenfeucht and Jensen, Wilmers, etc.). A countable model $M\models{\rm PA}$ $\Sigma_n$elementarily embeds into another model $N\models{\rm PA}$ precisely when $N$ satisfies the $\Sigma_{n+1}$theory of $M$ and ${\rm SSy}(M)\subseteq{\rm SSy}(N)$ (this is a variant of the famous Friedman’s Embedding Theorem showing that every model of ${\rm PA}$ has an isomorphic elementary initial segment).
Standard systems are Boolean algebras because the definable sets of any model form a Boolean algebra. Standard systems are closed under relative computability: if $A$ is in a standard system and $B$ is Turing computable with oracle $A$, then $B$ is also in that standard system. Let’s see why this is true. Suppose $M\models{\rm PA}$, $A\in {\rm SSy}(M)$, and $B$ is computed by a Turing machine $m$ with oracle $A$. Let $\bar A$ be a definable subset of $M$ such that $A=\bar A\cap \mathbb N$ and let $\bar B$ be the set computed in $M$ by the Turing machine $m$ with oracle $\bar A$. Since $M$ must see that $m$ computes $B$ on the natural numbers, it follows that $\bar B\cap \mathbb N=B$. Standard systems also have the tree property: if $T$ is an infinite binary tree coded by a set in a standard system (use your favorite coding), then there is another set in that standard system coding an infinite branch through $T$. Let’s see why this is true. Suppose that $M\models{\rm PA}$ and $T\in {\rm SSy}(M)$ codes an infinite binary tree. Let $\bar T$ be a definable subset of $M$ such that $T=\bar T\cap\mathbb N$. Since for every natural number $n$, there is an element in $\bar T$ coding a binary sequence of length $n$ such that all its predecessors are also coded in $\bar T$, this must hold for some nonstandard $c$ as well. So $\bar T$ has some nonstandard binary sequence $s$ of length $c$ as well as all its binary predecessors. Let $\bar B$ be the definable set of all binary predecessors of $s$ in $\bar T$. Then $B=\bar B\cap \mathbb N$, the collection of all binary predecessors of $s$ of finite length, is an infinite branch through $T$.
A decade before the notion of a standard system was introduced by Harvey Friedman in 1973 [1], Dana Scott defined that a Scott set is a nonempty Boolean algebra of subsets of the natural numbers that is closed under relative computability and has the tree property [2]. We just argued that every standard system is a Scott set. Scott’s arguments translated into modern terms show a partial converse: every countable Scott set is the standard system of some model of ${\rm PA}$. What about uncountable Scott sets? This is Scott’s Problem, one of the most famous and longstanding open problems in the field. Is every Scott set the standard system of some model of ${\rm PA}$? In 1982, Knight and Nadel showed that every Scott set of size $\omega_1$ is the standard system of some model of ${\rm PA}$ [3]. Thus, it is consistent, namely, by assuming ${\rm CH}$, that Scott’s Problem has a positive answer. What happens if $2^\omega=\omega_2$ or if the continuum is very large? No one knows the answer! Most well-developed techniques in the field of models of ${\rm PA}$ break down for uncountable models. Elegant theorems for countable models are either known to be false or are open problems for uncountable models. Here are two examples. It is easy to see that every nonstandard model of ${\rm PA}$ has order-type $\mathbb N+\mathbb A\cdot \mathbb Z$ for a dense linear order $\mathbb A$ without endpoints. For countable models, obviously, $\mathbb A=\mathbb Q$, but very little is known about the possible order-types of uncountable models (see [4]). The classification theorem for countable computably saturated models is known to fail for uncountable models (there are counterexample $\omega_1$-like models, see [5] Chapter 10).
So what are the promising strategies for making progress on Scott’s Problem? Knight and Nadel’s result can be proved using an unpublished theorem of Ehrenfeucht, which we will call Ehrenfeucht’s Lemma. Ehrenfeucht’s Lemma states that if $\mathscr X$ is a Scott set and $M$ is a countable model of ${\rm PA}$ such that ${\rm SSy}(M)\subseteq \mathscr X$, then for every $A\in \mathscr X$, $M$ has an elementary extension $N$ such that $A\in {\rm SSy}(N)\subseteq \mathscr X$. Knight and Nadel’s result follows nearly immediately by a transfinite application of Ehrenfeucht’s Lemma. Let $\mathscr X=\{A_\xi\mid \xi<\omega_1\}$ be a Scott set. We build the model $M$ with standard system $\mathscr X$ in $\omega_1$-many steps as the union of a continuous elementary chain of models $M_0\prec M_1\prec\cdots\prec M_\xi\prec\cdots\prec M$ such that $A_\xi\in {\rm SSy}(M_\xi)\subseteq \mathscr X$. Does Ehrenfeucht’s Lemma hold for uncountable models $M$? Nothing is known here either. So nearly a decade ago in my dissertation, I tried to find some strong requirements on a Scott set $\mathscr X$ of size $\omega_2$ such that Ehrenfeucht’s Lemma would hold for it with models of size $\omega_1$. By the elementary chain of models argument, each such Scott set would be the standard system of some model of ${\rm PA}$.
Let’s say that a collection $\mathscr X$ of subsets of $\mathbb N$ is arithmetically closed if it is a Boolean algebra closed under the Turing jump operation. Scott sets are precisely the $\omega$-models of the second-order arithmetic theory ${\rm WKL}_0$ and arithmetically closed families are precisely the $\omega$-models of the theory ${\rm ACA}_0$. Given $M\models{\rm PA}$ and a Scott set $\mathscr X$ such that ${\rm SSy}(M)\subseteq \mathscr X$, there is an ultrapower construction introduced by Kirby and Paris which uses an ultrafilter on $\mathscr X$ and functions $f:\mathbb N\to M$ that are the restrictions of definable functions $F:M\to M$. It turns out that the ultrapower $N$ can be made to satisfy the requirements of Ehrenfeucht’s Lemma (given $A\in\mathscr X$, we have $A\in {\rm SSy}(N)\subseteq \mathscr X$) by including sets satisfying certain properties in the ultrafilter. If the family $\mathscr X$ is arithmetically closed, the desired ultrafilter can be obtained from a generic filter for the partial order $\mathscr X/{\rm fin}$ whose elements are infinite sets in $\mathscr X$ ordered by almost inclusion: $A\subseteq_* B$ if there is $n\in\mathbb N$ such that $A\setminus n\subseteq B$. Indeed, for a model $M$ of size $\omega_1$, the filter $G$ has to meet only $\omega_1$-many dense sets. So if for instance $\mathscr X/{\rm fin}$ is proper and we are in a model with ${\rm PFA}$ (the Proper Forcing Axiom), we would have the required filter right there. All this gives that in a model of ${\rm PFA}$, if $\mathscr X$ is an arithmetically closed family such that $\mathscr X/{\rm fin}$ is proper, then Ehrenfeucht’s Lemma holds for models of size $\omega_1$ with standard system contained in $\mathscr X$. It suffices to assume that $\mathscr X/{\rm fin}$ is only piecewise proper: $\mathscr X$ is a chain of arithmetically closed families $\mathscr Y$ of size $\omega_1$ such that $\mathscr Y/{\rm fin}$ is proper [6].
Enayat and Shelah showed that there is a Borel arithmetically closed family $\mathscr X$ such that $\mathscr X/{\rm fin}$ is not proper [7]. I showed that it is consistent to have continuum many arithmetically closed families $\mathscr X$ of size $\omega_1$ such that $\mathscr X/{\rm fin}$ is proper and continuum many arithmetically closed families $\mathscr X$ of size $\omega_2$ such that $\mathscr X/{\rm fin}$ is piecewise proper, by adding these families by forcing [8]. But to this day I cannot show that there are proper or piecewise proper Scott sets of size $\omega_2$ in a model of ${\rm PFA}$. A few years ago, when I asked a related question on MathOverflow, François Dorais, in response, came up with a completely different way of associating a partial order to an arithmetically closed family $\mathscr X$ such that the generic filter for the forcing also yields a desired ultrafilter on $\mathscr X$. Conditions in Dorais’ forcing are lower semicontinuous submeasures coded in $\mathscr X$. A submeasure is a function $\mu:P(\mathbb N)\to [0,\infty]$ such that $\mu(\emptyset)=0$ and $\mu(X)\leq \mu(X\cup Y)\leq \mu(X)+\mu(Y)$. A submeasure is lower semicontinuous if $\mu(X)=\lim_{n\to\infty}\mu(X\cap n)$. So lower semicontinuous submeasures are determined by their values on finite sets and can therefore be coded in a Scott set. Given a Scott set $\mathscr X$, let $\mathbb P_{\mathscr X}$ be the partial order whose elements are lower semicontinuous submeasures coded in $\mathscr X$, ordered by $\leq_*$, where $\mu\leq_*\nu$ whenever there is $n\in\mathbb N$ such that $\mu(X)\leq \max(\nu(X),n)$ for all $X\subseteq\mathbb N$. Using a deep result of Dorais from [9], we can show that, under ${\rm CH}$, there are $\omega_1$-many arithmetically closed families $\mathscr X$ of size $\omega_1$ such that $\mathbb P_{\mathscr X}$ is proper. But alas, the construction breaks at $\omega_2$.
This leaves many fascinating open questions that we should all start thinking about!
Here are the slides.
@incollection {friedman:countableModelsOfSetTheory,
AUTHOR = {Friedman, Harvey},
TITLE = {Countable models of set theories},
BOOKTITLE = {Cambridge {S}ummer {S}chool in {M}athematical {L}ogic
({C}ambridge, 1971)},
PAGES = {539--573. Lecture Notes in Math., Vol. 337},
PUBLISHER = {Springer},
ADDRESS = {Berlin},
YEAR = {1973},
MRCLASS = {02K15 (02H20)},
MRNUMBER = {0347599 (50 \#102)},
MRREVIEWER = {Nigel J. Cutland},
}
@incollection {scott:ScottSets,
AUTHOR = {Scott, Dana},
TITLE = {Algebras of sets binumerable in complete extensions of
arithmetic},
BOOKTITLE = {Proc. {S}ympos. {P}ure {M}ath., {V}ol. {V}},
PAGES = {117--121},
PUBLISHER = {American Mathematical Society, Providence, R.I.},
YEAR = {1962},
MRCLASS = {02.72},
MRNUMBER = {0141595},
MRREVIEWER = {H. Ribeiro},
}
@article {KnightNadel:ScottSets,
AUTHOR = {Knight, Julia and Nadel, Mark},
TITLE = {Models of arithmetic and closed ideals},
JOURNAL = {J. Symbolic Logic},
FJOURNAL = {The Journal of Symbolic Logic},
VOLUME = {47},
YEAR = {1982},
NUMBER = {4},
PAGES = {833--840 (1983)},
ISSN = {0022-4812},
CODEN = {JSYLA6},
MRCLASS = {03C62 (03C50 03D30)},
MRNUMBER = {683158},
MRREVIEWER = {S. S. Goncharov},
DOI = {10.2307/2273102},
URL = {http://dx.doi.org/10.2307/2273102},
}
@incollection {BovykinKaye:orderTypesModelsPA,
AUTHOR = {Bovykin, Andrey and Kaye, Richard},
TITLE = {Order-types of models of {P}eano arithmetic},
BOOKTITLE = {Logic and algebra},
SERIES = {Contemp. Math.},
VOLUME = {302},
PAGES = {275--285},
PUBLISHER = {Amer. Math. Soc., Providence, RI},
YEAR = {2002},
MRCLASS = {03H15},
MRNUMBER = {1928396},
DOI = {10.1090/conm/302/05055},
URL = {http://dx.doi.org/10.1090/conm/302/05055},
}
@book {kossakschmerl:modelsofpa,
AUTHOR = {Kossak, Roman and Schmerl, James H.},
TITLE = {The structure of models of {P}eano arithmetic},
SERIES = {Oxford Logic Guides},
VOLUME = {50},
NOTE = {Oxford Science Publications},
PUBLISHER = {The Clarendon Press, Oxford University Press, Oxford},
YEAR = {2006},
PAGES = {xiv+311},
ISBN = {978-0-19-856827-8; 0-19-856827-4},
MRCLASS = {03-02 (03C62 03F30 03H15)},
MRNUMBER = {2250469 (2008b:03001)},
MRREVIEWER = {Constantine Dimitracopoulos},
DOI = {10.1093/acprof:oso/9780198568278.001.0001},
URL = {http://dx.doi.org/10.1093/acprof:oso/9780198568278.001.0001},
}
@article {gitman:scott,
AUTHOR = {Gitman, Victoria},
TITLE = {Proper and piecewise proper families of reals},
JOURNAL = {MLQ Math. Log. Q.},
FJOURNAL = {MLQ. Mathematical Logic Quarterly},
VOLUME = {55},
YEAR = {2009},
NUMBER = {5},
PAGES = {542--550},
ISSN = {0942-5616},
MRCLASS = {03E35 (03E40 03H15)},
MRNUMBER = {2568765},
MRREVIEWER = {Renling Jin},
DOI = {10.1002/malq.200810015},
URL = {http://dx.doi.org/10.1002/malq.200810015},
PDF={http://boolesrings.org/victoriagitman/files/2011/08/scottsets.pdf},
EPRINT ={0801.4364},
}
@article {EnayatShelahImproperFamily,
AUTHOR = {Enayat, Ali and Shelah, Saharon},
TITLE = {An improper arithmetically closed {B}orel subalgebra of {$\scr
P(\omega)\bmod\mathsf{FIN}$}},
JOURNAL = {Topology Appl.},
FJOURNAL = {Topology and its Applications},
VOLUME = {158},
YEAR = {2011},
NUMBER = {18},
PAGES = {2495--2502},
ISSN = {0166-8641},
CODEN = {TIAPD9},
MRCLASS = {03E15 (03C55 28A05 54H05)},
MRNUMBER = {2847322},
DOI = {10.1016/j.topol.2011.08.006},
URL = {http://dx.doi.org/10.1016/j.topol.2011.08.006},
}
@ARTICLE {gitman:proper,
AUTHOR = {Victoria Gitman},
TITLE = {Proper and Piecewise Proper Families of Reals},
JOURNAL = {Mathematical Logic Quarterly},
VOLUME = {55},
YEAR = {2009},
NUMBER = {5},
PAGES = {542--550},
PDF={http://boolesrings.org/victoriagitman/files/2011/08/properscott.pdf},
EPRINT ={0801.4368},
ISSN = {0942-5616},
MRCLASS = {03E35 (03E40 03H15)},
MRNUMBER = {2568765 (2011c:03109)},
MRREVIEWER = {Renling Jin},
DOI = {10.1002/malq.200810015},
URL = {http://dx.doi.org/10.1002/malq.200810015},
}
@article {dorais:MathiasForcing,
AUTHOR = {Dorais, Fran{\c{c}}ois G.},
TITLE = {A variant of {M}athias forcing that preserves {$\mathsf{ACA}_0$}},
JOURNAL = {Arch. Math. Logic},
FJOURNAL = {Archive for Mathematical Logic},
VOLUME = {51},
YEAR = {2012},
NUMBER = {7-8},
PAGES = {751--780},
ISSN = {0933-5846},
CODEN = {AMLOEH},
MRCLASS = {03B30 (03E40 03F35)},
MRNUMBER = {2975428},
MRREVIEWER = {Wei Wang},
DOI = {10.1007/s00153-012-0297-4},
URL = {http://dx.doi.org/10.1007/s00153-012-0297-4},
}
Producing $M_n^\#(x)$ from optimal determinacy hypotheses.
Abstract: In this talk we will outline a proof of Woodin’s result that boldface $\boldsymbol\Sigma^1_{n+1}$ determinacy yields the existence and $\omega_1$-iterability of the premouse $M_n^\#(x)$ for all reals $x$. This involves first generalizing a result of Kechris and Solovay concerning OD determinacy in $L[x]$ for a cone of reals $x$ to the context of mice with finitely many Woodin cardinals. We will focus on using this result to prove the existence and $\omega_1$-iterability of $M_n^\#$ from a suitable hypothesis. Note that this argument is different for the even and odd levels of the projective hierarchy. This is joint work with Ralf Schindler and W. Hugh Woodin.
You can find notes taken by Martin Zeman here and here.
More pictures and notes for the other talks can be found on the conference webpage.
Most of what we believe, we believe because it was told to us by someone we trusted. What I would like to suggest, however, is that if we rely too much on that kind of education, we could find in the end that we have never really learned anything.
As far as I know, the original source of this quote is on Paul Wallace’s blog. This quote also appears in the introduction of the book A TeXas Style Introduction to Proof by Ron Taylor and Patrick Rault. A Facebook post by David Failing introduced me to this wonderful quote.
As a side project, I hope to find some time to do a bit of research for MIRI. I’ve discussed MIRI research in a couple of recent posts here. I plan to continue updating this blog with stuff on MIRI research and other updates on my life. I’ll miss my colleagues in New York, and I hope we keep in touch. My students are welcome to keep in touch as well.
Quantilization is a form of mild optimization where you tell an AI to choose something at random from (for instance) the top 10% of best solutions, rather than taking the best solution. This helps to get around the problem of an agent whose values are mostly aligned with yours but that does pathological things when it takes its values to the extreme. In this paper, we examine a similar process, but involving two (or more) agents rather than one.
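As a toy illustration of plain single-agent quantilization (the function names and scoring setup below are my own stand-ins, not from the paper):

```python
import random

def quantilize(options, utility, q=0.1):
    """Pick uniformly at random from the top q-fraction of options by utility.

    A minimal sketch of single-agent quantilization as described above;
    `options` and `utility` are caller-supplied stand-ins.
    """
    ranked = sorted(options, key=utility, reverse=True)
    cutoff = max(1, int(len(ranked) * q))  # always keep at least one option
    return random.choice(ranked[:cutoff])
```

Instead of `max(options, key=utility)`, the agent deliberately randomizes near the top, which blunts extreme optimization against a mis-specified utility function.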
For those of you who were also at the MSFP, you can read some additional discussion of the paper here. The main idea is that Connor is working on a simulation to help test the ideas in the paper. If you’re interested in helping with the simulation but don’t have access to the forum post linked above, get in touch with me.
Their research has a fair amount of overlap with mathematical logic. I’d encourage any logicians who are interested in these sort of things to get involved. It’s a very good and important cause; the future of humanity is at stake. Unaligned artificial intelligence could destroy us all in a way that makes nuclear war and global warming seem tame in comparison.
Their technical research agenda is a good place to start for a technical perspective. The book Superintelligence by Nick Bostrom is a good starting point for a less technical introduction and to help understand why MIRI’s agenda is important and nontrivial.
One area of MIRI research that I find particularly interesting has to do with a version of Prisoner’s Dilemma played by computer programs that are allowed to read each other’s source code. This work makes use of a bounded version of Löb’s theorem. Actually, a fair bit of MIRI research relates to Löb’s theorem. Here is a good introduction.
Feel free to contact me if you’d like to know more about how to get involved with MIRI research. Or you can contact MIRI directly.
The levels of the two courses will be a little different, but much of the material will be similar.
The notes are here. The topics are as follows:
This material is more classical, so there are many possible references. If you have not seen group theory before, I recommend Fraleigh’s book.
Most of these references are somewhat advanced. I have included two general references (by Tao and by Tao–Vu) that contain a lot of foundational material; unfortunately, the book by Tao and Vu is not freely available on the web.
My first recommendation is Helfgott’s lecture notes, “Crecimiento y expansión en SL2”. First of all, they are in Spanish(!), but they also start at a fairly easy level and quickly present a very important result of Helfgott himself on growth in the group SL(2,p).
Joint work with Ari Meir Brodsky.
Abstract. An $\aleph_1$-Souslin tree is a complicated combinatorial object whose existence cannot be decided on the grounds of ZFC alone.
But 15 years after Tennenbaum and independently Jech devised notions of forcing for introducing such a tree, Shelah proved that already the simplest forcing notion — Cohen forcing — adds an $\aleph_1$-Souslin tree.
In this paper, we identify a rather large class of notions of forcing that, assuming a GCH-type assumption, add a $\lambda^+$-Souslin tree. This class includes Prikry, Magidor and Radin forcing.
Downloads:
Somebody asked on the MathJax user group
To my understanding MathJax supports these input formats: LaTeX, MathML, and AsciiMath. If I’m making a website and I can choose to use any of the three formats, what are some advantages of choosing each?
Since I’ve answered this so many times, I thought it might be worth copying here:
“That’s a tricky (trick?) question.
MathML is MathJax’s internal format (essentially anyway), so anything that can be done in MathJax is done through our MathML support, cf. http://docs.mathjax.org/en/latest/mathml.html. While MathML is quite good for such an internal purpose, it can be difficult to create. It’s rarely written manually (much like HTML or CSS) and tools can have trouble producing high-quality MathML (converters can fail, editors might produce overcomplicated MathML). MathML is the dominant format used in professional publishing workflows and thus comes with a rich toolchain out of XML-land.
MathJax’s LaTeX-like input provides a faithful implementation of the most common math-mode LaTeX commands as well as other standard packages and a few non-standard features, cf. http://docs.mathjax.org/en/latest/tex.html. LaTeX is much easier to author by hand than MathML and provides the typical LaTeX advantages such as custom macros (for even easier authoring). It also has the benefit of a large community thanks to the wide adoption of TeX as a programming language for print layout in academic writing. LaTeX is probably the most popular format when people have a choice, so MathJax’s TeX-like input has a wide community out there. From a real TeX perspective, MathJax restricts LaTeX input to math mode since it converts internally into MathML. Due to LaTeX’s print heritage, some constructions are hard to do (e.g., equal-width columns are trivial in MathML but not doable with the default LaTeX macros).
AsciiMath is a lightweight markup language designed to convert well to MathML. I sometimes like comparing it to markdown – not as powerful but much more sensible to write. It does not have the expressive power of MathML, but it is very easy to learn because it was designed by Peter Jipsen specifically for high-school and college-level students. It is less frequently used, but if its expressive power is sufficient, I tend to recommend it.
In summary, MathML is MathJax’s internal format, so anything you can do with MathJax you can do with its MathML input. LaTeX is virtually as powerful (with some edge cases), is easier to author by hand, and has a large community both from real TeX-land and MathJax’s community. AsciiMath is the little brother of both MathML and LaTeX and provides a good compromise between expressive strength and human readability.
If you look beyond MathJax there are even more options, of course.”
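To make the comparison concrete, here is the same simple fraction in each of the three input formats (MathJax delimiters and configuration omitted; the MathML shown is the typical verbose form):

```html
<!-- TeX input: -->
\frac{a+b}{c}

<!-- AsciiMath input: -->
(a+b)/c

<!-- MathML input: -->
<math>
  <mfrac>
    <mrow><mi>a</mi><mo>+</mo><mi>b</mi></mrow>
    <mi>c</mi>
  </mfrac>
</math>
```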
Moving on.
On the “Getting Math Onto Web Pages” community group, Tzivya raised a big topic regarding accessibility:
I would love it the world would come to understand that accessibility is a subset of machine readability. Accessibility APIs are a specialized kind of machine. If we are working on machine readable math, we need to make sure that those specialized machines can read the math too. Otherwise we will do the work twice.
I found myself disagreeing with Tzivya (which means I’m probably wrong because she is awesome). This disagreement is mostly influenced by our work at MathJax for the past year or so, making math rendering accessible via MathJax. But the point is an important one to me because, as I expected (feared?), a few discussions on the Community Group have already brought up the problem of looking for the right™ solution instead of the realistic one.
For me, what we have now is the right solution: HTML, CSS, ARIA, SVG, etc., several competing math rendering/computation/etc. implementations based on these, and lots of tools tangential to them. An excellent kind of mess, without standards beyond what works OK for each project out there. It’s not the right™ solution, but it has the potential of becoming better and better. It’s really just another part of web development; nothing else needed.
Anyway, so I wrote:
“I do dream that eventually (maybe 10 years from now?) we’ll have a thorough a11y API mapping for mathematics. At the moment, I don’t think mathematics (as a culture / language) is ready for this (though web technology probably would be).
Regarding general machine readability vs accessibility, one important difference I see is that machine readability can benefit from partial results whereas accessibility cannot.
A typical example for this might be units. If we can find a way to make units machine readable, I think we’d have a major improvement for STEM on the web. But it won’t help accessibility (much) to know that there are units in an expression if it is otherwise unintelligible.
Of course, we currently don’t have any standard or best practice for exposing units on the web. The Math WG had a very old note on units (from 2003) which suggested class='MathML-Unit' on MathML elements; I don’t think that’s a viable approach today. Perhaps schema.org is a better starting point, considering how successfully search engines can leverage units in recipes (I could imagine lab protocols and engineering might benefit from similar methods).
For some tools it’s extremely easy to generate markup for units, e.g., Jos de Jong’s MathJS has a rich interface for handling units and could probably easily expose them in a visual output. TeX has a rich history with the physics and siunitx packages (which are, for example, partially available in MathJax as third-party extensions) and heuristics seem feasible for enriching formats in general (again, MathJax can do some of that via the speech-rule-engine).
I think for humans we have to change our expectations. Otherwise, we’ll just end up repeating the mistakes of the past. I’ll post some thoughts on the accessibility thread later.”
And I then also wrote on the related thread:
“Today the most reliable method is still to use binary images with alt text: static images are the most reliable in terms of cross-browser/platform/network conditions for visual rendering, and alt text is the only way to guarantee at least some alternative rendering (e.g., aural and braille) – no matter how poor the results may be.
Don’t get me wrong, in many specific situations there will be better ways. If you have simple content, then you can get decent visual results with HTML tags with nested aria-labels. If you know you can rely on webfonts (e.g., many ebook situations) then you can use CSS with webfonts for rendering (and again nested labels). If you don’t need IE8 (sigh) then you can use SVG etc.
But in general, binary images with alt text are still the most robust way – and that’s an extremely sorry state. I’m pretty sure we can do better but we need to identify what users need and what tools can realistically achieve today.
My first guess would be: some form of speech text, potentially enabling some level of exploration through nesting (and perhaps full exploration via JS). That’s not as bad as it sounds. SVGs with aria-labels are already a close second in terms of usability (pending the ultimate demise of IE8), and like HTML they open up the opportunity of deep labels and thus already get a certain level of exploration.
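As a sketch of that pattern (the label text and structure are illustrative only, not a recommendation of specific wording):

```html
<!-- Top-level label gives the summary; nested labels allow coarse
     exploration by assistive technology that walks the tree. -->
<svg role="img" aria-label="fraction: a plus b over c">
  <g aria-label="numerator: a plus b"><!-- glyph paths --></g>
  <g aria-label="denominator: c"><!-- glyph paths --></g>
</svg>
```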
But there are other aspects. For example in the US, MathSpeak has a long history and many users of aural rendering are trained to its way of describing the visual structure of an equation. I’ve heard enough anecdotal evidence to take this very seriously – after all, that’s how visual users do it. Still, a few months ago I learned that in Germany, on the other hand, blind students might learn TeX syntax early in school (most likely because there is no tradition like MathSpeak which, after all, precedes the web by decades).
I also expect much overlap with SVG accessibility, where the challenges of summary information at a top level and exploration of details are very similar to mathematics.”
Oh, I gave a talk for Global Accessibility Awareness Day 2016 at the FernUni Hagen – in German (it’s been a while). The slides are on GitHub Pages. It’s already somewhat outdated because Wikipedia now serves mainly SVGs (generated with mathjax-node).
Anyway, the core summary stays true:
Why is it difficult to make formulas accessible?
 Formulas compress information (extremely)
 Formulas are often visual
 Formulas are contextdependent
 Formulas are poorly authored
In other words, math accessibility sucks bad. And no solution will really help you there. But MathJax now does its best to make it suck less.
Oh, speaking of accessibility, I’m extremely disappointed that I won’t make it to role=drinks after all – but if you’re close by, why don’t you drop by?
Anyway.
There was a joint meeting of CSSWG and DPUB IG on Monday and I was running late (discussing math-on-web things with Daniel Marques actually), so I missed the first 15 minutes. My mind was blown when, within 2 minutes of me sitting down, a motion was accepted to task Florian with spec’ing (specing? speccing?) out a media query for MathML support (as well as an API to flip it). I didn’t feel I was in a position to speak up, so I just sat there wondering what had just happened.
The motivation seems rather natural, I suppose. As long as there’s no universal browser support for MathML, people are still stuck with providing fallbacks. In situations where they cannot load a JS library themselves (e.g., in ebooks), they have to use a fallback even if they could provide MathML.
If there was a media query, people could add both fallbacks and MathML in a standardized fashion, hiding one or the other depending on the result of that media query. In addition, an API would enable JS libraries to leverage a universal way to progressively enhance content; it wasn’t quite clear in the end, but some people seemed to hope that API could additionally be triggered by assistive technology.
This discussion started (I think) on the epub3 end, where the IDPF is currently discussing epub 3.1 and best practices; as usual, MathML features in a painfully prominent role. In epub land, the dream seems to be: you create an epub3 file once and, some day down the line, when a user’s reading system finally picks up MathML support, the old content will magically improve – progressive enhancement, so to speak.
Naturally, @supports is already very helpful in all sorts of ways today, which probably made this look like a no-brainer (and thus the quick decision). Unfortunately, I think a “media query for MathML” does not solve a single problem.
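For concreteness, here is roughly what authors would be able to write if such a query existed; the feature name below is entirely hypothetical (nothing like it is specified anywhere):

```css
/* Hypothetical sketch only -- "mathml" is not a real media feature. */
@media (mathml) {
  /* native MathML claimed: hide the fallback */
  .math-fallback { display: none; }
}
@media not all and (mathml) {
  /* no native MathML: hide the MathML, show the fallback */
  math { display: none; }
}
```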
I was so late to the meeting that, when the call for “any objections” came, I did not feel I was in a position to raise one. Still, in a breakout meeting later that day (about epub specifically), I voiced my criticisms to the epub, accessibility, and CSS people.
So this is, if you will, the written version of my opinion. (In case you missed that you are on my personal website, please note the use of “my” here.)
A single media query for MathML won’t help me as content provider (author, publisher, technology specialist); I also find it generally unhelpful for the web as a whole.
The problem with a single query is simple: when would a browser respond positively? When should a browser legitimately claim to have MathML support? I honestly don’t know.
MathML is a huge (and pretty vague) spec. There’s not a single browser or library that could claim complete support. MathJax is the top scorer with 85% on the MathML test suite (since MathPlayer was kicked out by IE) but that’s not saying much since the test suite is highly biased – whoever feels like it can submit the data, and in MathJax’s case that was me (who is obviously biased).
I can’t see how a single media query for all of MathML could provide people with any kind of reliable information on the frontend. Most likely, Gecko and WebKit implementations will immediately turn it on, which does not help one bit – people will still have to test their content on those engines in detail.
Personally, I have already done that too many times (and keep a close eye on them) and I always come to the same conclusion: I cannot recommend using them to anyone since they are too unreliable. So I’m still stuck the same way I was before. Similarly, any publicly available crawler data I’ve seen indicates that nobody is using native MathML on Gecko and WebKit in the wild, so my position does not seem to be unique.
Of course, the CSSWG might spec out a whole set of individual media queries for every single MathML feature. As unlikely as that seems, we’d just end up deeper in the rabbit hole: MathML is still extremely vague, so few features are specified in enough detail (compared to CSS or SVG anyway). To take a simple example, while Gecko and WebKit might claim support for mfrac (fractions), it’s not helping me if those fractions are laid out badly as soon as I put something mildly complex in them. So again, I’ll end up not using it.
In terms of accessibility, it seemed an API that assistive technology could trigger would not be as easy to implement in browsers (yet “easy” seemed a prerequisite, given the comments from browser reps in the CSSWG). Since AT tends not to inject scripts (JAWS craziness notwithstanding), they’d need a more sophisticated feature (which is, I think, also being discussed by the CSSWG, but considered much harder, i.e., unlikely).
Besides, this assumes that MathML significantly benefits accessibility. After MathJax getting deeply involved in building a suitable tool, I find this argument questionable. Talking to a11y folks, it usually comes down to “but MathPlayer!” and while MathPlayer was pretty good (albeit dead in the water now) it didn’t actually use MathML but a proprietary internal format representing the result of semantic heuristics; this makes it kind of hard to use it as an example for how great MathML is for accessibility.
I think it’s unrealistic to expect every single assistive technology to invest as much in a niche like math. I’d estimate that, at any one point in time over the past 18 years, the number of actively maintained accessibility tools with MathML support was 1 (no, neither JAWS nor VoiceOver count as “maintained” when it comes to MathML).
Further, not a single tool has ever used MathML as an internal format because it is simply insufficient – it is an XML document language for layout and is grossly unsemantic (and don’t say “but Content MathML” now).
If people feel like exposing MathML to AT, then they can use one of the many standard tricks to ARIA-hide the fallback content and visually hide the MathML. Again, in my opinion, that’s a disservice to your readers, but nobody stops you.
For me, the weirdest thing about this whole decision was its speed: that the CSSWG signed off on this idea in under 20 minutes just makes me a teeny tiny bit skeptical. It feels a lot like one big “whatevs” – browsers don’t really care but, hey, a media query is little work and if it keeps these math people off our backs, all the more reason.
The real problem remains with or without a media query: where is MathML going? As Romain commented on twitter:
1999 → hope MathML gets implemented
2016 → hope a declaration of non-implementation gets implemented
@pkrautz it's real progress going on
— Romain Deltour (@rdeltour) September 19, 2016
Browser vendors have never worked on MathML support, MathML is no longer maintained as a spec, the MathWG is no more (did you notice?), and MathML is a bad web standard for both layout (another post) and accessibility.
I think it’s time to realize that after 18 years of not succeeding on the web, the problem might just lie with MathML itself. We don’t need it on the web (CSS and SVG are better for layout and ARIA better for accessibility) and we should stop giving browser vendors an excuse not to do anything that actually helps those developers who realize math on the web in its myriad forms today. (And the XML document world, where MathML succeeded, would be better off as well.)
Don’t get me wrong, there are many problems left for math on the web, but MathML is not a silver bullet; in fact, it solves none of them. Even if it were implemented widely, we’d still need CSS and ARIA features to match. Instead of waiting for others (i.e., browsers) to solve their problems by magic, the few people with an interest (and the resources to match) will have to solve these niche problems on their own and in a way that moves the web forward as a whole.
Either way, a media query for MathML is pointless.
A celebrated theorem of Shelah states that adding a Cohen real introduces a Souslin tree. Are there any other examples of notions of forcing that add a $\kappa$-Souslin tree? And why is this of interest?
My motivation comes from a question of Schimmerling, which I shall now motivate and state.
Recall that Jensen proved that GCH together with the square principle $\square_\lambda$ entails a $\lambda^+$-Souslin tree for all cardinals $\lambda\ge\aleph_1$. Recently, it was shown that $\square_\lambda$ may be replaced by the weaker principle $\square(\lambda^+)$. Of course, another weakening of $\square_\lambda$ is the principle $\square^*_\lambda$.
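For orientation (standard facts, recorded here for convenience), both principles just mentioned are indeed weakenings of Jensen’s square:

```latex
\square_\lambda \;\Longrightarrow\; \square(\lambda^+)
\qquad\text{and}\qquad
\square_\lambda \;\Longrightarrow\; \square^*_\lambda .
```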
Schimmerling’s question indeed asks whether it is consistent with GCH that $\square^*_\lambda$ holds for a singular cardinal $\lambda$, and yet there exist no $\lambda^+$Souslin trees.
The first line of attack that comes to mind here would involve Prikry/Magidor/Radin forcing to singularize a former large cardinal (e.g., this paper).
In this post, we announce a (corollary to a) theorem from an upcoming paper with Brodsky, showing that this line of attack is a no-go.
Theorem. Suppose that $\lambda$ is a strongly inaccessible cardinal satisfying $2^\lambda=\lambda^+$. If $\mathbb P$ is a $\lambda^+$-cc notion of forcing of size $\le\lambda^+$ that singularizes $\lambda$, then $\mathbb P$ adds a $\lambda^+$-Souslin tree.
We consider the iteration of quasiregular maps of transcendental type from $\mathbb{R}^d$ to $\mathbb{R}^d$. In particular we study quasi-Fatou components, which are defined as the connected components of the complement of the Julia set.
Many authors have studied the components of the Fatou set of a transcendental entire function, and our goal in this paper is to generalise some of these results to quasi-Fatou components. First, we study the number of complementary components of quasi-Fatou components, generalising, and slightly strengthening, a result of Kisaka and Shishikura. Second, we study the size of quasi-Fatou components that are bounded and have a bounded complementary component. We obtain results analogous to those of Zheng, and of Bergweiler, Rippon and Stallard. These are obtained using techniques which may be of interest even in the case of transcendental entire functions.
Our objective is to determine which subsets of $\mathbb{R}^d$ arise as escaping sets of continuous functions from $\mathbb{R}^d$ to itself. We obtain partial answers to this problem, particularly in one dimension, and in the case of open sets. We give a number of examples to show that the situation in one dimension is quite different from the situation in higher dimensions. Our results demonstrate that this problem is both interesting and perhaps surprisingly complicated.
We study the class $\mathcal{M}$ of functions meromorphic outside a countable closed set of essential singularities. We show that if a function in $\mathcal{M}$, with at least one essential singularity, permutes with a nonconstant rational map $g$, then $g$ is a Möbius map that is not conjugate to an irrational rotation. For a given function $ f \in\mathcal{M}$ which is not a Möbius map, we show that the set of functions in $\mathcal{M}$ that permute with $f$ is countably infinite. Finally, we show that there exist transcendental meromorphic functions $f: \mathbb{C} \to \mathbb{C}$ such that, among functions meromorphic in the plane, $f$ permutes only with itself and with the identity map.
A Journey Through the World of Mice and Games – Projective and Beyond.
Abstract: This talk will be an introduction to inner model theory the at the
level of the projective hierarchy and the $L(\mathbb{R})$hierarchy. It will
focus on results connecting inner model theory to the determinacy of
certain games.
Mice are sufficiently iterable models of set theory. Martin and Steel
showed in 1989 that the existence of finitely many Woodin cardinals
with a measurable cardinal above them implies that projective
determinacy holds. Neeman and Woodin proved a levelbylevel
connection between mice and projective determinacy. They showed that
boldface $\boldsymbol\Pi^1_{n+1}$ determinacy is equivalent to the fact that the
mouse $M_n^\#(x)$ exists and is $\omega_1$-iterable for all reals $x$.
Following this, we will consider pointclasses in the $L(\mathbb{R})$-hierarchy
and show that determinacy for them implies the existence and
$\omega_1$-iterability of certain hybrid mice with finitely many
Woodin cardinals, which we call $M_k^{\Sigma, \#}$. These hybrid mice
are like ordinary mice, but equipped with an iteration strategy for a
mouse they contain, which enables them to capture certain sets
of reals. We will discuss what it means for a mouse to capture a set
of reals and outline why hybrid mice fulfill this task.
Hybrid Mice and Determinacy in the $L(\mathbb{R})$-hierarchy.
Abstract: This talk will be an introduction to inner model theory at the level of the $L(\mathbb{R})$-hierarchy. It will
focus on results connecting inner model theory to the determinacy of
certain games.
Mice are sufficiently iterable models of set theory. Martin and Steel
showed in 1989 that the existence of finitely many Woodin cardinals
with a measurable cardinal above them implies that projective
determinacy holds. Neeman and Woodin proved a level-by-level
connection between mice and projective determinacy. They showed that
boldface $\boldsymbol\Pi^1_{n+1}$ determinacy is equivalent to the fact that the
mouse $M_n^\#(x)$ exists and is $\omega_1$-iterable for all reals $x$.
Following this, we will consider pointclasses in the $L(\mathbb{R})$-hierarchy
and show that determinacy for them implies the existence and
$\omega_1$-iterability of certain hybrid mice with finitely many
Woodin cardinals, which we call $M_k^{\Sigma, \#}$. These hybrid mice
are like ordinary mice, but equipped with an iteration strategy for a
mouse they contain, which enables them to capture certain sets
of reals. We will discuss what it means for a mouse to capture a set
of reals and outline why hybrid mice fulfill this task. If time allows, we will sketch a proof that determinacy for sets of reals in the $L(\mathbb{R})$-hierarchy implies the existence of hybrid mice.
Many thanks to Richard for the pictures!
Abstract: Vopěnka’s Principle, introduced by Petr Vopěnka in the 1970s, is the second-order assertion that for every proper class $\mathcal C$ of first-order structures in the same language, there are $B\neq A$, both in $\mathcal C$, such that $B$ elementarily embeds into $A$. In ${\rm ZFC}$, we can consider first-order Vopěnka’s Principle, which is the scheme of assertions ${\rm VP}(\Sigma_n)$, for $n\in\omega$, stating that Vopěnka’s Principle holds for $\Sigma_n$-definable (with parameters) classes. The principle ${\rm VP}(\Sigma_1)$ is a theorem of ${\rm ZFC}$; Bagaria showed that the principle ${\rm VP}(\Sigma_2)$ holds if and only if there is a proper class of supercompact cardinals, and for $n\geq 1$, ${\rm VP}(\Sigma_{n+2})$ holds if and only if there is a proper class of $C^{(n)}$-extendible cardinals, where $\kappa$ is $C^{(n)}$-extendible if for every $\alpha>\kappa$, there is an elementary embedding $j:V_\alpha\to V_\beta$ with $V_{j(\kappa)}\prec_{\Sigma_n} V$. We introduce Generic Vopěnka’s Principle, which asserts that the embeddings of Vopěnka’s Principle exist in some set-forcing extension. First-order Generic Vopěnka’s Principle is the scheme of assertions ${\rm gVP}(\Sigma_n)$ for $\Sigma_n$-definable classes of structures. The consistency strength of Generic Vopěnka’s Principle is measured by virtual large cardinals. Given a very large cardinal property $\mathcal A$, such as supercompact, $C^{(n)}$-extendible, or rank-into-rank, characterized by the existence of suitable set-sized embeddings, we say that a cardinal $\kappa$ is virtually $\mathcal A$ if the embeddings of $V$-structures characterizing $\mathcal A$ exist in some set-forcing extension. Unlike the similar-sounding generic large cardinals, virtual large cardinals are actual large cardinals that fit between ineffables and $0^{\sharp}$ in the hierarchy. Remarkable cardinals, introduced by Schindler, turned out to be virtually supercompact.
We show that ${\rm gVP}(\Sigma_2)$ is equiconsistent with a proper class of remarkable cardinals and, for $n\geq 1$, ${\rm gVP}(\Sigma_{n+2})$ is equiconsistent with a proper class of virtually $C^{(n)}$-extendible cardinals. We conjecture that the equiconsistency results can be improved to get an equivalence. This is joint work with Joan Bagaria and Ralf Schindler.
Here are links to other posts related to this work:
The obvious problem is: how should that work? How do we get this small, disparate, and sometimes divided community of math tools for the web to inform web standards and, ultimately, browser development?
Well, it’s time to find out.
A couple of people have been working towards a new effort and we’ve now formed a W3C Community Group. The name may sound funny but it’s what this group is after: Getting Math onto Web Pages. No fuss, no drama, no limitations. The focus is on how we do this today and how we can make it easier.
So now it’s up to us.
If you’re a developer of a tool that makes math work on the web today and want to help shape the future, then it’s time to step up. I know your resources are probably tight; in fact, most projects out there are run by idealists, as side projects or chronically underfunded. I hear you.
But you built a tool because nothing was getting the job done. Standards? Same thing. We need to learn about the process, understand what we want to do and what we can do, and ultimately, help build standards that work for everyone. Otherwise, the job won’t get done.
So join the Community Group and work together to move the web forward for mathematics and beyond.
Need more information? Here’s the initial description from the CG homepage:
There are many technical issues in presenting mathematics in today’s Open Web Platform, which has led to the poor access to Mathematics in Web Pages. This is in spite of the existing de jure or de facto standards for authoring mathematics, like MathML, LaTeX, or asciimath, which have been around for a very long time and are widely used by the mathematical and technical communities.
While MathML was supposed to solve the problem of rendering mathematics on the web, it lacks both implementations and general interest from browser vendors.
However, in the past decade, many math rendering tools have been pushing math on the web forward using HTML/CSS and SVG.
One of the identified issues is that, while browser manufacturers have continually improved and extended their HTML and CSS layout engines, the approaches to render mathematics have not been able to align with these improvements. In fact, the current approaches to math layout could be considered to be largely disjoint from the other technologies of OWP.
Another key issue is that exposing (and thus leveraging) semantic information of mathematical and scientific content on the web needs to move towards modern practices and standards instead of being limited to a single solution (MathML). Such information is critical for accessibility, machine-readability, and reuse of mathematical content.
This Community Group intends to look at the problems of math on the web in a very bottom-up manner.
Experts in this group should identify how the core OWP layout engines, centered around HTML, SVG, and CSS, can be reused for the purpose of mathematical layout by mapping mathematical entities on top of these, thereby ensuring a much more efficient result, and making use of current and future OWP optimization possibilities. Similarly, experts should work to identify best practices for semantics from the point of view of today’s successful solutions.
This work should also reveal where the shortcomings are, from the mathematical layout point of view, in the details of these OWP technologies, and propose improvements and possible additions to these, with the ultimate goal of reaching out to the responsible W3C Working Groups to make these changes. This work may also reveal new technology areas that should be specified and standardized in their own right, for example in the area of Web Accessibility.
The ultimate goal is to pave the way for a standard, highly optimized implementation architecture, on top of which mathematical syntaxes, like LaTeX or MathML, may be mapped to provide an efficient display of mathematical formulae.
Note that, although this community group will concentrate on mathematics, many other areas, e.g., science and engineering, will benefit from (and factor into) the approach and from the core architecture.
PS: We’ve also applied for a CG slot at TPAC 2016 in Lisbon for a face-to-face of the CG as well as the opportunity to talk to other groups. Fingers crossed!
I don’t have any analytics on this site beyond what CloudFlare collects passively. There was a spike of ~800 unique visitors and then higher-than-usual traffic afterwards, so it might not be completely unreasonable to guess that 1000 people opened the post back then – until somebody posted it to Hacker News today (no link, to save your sanity from reading HN comments), so now it’s more like 20,000 people have clicked a link to that piece. Of course, few of those will have read it, fewer still will have carefully read it. My best guess is: 3 people have read it. Does that sound about right?
Most responses were basically “meh” (especially on the twitters). Steve Faulkner is, of course, to blame for much of that twitter attention (thanks Steve!). I also received a few kind emails with responses, thanks for those. Elsewhere, Jesse McKeown wrote a short tumblr; as a former mathematician I’ll formally (get it?) object to the use of Gödel’s work.
Paul Topping’s “response” was mostly focused on his own ideas which have little to do with what I wrote. Let me respond to those few points that were about my piece. Let’s do this inline.
The first thing to note in his post is that he says that MathML is a failed web standard. By this, I believe he is only saying that it has failed as a language supported by browsers.
I had hoped my glorious <s> tag was making the point clear. But I guess not.
He acknowledges that it is in heavy use in education, publishing, and elsewhere but I wish he’d made this distinction a bit more strongly.
Ignoring the point that I didn’t actually mention education (or “elsewhere”), I thought I had fulfilled this “wish” when I wrote: It’s also clearly a success in the XML publishing world, serving an important role in standards such as JATS and BITS. The problem is: MathML has failed on the web.
I’m not sure how much clearer I can make that distinction – success here, failure there.
The browser makers ignore MathML so getting rid of it won’t affect them much. Perhaps Peter is directing his message to the MathML community itself.
For what it’s worth (and before anyone needs to speculate), my piece was very broadly directed at the web community. I was probably looking for readers who follow current trends in browser standards and their development. (Shout out to Chaals!)
This one’s easy. MathML isn’t implemented in most browsers so it’s not used.
That argument seems rather simplistic to me. Looking at any successful new web standard out there today (e.g., picture, flexbox, grid, animation), even a partial, behind-a-flag implementation does not mean the standard is not being used. Instead, there’s a positive feedback loop with (often seemingly small groups of) developers. Even at the best of times (e.g., Dave Barton pushing WebKit forward for a year, Fred Wang’s crowdfunded months), developer feedback for MathML was (and is) non-existent (cf. my example of serious bugs not even being reported).
Sure but imagine if MathML specified layout to the level that TeX does.
This is a) ignoring how badly Presentation MathML does not specify layout (in particular, compared to CSS) and b) a red herring (TeX).
This might well be the case but what’s the point here? If CSS now has what math layout needs, we’re done, right?
Yes. That’s the main point, actually.
Perhaps, but even if Presentation MathML provided sufficient semantics, most authors wouldn’t add them. The fact is MathML already provides recommended markup patterns for expressing a lot of math semantics but authors aren’t interested in adding such patterns to their math. Authors generally stop tweaking their math as soon as it looks right and can be read by a fellow human. I don’t think this will change. Even publishers are less and less interested in spending resources on marking up math with the required level of semantics. This won’t change even if MathML added missing semantics constructs and the necessary editing tools were available. Instead, everyone is moving in the opposite direction, spending less and less time and money on careful authoring.
An elegant straw man argument is still a straw man argument. I did not write about authoring or extending MathML. Good points though.
Peter acknowledges Neil Soiffer’s work on algorithmically extracting semantic information from Presentation MathML but seems to think it has hit a brick wall.
Another case of putting words in my mouth. A bit far-fetched this time, since MathJax is actively doing research in this area.
In technology, when someone has a better idea how to do something they should just do it and let the market decide whether their solution is really an improvement.
To quote myself:
Today, lots of tools will let you render mathematics using CSS.
It’s possible to generate HTML+CSS or SVG that renders any MathML content […] on the server.[And obviously on the client as well.]
Since layout is practically solved […].
I tried to make a point that CSS and SVG already provide various solutions today. I also tried to make a point that MathML is not used significantly in the wild (except by conversion to HTML/CSS or SVG of course). So it seems to me that I argued that “the market” has chosen these solutions over MathML.
But I guess I wasn’t clear enough. Oh well.
No problem but a lot of work needs to be done first.
No, see above.
Peter claims MathML’s mere existence is blocking discussions. What discussions did it block?
That’s a good point even though Paul’s piece is a nice example of the point I was trying to make. Calling on the community (who is that again?) to magically fix MathML after 10 years without development instead of making room for successful solutions? That is an elegant block.
Anyway, one problem for me is that many discussions I have in mind happened privately, especially with web standards experts. But that’s no excuse for not spending a few minutes thinking about public examples; for some reason, the discussion on mozilla.dev.platform is the first example to come to mind (man, I was feeling righteous back then).
Other examples are the specs themselves. The ARIA spec basically has a big glaring hole where math should be. Similarly, take a look at the suggestions in the ARIA best practices spec and the epub3 spec. All of them focus entirely on MathML-based solutions without any reflection on whether these actually work in real life. The ARIA practices spec even discourages working solutions like HTML math using dubious arguments about the semantics of Presentation MathML. Moving on.
Paul goes on to write about generating semantic information. It’s not quite a straw man but nevertheless has little to do with my concerns about exposing semantic information on the web.
To wrap up.
Of course, Peter doesn’t believe automated semantics recognition can do the job.
See above.
Do we want that math to look identically in every browser? I believe the answer is generally “no”.
I have the impression people generally expect consistent rendering across browsers. But anecdotal evidence is, well, anecdotal.
And that’s all folks. I’ll add more as they come along.
And stay tuned for more.
Comments
Don Stolee, 20160414
Totally agree with your points raised, though I must admit I don’t understand all of it.
We are XML publishers out of Australia and use MathML within our markup. We then publish the XML content to our HTML5 eReader (tekReader) and use MathJax to assist with the rendering.
Example here: http://tekreader.eglootech.com/book/tekReaderGuide#part22#pt2211h3
It seems to work well on modern browsers found on desktops, tablets and smartphones and we have a University in Canada using our reader.
I would hope the XML world does not drop the standard and browsers continue to support it, somewhat.
Peter, 20160414
Thanks for your comment. Tekreader looks very nice.
MathML is clearly a success in the XML world so I don’t see it disappearing. I’m not suggesting that anyone should drop MathML if it works for them.
The point I was trying to make was entirely about its role on the web where other tools have made it obsolete (in the sense that it is no longer necessary to have native MathML browser implementations). Since most XML markup is converted to HTML for web delivery (e.g. OASIS tables), I don’t see a huge problem in converting MathML to HTML as well.
Does that make sense?
Don 20160414
All good Peter. Thanks for getting back to me. If I may add. I’ve been providing XML publishing systems since the early 90’s (SGML back then). All very monolithic and complex. With the advent of tablets and smartphones I see a trend in marking down XML (I call it dummy down) to HTML5. In fact my business now advocates markup using HTML5 (now with semantics) and do away with all the complexity downstream. Most of the rich markup is never used anyway (aka S1000D).
Peter 20160414
Thanks for the additional comment, Don. I’m far from your level of experience obviously, but I’ve also heard about this trend. In that context, I often point to John Maxwell’s BiB 2012 talk.
Don 20160414 Awesome! Thanks for sharing. At least I know I am not crazy!
Despite knowing for several weeks that I was going to have to stand up and say a few words about Michael in front of parents and faculty at the CEFNS Pre-commencement Ceremony, I put off coming up with what I was going to say until the night before. It wasn’t just procrastination and being busy that caused me to wait so long. I was so freaking nervous about doing it that my defense mechanism was to ignore it as long as possible. To most people, it might seem strange that I was so apprehensive since I spend so much time public speaking via teaching and talks at conferences and workshops. However, things like wedding toasts and short speeches at pre-commencement ceremonies cause me great anxiety.
When I sat down the night before the ceremony to draft what I might say, I spent equal time typing and deleting. After more than an hour, I pretty much had nothing. For a little while I had some ideas that involved Pokémon, but then decided my “great idea” was probably a bit silly and would likely be lost on most of the audience. I decided to put it off one more day and cram the next day.
Sometime in the morning, I stumbled on the blog post titled Good Mathematician vs Great Mathematician on Math with Bad Drawings, which sparked some much-needed inspiration. Once I got cranking, the rest flowed pretty easily. (Thanks to Roy St. Laurent for some early feedback.) I’m pretty happy with how it turned out. There’s a bit of an abrupt transition in the middle. I had a longer version (which I didn’t save for some reason) that flowed a bit better, but I needed to keep it around 2 minutes long (and ran out of time to make improvements after nixing a few lines).
Below is what I prepared to say about Michael. I ad libbed a little bit, but for the most part followed the script. My opening is a slight modification of what appeared in Good Mathematician vs Great Mathematician. I’d also like to give a hat tip to Brian Katz as I borrowed from his call for papers for the PRIMUS Special Issue on Teaching Inquiry.
Phew! I’m glad that’s over. But I’m also glad that I had the opportunity to honor Michael.
Here “abreast” means “in a group,” so the girls are walking out in groups of three, and each pair of girls should only be in the same group once. It turns out that this problem is harder than it looks. It’s not even obvious that a solution is possible. In order to gain some insight into Kirkman’s problem, we tinkered with the following simpler problems.
The audience was able to quickly determine that the answer is “no” for problems 1 and 2. For the third problem, I let the audience play around for a bit and then I showed them one possible solution using Quanta Magazine’s applet located here. After discussing what it would take to verify that a proposed solution to Kirkman’s problem was actually a solution, I showed them one of seven possible solutions.
It turns out that Kirkman’s puzzle is a prototype for a more general problem:
Such an arrangement is said to be of type $S(t,k,n)$ and is called a Steiner system (the general study of such arrangements is known as combinatorial design theory). For example, solutions to the original Kirkman problem are of type $S(2,3,15)$. Notice that we have abandoned the extra restriction that the sets of size $k$ are sortable into days.
One of the fundamental problems in the theory of combinatorial designs is determining whether a given $S(t,k,n)$ exists and, if so, how many there are. For example, is $S(2,3,7)$ possible? The answer is yes, and one can interpret the Fano plane as one possible solution.
Many combinations of $t, k$, and $n$ can be quickly ruled out by divisibility obstacles. For example, problem 2 above helped us to determine that $S(2,3,6)$ is not possible. For combinations that aren’t immediately tossed out, there’s no easy way to discover whether a given combination is possible. For example, it turns out that $S(2,7,43)$ is impossible, but for complicated reasons. However, in January 2014, Peter Keevash (Oxford) established that, apart from a few exceptions, $S(t,k,n)$ will always exist if a few divisibility requirements are satisfied. This is a big deal in the world of combinatorial design theory.
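To make the divisibility obstacles concrete, here is a small Python sketch (the helper names `divisibility_ok` and `is_steiner` are my own) that checks the standard necessary divisibility conditions for $S(t,k,n)$ and verifies the Fano plane as an $S(2,3,7)$:

```python
from itertools import combinations
from math import comb
from collections import Counter

def divisibility_ok(t, k, n):
    """Necessary conditions for S(t, k, n): for each 0 <= i < t,
    C(k - i, t - i) must divide C(n - i, t - i)."""
    return all(comb(n - i, t - i) % comb(k - i, t - i) == 0 for i in range(t))

def is_steiner(blocks, t, n):
    """Check that every t-subset of {1, ..., n} lies in exactly one block."""
    cover = Counter(frozenset(s) for b in blocks for s in combinations(sorted(b), t))
    return all(cover[frozenset(s)] == 1 for s in combinations(range(1, n + 1), t))

# The Fano plane: seven points, seven lines, an S(2, 3, 7).
fano = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

print(divisibility_ok(2, 3, 7))   # True: S(2,3,7) passes the divisibility test
print(divisibility_ok(2, 3, 6))   # False: rules out S(2,3,6), as in problem 2
print(divisibility_ok(2, 7, 43))  # True: yet S(2,7,43) does not exist
print(is_steiner(fano, 2, 7))     # True: every pair of points lies on one line
```

Note that $S(2,7,43)$ passes the divisibility test even though it does not exist, which illustrates why these conditions are only necessary, not sufficient.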
As with many of the topics I choose to give a talk on in FAMUS, I pretty much knew nothing about the topic before I started prepping for the talk. I go out of my way to emphasize that this is the case because I want the students to know that the learning never ends.
Here are the slides for my talk. Note: There are two additional problems related to Kirkman’s problem at the very end that you might find interesting.
The content of my slides was inspired or came directly from the following sources:
Most weeks in FAMUS, the host interviews a faculty member. However, this week, Dr. Derek Sonderegger gave a 30 minute talk on the merits of pursuing a graduate degree in mathematics, statistics, or mathematics education. In addition, Derek provided some details about our graduate program. We also had quite a few of our grad students in attendance that were able to chime in about their current experience.
We construct a quasiregular map of transcendental type from $\mathbb{R}^3$ to $\mathbb{R}^3$ with a periodic domain in which all iterates tend locally uniformly to infinity. This is the first example of such behaviour in a dimension greater than two.
Our construction uses a general result regarding the extension of bi-Lipschitz maps. In addition, we show that there is a quasiregular map of transcendental type from $\mathbb{R}^3$ to $\mathbb{R}^3$ which is equal to the identity map in a half-space.
The case for support document from my grant application gives details of this conjecture, its importance, and the strategies that I hope to employ to work on it.
Excitingly, the university has agreed to fund a PhD student as part of this research. I just drafted a short description of what the PhD would be about, and I’ll post this below. (Note that this description might be edited a little over the next few days. In any case, it should give an idea of what the project will be about.) If you are interested, please get in touch!
This programme of research is within the study of finite group theory (although some investigation of linear algebraic groups may also be involved). The aim is to prove, or partially prove, the Product Decomposition Conjecture which concerns “conjugate-growth” of subsets of a finite simple group: roughly speaking, given a finite non-abelian simple group G and a subset A in G of size at least 2, we would like to show that one can always write G as a product of “not many” conjugates of A.
This notion of conjugate-growth has connections to many interesting areas of mathematics, including expander graphs, the product growth results of Helfgott et al., bases of permutation groups, word problems and more.
In the process of working on this conjecture, the student can expect to learn a great deal about the structure of finite simple groups (especially the simple classical groups) and, in particular, will study and make use of one of the most famous theorems in mathematics, the Classification of Finite Simple Groups.
There is no registration fee, but please register your attendance or obtain any further details by contacting Nick Gill. All events are held in rooms G310 and G311. Morning tea, lunch and afternoon tea are included and complimentary. There are limited funds available for dinner — please let us know if you would like to join us.
New: A list of titles and abstracts for all talks is now available.
Tuesday 23rd June 2015
09:30 – coffee
10:00 – Session 1: Combinatorics and cryptography
12:00 – lunch
13:30 – Session 2: Numerically modelling the atmosphere
15:30 – coffee
18:00 – dinner
Tuesday 30th June 2015
09:30 – coffee
10:00 – Session 3: Operational Research
12:00 – lunch
13:30 – Session 4: Group Theory
15:30 – coffee
18:00 – dinner
The meeting is supported by an LMS Conference grant celebrating new appointments and the University of South Wales.
Suppose that $U$ is a hollow quasi-Fatou component of a quasiregular map of transcendental type. We show that if $U$ is bounded, then $U$ has many properties in common with a multiply connected Fatou component of a transcendental entire function. On the other hand, we show that if $U$ is not bounded, then it is completely invariant and has no unbounded boundary components. We show that this situation occurs if $J(f)$ has an isolated point, or if $J(f)$ is not equal to the boundary of the fast escaping set. Finally, we deduce that if $J(f)$ has a bounded component, then all components of $J(f)$ are bounded.
An introduction to dimension, particularly Hausdorff dimension
The dimension of the Julia set for functions in the exponential family
The dimension of the Julia set for transcendental entire functions in general
A Julia set of dimension one; an overview of Bishop’s paper
The size of the Julia set of functions outside the Eremenko–Lyubich class
In this article, we consider a natural definition of hyperbolicity that requires expanding properties on the preimage of a punctured neighbourhood of the isolated singularity. We show that this definition is equivalent to another commonly used one: a transcendental entire function is hyperbolic if and only if its postsingular set is a compact subset of the Fatou set. This leads us to propose that this notion should be used as the general definition of hyperbolicity in the context of entire functions, and, in particular, that speaking about hyperbolicity makes sense only within the Eremenko–Lyubich class $\mathcal{B}$ of transcendental entire functions with a bounded set of singular values.
We also considerably strengthen a recent characterisation of the class $\mathcal{B}$, by showing that functions outside of this class cannot be expanding with respect to a metric whose density decays at most polynomially. In particular, this implies that no transcendental entire function can be expanding with respect to the spherical metric. Finally we give a characterisation of an analogous class of functions analytic in a hyperbolic domain.
So I am happy that I have only one course each day. I am teaching two courses this semester. Precalculus (Math 200) meets on Tuesdays and Thursdays at 8AM, and Elementary Algebra (Math 96) meets on Mondays and Wednesdays at 9:15 AM. (Each class meets with me a total of five hours per week.) Then on Fridays I have the set theory seminar at 10AM at the Graduate Center, or occasionally a faculty seminar at LaGuardia at 9AM where we will prepare to teach a seminar for first year LaGuardia students. I think that will be cool, because I really enjoyed my first year seminar as an undergraduate student at Grinnell.
This morning schedule is a big change for me; I have been a total night owl for the last seven years at least, rarely getting up much before noon. But I think it will be good for my health to wake up more with the sun. It might be a rough adjustment period, but it will be worthwhile. As a bonus, if all goes well, I can leave work by mid to late afternoon most days and be able to go out in the city some weekday evenings for dinner or a show. (If all doesn’t go well, I’ll be buried in grading, course preparation, administrative work, etc. and rarely get out of here until late anyway. But I am optimistic that it will be better than that.) Another nice benefit to the schedule is that I can conveniently make myself available for 45 minutes worth of office hours four days per week, so that students have a better opportunity to see me.
The elementary algebra students seem like a good group. They really seemed to appreciate the activity of sharing their feelings towards math and their expectations for the course. The videos didn’t seem to be as effective; only a few students commented on them, but the initial discussion before the videos was quite fruitful. A few students told me that they hate math, but many, I think a majority though I didn’t count, came in with positive attitudes towards math. Now it is my responsibility to help them to maintain these positive attitudes and to work hard and succeed in the class. I’m up for the challenge.
http://www.ctpost.com/news/article/Hereswhyyoushouldstudyalgebra4710461.php
Materials:
In addition to this page, there is another Moodle page with supplementary material for this course. The page is called Ayuda Algebra Lineal and can be found in the Applied Mathematics section of the School of Mathematics. The enrolment key is Ayuda2014, and students should only use it the first time they enrol.
If you have any further questions, you can
Course plan
Week | Material | Evaluation
1 | Review of Linear Algebra I | Mon 11/8: Homework 1 handed out
2 | Linear operators, similar matrices, eigenvalues, characteristic polynomials | Mon 18/8: Homework 1 returned; Thu 21/8: Homework 1 discussed
3 | Invariant subspaces | Mon 25/8: Homework 2 handed out
4 | Simultaneous triangularization, simultaneous diagonalization, two difficult proofs | Mon 01/9: Homework 2 returned; Thu 04/9: Homework 2 discussed
5 | Invariant direct sums, primary decomposition | Mon 8/9: Homework 3 handed out
6 | No classes this week. Note that many people had trouble with question 8 of Homework 3; for an interesting discussion of this topic, go here. | Mon 15/9: holiday; Thu 18/9: Homework 3 returned and Midterm exam 1
7 | Cyclic subspaces, cyclic decomposition | Mon 22/9: Homework 4 handed out
8 | The rational form, new examples of fields | Mon 29/9: Homework 4 returned; Thu 02/10: Homeworks 3 and 4 discussed
9 | Forms and matrices, inner product spaces | Mon 06/10: Homework 5 handed out
10 | Properties of inner products, the Gram–Schmidt process | Mon 13/10: Homework 5 returned; Thu 16/10: Homework 5 discussed
11 | Projections, orthogonal complements | Mon 20/10: Homework 6 handed out
12 | Unitary operators, orthogonal operators | Mon 27/10: Homework 6 returned; Thu 30/10: Midterm exam 2
13 | Normal operators, Sylvester’s law of inertia | Mon 03/11: Homework 7 handed out
14 | Classification of sesquilinear forms | Mon 10/11: Homework 7 returned; Thu 13/11: Homework 7 discussed
15 | Quadratic forms, isometry groups | Mon 17/11: Homework 8 handed out
16 | Conic sections, the special theory of relativity | Mon 24/11: Homework 8 returned; Thu 27/11: Homework 8 discussed
17 | Midterm exam 3 |
Practical matters
Lecture notes
Exercises
I will provide full answers for the first set; thereafter, answers will only be provided on request.
Background reading
No one text covers all of the material in this course. Principal texts are as follows:
Additional texts of interest:
I have e-copies of most of the texts listed above and can provide them on request.
Background on expanders:
On the sum-product phenomenon. The basic text is Tao and Vu, “Additive Combinatorics”. Here are a few other links:
On growth in non-abelian groups:
Sections 4 and 5 of this paper are particularly vital. Subsequent sections of the paper use incidence theorems; we will be able to go to the full result more directly.
Expanders from groups:
Sieving:
Property T: The first construction of expander graphs was by Margulis and used property T, a representation theoretic property that holds for certain discrete groups (SL_d(Z) with d>2 for instance).
The research group at UWA is very strong in group theory and in finite geometry, hence I will emphasise these aspects of the subject. I will also assume familiarity with results from these areas.
We introduce the idea of growth in groups, before focussing on the abelian
setting. We take a first look at the sum-product principle, with a brief
foray into the connection between sum-product results and incidence
theorems.
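The flavour of the sum-product principle can be seen in a toy computation (an illustration only, not part of the course material; the modulus p = 101 and the two progressions are chosen purely for convenience): an arithmetic progression in F_p has a small sumset but a large product set, a geometric progression the reverse, and in either case max(|A+A|, |A·A|) is large.

```python
# Toy sum-product illustration in F_p (sets and modulus chosen for convenience).
p = 101

def sumset(A, p):
    """All pairwise sums a + b mod p for a, b in A."""
    return {(a + b) % p for a in A for b in A}

def prodset(A, p):
    """All pairwise products a * b mod p for a, b in A."""
    return {(a * b) % p for a in A for b in A}

A = set(range(1, 11))                  # arithmetic progression: A+A stays small
B = {pow(2, k, p) for k in range(10)}  # geometric progression: B*B stays small
print(len(sumset(A, p)), len(prodset(A, p)))  # 19 vs 42: the products spread out
print(len(sumset(B, p)), len(prodset(B, p)))  # the sums spread out vs 19
```

Neither set can keep both its sumset and its product set small at once, which is exactly the dichotomy the sum-product results quantify.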
We then focus on Helfgott’s restatement of the sum-product principle in
terms of groups acting on groups.
Since Helfgott first proved that “generating sets grow” in SL_2(p) and
SL_3(p), our understanding of how to prove such results has developed a
great deal. It is now possible to prove that generating sets grow in any
finite group of Lie type; what is more, the most recent proofs are very
direct – they have no recourse to the incidence theorems of Helfgott’s
original approach.
We give an overview of this new approach, which has come to be known as a
“pivoting argument”. There are five parts to this approach, and we
outline how these fit together.
The principle of “escape from subvarieties” is the first step in proving
growth in groups of Lie type. We give a proof of this result, and its most
important application (for us) – the construction of regular semisimple
elements.
We then examine other related ideas from algebraic geometry, in particular
the idea of non-singularity.
We show how to reduce the study of exponential growth in
soluble subgroups of GL_r(p) to the nilpotent setting. We make use of ideas based on the sum-product phenomenon, as well as some machinery from linear algebraic groups. We will not assume any background from these areas. This lecture is based on new results of the lecturer and Helfgott.
This is a background lecture preparing the way for the final lecture, where we
connect results on growth in simple groups to the explicit construction of
families of expanders. In this lecture we will define what we mean by a
family of expanders, stating (and sometimes even proving!!) background
results that will be important later.
We will also try to explain why expanders are of such interest to so many
different groups of people.
We outline the method of Bourgain and Gamburd. They were the
first to use
results concerning growth in simple groups to explicitly construct
expander graphs. Let S be a set in SL_2(Z) and define S_p to be the
natural projection of S modulo p. Now let G_p be the Cayley graph of
SL_2(p) with respect to the set S_p. Bourgain and Gamburd give precise
results as to when the set of graphs {G_p : p a prime} forms a family of
expanders. They make crucial use of the result of Helfgott (encountered in Seminar 2) which states that “all generating sets in SL_2(p)
grow”.
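A toy version of this construction can be computed directly (a sketch for illustration, not part of the lectures; the prime p = 7 and the choice of elementary matrices as the set S are my own): project the generators of SL_2(Z) mod p, build the Cayley graph G_p by breadth-first search, and check that S_p really generates all of SL_2(p), whose order is p(p^2 - 1).

```python
# Sketch: Cayley graph of SL_2(p) from a projected generating set S_p.
from collections import deque

def mat_mul(a, b, p):
    """Multiply 2x2 matrices over Z/pZ, stored as tuples (a, b, c, d) row by row."""
    (a0, a1, a2, a3), (b0, b1, b2, b3) = a, b
    return ((a0*b0 + a1*b2) % p, (a0*b1 + a1*b3) % p,
            (a2*b0 + a3*b2) % p, (a2*b1 + a3*b3) % p)

def mat_inv(m, p):
    """In SL_2 the determinant is 1, so the inverse of (a, b, c, d) is (d, -b, -c, a)."""
    a, b, c, d = m
    return (d % p, -b % p, -c % p, a % p)

def cayley_reach(gens, p):
    """BFS from the identity over the Cayley graph with respect to gens and
    their inverses; returns (#vertices reached, eccentricity of the identity)."""
    steps = list(gens) + [mat_inv(g, p) for g in gens]
    identity = (1, 0, 0, 1)
    dist = {identity: 0}
    queue = deque([identity])
    while queue:
        v = queue.popleft()
        for s in steps:
            w = mat_mul(v, s, p)
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return len(dist), max(dist.values())

p = 7
# Projections mod p of the elementary matrices [[1,1],[0,1]] and [[1,0],[1,1]],
# which generate SL_2(Z).
S_p = [(1, 1, 0, 1), (1, 0, 1, 1)]
n, ecc = cayley_reach(S_p, p)
print(n, p * (p * p - 1), ecc)  # n equals |SL_2(p)| = p(p^2 - 1) = 336
```

Since the Cayley graph is vertex-transitive, the eccentricity of the identity is the diameter of G_p; the expander statement is the far stronger claim that this diameter (and the spectral gap) stays under control uniformly as p grows.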
Course material follows.