Catalog description: An introduction to formal mathematical language, mathematical experimentation, mathematical proofs, mathematical communication, and technologies supporting the above. Core content includes sets and functions, elementary number theory and induction, and distances and topology on the real line. Additional content drawn from logic, combinatorics and probability, graph theory, and modular arithmetic.
Catalog description: Topics in modern set theory may be drawn from forcing, choiceless set theory, infinitary combinatorics, set-theoretic topology, descriptive set theory, inner model theory, and alternative set theories.
Hanul Jeon took some of these words and made them into a really nice comic, which originally appeared on his Twitter account. He kindly gave me permission to post it here.
Abstract: The topics covered showcase recent advances from a variety of main areas of set theory, including descriptive set theory, forcing, and inner model theory, in addition to several applications of set theory, including ergodic theory, combinatorics, and model theory.
The use of class forcing in set theoretic constructions goes back to the proof of Easton's Theorem that ${\rm GCH}$ can fail at all regular cardinals. Class forcing extensions are ubiquitous in modern set theory, particularly in the emerging field of set-theoretic geology. Yet, besides the pioneering work by Friedman and Stanley concerning pretame and tame class forcing, the general theory of class forcing has not really been developed until recently. A revival of interest in second-order set theory has set the stage for understanding the properties of class forcing in its natural setting. Class forcing makes fundamental use of class objects, which in the first-order setting can only be studied in the meta-theory. Not surprisingly, it has turned out that properties of class forcing notions are fundamentally determined by which other classes exist around them. In this talk, I will survey recent results (of myself, Antos, Friedman, Hamkins, Holy, Krapf, Schlicht, Williams and others) regarding the general theory of class forcing, the effects of the second-order set-theoretic background on the behavior of class forcing notions, and the numerous ways in which familiar properties of set forcing can fail for class forcing even in strong second-order set theories.
Unfortunately, this is not as simple as me just looking at some people's emails and choosing from them. The university has a rigorous and exhausting hiring process involving applying online, shortlisting, interviews, and whatnot.
The Set Theory workshop in Oberwolfach was supposed to take place in April 2020, but was transformed into a webinar due to COVID-19.
Here you will find the title, abstract, slides and video of my webinar talk.
Talk Title: Transformations of the transfinite plane.
Abstract: We study the existence of transformations of the transfinite plane that allow one to reduce Ramsey-theoretic statements concerning uncountable Abelian groups to classic partition relations for uncountable cardinals.
This is joint work with Jing Zhang.
Each semester, I ask some variation of the following question on the final exams:
Describe how your perceptions of learning, especially mathematics, have changed.
So many of the responses that I received this past semester were amazing, but the following from a student (chemistry major) in my introduction to proof course really stood out to me:
I have always enjoyed mathematics, and because of that learning math was an enjoyable process and always made me excited for future courses. However, I have never had as much fun in a math class as I did in this one. To begin with, my perception of what studying mathematics means has completely changed. While manipulating numbers and formulas has its uses, mathematics is so much more. There is a very creative and artistic beauty associated with mathematical proof and logic. Like Cantor’s Diagonalization Argument, it is incredible to think that someone was able to discover such a simple yet mind blowing process. It is just such a creative way to think and I aspire to eventually be like minded in that aspect. That is getting a little off topic, but in terms of learning, this is the first class that I have attempted homework first and then discussed it in the following class. This method is so incredibly effective that I do not understand why I am just now experiencing it. By attempting things on my own first, I am able to think of so many more questions for class and I have always believed the best way to learn is by asking questions. Math can be a difficult subject, especially proof writing, but I feel that it was made much easier by developing a true understanding rather than by force feeding people step-by-step methods or something analogous to that.
Boom!
Oddly enough, I am moved to write this by a Ben & Jerry's "Silence is NOT an Option" campaign.
The situation in the US is long-standing and terrible, and worthy of protest. Here in the UK there is still institutional racism in many parts of society, as well as casual racism in many areas of life. We must be relentless in our efforts to end racism in all its forms.
On Saturday I’ll be going to a gathering in Caerphilly (covid guidelines allowing), to show my support for the fight against racism. I’m committed to supporting my BAME colleagues at work however I can, as well as combatting racism outside of work, in daily life.
Of course, I must also be aware that I’ve grown up in a society tainted by racism, and I can’t help but have been affected by it. I’m also committed to examining my own thoughts, ideas and behaviour to make sure I have not unwittingly internalised racism (or other forms of bigotry). The struggle continues.
Addendum: I’ve read a bunch of books on the subject of racism and thought I’d record some below, in case they are of interest:
Abstract: We introduce the computable FS-jump, an analog of the classical Friedman–Stanley jump in the context of equivalence relations on $\mathbb N$. We prove that the computable FS-jump is proper with respect to computable reducibility. We then study the effect of the computable FS-jump on computably enumerable equivalence relations (ceers).
I was cooking dinner today, bucatini alla matriciana if you must know, and I realised that cooking pasta and writing papers have nothing in common. Exactly one of those things is enjoyable, and it is not the writing part of papers (I do love the research part, of course).
As well as being a fine mathematician, Jan was a warm and witty man. Now I have PhD students of my own I know that it is not always easy to know what the right thing to do is when it comes to supervision – looking back I appreciate Jan's gentle direction all the more.
I feel very sad knowing that Jan is no longer with us. My thoughts are with his wife, Ruth, and the rest of her family.
Indeed, in my answer I also viewed this as trivial. It was tantamount to the claim: If \(\cf(\alpha)\neq\cf(\kappa)\), then every subset of \(\alpha\) of size \(\kappa\) contains a subset of size \(\kappa\) which is bounded.
How to obtain lower bounds in set theory
Abstract: Computing the large cardinal strength of a given statement is one of the key research directions in set theory. Fruitful tools to tackle such questions are given by inner model theory. The study of inner models was initiated by Gödel’s analysis of the constructible universe $L$. Later, it was extended to canonical inner models with large cardinals, e.g. measurable cardinals, strong cardinals or Woodin cardinals, which were introduced by Jensen, Mitchell, Steel, and others.
We will outline two recent applications where inner model theory is used to obtain lower bounds in large cardinal strength for statements that do not involve inner models. The first result, in part joint with J. Aguilera, is an analysis of the strength of determinacy for certain infinite two player games of fixed countable length, and the second result, joint with Y. Hayut, involves combinatorics of infinite trees and the perfect subtree property for weakly compact cardinals $\kappa$. Finally, we will comment on obstacles, questions, and conjectures for lifting these results higher up in the large cardinal hierarchy.
A common theme in the definitions of larger large cardinals is the existence of elementary embeddings from the universe into an inner model. In contrast, smaller large cardinals, such as weakly compact and Ramsey cardinals, are usually characterized by their combinatorial properties, such as the existence of large homogeneous sets for colorings. It turns out that many familiar smaller large cardinals have elegant elementary embedding characterizations. The embeddings here are correspondingly ‘small’; they are between transitive set models of set theory, usually the size of the large cardinal in question. The study of these elementary embeddings has led us to isolate certain important properties via which we have defined robust hierarchies of large cardinals below a measurable cardinal. In this talk, I will introduce these types of elementary embeddings and discuss the large cardinal hierarchies that have come out of the analysis of their properties. The more recent results in this area are joint work with Philipp Schlicht.
Abstract: We explore different ways to set up designs for experiments that compare different varieties, e.g., different doses of a drug. The three designs we explore are Latin square designs, $(b,v,r,k,\lambda)$-designs, and projective plane designs.
How to obtain lower bounds in set theory
Abstract: Computing the large cardinal strength of a given statement is one of the key research directions in set theory. Fruitful tools to tackle such questions are given by inner model theory. The study of inner models was initiated by Gödel’s analysis of the constructible universe $L$. Later, it was extended to canonical inner models with large cardinals, e.g. measurable cardinals, strong cardinals or Woodin cardinals, which were introduced by Jensen, Mitchell, Steel, and others.
We will outline two recent applications where inner model theory is used to obtain lower bounds in large cardinal strength for statements that do not involve inner models. The first result, in part joint with J. Aguilera, is an analysis of the strength of determinacy for certain infinite two player games of fixed countable length, and the second result, joint with Y. Hayut, involves combinatorics of infinite trees and the perfect subtree property for weakly compact cardinals $\kappa$.
In early January 2019, I was told that I could try and apply for a new scheme in the United Kingdom called "Future Leaders Fellowship". At the time not a lot was known about it; the first round winners were due to be announced, so it was unclear what the success rates were, or who the "desired candidates" might be.
This post is a follow-up to my earlier post on matrices for classical groups.
I wanted to do an investigation of the triality automorphism of the orthogonal group $O_8^+(q)$. I finally got around to this today, and have written up some rough notes which are here. One unexpected bonus was that I was able to write down matrices for the natural 8-dimensional representation of $G_2(q)$ over $\mathbb{F}_q$ and ${}^3D_4(q)$ over $\mathbb{F}_{q^3}$.
Both of these families of exceptional groups lie inside $O_8^+$ groups. Their structure, when written as matrices, is surprisingly similar (surprising to me!). Although all of this is well-known to experts, I’ve not seen the matrices written down before so I was pleased that I could nut it out…
This was originally "in-blog", but I decided that the interactive version is a bit more interesting. Enjoy it while it lasts!
It is a sad day.
This fact is used in the paradoxical decomposition theorems (which I often enjoy bringing up as a counter-argument to bad arguments that the Banach–Tarski paradox implies we need to accept, as an axiom, that all sets are measurable):
In a single paragraph, analyse the squire's approach to the existential crisis of the knight, Antonius Block, in Ingmar Bergman's "The Seventh Seal". What is the role of the knight's wife in support and contrast to the squire's?
This is an invited review of four papers by G. Sargsyan, J. Steel and N. Trang on consistency strength lower bounds for the proper forcing axiom via the core model induction.
How to obtain lower bounds in set theory
Computing the large cardinal strength of a given statement is one of the key research directions in set theory. Fruitful tools to tackle such questions are given by inner model theory. The study of inner models was initiated by Gödel’s analysis of the constructible universe $L$. Later, it was extended to canonical inner models with large cardinals, e.g. measurable cardinals, strong cardinals or Woodin cardinals, which were introduced by Jensen, Mitchell, Steel, and others.
We will outline three recent applications where inner model theory is used to obtain lower bounds in large cardinal strength for statements that do not involve inner models. The first result, in part joint with J. Aguilera, is an analysis of the strength of determinacy for certain infinite two player games of fixed countable length, the second result studies the strength of a model of determinacy in which all sets of reals are universally Baire, and the third result, joint with Y. Hayut, involves combinatorics of infinite trees and the perfect subtree property for weakly compact cardinals $\kappa$.
This meeting was cancelled due to the COVID-19 restrictions.
Das Unbegreifliche verstehen - die Faszination Unendlichkeit (Understanding the incomprehensible - the fascination of infinity)
Everyone knows it, it is always there, yet never quite tangible - infinity. But what do we actually mean when we say that something is infinitely large? Is there only one infinity, or two, or perhaps a great many?
We will take a closer look at these questions and, of course, at the corresponding answers. In doing so, we will see not only what mathematics is capable of, but also make our way to the limits of mathematics, as exhibited, for example, by Cantor and Gödel. Understanding these limits better - true to Oliver Tietze's saying, "Whoever wants to play with fire must know where the water is." - is still a subject of cutting-edge mathematical research today.
Joint work with Jing Zhang.
Abstract. We study the existence of transformations of the transfinite plane that allow one to reduce Ramsey-theoretic statements concerning uncountable Abelian groups to classic partition relations for uncountable cardinals.
To exemplify: we prove that for every inaccessible cardinal $\kappa$, if $\kappa$ admits a stationary set that does not reflect at inaccessibles, then the classic negative partition relation $\kappa\nrightarrow[\kappa]^2_\kappa$ implies that for every Abelian group $(G,+)$ of size $\kappa$, there exists a map $f:G\rightarrow G$ such that, for every $X\subseteq G$ of size $\kappa$ and every $g\in G$, there exist $x\neq y$ in $X$ such that $f(x+y)=g$.
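For readers unfamiliar with the square-bracket arrow notation, the negative relation used above unpacks as follows (a standard definition, added here for convenience; it is not part of the original abstract):

```latex
% The negative square-bracket relation asserts the existence of a strong coloring:
\[
\kappa \nrightarrow [\kappa]^2_\kappa
\quad\Longleftrightarrow\quad
\exists\, c : [\kappa]^2 \to \kappa \;\;
\forall X \in [\kappa]^\kappa \;\;
c``[X]^2 = \kappa .
\]
% That is, some coloring of pairs attains every one of the kappa many colors
% on every subset of kappa of full size kappa.
```

The map $f$ in the theorem above plays the role of such a strong coloring, but for the group operation rather than for arbitrary pairs.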
Abstract: We begin by studying the classification problem for countable scattered linear orders. Letting $\cong_\alpha$ denote isomorphism of scattered orders of rank $\alpha$, we can define the “$\mathbb Z$-jump” of equivalence relations in such a way that the $\mathbb Z$-jump of $\cong_\alpha$ is $\cong_{\alpha+1}$. More generally, for any countable group $\Gamma$, we will define the $\Gamma$-jump of equivalence relations. After introducing the basic theory of these jump operators, we will discuss the question of when the $\Gamma$-jump is proper. In particular we will show the $\mathbb Z$-jump is proper, and hence the complexity of $\cong_\alpha$ increases properly with $\alpha$. This is joint work with John Clemens.
Joint work with Gabriel Fernandes and Miguel Moreno.
Abstract. We introduce a generalization of stationary set reflection which we call filter reflection, and show it is compatible with the axiom of constructibility as well as with strong forcing axioms. We prove the independence of filter reflection from ZFC, and present applications of filter reflection to the study of canonical equivalence relations of the generalized Cantor and Baire spaces.
Ramsey-like cardinals originated with the study of elementary embedding properties of smaller large cardinals. Measurable cardinals and stronger large cardinals $\kappa$ are defined by the existence of elementary embeddings $j:V\to M$ with critical point $\kappa$ from the universe into an inner model. Elementary embeddings provide a unifying framework in the study of these large cardinals and set theorists have developed a toolbox of techniques for working with elementary embeddings, particularly for showing indestructibility under forcing.
Smaller large cardinals such as weakly compact cardinals and Ramsey cardinals were originally defined in terms of combinatorial Ramsey-type properties, but it turns out that they also have elementary embedding characterizations involving elementary embeddings of set-sized models.
Elementary embeddings $j:V\to M$ are connected with the existence of certain ultrafilters. The ultrapower of $V$ by a non-principal countably complete ultrafilter is well-founded, and therefore produces a non-trivial elementary embedding $j:V\to M$, and such an embedding, if its critical point is $\kappa$, can be used to obtain a non-principal $\kappa$-complete ultrafilter $$U=\{A\subseteq\kappa\mid \kappa\in j(A)\}.$$ We will say that $U$ is generated by $\kappa$ via $j$. Thus, in particular, there is a non-trivial elementary embedding $j:V\to M$ if and only if there is a non-principal countably complete ultrafilter on some cardinal $\kappa$.
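To spell out the standard verification behind this claim (a sketch added for the reader; it is not part of the original text):

```latex
% Why U = { A ⊆ κ : κ ∈ j(A) } is a non-principal κ-complete ultrafilter,
% when j : V → M is elementary with critical point κ.
\begin{itemize}
  \item[\emph{Ultrafilter:}] for $A\subseteq\kappa$ we have
    $j(\kappa\setminus A)=j(\kappa)\setminus j(A)$, and $\kappa<j(\kappa)$
    since $\kappa$ is the critical point, so exactly one of
    $\kappa\in j(A)$ and $\kappa\in j(\kappa\setminus A)$ holds.
  \item[\emph{$\kappa$-complete:}] if $\gamma<\kappa$ and $A_\alpha\in U$ for all
    $\alpha<\gamma$, then, since $j(\gamma)=\gamma$ and $j$ applied to the sequence
    $\langle A_\alpha\mid\alpha<\gamma\rangle$ is
    $\langle j(A_\alpha)\mid\alpha<\gamma\rangle$, we get
    $j\bigl(\textstyle\bigcap_{\alpha<\gamma}A_\alpha\bigr)
      =\bigcap_{\alpha<\gamma}j(A_\alpha)\ni\kappa$.
  \item[\emph{Non-principal:}] for every $\alpha<\kappa$,
    $j(\{\alpha\})=\{j(\alpha)\}=\{\alpha\}$, which does not contain $\kappa$.
\end{itemize}
```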
The ultrapower construction with a countably complete ultrafilter can be iterated through all the ordinals to produce ${\rm Ord}$-many well-founded iterated ultrapowers. Given such an ultrafilter $U$, the first step is the original ultrapower $j:V\to M$, the second step is the ultrapower by $j(U)$, and we proceed in this manner, using the image of the original ultrafilter at successor stages and direct limits at limit stages. The successor stages are well-founded because all ultrafilters are countably complete (by elementarity) from the perspective of the model in which the ultrapower is constructed, and the direct limits are well-founded by a theorem of Gaifman [1]. Thus, we say that countably complete ultrafilters are iterable.
Elementary embeddings of set-sized models of set theory are also connected with the existence of certain ultrafilters. So let's talk about these models and ultrafilters. We say that a weak $\kappa$-model is a transitive set model $M\models{\rm ZFC}^-$ of size $\kappa$ and height above $\kappa$. A weak $\kappa$-model $M$ is a $\kappa$-model if it is additionally closed under sequences of length less than $\kappa$, the maximum possible closure for a model of size $\kappa$. Natural weak $\kappa$-models arise as elementary substructures of $H_{\kappa^+}$. Smaller large cardinals $\kappa$ are characterized by the existence of elementary embeddings of weak $\kappa$-models or $\kappa$-models with critical point $\kappa$. Given a weak $\kappa$-model $M$, we say that $U\subseteq P^M(\kappa)$ is an $M$-ultrafilter if the structure $\langle M,\in,U\rangle$, that is $M$ together with a predicate for $U$, satisfies that $U$ is a normal ultrafilter on $\kappa$. What this means is that $U$ is an ultrafilter measuring $P^M(\kappa)$ that is closed under diagonal intersections of sequences from $M$. The set $U$ is allowed to be completely external to $M$, and while normality for sequences from $M$ implies $\kappa$-completeness for sequences from $M$, an $M$-ultrafilter $U$ may not even be countably complete because $M$, having no closure, can be missing many countable sequences. Note that, while the weak $\kappa$-model $M$ is required to satisfy ${\rm ZFC}^-$, replacement almost certainly fails in the structure $\langle M,\in,U\rangle$ once we add the $M$-ultrafilter. How do we know this? Read on.
In the context of $M$-ultrafilters, we will weaken the notion of countable completeness to say only that the intersection of countably many sets from the $M$-ultrafilter is non-empty. The two notions are equivalent for ultrafilters on $\kappa$, but for $M$-ultrafilters it is too strong to require that the intersection of countably many sets in the ultrafilter is itself in the ultrafilter because the intersection may not even be an element of $M$ and therefore not measured by the $M$-ultrafilter.
We can build the ultrapower of a weak $\kappa$-model $M$ by an $M$-ultrafilter using functions $f$ on $\kappa$ from $M$. While we have a complete characterization of when an ultrafilter on $\kappa$ has a well-founded ultrapower, namely countable completeness, the situation with $M$-ultrafilters is much messier. Countable completeness of the $M$-ultrafilter suffices for well-foundedness, but this condition, as we will see, is too strong. Let's call an $M$-ultrafilter good if it has a well-founded ultrapower. There is no known elegant characterization of good $M$-ultrafilters. The existence of good $M$-ultrafilters is equivalent to the existence of elementary embeddings $j:M\to N$ because the set $U$ generated by $\kappa$ via $j$ as above is a good $M$-ultrafilter.
Next, let's talk about iterability of $M$-ultrafilters. After a moment's thought one realizes that given a good $M$-ultrafilter $U$ we cannot even perform the second step of the iterated ultrapower construction because, since $U$ is not an element of $M$, we cannot take its image under the ultrapower embedding. To have any hope of being iterable, an $M$-ultrafilter first needs to be internal enough to $M$ to make it possible to define successor stage ultrafilters. An $M$-ultrafilter is said to be weakly amenable if for every set $S\in M$ with $|S|^M=\kappa$, $S\cap U\in M$. The condition of weak amenability suffices to define successor stage ultrafilters. It also has another nice characterization. We will say that an elementary embedding $j:M\to N$ with critical point $\kappa$ is $\kappa$-powerset preserving if $M$ and $N$ have the same subsets of $\kappa$. It is always the case that $P^M(\kappa)\subseteq P^N(\kappa)$, but the other inclusion need not hold. Indeed, if a good $M$-ultrafilter $U$ is weakly amenable, then the ultrapower embedding $j:M\to N$ is $\kappa$-powerset preserving and, conversely, an $M$-ultrafilter generated by $\kappa$ via a $\kappa$-powerset preserving embedding $j:M\to N$ is weakly amenable. We will see below that while weak amenability makes it possible to define the stages of the iterated ultrapower construction, it does not guarantee the well-foundedness even of the second step ultrapower.
We are finally ready to see some elementary embedding characterizations of smaller large cardinals. The simplest such characterization belongs to weakly compact cardinals. An inaccessible cardinal $\kappa$ is weakly compact if and only if every $A\subseteq\kappa$ is an element of a weak $\kappa$-model $M$ which has a good $M$-ultrafilter $U$. It turns out that this characterization can be strengthened in a number of ways. We can replace weak $\kappa$-model by $\kappa$-model, even by $\kappa$-model that is elementary in $H_{\kappa^+}$. Indeed, if $\kappa$ is weakly compact, then every weak $\kappa$-model $M$ has a good $M$-ultrafilter! Can we further strengthen the characterization by asking that all, or at least some, weak $\kappa$-models $M$ have weakly amenable good $M$-ultrafilters? The answer turns out to be no.
Let's make another definition. We will say that a weakly amenable $M$-ultrafilter is $\alpha$-good if it has $\alpha$-many well-founded iterated ultrapowers. Note that $1$-good equals weakly amenable plus good. Some observations are in order. By a theorem of Gaifman [1], only countable stages of the iteration matter. If an $M$-ultrafilter is $\omega_1$-good, then it is $\alpha$-good for every $\alpha$. By a theorem of Kunen (missing reference), a countably complete $M$-ultrafilter is $\omega_1$-good. We will say that a cardinal $\kappa$ is $\alpha$-iterable if every $A\subseteq\kappa$ is an element of a weak $\kappa$-model $M$ which has an $\alpha$-good $M$-ultrafilter. Thus, we can rephrase the question about weakly compacts above as asking whether a weakly compact cardinal is 1-iterable. In fact, 1-iterable cardinals are limits of weakly compact cardinals, even limits of ineffable cardinals [2]. The $\alpha$-iterable cardinals for $\alpha\leq\omega_1$ form a hierarchy of strength where a $\beta$-iterable cardinal is a limit of $\alpha$-iterable cardinals for every $\alpha\lt\beta$. The $\alpha$-iterable cardinals for $\alpha$ countable are downward absolute to $L$, but $\omega_1$-iterable cardinals already imply $0^{\#}$ [3].
Next up are Ramsey cardinals, which also have an unexpectedly nice elementary embedding characterization. A cardinal $\kappa$ is Ramsey if and only if every $A\subseteq\kappa$ is an element of a weak $\kappa$-model $M$ which has a weakly amenable countably complete $M$-ultrafilter [4]. Can we strengthen the Ramsey elementary embedding characterization by replacing weak $\kappa$-model with $\kappa$-model, or even $\kappa$-model elementary in $H_{\kappa^+}$, or by asking that it holds for all weak $\kappa$-models? The answer is no.
We say that a cardinal $\kappa$ is strongly Ramsey if every $A\subseteq\kappa$ is an element of a $\kappa$-model $M$ which has a weakly amenable $M$-ultrafilter, and a cardinal $\kappa$ is super Ramsey if we replace $\kappa$-model with $\kappa$-model elementary in $H_{\kappa^+}$. Strongly Ramsey cardinals are limits of Ramsey cardinals and super Ramsey cardinals are limits of strongly Ramsey cardinals. The assertion that every weak $\kappa$-model $M$ has a weakly amenable $M$-ultrafilter is inconsistent! [2] Broadly speaking, the reason for this contrast in relationships with the introduction of weak amenability is that having the same subsets of $\kappa$ creates a great deal of reflection between $M$ and its ultrapower $N$.
Are there other natural large cardinal notions between super Ramsey and measurable cardinals? Super Ramsey cardinals require embeddings on $\kappa$-models elementary in $H_{\kappa^+}$ because such models look like an initial segment of the universe. A natural extension of the idea of having embeddings on models reflecting an initial segment of the universe is to require that the weak $\kappa$-model is elementary in some larger $H_\theta$. But this is impossible because large $H_\theta$ cannot have transitive elementary submodels of size $\kappa$. So let's define that an imperfect weak $\kappa$-model is a weak $\kappa$-model $M$ where we replace transitivity with the weaker requirement that $\kappa+1\subseteq M$, and similarly define imperfect $\kappa$-models. Now we can consider the large cardinal notion $\kappa$-Ramsey, introduced by Holy and Schlicht, where for every set $A$ and arbitrarily large cardinals $\theta$, $A$ is an element of an imperfect $\kappa$-model $M\prec H_\theta$ which has a weakly amenable $M$-ultrafilter. $\kappa$-Ramsey cardinals have a natural game characterization. The game is played by the challenger and the judge, with the challenger playing an increasing sequence of imperfect $\kappa$-models $M\prec H_\theta$ and the judge responding with nested $M$-ultrafilters. For the judge to win a game of length $\alpha$, she has to be able to play for $\alpha$-many steps, with the challenger winning otherwise. The reason the large cardinal is called $\kappa$-Ramsey is that it is characterized by the challenger failing to have a winning strategy in this game of length $\kappa$. $\kappa$-Ramsey cardinals are limits of super Ramsey cardinals. [5]
Are there natural large cardinal notions between $\kappa$-Ramsey cardinals and measurable cardinals? Yes! Remember I remarked earlier that the structure $\langle M,\in,U\rangle$, where $M$ is a weak $\kappa$-model and $U$ is an $M$-ultrafilter, almost certainly fails to satisfy replacement? Well, let's define that a cardinal $\kappa$ is weakly baby measurable if every $A\subseteq\kappa$ is an element of a weak $\kappa$-model $M$ which has a good $M$-ultrafilter $U$ such that $\langle M,\in, U\rangle\models{\rm ZFC}^-$. Similar large cardinal notions were introduced and investigated recently by McKenzie and Bovykin [6]. We recently realized with Philipp Schlicht that if $\kappa$ is weakly baby measurable, then $V_\kappa$ satisfies that there is a proper class of cardinals $\alpha$ that are $\alpha$-Ramsey. Baby measurable cardinals (McKenzie and Bovykin call them simultaneously baby measurable) would be cardinals where we replace weak $\kappa$-model with $\kappa$-model. These are even stronger, but still some ways below a measurable!
Modern set theoretic research has produced a myriad of set-theoretic universes with fundamentally different properties and structures. Multiversists hold the philosophical position that none of these universes is the true universe of set theory - they all have equal ontological status and populate the set-theoretic multiverse. Hamkins, one of the main proponents of this view, formulated his position via the heuristic Hamkins Multiverse Axioms, which include such radical relativity assertions as that any universe is ill-founded from the perspective of another universe in the multiverse. With Hamkins, we showed that the collection of all countable computably saturated models of ${\rm ZFC}$ satisfies his axioms. Countable computably saturated models form a unique natural class with a number of desirable model theoretic properties such as existence of truth predicates and automorphisms. Indeed, any collection of models satisfying the Hamkins Multiverse Axioms must be contained within this class. In a joint work with Toby Meadows, Michał Godziszewski, and Kameryn Williams, we explore which weaker versions of the multiverse axioms have 'toy multiverses' that are not made up entirely of computably saturated models.
Joint work with Ari Meir Brodsky.
Abstract. In Part I of this series, we presented the microscopic approach to Souslin-tree constructions, and argued that all known $\diamondsuit$-based constructions of Souslin trees with various additional properties may be rendered as applications of our approach. In this paper, we show that constructions following the same approach may be carried out even in the absence of $\diamondsuit$. In particular, we obtain a new weak sufficient condition for the existence of Souslin trees at the level of a strongly inaccessible cardinal.
We also present a new construction of a Souslin tree with an ascent path, along the way increasing the consistency strength of such a tree’s nonexistence from a Mahlo cardinal to a weakly compact cardinal.
Section 2 of this paper is targeted at newcomers with minimal background. It offers a comprehensive exposition of the subject of constructing Souslin trees and the challenges involved.
Catalog description: The real number system, completeness and compactness, sequences, continuity, foundations of the calculus.
Catalog description: Linear algebra from a matrix perspective with applications from the applied sciences. Topics include the algebra of matrices, methods for solving linear systems of equations, eigenvalues and eigenvectors, matrix decompositions, vector spaces, linear transformations, least squares, and numerical techniques.
]]>The other day I got to think about a little problem: split table cells.
If you search around the web for CSS solutions,
you'll mostly find more or less fiddly ones: this one from StackOverflow hacks a border by transforming it - awesome hackery. I also liked Wikipedia's solution, which takes a standard strike-through using gradients and automatically calculates the specific values for margins to keep the "sub cells" from the diagonal.
But I was thinking that we should be able to do better these days. Shouldn't we get to stop worrying about the dimensions of the content?
Grid to the rescue.
A simple 2x2 grid with an inlined background SVG to draw the diagonal; column placement creates the faux cells, while auto-rows at 1fr ensure the content doesn't cross the line. In other words, an (almost) content-agnostic split table cell.
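Here's a minimal sketch of what I mean - the markup, class names, and styling details are my own illustration, not the original demo. The two "sub cells" sit in opposite corners of the grid, and the SVG line stretches with the cell:

```html
<style>
  table { border-collapse: collapse; }
  td { border: 1px solid gray; padding: 0; }
  .split {
    display: grid;                 /* the 2x2 grid inside the cell */
    grid-template-columns: 1fr 1fr;
    grid-auto-rows: 1fr;           /* equal rows keep content clear of the diagonal */
    /* inlined SVG drawing the diagonal; strictly, the payload should be URL-encoded */
    background: url('data:image/svg+xml,<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1 1" preserveAspectRatio="none"><line x1="0" y1="0" x2="1" y2="1" stroke="gray" vector-effect="non-scaling-stroke"/></svg>') 0 0 / 100% 100% no-repeat;
  }
  .split .tr { grid-area: 1 / 2; text-align: right; padding: 0.25em; } /* faux cell above the line */
  .split .bl { grid-area: 2 / 1; padding: 0.25em; }                    /* faux cell below the line */
</style>
<table>
  <tr>
    <td>
      <div class="split">
        <span class="tr">Weekday</span>
        <span class="bl">Hour</span>
      </div>
    </td>
  </tr>
</table>
```

I use a wrapper div so the table cell keeps its table-cell display; since rows and columns share the space equally, the content of either part can grow without touching the diagonal (until one part overflows its half).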
You can go ahead and edit the text to try it out. See if you can find some edge cases where it fails (and let me know if you find a real case).
And in real life don't forget to add some non-visual hints to clarify which part is which, especially in a table head.
Happy New Year.
]]>At the end of 2009 I started my senior year as an undergraduate. I both read the first part of "Introduction to Cardinal Arithmetic" to get a hold on the basics of set theory, and also took my first course on set theory (I'm omitting the introductory course from my freshman year since that one covered very very basic set theory). I studied with the wonderful Matti Rubin, and it was a fantastic course. Too bad that it focused almost solely on the axioms (i.e. how the axioms are not provable from others, etc.) and that we only spent a short time dealing with actual set theoretic topics (e.g. Solovay's theorem on partitions of stationary sets, etc.)
Continue reading...]]>Abstract: The class of finite groups is an amalgamation class, which means that they admit a “Fraisse limit” - a countable structure which contains all the finite groups and which is highly self-symmetric. The limit group, called Hall’s group, is locally finite and has the property that any two isomorphic finite subgroups are conjugate. In this talk we will give background on Hall’s group and discuss constructing automorphisms of Hall’s group.
]]>Joint work with Alejandro Poveda and Dima Sinapova.
Abstract. In Part I of this series, we introduced a class of notions of forcing which we call $\Sigma$-Prikry, and showed that many of the known Prikry-type notions of forcing that center around singular cardinals of countable cofinality are $\Sigma$-Prikry. We proved that given a $\Sigma$-Prikry poset $\mathbb P$ and a $\mathbb{P}$-name for a non-reflecting stationary set $T$, there exists a corresponding $\Sigma$-Prikry poset that projects to $\mathbb P$ and kills the stationarity of $T$.
In this paper, we develop a general scheme for iterating $\Sigma$-Prikry posets, as well as verify that the Extender Based Prikry Forcing is $\Sigma$-Prikry.
As an application, we blow up the power of a countable limit of Laver-indestructible supercompact cardinals, and then iteratively kill all non-reflecting stationary subsets of its successor, yielding a model in which the singular cardinal hypothesis fails and simultaneous reflection of finite families of stationary sets holds.
]]>
Infinite decreasing chains in the Mitchell order
Abstract: It is known that the behavior of the Mitchell order substantially changes at the level of rank-to-rank extenders, as it ceases to be well-founded. While the possible partial order structure of the Mitchell order below rank-to-rank extenders is considered to be well understood, little is known about the structure in the ill-founded case. We make a first step in understanding this case by studying the extent to which the Mitchell order can be ill-founded. Our main results are (i) in the presence of a rank-to-rank extender there is a transitive Mitchell order decreasing sequence of extenders of any countable length, and (ii) there is no such sequence of length $\omega_1$. This is joint work with Omer Ben-Neria.
As this is a blackboard talk there are no slides available, but you can find a preprint related to this talk here.
]]>Set theorists started forcing with class partial orders to modify global properties of the universe very soon after general forcing techniques were developed. Once Cohen showed that ${\rm CH}$ could fail, and that indeed the continuum could assume any reasonable value, it was natural to ask whether, for instance, the ${\rm GCH}$ can fail unboundedly often, or more generally what global patterns were possible for the continuum function. Since a set partial order can only affect a set-sized chunk of the continuum function, a class partial order was required to modify the ${\rm GCH}$ pattern unboundedly. Easton used a class product of set-forcing notions, with what became known as Easton support, to show that the ${\rm GCH}$ can fail at every regular cardinal, and that, in fact, any reasonable pattern on the continuum function was consistent with ${\rm ZFC}$ [1]. Since then class products and iterations, usually with Easton support, have been used, for example, to globally kill large cardinals, to make all supercompact cardinals indestructible, or to code the universe into the continuum function.
There are two approaches to handling class forcing in first-order set theory. We can use a generic filter $G$ for a class forcing notion $\mathbb P$ to interpret the $\mathbb P$-names and obtain the forcing extension $V[G]$, and then throw $G$ away. We can also work in the structure $(V[G],\in,G)$ with a predicate for $G$. While both approaches were used by set theorists to handle specific instances of class forcing, neither approach could provide a robust framework for the study of general properties of class forcing because it relegated properties of classes to the meta-theory.
Important early results on class forcing, due to Friedman and Stanley, showed that class partial orders satisfying `niceness' conditions, such as pretameness, behave like set partial orders. But despite the pervasive use of class forcing in set theoretic constructions, a general theory of class forcing was not fully developed until recently. These recent results demonstrated that properties of class forcing depend on what other classes exist around them, establishing that the theory of class forcing can only be properly investigated in a second-order set theoretic setting. The mathematical framework of second-order set theory has objects for both sets and classes, and allows us to move the study of classes out of the meta-theory.
Class forcing becomes even more important in the context of second-order set theory, where it can be used to modify the structure of classes. With class forcing, we can, for instance, add a global well-order or shoot class clubs with desirable properties. Both these forcing notions leave the first-order part of the model intact because they do not add sets. Such forcing constructions over models of second-order set theory can actually be used to prove theorems about models of ${\rm ZFC}$. One of the most important results in the model theory of set theory is the Keisler-Morley extension theorem asserting that every countable model of ${\rm ZFC}$ has an elementary top-extension (adding only sets of rank above the ordinals of the model) [2]. Given a countable model $M$ of ${\rm ZFC}$, the top-extension is built as an ultrapower of $M$ by a special ultrafilter, which we can construct provided that $M$ can be expanded to a model of ${\rm GBC}$. If $M$ doesn't already have a definable global well-order, we use class forcing to add it without altering $M$ itself.
In many ways class forcing does not behave as nicely as set forcing. Class forcing can destroy ${\rm ZFC}$; even the atomic forcing relation may fail to be definable for a class forcing notion; densely embedded class forcing notions need not be forcing equivalent; most class forcing notions don't have Boolean completions; the intermediate model theorem fails badly for class forcing, etc. At the same time, conditions on class forcing, such as pretameness, guarantee that these pathologies are avoided, and these conditions hold for most familiar class forcing notions used by set theorists, such as progressively closed Easton support products and iterations.
In this article, we survey most of the recent results developing a general theory of class forcing in the second-order setting and establishing surprising connections between properties of class forcing and the structure of second-order set theories.
Joint work with Chris Lambie-Hanson.
Abstract. Motivated by a characterization of weakly compact cardinals due to Todorcevic, we introduce a new cardinal characteristic, the C-sequence number, which can be seen as a measure of the compactness of a regular uncountable cardinal. We prove a number of ZFC and independence results about the C-sequence number and its relationship with large cardinals, stationary reflection, and square principles. We then introduce and study the more general C-sequence spectrum and uncover some tight connections between the C-sequence spectrum and the strong coloring principle introduced in Part I of this series.
Abstract: Given a graph or tree G, the conjugacy problem for G is the conjugacy equivalence relation on the group Aut(G) of automorphisms of G. In this talk we will introduce the basic framework for studying the complexity of the conjugacy problem for G, and then use it to examine a series of examples of graphs and trees G. In particular we will demonstrate that a variety of complexities can occur, from trivial (smooth), up to maximally complex (Borel complete), and in between.
]]>In this paper we investigate the consistency strength of the statement: $\kappa$ is weakly compact and there is no tree on $\kappa$ with exactly $\kappa^{+}$ many branches. We show that this property fails strongly (there is a sealed tree) if there is no inner model with a Woodin cardinal. On the other hand, we show that this property as well as the related Perfect Subtree Property for $\kappa$, implies the consistency of $\operatorname{AD}_{\mathbb{R}}$.
]]>Laver, and independently Woodin, showed that a ground model is always definable (with a ground model parameter) in its set-forcing extensions [1], [2]. Moreover, the definition is uniform for all ${\rm ZFC}$-universes: there is a single formula $\varphi(x,y)$ such that whenever $V\models{\rm ZFC}$ and $V[G]$ is a set-forcing extension of $V$ by a poset $\mathbb P\in V$ of size $|\mathbb P|^V=\gamma$, then $V$ is defined in $V[G]$ by $\varphi(x,P^V(\delta))$ with $\delta=(\gamma^+)^V(=(\gamma^+)^{V[G]})$. Let's explain what the formula $\varphi(x,P^V(\delta))$ says. Let ${\rm Z}^*$ be a certain finite fragment of ${\rm ZFC}$. We will say that a transitive model $M\in V[G]$ is good if $M\models {\rm Z}^*$, $P^M(\delta)=P^V(\delta)$ and $(\delta^+)^M=(\delta^+)^{V[G]}$. The formula $\varphi(x,P^V(\delta))$ then holds of a set $x$ if $x$ is an element of a good model $M$ of height $\lambda\gg\delta$ such that $V[G]_\lambda\models {\rm Z}^*$ and the pair $M\subseteq V[G]_\lambda$ has the $\delta$-cover and $\delta$-approximation properties (we will explain what those are in a moment). Here is the reason the formula works. Any forcing extension by a poset of size less than a regular cardinal $\delta$ has the property that there are unboundedly many ordinals $\lambda$ such that both $V_\lambda$ and $V[G]_\lambda$ satisfy ${\rm Z}^*$ and the pair $V_\lambda\subseteq V[G]_\lambda$ has the $\delta$-cover and $\delta$-approximation properties. Also, if there is a good model $M$ of height $\lambda$ such that $V[G]_\lambda\models {\rm Z}^*$ and the pair $M\subseteq V[G]_\lambda$ has the $\delta$-cover and $\delta$-approximation properties, then the model is unique. Hence such a model $M$ of height $\lambda$ can only be $V_\lambda$.
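Schematically, the definition just described can be displayed as follows (a paraphrase of the description above, not the exact formula from [1], [2]):

```latex
\varphi(x,\,P^V(\delta))\colon\quad \exists\lambda\,\exists M\ \bigl[\,M\text{ is a good model of height }\lambda\gg\delta,\ x\in M,\ V[G]_\lambda\models{\rm Z}^*,\\
\text{and the pair } M\subseteq V[G]_\lambda \text{ has the }\delta\text{-cover and }\delta\text{-approximation properties}\,\bigr].
```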
The notions of $\delta$-cover and $\delta$-approximation properties are due to Hamkins [3]. Suppose that $V\subseteq W$ are transitive models of (some fragment of) ${\rm ZFC}$ and $\delta$ is a cardinal in $W$. The pair $V\subseteq W$ has the $\delta$-cover property if every set of ordinals $A\in W$ with $|A|^W<\delta$ is contained in some $B\in V$ with $|B|^V<\delta$, and it has the $\delta$-approximation property if every $A\in W$ with $A\subseteq V$, all of whose approximations $A\cap b$ for $b\in V$ of size less than $\delta$ in $V$ are elements of $V$, is itself an element of $V$.
Because the existence of cardinalities is fundamental to the definition of the cover and approximation properties, these notions appear to rely essentially on the axiom of choice. Therefore proofs of ground model definability based on these properties cannot be easily modified to work in the absence of choice. It is not known yet whether ground definability holds in ${\rm ZF}$. There are partial positive results and no known counterexamples.
With Tom Johnstone, we adapted the approximation and cover properties arguments to show that for a cardinal $\delta$, uniform ground model definability holds in models of ${\rm ZF}+{\rm DC}_\delta$ for posets admitting a gap at $\delta$.
Theorem: There is a single formula $\varphi(x,y)$ such that whenever $V\models{\rm ZF}+{\rm DC}_\delta$ and $V[G]$ is a set-forcing extension of $V$ by a poset $\mathbb P$ admitting a gap at $\delta$ in $V$, then $V$ is defined in $V[G]$ by $\varphi(x,P^V(\delta))$.
A poset is said to admit a gap at $\delta$ if it has the form $\mathbb R*\dot{\mathbb Q}$ where $\mathbb R$ is a non-trivial forcing of size less than $\delta$ and it is forced by $\mathbb R$ that $\dot{\mathbb Q}$ is $\leq\delta$-strategically closed. The $\delta$-dependent choice principle ${\rm DC}_\delta$, introduced by Lévy, allows us to make $\delta$-many dependent choices along any relation without terminal nodes. It asserts that for any non-empty set $S$ and any binary relation $R$, if for each sequence $s\in S^{\lt\delta}$ there is a $y\in S$ such that $s$ is $R$-related to $y$, then there is a function $f:\delta\to S$ such that $f\upharpoonright\alpha \,R\, f(\alpha)$ for all $\alpha<\delta$. The principle ${\rm DC}_\delta$ is robust for forcing with posets admitting a gap at $\delta$ because they preserve it to the forcing extension (see [4]).
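Written out as a single statement, the principle reads:

```latex
{\rm DC}_\delta:\quad \forall S\,\forall R\ \Bigl[\bigl(S\neq\emptyset \,\wedge\, \forall s\in S^{<\delta}\,\exists y\in S\ \ s\mathrel{R}y\bigr) \rightarrow \exists f\colon\delta\to S\ \ \forall\alpha<\delta\ \ (f\upharpoonright\alpha)\mathrel{R}f(\alpha)\Bigr].
```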
A very different recent partial result on ${\rm ZF}$-ground model definability is due to Usuba [5]. Usuba showed that if a ${\rm ZF}$-universe has a proper class of Löwenheim-Skolem cardinals, a notion which he introduces, then uniform ground model definability holds for it.
Theorem: There is a single formula $\varphi(x,y)$ such that whenever $V$ is a model of ${\rm ZF}$ with a proper class of Löwenheim-Skolem cardinals and $V[G]$ is a set-forcing extension of $V$ by a poset $\mathbb P\in V_\kappa$ for a Löwenheim-Skolem cardinal $\kappa$, then $V$ is defined in the extension $V[G]$ by $\varphi(x,V_\kappa)$.
Usuba's proof is another modification of the arguments using cover and approximation, where he replaces the notion of cardinality by a rough measure on sets, defining for a set $x$ that the measure $|\!|x|\!|$ is the least ordinal $\alpha$ such that there is a surjection from $V_\alpha$ onto $x$.
An uncountable cardinal $\kappa$ is a Löwenheim-Skolem (${\rm LS}$) cardinal if for every $\gamma<\kappa\leq\alpha$ and $x\in V_\alpha$, there is $\beta>\alpha$ and an elementary $X\prec V_\beta$ such that
The existence of ${\rm LS}$ cardinals is absolute throughout the generic multiverse [5]. It follows that if we can force choice over a ${\rm ZF}$-universe, then since its forcing extension has a proper class of ${\rm LS}$ cardinals, it must have a proper class of ${\rm LS}$ cardinals as well. Thus, in particular, ground model definability holds for any ${\rm ZF}$-universe, such as $L(\mathbb R)$, over which we can force the axiom of choice.
At the same time, it is known that there are ${\rm ZF}$ universes without any ${\rm LS}$-cardinals. Let's explain how. Usuba showed that if $\kappa$ is an ${\rm LS}$-cardinal, then the club filter on $\kappa^+$ must be $\kappa$-complete [5]. Asaf Karagila showed that every model $V$ of ${\rm ZFC}$ satisfying ${\rm GCH}$ has an extension to a model of ${\rm ZF}$ with the same cofinalities in which the club filter on every regular cardinal $\kappa$ fails to be $\sigma$-complete [6]. This model cannot have any ${\rm LS}$ cardinals because if some $\kappa$ were an ${\rm LS}$-cardinal in it, then $\kappa^+$ would be a regular cardinal (since cofinalities from $V$ are preserved) with a $\kappa$-complete, and hence $\sigma$-complete, club filter.
One important consequence of uniform ground model definability is that it makes it possible to define in a first-order way the collection of all grounds of a ${\rm ZFC}$-universe. A priori this is a collection of classes, a second-order notion, but using ground model definability we can write down a first-order formula $\psi(x,y)$ such that for every set $a$, the class $W_a=\{x\mid \psi(x,a)\}$ is a ground of $V$ and for every ground $W$ of $V$, there is a set $a$ such that $W=W_a$. The first-order definability of the collection of grounds makes it possible to study their structure in ${\rm ZFC}$. This has led to the very fruitful subject of set-theoretic geology, initiated by Fuchs, Hamkins, and Reitz, which explores the structure of grounds of a set-theoretic universe. Ground model definability for ${\rm ZF}$ would allow us to extend the same analysis to ${\rm ZF}$-universes, and Usuba's theorem has now made such an analysis possible for a large collection of models, namely those with a proper class of ${\rm LS}$ cardinals. We will have to wait to find out whether ground definability holds fully in ${\rm ZF}$.
The first author introduced the inner model called the stable core while investigating under what circumstances the universe $V$ is a class forcing extension of the inner model ${\rm HOD}$, the collection of all hereditarily ordinal definable sets (missing reference). He showed in (missing reference) that there is a robust $\Delta_2$-definable class $S$ contained in ${\rm HOD}$ such that $V$ is a class-forcing extension of the structure $\langle L[S],\in, S\rangle$, which he called the stable core, by an ${\rm Ord}$-cc class partial order $\mathbb P$ definable from $S$. Indeed, for any inner model $M$, $V$ is a $\mathbb P$-forcing extension of $\langle M[S],\in,S\rangle$, so that in particular, since ${\rm HOD}[S]={\rm HOD}$, $V$ is a $\mathbb P$-forcing extension of $\langle {\rm HOD},\in,S\rangle$.
Let's explain the result in more detail for the stable core $L[S]$, noting that exactly the same analysis applies to ${\rm HOD}$. The partial order $\mathbb P$ is definable in $\langle L[S],\in,S\rangle$ and there is a generic filter $G$, meeting all dense sub-classes of $\mathbb P$ definable in $\langle L[S],\in,S\rangle$, such that $V=L[S][G]$. All standard forcing theorems hold for $\mathbb P$ since it has the ${\rm Ord}$-cc. Thus, we get that the forcing relation for $\mathbb P$ is definable in $\langle L[S],\in,S\rangle$ and the forcing extension $\langle V,\in,G\rangle\models{\rm ZFC}$. However, this particular generic filter $G$ is not definable in $V$. To obtain $G$, we first force with an auxiliary forcing $\mathbb Q$ to add a particular class $F$, without adding sets, such that $V=L[F]$. We then show that $G$ is definable from $F$ and $F$ is in turn definable in the structure $\langle L[S][G], \in, S,G\rangle$, so that $L[S][G]=V$. This gives a formulation of the result as a ${\rm ZFC}$-theorem because we can say (using the definitions of $\mathbb P$ and $\mathbb Q$) that it is forced by $\mathbb Q$ that $V=L[F]$, where $F$ is $V$-generic for $\mathbb Q$, and (the definition of) $G$ is $\langle L[S],\in,S\rangle$-generic, and finally that $F$ is definable in $\langle L[S][G],\in,S,G\rangle$. Of course, a careful formulation would say that the result holds for all sufficiently large natural numbers $n$, where $n$ bounds the complexity of the formulas used.
Without the niceness requirement on $\mathbb P$ that it has the ${\rm Ord}$-cc, there is a much easier construction of a class forcing notion $\mathbb P$, suggested by Woodin, such that $V$ is a class forcing extension of $\langle {\rm HOD},\in, \mathbb P\rangle$. At the same time, some additional predicate must be added to ${\rm HOD}$ in order to realize all of $V$ as a class-forcing extension because, as Hamkins and Reitz observed in [1], it is consistent that $V$ is not a class-forcing extension of ${\rm HOD}$. To construct such a counterexample, we suppose that $\kappa$ is inaccessible in $L$ and force over the Kelley-Morse model $\mathcal L=\langle L_\kappa,\in, L_{\kappa+1}\rangle$ to code the truth predicate of $L_\kappa$ (which is an element of $L_{\kappa+1}$) into the continuum pattern below $\kappa$. The first-order part $L_\kappa[G]$ of this extension cannot be a forcing extension of ${\rm HOD}^{L_\kappa[G]}=L_\kappa$ (by the weak homogeneity of the coding forcing), because the truth predicate of $L_\kappa$ is definable there and this can be recovered via the forcing relation.
While the definition of the partial order $\mathbb P$ is fairly involved, the stability predicate $S$ simply codes the elementarity relations between sufficiently nice initial segments $H_\alpha$ (the collection of all sets with transitive closure of size less than $\alpha$) of $V$. Given a natural number $n\geq 1$, call a cardinal $\alpha$ $n$-good if it is a strong limit cardinal and $H_\alpha$ satisfies $\Sigma_n$-collection. The predicate $S$ consists of triples $(n,\alpha,\beta)$ such that $n\geq 1$, $\alpha$ and $\beta$ are $n$-good cardinals and $H_\alpha\prec_{\Sigma_n}H_\beta$. We will denote by $S_n$ the $n$-th slice of the stability predicate $S$, namely $S_n=\{(\alpha,\beta)\mid (n,\alpha,\beta)\in S\}$.
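In symbols, the stability predicate just defined is:

```latex
S=\bigl\{(n,\alpha,\beta)\ \bigm|\ n\geq 1,\ \alpha \text{ and } \beta \text{ are } n\text{-good cardinals, and } H_\alpha\prec_{\Sigma_n}H_\beta\bigr\},\qquad
S_n=\bigl\{(\alpha,\beta)\ \bigm|\ (n,\alpha,\beta)\in S\bigr\}.
```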
Clearly the stable core $L[S]\subseteq{\rm HOD}$, and the first author showed in (missing reference) that it is consistent that $L[S]$ is smaller than ${\rm HOD}$. The stable core has several nice properties which fail for ${\rm HOD}$, such as that it is partially forcing absolute and, assuming ${\rm GCH}$, is preserved by forcing to code the universe into a real (missing reference).
In order to motivate the many questions which arise about the stable core let us briefly discuss the set-theoretic goals of studying inner models.
The study of canonical inner models has proved to be one of the most fruitful directions of modern set-theoretic research. The canonical inner models, of which Gödel's constructible universe $L$ was the first example, are built bottom-up by a canonical procedure. The resulting fine structure of the models leads to regularity properties, such as the ${\rm GCH}$ and $\square$, and sometimes even absoluteness properties. But all known canonical inner models are incompatible with sufficiently large large cardinals, and indeed each such inner model is very far from the universe in the presence of sufficiently large large cardinals in the sense, for example, that covering fails and the large cardinals are not downward absolute.
The inner model ${\rm HOD}$ was introduced by Gödel, who showed that in a universe of ${\rm ZF}$ it is always a model of ${\rm ZFC}$. But unlike the constructible universe which also shares this property, ${\rm HOD}$ has turned out to be highly non-canonical. While $L$ cannot be modified by forcing, ${\rm HOD}$ can be easily changed by forcing because we can use forcing to code information into ${\rm HOD}$. For instance, any subset of the ordinals from $V$ can be made ordinal definable in a set-forcing extension by coding its characteristic function into the continuum pattern, so that it becomes an element of the ${\rm HOD}$ of the extension. Indeed, by coding all of $V$ into the continuum pattern of a class-forcing extension, Roguski showed that every universe $V$ is the ${\rm HOD}$ of one of its class-forcing extensions [2]. Thus, any consistent set-theoretic property, including all known large cardinals, consistently holds in ${\rm HOD}$. At the same time, the ${\rm HOD}$ of a given universe can be very far from it. It is consistent that every measurable cardinal is not even weakly compact in ${\rm HOD}$ and that a universe can have a supercompact cardinal which is not even weakly compact in ${\rm HOD}$ [3]. It is also consistent that ${\rm HOD}$ is wrong about all successor cardinals [4].
Does the stable core behave more like the canonical inner models or more like ${\rm HOD}$? Is there a fine structural version of the stable core, does it satisfy regularity properties such as the ${\rm GCH}$? Is there a bound on the large cardinals that are compatible with the stable core? Or, on the other hand, are the large cardinals downward absolute to the stable core? Can we code information into the stable core using forcing?
In this article, we show the following results about the structure of the stable core, which answer some of the aforementioned questions as well as motivate further questions about the structure of the stable core in the presence of sufficiently large large cardinals.
Measurable cardinals are consistent with the stable core.
Theorem: The stable core can have a discrete proper class of measurable cardinals.
We can code information into the stable core over $L$ or $L[\mu]$ using forcing.
Theorem: Suppose $\mathbb P\in L$ is a forcing notion and $G\subseteq \mathbb P$ is $L$-generic. Then there is a further forcing extension $L[G][H]$ such that $G\in L[S^{L[G][H]}]$ (the universe of the stable core). An analogous result holds for $L[\mu]$.
An extension of the coding results shows that the ${\rm GCH}$ can fail badly in the stable core.
Theorem: The ${\rm GCH}$ can fail at all regular cardinals in the stable core.
Measurable cardinals need not be downward absolute to the stable core.
Theorem: There is a forcing extension of $L[\mu]$ in which the measurable cardinal $\kappa$ of $L[\mu]$ remains measurable, but it is not even weakly compact in the stable core.
Although we don't know whether the stable core can have a measurable limit of measurables, the stable core has inner models with measurable limits of measurables, and much more. Say that a cardinal $\kappa$ is $1$-measurable if it is measurable, and, for $n < \omega$, $(n+1)$-measurable if it is measurable and a limit of $n$-measurable cardinals. Write $m_0^\#$ for $0^\#$ and $m_n^\#$ for the minimal mouse which is a sharp for a proper class of $n$-measurable cardinals, namely, an active mouse $\mathcal M$ such that the critical point of the top extender is a limit of $n$-measurable cardinals in $\mathcal M$. Here we mean mouse in the sense of [5] (Sections 1 and 2), i.e., a mouse has only total measures on its sequence. The mouse $m_n^{\#}$ can also be construed as a fine structural mouse with both total and partial extenders (see [6], Section 4).
Theorem: For all $n<\omega$, if $m_{n+1}^\#$ exists, then $m_n^\#$ is in the stable core.
Moreover, we obtain the following characterization of natural inner models of the stable core.
Theorem: Let $n < \omega$ and suppose that $m_n^\#$ exists. Then whenever $$C_1\supseteq C_2\supseteq\ldots\supseteq C_n$$ are class clubs of uncountable cardinals such that for every $1 < i\leq n$ and every $\gamma \in C_i$, \[\langle H_\gamma,\in, C_1,\ldots,C_{i-1}\rangle\prec_{\Sigma_1} \langle V,\in, C_1,\ldots,C_{i-1}\rangle,\] then $L[C_1,\ldots,C_n]$ is a hyperclass-forcing extension of a (truncated) iterate of $m_n^\#$.
The stable core, an inner model of the form $\langle L[S],\in, S\rangle$ for a simply definable predicate $S$, was introduced by the first author in [Fri12], where he showed that $V$ is a class forcing extension of its stable core. We study the structural properties of the stable core and its interactions with large cardinals. We show that the $\operatorname{GCH}$ can fail at all regular cardinals in the stable core, that the stable core can have a discrete proper class of measurable cardinals, but that measurable cardinals need not be downward absolute to the stable core. Moreover, we show that, if large cardinals exist in $V$, then the stable core has inner models with a proper class of measurable limits of measurables, with a proper class of measurable limits of measurable limits of measurables, and so forth. We show this by providing a characterization of natural inner models $L[C_1, \dots, C_n]$ for specially nested class clubs $C_1, \dots, C_n$, like those arising in the stable core, generalizing recent results of Welch [Wel19].
]]>I gave an invited talk at the 15th International Workshop on Set Theory in Luminy in Marseille, September 2019.
Talk Title: Chain conditions, unbounded colorings and the C-sequence spectrum.
Abstract: The productivity of the $\kappa$-chain condition, where $\kappa$ is a regular, uncountable cardinal, has been the focus of a great deal of set-theoretic research.
In the 1970’s, consistent examples of $\kappa$-cc posets whose squares are not $\kappa$-cc were constructed by Laver, Galvin, Roitman and Fleissner. Later, ZFC examples were constructed by Todorcevic, Shelah, and others. The most difficult case, that in which $\kappa = \aleph_2$, was resolved by Shelah in 1997.
In the first part of this talk, we shall present analogous results regarding the infinite productivity of chain conditions stronger than $\kappa$-cc. In particular, for any successor cardinal $\kappa$, we produce a ZFC example of a poset with precaliber $\kappa$ whose $\omega^{\mathrm{th}}$ power is not $\kappa$-cc. To do so, we introduce and study the principle $\textrm{U}(\kappa,\mu,\theta,\chi)$ asserting the existence of a coloring $c:[\kappa]^2\rightarrow\theta$ satisfying a strong unboundedness condition.
In the second part of this talk, we shall introduce and study a new cardinal invariant $\chi(\kappa)$ for a regular uncountable cardinal $\kappa$. For inaccessible $\kappa$, $\chi(\kappa)$ may be seen as a measure of how far away $\kappa$ is from being weakly compact. We shall prove that if $\chi(\kappa)>1$, then $\chi(\kappa)=\max(\mathrm{Cspec}(\kappa))$, where:
1. $\textrm{Cspec}(\kappa) := \{\chi(\vec{C}) \mid \vec{C}$ is a $C$-sequence over $\kappa\} \setminus \omega$, and
2. $\chi(\vec{C})$ is the least cardinal $\chi \leq \kappa$ such that there exist $\Delta \in [\kappa]^\kappa$ and $b:\kappa\rightarrow[\kappa]^\chi$ with $\Delta\cap\alpha\subseteq\bigcup_{\beta\in b(\alpha)}C_\beta$ for every $\alpha<\kappa$.
We shall also prove that if $\chi(\kappa)=1$, then $\kappa$ is greatly Mahlo, prove the consistency (modulo the existence of a supercompact) of $\chi(\aleph_{\omega+1})=\aleph_0$, and carry out a systematic study of the effect of square principles on the $C$-sequence spectrum.
In the last part of this talk, we shall unveil an unexpected connection between the two principles discussed in the previous parts, proving that, for infinite regular cardinals $\theta<\kappa$, $\theta\in\mathrm{Cspec}(\kappa)$ iff there is a closed witness to $\mathrm{U}(\kappa,\kappa,\theta,\theta)$.
This is joint work with Chris Lambie-Hanson.
Work was full of organizational stuff - everything from reorganizing git repos to organizing people to organizing reporting data. So in many ways uneventful, but for me usually a form of complementary exercise which gives me a productivity boost once all the end-of-quarter shenanigans are over.
I've been thinking about my media usage recently, in particular the role of podcasts and video hosting platforms (ok, mostly youtube at the moment).
For the past two decades, the internet gradually eliminated any need (of mine) for TV consumption, which seems odd since I was fairly addicted to it as a kid/ya. Besides the obvious (there's a ton of media with a single subscription to a streaming service, a ton more with multiple subscriptions, and virtually all media in gray areas), I was pondering "TV programming" recently. It's something that many friends still describe as pleasing (switch on the TV and just watch something) and I would usually argue against it. But recently I noticed how similarly subscriptions to YouTube channels work. This is especially impressive with more original work such as Druck, the German version of Skam, which timestamps its episodes and releases them at matching times of day. Reading about their approach (and watching a season, which was quite interesting in itself and as a parent) made me realize how I use my YouTube subscriptions like TV programming. It's ephemeral, passing me by, tuning in, maybe hopping to the next one. And yet it's way better than even TiVo-era TV as it gives both the ephemeral and the archival.
Similarly, podcasts (and again to some degree YouTube) have done much of the same for my radio consumption. Serious news stations like Deutschlandfunk, BBC 4, and various forms of NPR have been in my life since forever, and yet their consumption has been reduced to morning news and otherwise podcasts. In addition, podcasting covers anything from web industry to fiction, from comedy to whatever Der Weisheit is (basically 4 friends chatting). I struggled to find interesting music podcasts (as in: playing contemporary popular music) for a while but I eventually found a few I liked; YouTube music channels do the rest.
Anyhoo, just a reminder that this internet is wonderful, I guess.
I thought one way to use this format might be to just browse through Mastodon and see what I find (boosted or, gasp, original content). So here we go.
Christian posted a lovely 3D-printable Lᵖ norm balls thing:
David Eppstein linked to a glorious collection of Turing Complete things.
We finally went to Fridays for Future this week; the fucking least we can all do is to show up whenever we can and always support them.
Oh, my keyboard; I wanted to get onto that, didn't I.
]]>It felt haphazard, without a concrete plan. And that was great. In the first day, Tadatoshi Miyamoto and David (Asperó) presented two problems in the morning and in the early afternoon. We then had a discussion about them, and it was just a general discussion, that went very well.
Continue reading...]]>Anyway. I really like this idea of a weeknote which I first saw at Dave Rupert's and then at Baldur Bjarnason's. Let's give it a try and see how it feels.
Work has been mixed this week. A big chunk of in-depth work was finalizing (what feels like a countless number of) tests for a very old piece of code that never had any tests. As this code had grown into a little bit of a monster, I now feel much more in control of it and ready to rewrite/port it. I also got into the GitHub Actions beta this week which looks nice and should help automate a bunch of stuff that's being done by a much less natural GitHub app (client permitting, anyway).
Oh, and I had some interesting work helping some (print) designers wrap their heads around some web thing. That was a ton of fun and maybe it will turn into more, we'll see.
I've been having a fit of escapism and churning through Harry Dresden novels at a high pace; as re-readings go, these are still quite good. I still don't like some things but I do still like how he grew the universe, something so many fantasy series fail at. This year really has been a year of re-reading incidentally; there are worse things.
I had the yearly meeting of our daycare (which is set up as an association run by the parents). I'm the data privacy person for another year which has really been quite interesting (thanks, GDPR); after focusing on the digital side (where I'm more comfortable), it's time to focus on the physical side, where I look forward to learning a few new things. Otherwise, I was amazed how little drama we had; in the end, we seem to be good people, wanting to make things better.
Sam and I are back to regular meetings after a long summer which is just plain good. Besides good times, we are working on upgrades to mathblogging.org and boolesrings.org.
I also think it may be time to finally fix my laptop's keyboard. First the left ctrl key went (which was fun to relearn with the right ctrl key), then the tick and some numbers started to become iffy (which gets annoying) but with the letter E starting to act up I'm really coming to the end of the line. I don't know why I keep dragging it out. Let's see if I have an update in the next note, shall we?
]]>As tradition will have it, we begin our show by taking a closer look at our number.
173 is not just a prime, the sum of two squares of primes (2²+13²) and the sum of three primes (53+59+61). No, it is also a balanced prime (same gap to previous and following prime) and the 13th(!) Sophie Germain prime (since 2×173+1=347 is also prime).
Alas, 173 is also an odious number, which may sound rather bad but just means it has an odd number of 1's in binary (10101101).
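For readers who want to double-check the host's numerology, here is a quick stand-alone sketch in Python verifying the claims about 173 above (the `is_prime` helper is my own naive trial division, not from the post):

```python
def is_prime(n):
    """Naive trial-division primality test; plenty for numbers this small."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

n = 173
assert is_prime(n)
assert n == 2**2 + 13**2              # sum of two squares of primes
assert n == 53 + 59 + 61              # sum of three primes
# balanced prime: same gap to the previous and the following prime
prev_p = max(p for p in range(n) if is_prime(p))
next_p = min(p for p in range(n + 1, 2 * n) if is_prime(p))
assert n - prev_p == next_p - n
assert is_prime(2 * n + 1)            # Sophie Germain: 347 is prime
# odious: an odd number of 1's in the binary expansion 10101101
assert bin(n)[2:] == "10101101" and bin(n).count("1") % 2 == 1
# and it really is the 13th Sophie Germain prime
sg = [p for p in range(2, n + 1) if is_prime(p) and is_prime(2 * p + 1)]
assert sg.index(n) + 1 == 13
print("173 checks out")
```

Every assertion passes, so the number facts above all hold up.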
Now that you've warmed up, let us once again enter the decidedly wonderful, balanced madness of the mathematical blogging carnival.
Likely most people (or at least the most people) will already have seen the NYT's Kenneth Chang looking into Why Mathematicians hate that viral equation; but really who needs 8÷2(2+2) when you can so easily have drama with the Oxford Comma.
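As a footnote to the viral-equation drama: the whole fight is over operator precedence, something a programming language has to settle explicitly. A tiny (entirely unofficial) Python illustration of the two readings, noting that Python has no implied multiplication and evaluates division and multiplication left to right:

```python
# The left-to-right convention, which Python enforces:
left_to_right = 8 / 2 * (2 + 2)      # (8 / 2) * 4 = 16.0
# Reading 2(2+2) as one grouped unit gives the other camp's answer:
grouped = 8 / (2 * (2 + 2))          # 8 / 8 = 1.0
assert left_to_right == 16.0 and grouped == 1.0
```

The drama, in other words, is about notation, not arithmetic.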
In any case, make sure you head over to the Art of Research where Vi Hart shared Computation for Hands, Systems for Humans, taking you on the magic carpet ride that's Vi's hands "craving computation", combining hardware, software, systems thinking, VR and a ton of other ideas.
Before you continue to Ari Rubinsztejn, who explains Why Tracking Space Debris is so Hard (thanks, nonlinear dynamics!), step under the cover of the Undercover Economist Tim Harford, who wrote on the strange power of the idea of average, both good and bad.
Of course any mathematical topic is worth a deep dive, so head into the magical depths of the Math Vault for an extensive article on Long Division and Its Variants (for Integers). Once you're ready, jump out and get yourself back into Cantor's Paradise, where Jørgen Veisdal will let you in on the mathematics of Elo ratings, a glimpse at the history of the famous ranking system.
Before you lose your king or queen, let Richard Elwes ask you a question befitting Carroll's Red Queen: How Many Sides Does A Circle Have? and be sure to follow him off on a tangent or two. If all those tangents twirled you around too much, switch to a classical, solid blog post by the amazing John Cook, who will help you estimate vocabulary size with Heaps’ law, just in case you need to verify a posthumously discovered manuscript by Jane Austen.
To ease your way out of those particular mazes, take a sip and mingle over at this month's IMA editorial, if only to catch up on the Queen’s Birthday Honours List for 2019. And if you are one of those people who frequent the always dramatic birdsite, here are two math-focused threads for you:
Oh no, I've deliberately obscured large portions of this ruler and I need to make sure these vegetables are whole numbers of inches long or my toddler will eat me instead: a #RealWorldMaths thread pic.twitter.com/GWuZMry6Ti
— Christian Lawson-Perfect (@christianp) September 5, 2019
Unpopular take: As someone with a Master's degree in statistics and who teaches data science, I'm very much over the "data scientists are incompetent fools who just throw data in and get results from a computer with no critical thinking" takes. 1/
— Matt Brems (@matthewbrems) August 18, 2019
Also via Twitter, Francis Su shared his handout with Guidelines for good mathematical writing (PDF), which he says you should feel free to share with your students.
To wrap things up, take a carousel of math blogging perfection at Math Off The Grid, where Benjamin Leis's post on Cardano's Method starts from a new video from Mathologer (below), picks up a tweet by Patrick Honner, throws in a podcast with Sam Vandervelde, and tops it off with a pointer to Marden's Theorem to drag you into the carnival that is Wikipedia's mathematics articles.
That’s it for the beautiful month of September. Thanks to everyone who submitted a post! After almost 9 years of running
Be sure to stop by next month’s Carnival. You should submit your favorite blog posts/videos/content from the month of September. If you’d like to host an upcoming show, please get in touch with Katie.
]]>This summer we were moved to a new country. It’s hard to put an exact time or date on it. Let’s say it was the evening Anne had really bad indigestion and we didn’t have any Rennies, so secretly I cycled up to Sainsbury’s and was back in 20 minutes to surprise her. We laughed. It was funny. But that’s when we were moved, without realising it.
We looked around for some weeks, without appreciating what had happened. You peer through the mist and think you recognise familiar landmarks. Familiar features. Probably it’s just a short walk home. Or worst case a short bus ride. Nothing too bothersome. Nothing dramatic. This is just a visit, and we’ll be home by Christmas.
But no. It turns out we’ve changed country, and there is no way back. We’re refugees here. Trying to make the best of it, trying to make something work out. There are lots of people happy to help, and they’ve sorted us out with stuff, and that’s great. It turns out there are a couple of old friends also here, who have already settled in.
We’re beginning to find our way round, though there are only a few road signs so it’s easy to get lost. The weather is very changeable; you’re out in the sun and feeling good and suddenly the rain starts and it’s cold and miserable. It can be very cold. I’ve not found a weather forecasting app.
Sometimes we wander down to the beach and look out at the ocean. Somewhere over there is the old country, where life was very different. It’s hard not to think about that life, and wonder. But it’s not really helpful. It’s best to wander back up the beach, wander back into the new country, wander back to our new life as refugees. We’re going to make this work.
(I probably talk in cliches, and just murdered a perfectly good metaphor. But this will have to do.)
]]>
The idea is to have a workshop, where we actually work. This is contrary to the normal use of "workshop" (in set theory, at least, but I believe in most mathematical areas) which means a very small conference where almost all the participants are also speaking.
Continue reading...]]>I just got back from a conference at Banff International Research Station on “Groups and Geometries”. Many thanks to Martin Liebeck, Inna Capdebosq and Bernhard Muehlherr for organising a brilliant week.
I gave a talk at the conference entitled “The relational complexity of a finite primitive permutation group”, which you can view by clicking here. The abstract of the talk was the following:
Motivated by questions in model theory, Greg Cherlin introduced the idea of “relational complexity”, a statistic connected to finite permutation groups. He also stated a conjecture classifying those permutation groups with minimal relational complexity. We report on recent progress towards a proof of this conjecture. We also make some remarks about permutation groups with large relational complexity, and we explain how this statistic relates to others in the literature, notably base-size.
This work is joint with Pablo Spiga, Martin Liebeck, Francesca Dalla Volta, Francis Hunt and Bianca Lodà.
Erratum: at minute 39 of the talk I mis-stated a theorem. The correct statement is as follows:
Theorem (Gill, Lodà, Spiga) There exists a constant $c$ such that if $G$ acts primitively on a set $X$ of size $t$, then either
Joint work with Alejandro Poveda and Dima Sinapova.
Abstract. We introduce a class of notions of forcing which we call $\Sigma$-Prikry, and show that many of the known Prikry-type notions of forcing that center around singular cardinals of countable cofinality are $\Sigma$-Prikry. We show that given a $\Sigma$-Prikry poset $\mathbb P$ and a name for a non-reflecting stationary set $T$, there exists a corresponding $\Sigma$-Prikry poset that projects to $\mathbb P$ and kills the stationarity of $T$. Then, in a sequel to this paper, we develop an iteration scheme for $\Sigma$-Prikry posets. Putting the two works together, we obtain a proof of the following.
Theorem. If $\kappa$ is the limit of a countable increasing sequence of supercompact cardinals, then there exists a cofinality-preserving forcing extension in which $\kappa$ remains a strong limit, every finite collection of stationary subsets of $\kappa^+$ reflects simultaneously, and $2^\kappa=\kappa^{++}$.
Downloads:
]]>
Getting a PhD in mathematics is not really about getting the PhD itself. It's more about getting much better at learning mathematics. So getting a PhD in mathematics will improve your ability to study more mathematics and sharpen your skills.
Continue reading...]]>I attended the 2007 CUMC at Simon Fraser University and the 2008 CUMC at the University of Toronto (where I would go on to complete my PhD and then, eventually, to work).
In the summer of 2018, while I was a Post Doc at the University of Calgary, we hosted a “mini pre-CUMC conference” for undergrads to give their presentations ahead of time. It was so successful that I ran an expanded version of this at the University of Toronto for CUMC 2019.
I think these events and workshops are important for all students, but in particular it helps break down barriers to entry for marginalized students. With that in mind, I’m sharing my resources, thoughts and experiences about our pre-CUMC conference with the hope that other universities and colleges in Canada will benefit.
We had four main events to support students before the CUMC. All of the events were open to students from all three U of T campuses (and we even had a participant from the University of Ontario Institute of Technology in Oshawa).
About two months before the conference, the Undergraduate Math union at the University of Toronto announced that they would help some students with funding for the CUMC. Since the funding required a proposal of the talk that would be given, I volunteered to help anyone (especially beginners) who had questions or wanted advice.
Four students took me up on the offer.
It’s important to make sure that support and funding is accessible to those who need it, and that we remove any obstacles to participation. Students have many anxieties about presenting math outside their comfort zone, and those of us with experience can help by answering questions and setting people up for success.
Despite being a trained improvisor, and a reasonably outgoing person, I’ve always had challenges networking at conferences. We held a one-hour panel followed by a one-hour workshop to help students with networking and the challenges of attending a conference.
Here is the list of activities together with the learning objectives for each hour.
The panel was made up of undergrads with previous experience at math conferences. The math union helped recruit four panelists from the University of Toronto, and I got testimonials from two former students from the University of Calgary. One of the testimonials was a written document, and one was a 3 minute video.
The format of the panel worked quite nicely. We used Mentimeter to gather anonymous questions from the audience (through their devices).
Each panelist chose one of the questions to answer, then in order each other panelist could give their take on it. We went through two full rounds of this (so we got to discuss 8 questions). This allowed the panelists to speak on topics they felt they had the most expertise in, and allowed all of the panelists equal time. After the two rounds we did a final “lightning round” for any outstanding questions the audience had that didn’t get answered.
Cross-cohort bonds are important for students new to math. It demystifies the experience, and makes the “rules” explicit rather than implicit. Senior students develop their mentorship and leadership. Plus, it was fun and social, and people made new friends. Universities can be isolating places!
The workshop had lots of social interaction and discussion built into it. In particular we allowed students to practice introducing themselves to (math) strangers in a way that imitated what a real conference would be like. This gave them a low(er) stakes way to practice their social interactions. The sharing afterwards was especially valuable.
The math union organized time and space for students to give practice talks ahead of our mock conference. In the words of the organizer:
“[These are] raw practice talks, we’re doing it in small groups and you’ll get a chance to run through your talk several times and get feedback. [At the mock conference], you get a chance to present a “good copy” of your talk to a more open audience in a mock conference setting.”
These had six or so participants, and gave students a low stakes way to experiment with their presentations.
The culmination of these events was the mock conference held the Friday before the CUMC. We had 8 presentations and needed two rooms.
Here are the feedback forms I made for the event, and the TeX source files. Both of these are open access; please use them! Each audience member filled out an anonymous feedback form for each presenter. The feedback forms are structured to help the presenter find the strongest parts of their presentation. There's also a blank section at the bottom in which we encouraged each presenter to ask the audience for targeted feedback. For example, one presenter asked about the technical intensity of their talk.
It’s important to give presenters agency in their own feedback. Presenting in front of peers makes you vulnerable to criticism. Giving the presenter choices allows them to determine for themselves how vulnerable they would like to be.
Overall, I was very happy with how the events went, and students seemed to share in this impression. They were well attended, and the mock conference even drew in some faculty and other members of the math department.
I will definitely run this event in future years. Some goals for next time:
If you’re interested in running one of these workshops or mock-conferences and would like to connect with us, please let me know.
What’s your one simple trick for networking at conferences? What does your university do to help students prepare for conferences? What could I be doing to improve equity and accessibility to these events?
Special thanks to Alex Karapetyan (and the math union), Parker Glynn-Adey, Lyra Qian, Jordyn Dyck, and Kristine Bauer for your help in making these events happen.
All photos used with permission from Pixabay.
]]>
Just as a general caveat for the set theory notes, since all the students in the course were also my students in the basic set theory course that I taught with Azriel Levy (yes, that Azriel Levy, and yes it was quite an awesome experience) and there I managed to cover some fairly nontrivial things in that course, these notes might feel as if there are some gaps there, or that I skip here and there over some information.
Continue reading...]]>In the early 20th century, optimism ran high that a formal development of mathematics through first-order logic within appropriate axiom systems would set up mathematical knowledge as incontrovertible truth. With the introduction of set theory, all mathematics could now be unified within a single field whose fundamental properties were codified by the Zermelo-Fraenkel axioms ${\rm ZFC}$. But already within a few decades, discoveries were made which suggested that the concept of set was prone to relativity. Löwenheim and Skolem showed that if there is a model of ${\rm ZFC}$, then there is a countable model of ${\rm ZFC}$. This countable model would, of course, think that its reals are uncountable, demonstrating the non-absoluteness of countability. Gödel proved the Completeness Theorem, an immediate consequence of which was the existence of nonstandard models of set theory, even with nonstandard natural numbers, demonstrating the non-absoluteness of well-foundedness. Soon afterwards, Gödel's First Incompleteness Theorem showed the general limitations inherent in formal methods: no computable theory extending ${\rm ZFC}$ could decide the truth of all set theoretic assertions. In the next few decades came forcing, large cardinals, canonical inner models, and a model theoretic treatment of ${\rm ZFC}$. Set theory ever since has indisputably been the study of a multiverse of universes, each instantiating its own concept of set.
Cohen's technique of forcing showed how to extend a given universe of sets to a larger universe satisfying a desired list of properties. Forcing could be used to construct universes in which the continuum was any cardinal of uncountable cofinality (including singular cardinals), indeed the continuum function on the regular cardinals could be made to satisfy any desired pattern (modulo a few necessary restrictions). There could be universes with or without a Suslin line, universes with non-constructible reals, universes whose cardinal characteristics had different values and different relationships.
At the same time, set theorists started exploring the consistency strength hierarchy of large cardinal axioms asserting the existence of improbably large infinite objects. Other set theoretic assertions, no matter how unrelated to large cardinals, inevitably fell somewhere inside the hierarchy. Set theorists could now combine forcing with large cardinals to produce universes in which large cardinals were compatible with other desired properties. Many of the larger large cardinals implied that the universe had transitive sub-universes (inner models) into which it would elementarily embed. Such sub-universes, obtained from large cardinals or through other considerations, joined the menagerie of mathematical worlds populating the field.
In yet another direction, motivated by Gödel's construction of $L$, set theorists began exploring how to build ever more sophisticated canonical inner models, universes built bottom-up from a recipe specifying what kind of sets should exist.
Finally, a model theoretic treatment of ${\rm ZFC}$ produced some unexpected results. Given two models $M$ and $N$ of ${\rm ZFC}$, let us say that $N$ is an end-extension of $M$ if $M$ is transitive in $N$, let us say that $N$ is a top-extension of $M$ if every element of $N\setminus M$ has higher rank than all ordinals of $M$. Keisler and Morley showed that every countable model of ${\rm ZFC}$ has a proper elementary top-extension (Keisler-Morley Extension Theorem [1]). Thus, any countable model could be grown in height without changing its theory. Barwise showed that every countable model of ${\rm ZFC}$ has an end-extension to a model of $V=L$ (Barwise Extension Theorem [2]). Thus, sets could become constructible in a larger universe.
What does set theoretic practice tell us about the nature of sets? Is there the one true universe of sets, the mathematical world in which all mathematics lives, but our epistemic limitations have so far prevented us from understanding its structure? Or is there simply no one true universe, and instead the multiverse of universes encountered by set theorists is the true mathematical reality?
The Universist position in the philosophy of set theory asserts that there is the one true universe of sets. Incompleteness is simply a by-product of the formal methods we are forced to use to investigate mathematics, not an indication of some inherent relativity in the nature of sets. After all, in spite of a plethora of nonstandard models of arithmetic, most mathematicians would agree that the natural numbers are the only true model of arithmetic. The Peano Axioms actually do a rather excellent job of capturing the fundamental properties of the natural numbers; they are bound by the incompleteness phenomenon, but number theorists don't often run into statements independent of them. This certainty with regard to number theory comes from the deep intuition we believe we hold about the nature of the natural numbers. The goal of set theory should then be to obtain the same deep intuition about the nature of sets, so that ${\rm ZFC}$ can be extended to a theory deciding most of the more significant properties of sets. Woodin, for instance, believes that such a theory will come from a construction of a sufficiently sophisticated canonical inner model compatible with all known large cardinals.
The Multiversist position in the philosophy of set theory asserts that there is simply no one true universe of sets. All universes of sets discovered by set theorists: forcing extensions, canonical and non-canonical inner models, models with and without large cardinals, have equal ontological status. They all populate the multiverse of mathematical worlds, each instantiating its own equally valid concept of set. The Multiversist position splits along the lines of just how non-absolute the notion of set should be under this view. Can the multiverse contain models of different heights? Are ill-founded models allowed? The Radical Multiversist position, espoused most notably by Hamkins, asserts that all these models belong in the multiverse. Indeed, once you adopt the position that there is no absolute set theoretic background, no background ought to be better than any other, and in fact each should be as bad as the next in the following sense. In the Hamkins perspective, each universe in the multiverse will be seen to be countable and ill-founded from the perspective of a better universe. In particular, under this view the universist is mistaken even about the nature of the natural numbers. The Hamkins position is captured by his Multiverse Axioms.
Realizability: If $M$ is a universe and $N$ is a definable class in $M$ such that $M$ believes that $N\models{\rm ZFC}$, then $N$ is a universe. (This includes set models, inner models, ill-founded models, etc.)
Forcing Extension: If $M$ is a universe and $\mathbb P\in M$ is a forcing notion, then there is a universe $M[G]$ where $G\subseteq\mathbb P$ is $M$-generic.
Class Forcing Extension: If $M$ is a universe and $\mathbb P$ is a definable ${\rm ZFC}$-preserving forcing notion in $M$, then there is a universe $M[G]$ where $G\subseteq \mathbb P$ is $M$-generic.
Reflection: If $M$ is a universe, then there is a universe $N$ such that $M\prec V^N_\theta\prec N$ for some rank initial segment $V^N_\theta$ of $N$. (Recall the Keisler-Morley Extension Theorem.)
Countability: Every universe $M$ is a countable set in another universe $N$.
Well-founded Mirage: Every universe $M$ is a set in another universe $N$ which thinks that $\mathbb N^M$ is ill-founded.
Reverse Embedding: If $M$ is a universe and $j:M\to N$ is an elementary embedding definable in $M$, then there is a universe $M^*$ and an elementary embedding $j^*:M^*\to M$ definable in $M^*$ such that $j=j^*(j^*)$. (Every elementary embedding has already been iterated many times.)
Absorption into $L$: Every universe $M$ is a transitive set in another universe $N$ satisfying $V=L$. (This is stronger than the Barwise Extension Theorem.)
A way to investigate the viability of a collection of multiverse axioms, without coming up with a formal background beyond ${\rm ZFC}$, is by considering toy multiverses of ${\rm ZFC}$ models. Assuming that ${\rm ZFC}$ is consistent, a toy multiverse is some set collection of models of ${\rm ZFC}$. Does a given collection of multiverse axioms have a toy multiverse, does it have a natural toy multiverse? With Hamkins, we showed that his Multiverse Axioms, radical as their assertions may seem, have a very natural toy multiverse, namely the collection of all countable computably saturated models [3]. In fact, every model in any toy multiverse satisfying the Hamkins Multiverse Axioms must be computably saturated because every model of ${\rm ZFC}$ that is an element of a model of ${\rm ZFC}$ with an ill-founded $\omega$ is computably saturated (see the previous post).
In a recent joint work with Godziszewski, Meadows, and Williams, we consider various weakenings of the Hamkins Multiverse Axioms that have toy multiverses not consisting entirely of computably saturated models. First, we consider the Hamkins Multiverse Axioms with a weak version of well-founded mirage which asserts that every universe in the multiverse is merely ill-founded, not necessarily at $\omega$, from the perspective of a better universe.
Weak well-founded mirage: Every universe $M$ is a set in another universe which thinks that its membership relation is ill-founded.
Recall that if there is a transitive model of ${\rm ZFC}$, then the Cohen-Shepherdson model $L_\alpha$ is the minimum transitive model of ${\rm ZFC}$.
Theorem: (G., Godziszewski, Meadows, Williams) The collection in $L_\alpha$ of all models $M$ such that $L_\alpha$ thinks that $M$ is a countable model of ${\rm ZFC}$ satisfies:
Next we consider weakening the axioms to the so-called covering versions, where instead of demanding that a universe be a set inside another universe, we ask that each universe have a cover: an end-extension that is a set in a better universe.
Covering Countability: For every universe $M$, there is a universe $N$ and a countable set $A\in N$ with $M\subseteq A$.
Covering Well-founded Mirage: For every universe $M$, there is a universe $N$ and a model $M^*\in N$ end-extending $M$ such that $N$ thinks that $\mathbb N^{M^*}$ is ill-founded.
Covering Absorption into $L$: For every universe $M$, there is a universe $N\models{\rm ZFC}+V=L$ and a model $M^*\in N$ end-extending $M$.
Theorem: (G., Godziszewski, Meadows, Williams) There is a toy multiverse, not all of whose models are computably saturated, satisfying:
In this post, I purposely didn't tell you what my philosophical position is on all of this. I will leave it up to the reader to guess.
Let's call a model of ${\rm ZFC}$ $\omega$-nonstandard if it has nonstandard natural numbers. In the model theory of $\omega$-nonstandard models of ${\rm ZFC}$ (as in the model theory of nonstandard models of ${\rm PA}$), one of the most important notions is the standard system, introduced by Friedman in [2]. The standard system of an $\omega$-nonstandard model $M$ of ${\rm ZFC}$ is the collection of subsets of $\mathbb N$ consisting of the traces of sets in $M$ on the natural numbers: $${\rm SSy}(M)=\{A\cap \mathbb N\mid A\in M\}.$$

Standard systems are clearly Boolean algebras. Standard systems are also closed under relative computability: whenever $A$ is in the standard system and $B$ is computable from $A$, then $B$ is in it as well. To see this, fix $A\in {\rm SSy}(M)$ and let $B$ be computed from $A$ by a program $p$. We have $A=A^*\cap \mathbb N$ for some $A^*\in M$, and we can assume without loss of generality that $A^*\subseteq\mathbb N^M$. The computation of the program $p$ with oracle $A^*$ inside $M$ agrees with the actual computation of $p$ with oracle $A$ on the standard part, which is $B$. In particular, it follows that a standard system contains all computable sets, and so computable theories such as ${\rm PA}$ and ${\rm ZFC}$ are (coded) in the standard system.

Finally, whenever a set $T$ in a standard system codes an infinite binary tree, the standard system must have some infinite branch through $T$. Such a set $T$ must have come from a set $T^*\in M$, which has to look like a binary tree up to some nonstandard level (because otherwise the true natural numbers would be definable in $M$). So we fix some node $c$ on a nonstandard level of $T^*$, and its predecessors in $T^*$ give a branch through $T$.

Observe also that if $M$ and $N$ are $\omega$-nonstandard models of ${\rm ZFC}$ such that $\mathbb N^M$ is an initial segment of $\mathbb N^N$, then ${\rm SSy}(M)\subseteq {\rm SSy}(N)$.
The reason is that if a set $A\in {\rm SSy}(M)$, then there is a nonstandard natural number $a\in M$ such that the characteristic function of $A$ is coded on the standard part of the binary expansion of $a$, and indeed $M$ has such numbers arbitrarily low above the standard $\mathbb N$.
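The coding trick here works in ordinary finite arithmetic too. As a toy analogy (my own illustration, not from the text): the characteristic function of a finite set of naturals can be packed into the binary expansion of a single number, just as a nonstandard number's standard binary digits trace out a set in the standard system.

```python
def code(A):
    """Code a finite set A of naturals into one number a whose
    i-th binary digit is 1 exactly when i is in A."""
    return sum(2 ** i for i in A)

def decode(a):
    """Recover the coded set from the binary expansion of a."""
    return {i for i in range(a.bit_length()) if (a >> i) & 1}

A = {0, 2, 3, 7}
a = code(A)
assert a == 0b10001101  # digits 0, 2, 3 and 7 are set
assert decode(a) == A
```

In the model-theoretic setting the number $a$ is nonstandard and only the standard part of its binary expansion matters; here everything is finite, so this is only an analogy for the coding mechanism.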
An $\omega$-nonstandard model $M\models{\rm ZFC}$ is said to be ${\rm SSy}(M)$-saturated if for every type $p(\bar x,\bar y)$ (coded) in ${\rm SSy}(M)$, whenever there is $\bar a\in M$ such that $p(\bar x,\bar a)$ is finitely realizable, then $p(\bar x,\bar a)$ is realized. Since ${\rm SSy}(M)$ has all the computable sets, ${\rm SSy}(M)$-saturated models are computably saturated. But the converse holds as well. To see this, fix a type $p(\bar x,\bar y)$ in ${\rm SSy}(M)$ and let $A\in M$ be such that $A\cap\mathbb N=\{\ulcorner\varphi\urcorner\mid \varphi\in p(\bar x,\bar y)\}$. Suppose that for $\bar a\in M$, $p(\bar x,\bar a)$ is finitely realizable. Consider the computable type $$p^*(\bar x,\bar y,z)=\{\varphi(\bar x,\bar y)\leftrightarrow \ulcorner\varphi\urcorner\in z\mid \varphi\text{ is a formula}\}.$$ Since $p(\bar x,\bar a)$ is finitely realizable, so is $p^*(\bar x,\bar a,A)$, and a realization of $p^*(\bar x,\bar a,A)$ also realizes $p(\bar x,\bar a)$. Standard system saturation was introduced by Wilmers in an unpublished 1975 thesis.
We can use standard system saturation to establish several key properties of computably saturated models. The standard system of a computably saturated model contains all the types of its elements ($\text{tp}^M(\bar a)$ for $\bar a\in M$), and in particular its theory. To see this, suppose $M$ is ${\rm SSy}(M)$-saturated and $\bar a\in M$. We need to show that there is $A\in M$ such that $A\cap \mathbb N=\{\ulcorner\varphi\urcorner\mid M\models\varphi(\bar a)\}$. Consider the computable type $$p(x,\bar y)=\{\varphi(\bar y)\leftrightarrow \ulcorner\varphi\urcorner\in x\mid\varphi\text{ is a formula}\}.$$ Since $M$ has $\Sigma_n$-truth predicates for every true natural number $n$, $p(x,\bar a)$ is finitely realizable in $M$, and thus there is $A\in M$ realizing $p(x,\bar a)$. Once we have that all types are in the standard system, we obtain a remarkable characterization of countable computably saturated models.
Theorem: Countable computably saturated models $M$ and $N$ of ${\rm ZFC}$ are isomorphic if and only if they have the same theory and the same standard system. Indeed, if $\text{tp}^M(\bar a)=\text{tp}^N(\bar b)$, then there is an isomorphism taking $\bar a$ to $\bar b$.
Proof: This is a back-and-forth argument. Let's extend the partial isomorphism taking $\bar a$ to $\bar b$ one more step. Fix $c\in M$. Then $\text{tp}^M(c,\bar a)\in {\rm SSy}(M)={\rm SSy}(N)$, and the type $p(x,\bar b)=\{\varphi(x,\bar b)\mid \varphi\in \text{tp}^M(c,\bar a)\}$ is finitely realizable in $N$ because $\text{tp}^M(\bar a)=\text{tp}^N(\bar b)$. Thus, by standard system saturation, there is $d\in N$ such that $\text{tp}^M(c,\bar a)=\text{tp}^N(d,\bar b)$.
Corollary: Countable computably saturated models $M\models{\rm ZFC}$ have many automorphisms. Whenever $a,b\in M$ are such that $\text{tp}(a)=\text{tp}(b)$, then there is an automorphism of $M$ taking $a$ to $b$.
Computably saturated models of ${\rm ZFC}$ arise naturally as follows. If $M\models{\rm ZFC}$ is an element of an $\omega$-nonstandard model $N\models{\rm ZFC}$, then $M$ is computably saturated. To see this, fix a computable type $p(\bar x,\bar y)$ and a tuple $\bar a\in M$ such that $p(\bar x,\bar a)$ is finitely realizable. Also fix a set $A\in N$ such that $A\cap \mathbb N$ codes $p(\bar x,\bar y)$. Since $M$ is a set in $N$, $N$ has a truth predicate for $M$, which by absoluteness of satisfaction must be correct for standard formulas. The model $N$ sees, for every true natural number $n$, that $M$ realizes all $\varphi(\bar x,\bar a)$ such that $\ulcorner\varphi\urcorner\in A\cap n$. But then there must be some nonstandard $c\in N$ such that $N$ thinks that $M$ realizes all $\varphi(\bar x,\bar a)$ such that $\ulcorner\varphi\urcorner\in A\cap c$. Any such realization $\bar b$ realizes $p(\bar x,\bar a)$ because $A\cap c$ includes all standard assertions.
Here are some other remarkable results about computably saturated models of ${\rm ZFC}$.
Theorem: Every computably saturated model $M\models{\rm ZFC}$ has an elementary rank initial segment $V_\alpha^M\prec M$ [3].
Proof: Consider the computable type $p(x)$ consisting of the assertions $$\forall \ulcorner\varphi\urcorner\in\Sigma_n\,\bigl(\text{Tr}_{\Sigma_n}(\ulcorner\varphi\urcorner)\leftrightarrow \text{Tr}_{\Delta_0}(\ulcorner x\models\varphi\urcorner)\bigr)$$ for every $n$ (where $\text{Tr}_{\Sigma_n}$ is a $\Sigma_n$-truth predicate) together with the assertion "$x$ is a rank initial segment". The type is finitely realizable by the reflection theorem, and any realization is an elementary rank initial segment $V_\alpha^M\prec M$.
The following observation was the key to our theorem with Joel Hamkins that the collection of all countable computably saturated models satisfies the Hamkins Multiverse Axioms [4].
Theorem: Every countable computably saturated model of ${\rm ZFC}$ has an isomorphic copy of itself as an element which it thinks is countable and $\omega$-nonstandard.
Proof: Suppose $N$ is countable and computably saturated. The theory $\mathcal T$ of $N$ is in ${\rm SSy}(N)$, and so we can fix a theory $\mathcal T^*\in N$ such that $\mathcal T^*\cap\mathbb N=\mathcal T$. The model $N$ sees that $\mathcal T^*\cap n$ is consistent for every true natural number $n$, and therefore $N$ has some nonstandard natural number $c$ such that it thinks that $\mathcal T^*\cap c$ is consistent. Thus, $N$ has a model of $\mathcal T^*\cap c$, and indeed $N$ thinks that $\mathcal T^*\cap c$ has a countable $\omega$-nonstandard model $M$. Since $\mathbb N^N$ is an initial segment of $\mathbb N^M$, the two models have the same standard system, and by construction they have the same theory. Moreover, $M$ is computably saturated because it is an element of the $\omega$-nonstandard model $N$. Therefore $M\cong N$ by the theorem above.
For the final result I want to mention, let's recall that a truth predicate for an $\omega$-nonstandard model $M\models{\rm ZFC}$ is a subset $T\subseteq M$ such that in the structure $\langle M,\in,T\rangle$, $T$ satisfies the Tarskian truth conditions for $\langle M,\in\rangle$. In particular, $T$ must be a truth predicate for standard formulas. A set $T\subseteq M$ is a partial truth predicate if in $\langle M,\in,T\rangle$, there is a nonstandard number $c$ such that $T$ restricted to formulas of length less than $c$ satisfies Tarskian truth conditions.
Theorem: The following are equivalent for a countable $\omega$-nonstandard $M\models{\rm ZFC}$.
The same result for models of ${\rm PA}$ was shown by Kotlarski, Krajewski, and Lachlan [5], [6]. In his dissertation, Bartosz Wcisło showed that a truth predicate can be used to define a partial truth predicate preserving ${\rm PA}$, and Michał Godziszewski observed that with this new result the theorem extends to models of ${\rm ZFC}$. Note that it is impossible to extend condition $(3)$ to say that $\langle M,\in,T\rangle\models{\rm ZFC}$ because any such $M$ would be a model of $\text{Con}({\rm ZFC})$, while there are countable computably saturated models of $\neg\text{Con}({\rm ZFC})$.
I’ve been re-reading parts of Carter’s “Finite groups of Lie type” and Malle-Testerman’s “Linear algebraic groups and finite groups of Lie type” with a view to understanding the theory of maximal tori in finite groups of Lie type.
In this post I want to use the theory in those books to write down the orders of the maximal tori of $A_2(q)$, ${^2A_2(q)}$ and $G_2(q)$. I wanted to also do ${^2G_2(q)}$ but, so far, I haven’t managed to write things down properly for the Ree and Suzuki groups, so I’ll exclude these from what follows.
The general set-up is as follows: $G$ is a simple linear algebraic group, and $F:G\to G$ is a Steinberg endomorphism, i.e. some power $F^m:G\to G$ is a Frobenius endomorphism of $G$. (Some of this theory works more generally – for $G$ connected reductive – and, in particular, this can be important when one studies centralizers inside simple linear algebraic groups… But I'm not going there just now.) Now a theorem of Steinberg asserts that $G^F$, the set of fixed points of $F$, is finite [MT, Theorem 21.5] – such a group is an example of a finite group of Lie type.
Now the Lang-Steinberg theorem asserts that the map $L:G\to G, \, g\mapsto F(g) g^{-1}$ is surjective [MT, Theorem 21.7]. This theorem then implies that $G$ contains an $F$-stable maximal torus $T$ inside an $F$-stable Borel subgroup $B$. Since $T$ is $F$-stable, $N_G(T)$ is also $F$-stable, and so $F$ naturally acts on the Weyl group $W=N_G(T)/T$ of $G$. Similarly, $F$ acts on the character group $X:=X(T)$ via $\chi\mapsto \chi\circ F$.
We will need the notion of $F$-conjugacy in $W$: if $w_1, w_2\in W$, then $w_1$ is $F$-conjugate to $w_2$ if there exists $g\in W$ such that $w_1=F(g)w_2g^{-1}$.
Let $\Phi\subset X$ be the root system of $G$ with positive system $\Phi^+$ with respect to $T$ and $B$. In what follows we write $X_\mathbb{R}:=X\otimes_{\mathbb{Z}}\mathbb{R}$.
Now the first three results of [MT, Section 22.1] imply that $F$ acts on $X_\mathbb{R}$ as $q\phi$, where $q$ is a positive real number and $\phi$ is a linear map of finite order which normalizes $W$ and permutes the rays spanned by the positive roots.
It is important to note that, in principle, $q$ is a fractional power of $p$ (although, in fact, it will be integral except when $G^F$ is Ree or Suzuki). Note, too, that this set-up unambiguously defines the real number $q$ associated to our finite group of Lie type – for certain families (e.g. the unitary groups), the value of $q$ varies with convention, whereas here it is clear cut.
We have set up all the necessary parameters associated with our group of Lie type. Now let's study the maximal tori: we follow [MT, Chapter 25]. First off, we note that, since the scalar $q$ acts trivially on $W$, the notion of $F$-conjugacy is the same as $\phi$-conjugacy (where $F=q\phi$).
The following principles are important: first, the $G^F$-conjugacy classes of $F$-stable maximal tori of $G$ are in one-to-one correspondence with the $F$-conjugacy classes of $W$; second, the order of the fixed-point group of an $F$-stable maximal torus can be computed inside $X_\mathbb{R}$ from the corresponding class.
These correspondences follow from the Lang-Steinberg theorem. More precisely, the first correspondence is as follows: if $gTg^{-1}$ is $F$-stable, then it corresponds to the element $w:=g^{-1}F(g)T\in N_G(T)/T=W$. We are then able to write $T_w$ for the conjugate $gTg^{-1}$. Note that $T_1$ corresponds to an $F$-stable maximal torus in an $F$-stable Borel subgroup. Now [MT, Prop. 25.3] asserts: $$|(T_w)^F| \;=\; \bigl|\det\nolimits_{X_\mathbb{R}}(q\phi w-1)\bigr|.$$
Specific calculations now follow. These can be confirmed using Kantor-Seress “Prime power graphs for groups of Lie type”.
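Before the specific cases, here is the common shape of all of these computations (my own rephrasing, assuming the formula $|T_w^F|=|\det_{X_\mathbb{R}}(q\phi w-1)|$ of [MT, Prop. 25.3]): since $|\det(\phi w)|=1$, one may instead expand a characteristic-polynomial-style determinant.

```latex
% Since |det(phi w)| = 1, the order is a product over eigenvalues:
\[
  |T_w^F| \;=\; \bigl|\det\nolimits_{X_\mathbb{R}}(q\,\phi w - 1)\bigr|
          \;=\; \bigl|\det\nolimits_{X_\mathbb{R}}\bigl(q - (\phi w)^{-1}\bigr)\bigr|
          \;=\; \prod_{\varepsilon} |q - \varepsilon|,
\]
\[
  \text{where $\varepsilon$ runs over the eigenvalues of $(\phi w)^{-1}$;
  e.g.\ $w=1$, $\phi=1$ gives the split torus of order $(q-1)^r$.}
\]
```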
We record the size of the maximal tori for $A_2(q)$. Note that, here and below, the isogeny class does not matter – so, in this case, these calculations are valid for ${\rm PGL}_3(q)$ and ${\rm SL}_3(q)$.
We use the fact that the fundamental roots of $A_2$ – labelled $\alpha$ and $\beta$ – form a basis for $X\otimes\mathbb{R}$. With respect to this basis, the simple reflections act as $$s_\alpha=\begin{pmatrix}-1&1\\0&1\end{pmatrix},\qquad s_\beta=\begin{pmatrix}1&0\\1&-1\end{pmatrix}.$$ In this case $\phi$ is trivial, so we just need to write down $\det(q-w^{-1})$ for a representative $w$ of each conjugacy class of $W\cong S_3$. The possibilities are as follows: the identity gives $(q-1)^2$, a reflection (e.g. $s_\alpha$) gives $q^2-1$, and a Coxeter element $s_\alpha s_\beta$ (of order $3$) gives $q^2+q+1$.
We record the size of the maximal tori for ${^2A_2}(q)$. The root system is as before, and we have the same value for $q$, but this time $\phi$ is non-trivial.
Using the same basis as before – ${\alpha, \beta}$ – we can write $\phi$ as $\begin{pmatrix}0&1\\1&0\end{pmatrix}$. This is just $\phi$ acting on the Dynkin diagram, swapping $\alpha$ and $\beta$. (Stupid comment: I’ve never cottoned on to the fact, hitherto, that $\phi$ is also an automorphism of the root system. In particular it normalizes the Weyl group, which here is $W\cong D_6$. So we get $\langle W, \phi\rangle \cong D_{12}$. This is clearly true in general.)
So now we need to write down $\det(q-(\phi w)^{-1})$. The possibilities are as follows: $w=1$ gives $q^2-1$ (as $\phi$ has eigenvalues $\pm 1$); $w=s_{\alpha+\beta}$ gives $(q+1)^2$ (as $\phi s_{\alpha+\beta}=-1$); and $w=s_\alpha$ gives $q^2-q+1$ (as $\phi s_\alpha$ has order $6$).
Note that we need to choose different elements $w$ because the $\phi$-conjugacy classes in $W$ are different to the usual conjugacy classes.
Recall that $G_2(q)$ has order $q^6(q^6-1)(q^2-1)$.
As before, we take ${\alpha, \beta}$ as a basis for $X\otimes\mathbb{R}$, and we note that $\phi$ is trivial, and $q$ is as before. We must go through representatives for each of the conjugacy classes of $W=D_{12}$: the identity gives $(q-1)^2$; the central rotation $-1$ gives $(q+1)^2$; a rotation of order $3$ gives $q^2+q+1$; a rotation of order $6$ (a Coxeter element) gives $q^2-q+1$; and a reflection gives $q^2-1$.
This yields all of the maximal tori that are listed in Kantor-Seress. However note that there are two conjugacy classes of reflections in $D_{12}$ – here they correspond to reflections in long and short roots – and so we obtain another maximal torus of order $q^2-1$.
Thus it appears that there are two conjugacy classes of maximal tori of order $q^2-1$ – I guess one occurs as a split torus in a Levi factor ${\rm GL}_2(q)=\langle U_\alpha, U_{-\alpha}\rangle$, while the other occurs in ${\rm GL}_2(q)=\langle U_\beta, U_{-\beta}\rangle$. I reconcile this with Kantor-Seress by noting that they do not necessarily claim to list all conjugacy classes of tori, although in some places they do note that there is more than one conjugacy class of a certain order.
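As a sanity check on the calculations above, here is a short dependency-free script (my own, not from the post) that computes $\det\bigl(q-(\phi w)^{-1}\bigr)$ as a quadratic in $q$ for the representatives used above. For $G_2$ I take $\alpha$ to be the short root, so $s_\alpha(\beta)=3\alpha+\beta$ (an assumption about labelling; it does not affect the list of orders).

```python
# Verify the torus orders det(q*I - (phi*w)^{-1}) for A_2, 2A_2 and G_2.
# All matrices are 2x2 integer matrices acting on the span of {alpha, beta};
# the result is the coefficient triple (a, b, c) of a*q^2 + b*q + c.

def mul(A, B):
    """2x2 matrix product."""
    return ((A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]),
            (A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]))

def inv(A):
    """Inverse of a 2x2 integer matrix with determinant +1 or -1."""
    d = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    assert d in (1, -1)
    return ((A[1][1]//d, -A[0][1]//d), (-A[1][0]//d, A[0][0]//d))

ID = ((1, 0), (0, 1))

def torus_order(w, phi=ID):
    """Coefficients of det(q*I - N) = q^2 - tr(N)*q + det(N), N = (phi*w)^{-1}."""
    N = inv(mul(phi, w))
    tr = N[0][0] + N[1][1]
    det = N[0][0]*N[1][1] - N[0][1]*N[1][0]
    return (1, -tr, det)

# A_2: simple reflections and a Coxeter element
sa, sb = ((-1, 1), (0, 1)), ((1, 0), (1, -1))
assert torus_order(ID) == (1, -2, 1)            # (q-1)^2
assert torus_order(sa) == (1, 0, -1)            # q^2 - 1
assert torus_order(mul(sa, sb)) == (1, 1, 1)    # q^2 + q + 1

# 2A_2: phi swaps alpha and beta; s_{alpha+beta} sends alpha to -beta
phi = ((0, 1), (1, 0))
sab = ((0, -1), (-1, 0))
assert torus_order(ID, phi) == (1, 0, -1)       # q^2 - 1
assert torus_order(sab, phi) == (1, 2, 1)       # (q+1)^2, since phi*sab = -1
assert torus_order(sa, phi) == (1, -1, 1)       # q^2 - q + 1

# G_2 (alpha short): s_alpha(beta) = 3*alpha + beta, s_beta(alpha) = alpha + beta
ta, tb = ((-1, 3), (0, 1)), ((1, 0), (1, -1))
c6 = mul(ta, tb)                                # Coxeter element, order 6
rot3, minus1 = mul(c6, c6), mul(mul(c6, c6), c6)
assert torus_order(minus1) == (1, 2, 1)         # (q+1)^2, c6^3 = -1
assert torus_order(rot3) == (1, 1, 1)           # q^2 + q + 1
assert torus_order(c6) == (1, -1, 1)            # q^2 - q + 1
assert torus_order(ta) == torus_order(tb) == (1, 0, -1)  # both reflection classes
print("all torus orders check out")
```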
Joint work with Gabriel Fernandes and Miguel Moreno.
Abstract. A classical theorem of Hechler asserts that the structure $\left(\omega^\omega,\le^*\right)$ is universal in the sense that for any $\sigma$-directed poset $\mathbb P$ with no maximal element, there is a ccc forcing extension in which $\left(\omega^\omega,\le^*\right)$ contains a cofinal order-isomorphic copy of $\mathbb P$.
In this paper, we prove a consistency result concerning the universality of the higher analogue $(\kappa^\kappa,\le^S)$.
Theorem. Assume GCH. For every regular uncountable cardinal $\kappa$, there is a cofinality-preserving GCH-preserving forcing extension in which for every analytic quasi-order $\mathbb Q$ over $\kappa^\kappa$ and every stationary subset $S$ of $\kappa$, there is a Lipschitz map reducing $\mathbb Q$ to $(\kappa^\kappa,\le^S)$.
Downloads:
Citation information:
G. Fernandes, M. Moreno and A. Rinot, Inclusion modulo nonstationary, Monatsh. Math., 192(4): 827-851, 2020.
Last week the mathematician Tuna Altinel, a member of the European Mathematical Society and professor at the Université Lyon 1 in France, was arrested in Turkey after having his passport confiscated by the police. Tuna Altinel was one of the signatories of the peace petition, supported by more than 2000 scientists and intellectuals, against military actions towards civilians.
The European Mathematical Society condemns this violation of Prof Altinel’s human rights and demands that he is immediately released and allowed to return to France to resume his teaching and research.
Completely ineffable cardinals sit atop the ineffability hierarchy. A cardinal $\kappa$ is ineffable if every coloring $f:[\kappa]^2\to 2$, coloring pairs of ordinals in 2 colors, has a stationary homogeneous subset. A cardinal $\kappa$ is $n$-ineffable if every coloring $f:[\kappa]^n\to 2$ has a stationary homogeneous subset. A cardinal $\kappa$ is totally ineffable if it is $n$-ineffable for every $n$. It is not easy to remember the definition of completely ineffable cardinals. A cardinal $\kappa$ is completely ineffable if there is a collection $\mathcal R$ of stationary subsets of $\kappa$ closed under supersets such that every coloring $f:[A]^2\to 2$ for $A\in\mathcal R$ has a homogeneous subset in $\mathcal R$. So a completely ineffable $\kappa$ has a set of stationary subsets of $\kappa$ that is closed under the operation of obtaining homogeneous sets for colorings. Completely ineffable cardinals are limits of totally ineffable cardinals. They are also totally indescribable. Completely ineffable cardinals lie relatively low in the large cardinal hierarchy. They are much weaker than Ramsey or even $\omega$-Erdos cardinals, in particular, they can exist in $L$.
I call a transitive model $M\models{\rm ZFC}^-$ (${\rm ZFC}$ with the powerset axiom removed) a weak $\kappa$-model if it has size $\kappa$ and $\kappa\in M$. Many of the smaller large cardinals, those below a measurable cardinal, have characterizations in terms of existence of elementary embeddings of weak $\kappa$-models. For example, $\kappa$ is weakly compact if (it is inaccessible and) every $A\subseteq\kappa$ is an element of a weak $\kappa$-model $M$ for which there is an elementary embedding $j:M\to N$ with critical point $\kappa$ and $N$ transitive. Equivalently, every $A\subseteq\kappa$ is an element of a weak $\kappa$-model $M$ for which there is an $M$-ultrafilter $U$ on $\kappa$ with a well-founded ultrapower. An $M$-ultrafilter on $\kappa$ is an ultrafilter on $P(\kappa)^M$, not necessarily from $M$, that is normal for sequences from $M$. A well-founded ultrapower by an $M$-ultrafilter yields an elementary embedding $j:M\to N$ with critical point $\kappa$ and conversely given an elementary embedding $j:M\to N$ with critical point $\kappa$ and $N$ transitive, we have that $$U=\{A\in M\mid A\subseteq\kappa\text{ and }\kappa\in j(A)\}$$ is an $M$-ultrafilter with a well-founded ultrapower (the ultrapower embeds into $N$).
Although we can take the ultrapower by an $M$-ultrafilter, we cannot necessarily iterate the ultrapower construction. If $U$ is not in $M$, how do we define $j(U)$? It turns out that you can do this if the $M$-ultrafilter has the additional property of being weakly amenable, that is being partially internal to $M$. An $M$-ultrafilter $U$ on $\kappa$ is weakly amenable if for every $A\in M$ which $M$ thinks has size $\kappa$, $A\cap U\in M$. A well-founded ultrapower $j:M\to N$ by a weakly amenable $M$-ultrafilter $U$ on $\kappa$ has the property that $P(\kappa)^M=P(\kappa)^N$ and if $j:M\to N$ is any embedding with critical point $\kappa$ such that $P(\kappa)^M=P(\kappa)^N$, then the derived $M$-ultrafilter $U$ (defined as above) is weakly amenable. In my dissertation, I introduced the $\alpha$-iterability hierarchy by exploring iterability properties of weakly amenable $M$-ultrafilters. A cardinal $\kappa$ is 1-iterable if every $A\subseteq\kappa$ is an element of a weak $\kappa$-model $M$ for which there is a weakly amenable $M$-ultrafilter with a well-founded ultrapower. More generally, a cardinal $\kappa$ is $\alpha$-iterable for $1\leq\alpha\leq\omega_1$ if every $A\subseteq\kappa$ is an element of a weak $\kappa$-model $M$ for which there is a weakly amenable $M$-ultrafilter with $\alpha$-many well-founded iterated ultrapowers (once you have $\omega_1$-many well-founded iterated ultrapowers, then all the rest are well-founded as well).
The point here is that a $1$-iterable cardinal $\kappa$ is already a limit of completely ineffable cardinals. To see this, let $M$ be a weak $\kappa$-model with $V_\kappa\in M$ for which there is a weakly amenable $M$-ultrafilter $U$ with a well-founded ultrapower. First, let's argue that for every $A\in U$ and coloring $f:[A]^2\to 2$ in $M$, there is a homogeneous set for $f$ in $U$. Let $h:M\to K$ be the (possibly ill-founded) ultrapower by $U\times U$, which exists by weak amenability. It is still the case that a set $X\subseteq \kappa\times\kappa$ in $M$ is in $U\times U$ if and only if $[\text{id}]_{U\times U}\in h(X)$. Say $h(f)([\text{id}]_{U\times U})=1$. Then $$B=\{(\xi_1,\xi_2)\mid f(\xi_1,\xi_2)=1\}\in U\times U.$$ An easy argument shows that whenever $X\in U\times U$, then there is a set $\bar X\in U$ such that for all $\xi_1<\xi_2$ in $\bar X$, $(\xi_1,\xi_2)\in X$. So $U$ has a set $\bar A$ such that whenever $\xi_1<\xi_2$ are in $\bar A$, then $(\xi_1,\xi_2)\in B$, and so $f(\xi_1,\xi_2)=1$, meaning that $\bar A$ is homogeneous for $f$. Now let $j:M\to N$ be the ultrapower by $U$. We will argue that $\kappa$ is completely ineffable in $N$, and hence by elementarity $M$ thinks that $\kappa$ is a limit of completely ineffable cardinals, but it must be correct about it because it has the true $V_\kappa$. We work now inside $N$. Let $\mathcal R_0$ be the collection of all stationary subsets of $\kappa$. Given $\mathcal R_\xi$, let $\mathcal R_{\xi+1}$ be the collection of all sets $A$ in $\mathcal R_\xi$ such that for every coloring $f:[A]^2\to 2$, $\mathcal R_\xi$ has a homogeneous set for $f$. At limits take intersections. There must be some $\theta$ such that $\mathcal R_\theta=\mathcal R_{\theta+1}$, and if $\mathcal R_\theta$ is not empty, then by construction it will witness that $\kappa$ is completely ineffable. But this is the case because $U\subseteq \mathcal R_\xi$ for every $\xi$. Note though that a 1-iterable cardinal may not be completely ineffable itself because it is $\Pi^1_2$-describable.
Next, let's talk about one of the game-theoretic Ramsey cardinals introduced by Holy and Schlicht in [1]. First, let's modify the definition of a weak $\kappa$-model to allow non-transitive $\in$-models. We need this because we want to consider models of size $\kappa$ elementary in large $H_\theta$ (the collection of all sets whose transitive closure has size less than $\theta$). Let's call a weak $\kappa$-model $M$ a $\kappa$-model if it is closed under sequences of length less than $\kappa$. Now fix an inaccessible cardinal $\kappa$ and a regular cardinal $\theta>\kappa$. Consider a two-player game of perfect information of length $\omega$ between the challenger and the judge, where at each step $n$ of the game, the challenger plays a $\kappa$-model $M_n\prec H_\theta$ and the judge responds with an $M_n$-ultrafilter $U_n$. Additionally, at each step the challenger needs to make sure that $M_n\subseteq M_{n+1}$ and $M_n,U_n\cap M_n\in M_{n+1}$. The judge wins the game if she is able to play for $\omega$-many steps, and otherwise the challenger wins. A cardinal $\kappa$ has the $\omega$-filter property if the challenger doesn't have a winning strategy for any regular $\theta>\kappa$.
Finally, let's talk about Paul Corazza's Wholeness Axiom [2]. Kunen's Inconsistency, which says that there cannot be a non-trivial elementary embedding $j:V\to V$, can be formalized in several ways. The proof shows that in a model of the second-order Kelley-Morse set theory $(V,\in,\mathcal S)$ there cannot be a class $j\in \mathcal S$ which is a non-trivial elementary embedding of the first-order part $(V,\in)$. We can also consider expanding the language $\{\in\}$ of set theory by a binary predicate $j$ and considering the theory ${\rm ZFC}$ in this expanded language (with separation and replacement applying to formulas with $j$) together with a scheme of assertions that $j$ is a non-trivial elementary embedding. Kunen's result can also be viewed as asserting that such a theory is inconsistent. What fragment of this theory in the language with $j$ can we salvage? Suppose we just assert that $j$ is non-trivial and elementary. This theory is equiconsistent with ${\rm ZFC}$ because if there is a model of ${\rm ZFC}$, then there is a model of ${\rm ZFC}$ that is computably saturated and hence has plenty of not just elementary embeddings, but automorphisms (see this post). How about the theory saying that $j$ is elementary and has a critical point (meaning that some ordinal $\kappa$ is moved and every ordinal $\alpha<\kappa$ is fixed)? Corazza calls this theory ${\rm BTEE}$ (the basic theory of elementary embeddings). So there is a model $M\models{\rm ZFC}$ and an elementary embedding $j:M\to M$ with critical point $\kappa\in M$. It is easy to see that $\kappa$ must be at least inaccessible in $M$. The argument we gave for 1-iterable cardinals shows that $\kappa$ is $n$-ineffable in $M$ for every natural number $n$ of the metatheory because we can use $j$ to derive an $M$-ultrafilter $U$, which must be weakly amenable, and use product ultrafilters $U^n$ to obtain homogeneous sets.
If we additionally assume that $\Sigma_0$-separation holds for formulas with $j$, then it follows that $j$ is amenable to $M$ (for every $a\in M$, $j\upharpoonright a\in M$), which implies that $\kappa$ is at least super $n$-huge in $M$ for every $n\in\omega$. That's quite a jump in consistency strength! It is not known whether assuming full separation for $j$ gives a consistency-wise stronger theory, although it is known that weaker and stronger separation have different consequences. Stronger separation implies, for example, that the iterated embeddings $j^n$ are definable in $M$ for every natural number $n\in M$, but this is known to fail with just $\Sigma_0$-separation (the issues here arise for nonstandard $n$).
The following theorem gives several very different characterizations of completely ineffable cardinals.
Theorem: The following are equivalent to $\kappa$ being completely ineffable.
Proof: For equivalence with (1), see [3]. For equivalence with (2), adapt the argument in [4], which forces over countable models, by forcing over $V$ instead. (3) follows from (2) by playing the game for $\omega$-many steps to build $U$. Finally, suppose (3) and let's argue that $\kappa$ is completely ineffable. The same argument which shows that $\kappa$ is completely ineffable in $N$ for a 1-iterable embedding $j:M\to N$ shows that $\kappa$ is completely ineffable in $M$, and this suffices because $M\prec H_\theta$ is correct about complete ineffability.
Completely ineffable cardinals now look very similar to 1-iterable cardinals, but the true analogue is actually the $\omega$-Ramsey cardinals, another game-theoretic Ramsey cardinal notion of Holy and Schlicht from [1]. A cardinal $\kappa$ is $\omega$-Ramsey if the challenger has no winning strategy in the games described above, with the additional condition that in order for the judge to win, after the $\omega$-many steps, $U=\bigcup_{n<\omega}U_n$ has to have a well-founded ultrapower. Equivalently, for every regular $\theta>\kappa$, every $A\subseteq\kappa$ is an element of a weak $\kappa$-model $M\prec H_\theta$ for which there is a weakly amenable $M$-ultrafilter $U$ with a well-founded ultrapower. $\omega$-Ramsey cardinals are between 1-iterable and 2-iterable cardinals in consistency strength.
Theorem: Consistency of a completely ineffable cardinal implies consistency of the theory ${\rm BTEE}$.
Proof: Suppose that $\kappa$ is completely ineffable in a model $V\models{\rm ZFC}$ and move to a forcing extension $V[G]$, in which there is a generic embedding $j:V\to N$ by a weakly amenable $V$-ultrafilter $U$. Since $U$ is weakly amenable, we can iterate the ultrapower construction $\omega$-many times, obtaining iterated embeddings $j_{mn}:N_m\to N_n$, where $j_{mn}$ has critical point $\kappa_m$. Let $\mathcal N$ be the direct limit of this system. Standard arguments show that the critical sequence $\{\kappa_n\mid n<\omega\}$ consists of indiscernibles for $\mathcal N$, even for formulas with parameters $\alpha<\kappa$. (Note that $V_{\kappa+1}^{\mathcal N}=V_{\kappa+1}^V$.) Now let $M$ be the submodel of $\mathcal N$ generated by the ordinals $\alpha<\kappa$ and the critical sequence $\kappa_n$ for $n<\omega$. Define $h:M\to M$ to be the elementary embedding generated by the shift of indiscernibles map taking $\kappa_n$ to $\kappa_{n+1}$. In other words, a term $t(\alpha_0,\ldots,\alpha_n,\kappa_0,\ldots,\kappa_m)$, with $\alpha_i<\kappa$, gets mapped to the term $t(\alpha_0,\ldots,\alpha_n,\kappa_1,\ldots,\kappa_{m+1})$. Because the $\kappa_n$ are indiscernibles for formulas with parameters $\alpha<\kappa$, $h$ is a well-defined elementary embedding with critical point $\kappa=\kappa_0$. This squeezes the consistency strength of ${\rm BTEE}$ between $n$-ineffable cardinals for every $n$ and completely ineffable cardinals.
I gave an invited talk at the 50 Years of Set Theory in Toronto meeting,
Fields Institute for Research in Mathematical Sciences, May 2019.
Talk Title: Analytic quasi-orders and two forms of diamond
Abstract: We study Borel reduction of equivalence relations (more generally: quasi-orders) in the generalized Baire and Cantor spaces, and connect it with the study of two diamond-type principles. One of these principles holds for all ineffable sets, and the other fails on all ineffable sets, but, in L, holds for all stationary non-ineffable sets.
This is joint work with Gabriel Fernandes and Miguel Moreno.
Downloads:
This is a follow-up to my previous post on a generalization of a theorem of Rodgers and Saxl.
The story starts with a beautiful talk I heard by Martin Liebeck in which he outlined a result due to him, Shalev & Tiep:
Theorem: Let $f$ be a faithful irreducible character of $Sym(n)$. Then $f^{*4n}$ (i.e. the tensor product of $f$ with itself $4n$ times) contains every irreducible as a constituent.
This theorem can be interpreted as giving an upper bound on the diameter of the McKay graphs of the symmetric group. I won’t pursue this point of view, but it was within this context that the LST-team were working when they proved the theorem. I’m just stating the symmetric group version of their result – they had more general statements for finite (almost) simple groups.
Notice that the LST-bound is pretty much as good as one could hope for: if $f^{*c}$ is to contain every irreducible as a constituent (for some positive integer $c$), then one needs $(\dim(f))^c > \sum \dim(f_i)$ where the sum on the right hand side ranges over all irreducibles of $Sym(n)$. Now the theory of Frobenius-Schur indicators tells us that, since all complex reps of $Sym(n)$ are defined over the reals, $\sum \dim(f_i)$ is equal to 1 plus the number of elements of order $2$. Writing $I_n$ for this latter quantity, a result of Moser and Wyman asserts that
$$I_n \sim \frac{n^{n/2}}{\sqrt{2}}\, e^{-n/2+\sqrt{n}-1/4}.$$
Now, using the fact that there exists an irreducible of dimension $n-1$, we obtain that
$$c\log(n-1) \;\geq\; \log I_n \;=\; \frac{n}{2}\log n - \frac{n}{2} + O(\sqrt{n}),$$
and we conclude that $c$ must be at least linear in $n$.
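The Frobenius–Schur step can be checked directly for small $n$: the sum of the dimensions of the irreducibles of $Sym(n)$, computed via the hook length formula, equals the number of solutions of $g^2=1$. A small script of my own (the function names are mine):

```python
# Check: sum of dims of the irreducibles of Sym(n) = #{g : g^2 = 1} = 1 + I_n.
from itertools import permutations
from math import factorial

def partitions(n, maxpart=None):
    """Generate the partitions of n as non-increasing tuples."""
    if maxpart is None:
        maxpart = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def dim_irrep(la):
    """Dimension of the irreducible of Sym(n) labelled by the partition la,
    via the hook length formula: n! divided by the product of hook lengths."""
    n = sum(la)
    prod = 1
    for i, row in enumerate(la):
        for j in range(row):
            arm = row - j - 1
            leg = sum(1 for r in la[i + 1:] if r > j)
            prod *= arm + leg + 1
    return factorial(n) // prod

def count_order_le_2(n):
    """Number of permutations p of n points with p^2 = identity."""
    return sum(1 for p in permutations(range(n))
               if all(p[p[i]] == i for i in range(n)))

n = 5
total = sum(dim_irrep(la) for la in partitions(n))
assert total == count_order_le_2(n) == 26
```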
A well-known heuristic in finite group theory says that whenever one proves statements about (ordinary) characters, there is probably a statement about conjugacy classes lurking nearby (and vice versa). This heuristic sounds very wooly, but it can be made rigorous in very many different contexts, and in very many different ways.
Sure enough, there is a “conjugacy class version” of the theorem above – it was proved by Liebeck and Shalev:
Theorem: There exists a constant $d$ such that if $C$ is a non-trivial conjugacy class of $G$, a finite non-abelian simple group, and if $$k \;\geq\; d\,\frac{\log|G|}{\log|C|},$$ then $C^k = G$.
Here we are taking products of conjugacy classes instead of tensor products of characters. But, again, the result is of the same kind – it says that by taking a product of sufficiently many copies of $C$ you will obtain the whole group, and the number needed is as small as one could possibly ask for, up to a multiplicative constant.
So, now, recall that in the previous post, I made the following conjecture:
Conjecture: There exists a constant $c$ such that if $G$ is a finite simple group, and $S_1,\dots, S_k$ are subsets of $G$ satisfying $\Pi_{i=1}^k|S_i|\geq|G|^c$, then there exist elements $g_1,\dots, g_k$ such that $G=(S_1)^{g_1}\cdots (S_k)^{g_k}$.
A special case of this conjecture occurs when our sets $S_1,\dots, S_k$ are conjugacy classes of $G$. In this case, we obtain the following statement:
Conjecture: There exists a constant $c$ such that if $G$ is a finite simple group, and $C_1,\dots, C_k$ are conjugacy classes of $G$ satisfying $\Pi_{i=1}^k|C_i|\geq|G|^c$, then $G=C_1\cdot C_2\cdots C_k$.
I don’t know how to prove this conjecture, but it’s possible that it’s not out of reach. The Rodgers–Saxl theorem that started all this off implies that the conjecture is true for the family $PSL(n,q)$ with the constant $c=12$. The theorem I proved with Pyber and Szabó implies it for groups of Lie type of bounded rank, so one is left with (some of) the classicals of unbounded rank, and the alternating groups.
But back to characters. What would be the character-theoretic version of the previous two conjectures? The first has, if I recall correctly, been stated by the LST-team:
Conjecture: There exists $C>0$ such that if $\chi$ is a non-trivial character of a finite simple group $G$ and if $$c \;\geq\; C\,\frac{\log|G|}{\log\chi(1)},$$ then $\chi^{*c}$ contains every irreducible of $G$ as a constituent.
The Rodgers-Saxl analogue of this would be:
Conjecture: There exists $C>0$ such that if $\chi_1,\dots, \chi_t$ are non-trivial characters of a finite simple group $G$ and if $$\chi_1(1)\cdots\chi_t(1)\;\geq\;|G|^{C},$$
then $\chi_{1}*\cdots *\chi_{t}$ contains every irreducible of $G$ as a constituent.
Let’s think about whether proving something like this might be possible just for the special case of $G=Sym(n)$ (OK, it’s not simple, but almost).
First let me note that LST prove their result by showing that if $f$ is a faithful character of $Sym(n)$, then $f * f$ or $f *f *f *f$ always contains $\chi_{1,n-1}$, from which the result follows (one just calculates how many tensor products of $\chi_{1,n-1}$ one needs to obtain all irreducibles as constituents).
My go-to man for symmetric group rep theory is Mark Wildon. I dropped him an email with the following question.
Question: Does there exist a positive integer $N$ such that if $k> N$ and $f_1, …, f_k$ are irreducible characters of Sym(n), then the tensor product of $f_1,…, f_k$ (in that order) contains an irreducible constituent isomorphic to $\chi_{1,n-1}$?
Mark was immediately able to shed light. This question can be answered affirmatively. Indeed the following argument (which is Mark’s) shows that something much stronger is true:
Let $k$ be a field and let $G$ be a finite group. On page 45 of Alperin, Local Representation Theory, it’s shown that if $V$ is a faithful $kG$-module then there exists $N$ such that the $N$-fold tensor product $V \otimes \cdots \otimes V$ contains a free submodule. Since $V \otimes F \cong F \oplus \cdots \oplus F$ (with $\dim(V)$ summands) for any free $kG$-module $F$, it follows that any product with more factors (which may be arbitrary $kG$-modules) also contains a free submodule.
In the language of characters: if $\chi$ is the character of $V$, and $\psi$ is any other character, then any character $\chi^{*N} * \psi$ contains the regular character.
Corollary: there exists $M$ such that any product of any $M$ faithful irreducible characters of $Sym(n)$ contains the regular character as a constituent.
Proof: let $P$ be the number of faithful irreducible characters of $Sym(n)$. (So $P$ is 2 less than the number of partitions of $n$, unless $n = 4$.) For each faithful character, let $N(\chi)$ be the $N$ given by Alperin’s result, and let $N = \max_\chi N(\chi)$. Take $M = NP$. Then in any product of $M$ faithful characters, some character appears at least $N$ times, and so the product contains the regular character. QED
That’s a brilliant start, but it doesn’t give us any information about what $M$ can be. The same argument as I gave at the top of this post implies that $M$ must be bounded below by some linear function in $n$: so, then, is it possible that one can choose $M$ to be linear in $n$?
Here’s what Mark had to say on the subject:
Some experiments in MAGMA suggest rather intriguingly that one can take $M = n - 1$ for $Sym(n)$. This bound is sharp for $n = 3,\dots, 10$.
Is this enough evidence to make a conjecture? Hell, yeah!
Conjecture: Suppose that $f_1,\dots, f_{n-1}$ are faithful irreducible characters of $Sym(n)$. Then $f_1* \cdots* f_{n-1}$ contains the regular character as a constituent.
I think of this as a “Rodgers-Saxl type conjecture for characters”. Now, the challenge is to turn it into a theorem….
]]>Pyber, Szabó and I have recently completed a paper entitled A generalization of a theorem of Rodgers and Saxl for simple groups of bounded rank. A copy of the paper can be found on the arXiv.
Our result was inspired by a result of Rodgers and Saxl which appeared in Communications in Algebra in 2003:
Theorem: Suppose that $C_1,\dots, C_k$ are conjugacy classes in $SL_n(q)$ such that $\Pi_{i=1}^k|C_i|\geq|SL_n(q)|^{12}$. Then $\Pi_{i=1}^kC_i=SL_n(q)$.
Our result has a similar flavour.
Theorem: Let $G=G_r(q)$ be a finite simple group of Lie type of rank $r$. There exists $c=c(r)$, depending only on $r$, such that if $S_1,\dots, S_k$ are subsets of $G$ satisfying $\Pi_{i=1}^k|S_i|\geq|G|^c$, then there exist elements $g_1,\dots, g_k$ such that $G=(S_1)^{g_1}\cdots (S_k)^{g_k}$.
Our theorem differs from that of Rodgers and Saxl in three important respects, two good, one not so good: First, our result pertains to all finite simple groups $G$ of Lie type. Second, our result pertains not just to conjugacy classes but to arbitrary subsets of the group, provided we are free to take conjugates.
The third difference is a weak point: our result replaces the constant "12" in their theorem with an unspecified constant that depends on the rank of the group $G$. We conjecture that we should be able to do better, and not just for finite simple groups of Lie type, but for alternating groups as well:
Conjecture: Let $G$ be a finite simple group. There exists $c$ such that if $S_1,\dots, S_k$ are subsets of $G$ satisfying $\Pi_{i=1}^k|S_i|\geq|G|^c$, then there exist elements $g_1,\dots, g_k$ such that $G=(S_1)^{g_1}\cdots (S_k)^{g_k}$.
This conjecture seems hard! Our theorem has a rank-dependency because it makes use of the “Product Theorem” which was proved, independently by Pyber-Szabó and by Breuillard-Green-Tao. To prove the conjecture we would need to replace the Product Theorem in our argument with, um, something else… But what?!
One last remark: there is a fourth sense in which our theorem differs from that of Rodgers and Saxl – we are interested in finite simple groups, while they consider $SL_n(q)$ which is, in general, only quasi-simple.
It turns out that the distinction here is not significant: it is not hard to show that our theorem is true if and only if the analogous statement is true for quasi-simple groups (provided you require that the sets $S_i$ do not intersect the centre of $G$)… And the same is true of the conjecture stated above. So the stated conjecture would be a generalization of both our result and that of Rodgers and Saxl, although we don’t specify the value of $c$ as Rodgers and Saxl did.
Let me finish this post by thanking my two co-authors, Laci and Bandi. I have worked with these two guys on a previous paper, and they are both brilliant and generous with their many ideas. I hope to have the privilege of working with them more in the future.
]]>You'd think that people in category theory would like it from a foundational point of view: it literally tells you that functions exist if you can define them! And category theory is all about the functions (yes, I know it's not, but I'm trying to make a point).
Continue reading...]]>Here's a story. I think it was at the first web standard related event that I ever attended, the W3C workshop on ebooks back in 2012. Someone (maybe Janina?) presented an example of an accessible SVG and I was blown away. My memory, flawed as it is, says it was the classic SVG tiger but it was set up in a way that demonstrated amazing exploration features, providing non-visual representations that could dive into the entirety of the graphic, starting with high-level descriptions (something like a tiger's head) all the way down to detailed nuances (left whisker, 3 of 12).
I'm prone to get the specifics wrong, so here's a different example:
So this is a house. How would you describe it? Maybe: a house with a red chimney and a blue door? That's not bad, but there's more, so much more, to be said about this house!
These descriptions could of course all be put into one very long textual representation, e.g., as a <title>
or an aria-labelledby
construction. And that would be ok. But I find it rather limited.
This is not how a human would describe things. Imagine I'd ask you to describe it. You would not start with the gradient of the doorknob on the first go. I bet you are much more inclined to provide some information at first and get into more detail if whoever is asking wants to dive deeper.
Sometimes, we are in the position to have more information like this on the web, too.
Imagine this house was created in an authoring environment that specializes in such drawings; it may have been drag&dropped together from pre-fabricated components, each having detailed descriptions, integrating user changes such as shape or color modifications, and able to generate composited descriptions, perhaps combining them using simple rule sets (maybe even author-customizable rule sets).
The other thing you may notice is that the house is more than the sum of its parts, i.e., a description for the house (and parts thereof) may not be sufficiently represented by stringing the descriptions of the leaves together; for example, where would the "with" in "a roof with a chimney" come from? For that matter, where would "house" come from? Depending on the content and context, there may be additional connecting words or phrases, and there may be details to drop or reveal. Maybe the fabric of the roof, or whether the door is locked, can be deduced from visual styling given other context.
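To illustrate, here is a toy sketch (entirely hypothetical: the tree, the labels and the join-with-"with" rule are mine, not any real authoring tool) of rule-based description composition over a part tree, where the connecting words live in the rules rather than in the leaves:

```python
# A part tree for the house; every node carries its own short label.
HOUSE = {
    "label": "a house",
    "children": [
        {"label": "a roof",
         "children": [{"label": "a red chimney", "children": []}]},
        {"label": "a blue door",
         "children": [{"label": "a doorknob", "children": []}]},
    ],
}

def describe(node, depth=0):
    """Summarize `node` down to `depth` levels; deeper detail stays hidden."""
    if depth == 0 or not node["children"]:
        return node["label"]
    parts = [describe(child, depth - 1) for child in node["children"]]
    # The connecting rule lives here, not in the leaves: children are
    # attached with "with" and joined by "and".
    return node["label"] + " with " + " and ".join(parts)

print(describe(HOUSE, 0))  # "a house"
print(describe(HOUSE, 1))  # "a house with a roof and a blue door"
```

The same tree then yields the high-level summary or the whisker-level detail depending on how deep the consumer asks to go.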
If you are lucky and you have more information, then you may find yourself in a situation where you want to add differing textual representations on every level of the tree, just like you would in a real conversation, and you may want a way for users to have access to all those varying levels of representation - but not all at once as that could be overwhelming.
The most important point: like all good web standards topics, this is about a general, low-level problem. (Although solving a more general problem might appeal to mathematicians, too.)
So let's try to outline what this is about. Imagine you have
Now imagine that you have
Let's start with some fairly standard observations on accessible rendering:
Unified rendering: visual and non-visual rendering should not drift apart. Textual representation should be intentional, i.e., reflect the intention of the author. (This does not contradict that both graphical and textual representations will likely be created with tools, even tools leveraging heuristics.)
Progressive enhancement / graceful degradation: a solution should work in a way that allows content to be progressively enhanced. For example, a top-level textual description (e.g., using aria-label
) is a robust fallback. You may lose some convenience if that's all there is - and even some information - but it certainly isn't terrible.
Performance: a solution must be performant, especially when applied to hundreds or thousands of content fragments.
From an author's point of view, the key affordance is precision/control. This is worth repeating: Accessibility inevitably starts with author control. If authors cannot create content in a way that they can trust to render reliably, i.e., with the precision they put into their content, then they will not care to do so.
If there's no control, the platform is failing the authors. And if it fails the authors who are trying to create accessible content, then it fails the users, because they will not receive accessible content.
This primarily means that content should be authorable in a way that does not require any heuristics on the side of rendering (visually or non-visually). Imagine AT having to guess how many items are in a list, or having to throw computer vision at each image to guess a description. That's ok for broken content but not acceptable for good content.
There are other useful things of course - ease of authoring comes to mind. But without a solution with tangible benefits, building authoring tools or practices is never going to happen.
From a screenreader user's point of view, there are more affordances that you probably don't want to ignore.
There are many more considerations beyond this but this would be a good start.
Note: this is not a complete solution to all of the above. But I feel like it's heading in the right direction.
The codebase for this lightweight walker, dubbed mathjax-sre-walker, is on GitHub and for this first public summary we've tagged v2.0.0. As I mentioned in 208, this work with Volker Sorge grew out of a demo that David Tseng, Volker Sorge and Davide Cervone built at the AIM workshop in San Jose last year. A simplified demo in a codepen is embedded below alongside a recording of a quick demonstration.
For the visual user, it will provide a means of visually exploring the underlying (and often hard to discern) tree structure by putting the tree in focus and using the arrow keys.
For the non-visual user, it will additionally provide textual representations for each tree node, in sync with the visual representation. It doesn't but could (should we get separate Braille streams in ARIA) additionally provide a simultaneous rendering in specialized formats such as Nemeth or UEB, chemical Braille or others.
For the screenreader user, it will provide the top-level tree node in browse mode. When the tree's top-level DOM node is voiced, the screenreader should put it in focus, triggering visual highlighting; the screenreader should also indicate the tree role to imply further functionality is available.
The user can switch to the screenreader's focus mode to use keyboard exploration with the arrow keys which is matched visually by the highlighting. When the user switches back to browse mode, they can continue naturally browsing to the next piece of content.
The first, not too relevant part: the DOM tree has lots of information in data-
attributes and in a first step we enrich the content with a secondary structure. Getting such information is of course not easy (luckily we can already automate that for equations thanks to speech-rule-engine) and this step can be done server-side. Ultimately that's not the hardest part - domain experts can build such tools - we're using Volker's speech-rule-engine for the equations (which is a marvel).
Yet all the extra information won't help if we can't make use of it on the web platform.
So how is this realized in the DOM tree? As a bunch of aria-label
s (to add textual representations) and aria-owns
to carve out the tree structure that might differ from the DOM tree; we also add a role
to most nodes. In particular, we immediately get a top-level aria-label
which serves as a fallback.
Now what we're missing is some kind of AT functionality that would give us an aria-owns
tree walker. We have built-in table walkers in screen readers already so this does not seem like a massive stretch to imagine, especially given the evolution of the tree role so far. Sadly, we do not have general purpose tree walkers (yet).
In the second part, we overcome this by adding such a walker in JS. This walker consists of a tree structure (the aria-owns
tree, generated from the embedded data for performance) and a keyboard listener. It is very close to the DOM's treewalker API and WCAG tree examples, except that we're working on the aria-owns
tree because that tree may have a different order/structure from the DOM. This walker is fairly minimal, probably ~100 lines of ES6 code if you strip it down to its minimum.
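To give the flavour of the idea (an abstract sketch of my own in Python for readability, not the actual ES6 code in the repository): the walker is essentially a labelled tree plus a key handler that moves a focus pointer and reports the label that AT would voice at each step.

```python
class Node:
    """A node of an aria-owns-like tree, carrying the label AT would voice."""
    def __init__(self, label, children=()):
        self.label, self.children, self.parent = label, list(children), None
        for child in self.children:
            child.parent = self

class Walker:
    """Arrow-key moves: Down = first child, Up = parent, Left/Right = siblings."""
    def __init__(self, root):
        self.focus = root

    def key(self, k):
        n = self.focus
        if k == "Down" and n.children:
            self.focus = n.children[0]
        elif k == "Up" and n.parent:
            self.focus = n.parent
        elif k in ("Left", "Right") and n.parent:
            siblings = n.parent.children
            i = siblings.index(n) + (1 if k == "Right" else -1)
            if 0 <= i < len(siblings):
                self.focus = siblings[i]
        return self.focus.label  # what would be exposed to AT at this step

tree = Node("x squared plus y",
            [Node("x squared", [Node("x"), Node("squared")]),
             Node("plus"),
             Node("y")])
w = Walker(tree)
print([w.key(k) for k in ("Down", "Right", "Right", "Up")])
# ['x squared', 'plus', 'y', 'x squared plus y']
```

The real walker does the same thing over DOM nodes, with the labels coming from the generated aria-label attributes and the tree shape from aria-owns.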
Here's a demo of v2 or you can look at the one in the repository.
See the Pen mathjax-sre-walker v2 demo by Peter Krautzberger (@pkra) on CodePen.
A side note on the chosen role
attributes. The tree role and its related roles may appear a good fit, but they have been developed for specific application-like interfaces. It might be that it's smarter to use something different here; I honestly don't know.
Besides possibly being the right roles, they are also supported well across the accessibility tool chain, i.e., they happen to get the effects we'd like to see.
What are those effects?
aria-label
to provide a default textual representation, especially in browse mode
aria-labels
with the role treeitem
provide detailed textual representation of all relevant nodes in exploration in focus mode
active-descendant
is used to move the focus on the accessibility tree
Other roles have too many negative side effects in practice. Perhaps they shouldn't, but it's often too hard to dissect if a problem comes from the ARIA specs, browser implementations, OS APIs, or screenreaders. For example, some approaches didn't work well on MathJax SVG output but worked well on the clip art house; this is probably due to use
elements.
Some other roles we've tested across screenreaders:
img
(nested) prevents exploration
application
loses the top level label when using browse mode and it is difficult to get back to browse mode after exploration
group
is similar to application (except easier to get back into browse mode) but works poorly with CSS rendering
button
and math
mostly work the same as tree
(very wrong, but hey)
Maybe those issues are fixable, or maybe they're just due to my lack of understanding of specs and implementations. Of course, the mythical role=static
(text
etc.) might be very appropriate but, alas, it doesn't exist.
Personally, I don't care which role I use. Whatever role works, I'm happy to use it. Tree seems both adequate and semantically fitting, and these roles have a history of steady improvement.
Below is a recording with NVDA and Chrome on Windows 10.
Overall, this works well on Firefox and Chrome while Edge and Safari generally don't get you more than the top-level label, i.e., the fallback; I haven't taken the time to compile for IE11 to test it.
NVDA seems best so far, JAWS seems to have a problem tracking focus (it jumps away when getting back into browse mode / virtual cursor), and Orca struggles with CSS rendering (see update below). VoiceOver with Safari is doing its thing (treating everything as a group) but VO works well with Chrome on MacOS. On iOS and Android we get the top-level labels (except VO with CSS rendering for some reason). The current code lacks touch input because (as far as I know) neither Talkback nor VoiceOver have a way to switch into (some form of) focus mode; it could be added, perhaps the visual exploration is interesting enough. I'll be publishing more demo runs as we move along.
Overall, I'm excited about the robustness at this stage and I plan to use this at work soon(ish). I also hope to bring the discussion around standardization of tree walkers to the ARIA Working Group - it seems to align with the evolution of tree widgets (e.g., for tab focus management, positional information) and a lot of content could benefit from some defaults in AT (much like with table walkers). But first we really need separate Braille streams.
update 2019-01-24 Joanmarie Diggs was kind enough to look into the issues with CSS layout (commits 9357aa9c and 87d78dad) and Orca now matches NVDA beautifully.
]]>A finite classical group is best thought of as a group of linear operators on some vector space defined over a finite field. Which means, of course, that I can take a basis for this vector space and then represent the elements of my group as matrices.
However, I have almost never seen anyone do this in the literature. “Taking a basis” is thought of as a rather crude thing to do when doing linear algebra – one generally tries to write general arguments that do not refer to any particular basis. However, speaking for myself, when I’m trying to work out what the hell is going on inside a finite matrix group, I often end up trying to write down individual elements as matrices… And then hiding all this when it comes to writing up the paper!
This means that I have never publicly written down any of these calculations, despite using many of them over and over again. So this page is designed to be a little repository for me to note down interesting observations about such calculations… And perhaps they’ll be useful for someone else some time…
This family of groups has some weird behaviour, especially when $q$ is even. For instance, let us write $\mathcal{U}$ for the set of maximal totally singular subspaces in a formed space of type $O+$ and dimension $2m$. If $q$ is even, then, provided $(m,q)\neq (2,2)$, we can define $\Omega_{2m}^+(q)$ to be the group inducing odd permutations on $\mathcal{U}$…. If $(m,q)=(2,2)$, however, the group so defined is not $\Omega_4^+(2)$, and one needs to define it in a completely different way (see Kleidman and Liebeck, p.31).
Let $(e_1,f_1)$ and $(e_2, f_2)$ be hyperbolic pairs, and consider the ordered basis $\mathcal{B}=\{e_1, e_2, f_2, f_1\}$. Then, one can calculate directly that $O_4^+(q)$ contains elements which are written with respect to $\mathcal{B}$ in the form
these elements form a subgroup $S$ of size $q^2$; indeed, for $q$ odd, they form a Sylow $p$-subgroup of $O_4^+(q)$.
When $q$ is odd, one can look at orders to see directly that $S$ is actually a subgroup of $\Omega_4^+(q)$. Indeed, the same is true when $q$ is even, but to see this it is easiest to observe that $S$ is normalized by the element
This element clearly takes one element $U=\langle e_1, e_2\rangle \in \mathcal{U}_2$ to another element $W=\langle e_1, f_2\rangle\in \mathcal{U}_2$. What is more, $U\cap W$ has codimension $1$ in both $U$ and $W$. This allows us to conclude that $g\in SO_4^+(q)\setminus \Omega_4^+(q)$ (see Kleidman and Liebeck, p. 30). Now order considerations imply that $S$ must be a subgroup of $\Omega_4^+(q)$.
Let $R$ be the set of elements whose transpose is in $S$. One sees immediately that both $R$ and $S$ lie in $\Omega_4^+(q)$ and hence so does $\langle R,S\rangle$.
If one sets $a$ to equal $0$, while $b$ ranges across $\mathbb{F}_q$, in both $R$ and $S$, then one immediately observes a copy of $SL_2(q)$ in $\Omega_4^+(q)$. The same is true setting $b$ to equal $0$. Since these two copies effectively “avoid interaction” one immediately obtains a copy of $SL_2(q)\circ SL_2(q)$ inside $\Omega_4^+(q)$. (The central product is due to the fact that both copies of $SL_2(q)$ share $-I$ as an element.)
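One can sanity-check this central product numerically in the standard tensor-product model (my own illustration, using a different basis from the one above): identify the $4$-dimensional space with $2\times 2$ matrices $X$ carrying the quadratic form $Q(X)=\det X$; then $(A,B)\in SL_2(q)\times SL_2(q)$ acts by $X\mapsto AXB^T$ and preserves $Q$, because $\det A=\det B=1$.

```python
q = 5  # any prime will do for this check; all arithmetic is mod q

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % q for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def det(A):
    return (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % q

A = [[1, 1], [0, 1]]   # an element of SL_2(5)
B = [[1, 0], [2, 1]]   # another element of SL_2(5)
assert det(A) == 1 and det(B) == 1

X = [[3, 1], [4, 2]]   # an arbitrary "vector" of the 4-dimensional space
Y = matmul(matmul(A, X), transpose(B))
print(det(X), det(Y))  # both 2 mod 5: the quadratic form det is preserved
```

The pair $(-I,-I)$ acts trivially, which is exactly the identification producing the central product $SL_2(q)\circ SL_2(q)$.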
Now one can observe that $\Omega_4^+(q)$ also contains the element
Now order considerations allow us to conclude that $\Omega_4^+(q)\cong (SL_2(q)\circ SL_2(q)):2$, and we have written down all elements of the group.
When $q$ is even, $\Omega_n^+(q)$ has a maximal subgroup – the stabilizer of a non-degenerate $1$-space – isomorphic to $Sp_{n-2}(q)$. I will write down the elements of this subgroup for $n=4$; the general case follows similarly.
First, we adjust our basis from before to be $\mathcal{C}=\{z_1, x_1, x_2, y_2\}$, where $z_1=x_1+y_1$.
With respect to this basis, our quadratic form becomes
and we see that $z_1$ is non-singular. Now simply observe that the following elements stabilize $z_1$:
Doing the “transpose trick” like before, one immediately obtains a copy of $SL_2(q)$ in this stabilizer and (using the perfectness of $SL_2(q)$ for $q>3$ if necessary), one obtains that the stabilizer of $z_1$ in $\Omega_4^+(q)$ is isomorphic to $SL_2(q)\cong Sp_{2}(q)$.
One can do this more generally for larger $n$ – one simply has to exhibit the root groups of $Sp_{n-2}(q)$ inside the stabilizer of $z_1$ in $\Omega_n^+(q)$. There are two sorts of root group here: the first sort already occur in $\Omega_{n-2}^+(q)$ and so are easy to write down in $\Omega_n^+(q)$; the second all take the form given above.
]]>Some time ago I heard an interesting story about a remarkable mathematical fact. I can’t locate the source right now, so here is the story as I understand it.
If one consults the Atlas of Finite Groups and looks at the maximal subgroups of the Mathieu group $M_{23}$, one will observe that it has two rather different maximal subgroups, both of which have order 40320. The first is of form $PSL_3(4):2$; the second has form $2^4:A_7$.
It’s important to note how very different these two groups are, despite their shared order: in particular, they have different non-abelian composition factors.
John G. Thompson investigated these groups a little further and observed the following remarkable fact: they have exactly the same element count, by order. By which I mean that they have the same number of elements of order 2; the same number of elements of order 3; and so on. (How he realised this, I really don’t know – but I guess geniuses tend to realise more things than the rest of us…)
Thompson’s observation led him to make the following conjecture: Suppose that two groups, G and H, have exactly the same element count, by order. Suppose, moreover, that G is solvable. Then H is solvable.
This conjecture can be interpreted as saying that it is, in principle, possible to recognise whether or not a group is solvable simply by knowing the number of elements in the group of each different order. Note, that I say “in principle”, as it is not proposing an algorithm for recognising solvability in this way – that is a harder question.
By way of comparison, it is easy to see that there is an algorithm to recognise nilpotency by knowing the number of elements in the group of each different order – one simply tests whether, for each prime p dividing the order of the group, the number of elements of order a power of p is equal to the highest power of p dividing the order of the group. If this is true for all p, then the group has a unique Sylow p-subgroup for each prime p dividing its order, and this property characterises nilpotency.
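That test is simple enough to sketch in code (my own toy illustration; the input is just the element count by order, as a map from each order to the number of elements of that order):

```python
def p_power_order(m, p):
    """True if m is a power of p (including p^0 = 1)."""
    while m % p == 0:
        m //= p
    return m == 1

def is_nilpotent(order_counts):
    """Decide nilpotency from the element count by order alone."""
    n = sum(order_counts.values())                    # |G|
    primes = [p for p in range(2, n + 1)
              if n % p == 0 and all(p % d for d in range(2, p))]
    for p in primes:
        p_part = 1
        while n % (p_part * p) == 0:                  # highest power of p dividing |G|
            p_part *= p
        p_elements = sum(count for order, count in order_counts.items()
                         if p_power_order(order, p))
        if p_elements != p_part:                      # Sylow p-subgroup not unique
            return False
    return True

# Cyclic C_6 (nilpotent): one element each of orders 1 and 2, two each of 3 and 6.
print(is_nilpotent({1: 1, 2: 1, 3: 2, 6: 2}))         # True
# Sym(3) (not nilpotent): the three transpositions break the count at p = 2.
print(is_nilpotent({1: 1, 2: 3, 3: 2}))               # False
```

The whole point of Thompson's conjecture is that no such clean numerical criterion is known for solvability.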
As I understand it, Thompson’s conjecture has been proved… but I’m not sure by whom. If anyone could point me to a source, I’d like to give appropriate credit! I don’t know if an algorithm for recognising solvability in this way has ever been written down. I’m also unable to point to the place where Thompson first made the observation about these subgroups of $M_{23}$ and where he first stated this conjecture; again any help with sources would be appreciated.
]]>The notebook is available for download here.
The jupyter notebook is available for download here. The intention is for it to be run on your free UCalgary syzygy server. The document can be read on Github, but on Syzygy it can be interacted with.
LaTeX (pronounced “Lay-Tech” or “Laa-Tech”) is a computer language used by mathematicians and physicists to nicely format mathematical expressions. Every math student will eventually need to know how to write in LaTeX, since all math journals are written using LaTeX. It is also used in every online platform that supports math writing, such as D2L discussion posts, Github, WordPress blogs, and Mathoverflow.
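For instance, the quadratic formula as one would type it in a D2L post or a notebook markdown cell:

```latex
% Inline math goes between dollar signs; displayed math between \[ and \].
The roots of $ax^2 + bx + c = 0$ are
\[
  x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.
\]
```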
Jupyter notebooks are a file type that blends HTML, formatted text (markdown), formatted mathematical formulas (LaTeX) and runnable code (Python). It is the standard way that data scientists communicate with each other online.
Source: https://www.datacamp.com/community/tutorials/tutorial-jupyter-notebook
My resource is an interactive guide to properly typing LaTeX. The format is a Jupyter notebook which is an authentic way in which LaTeX is used by data scientists. The resource could be used in any first-year course at the University of Calgary that requires written mathematics (such as MATH 265 – Calculus 1 or MATH 311 – Linear Algebra 2). The examples are geared towards students in those courses.
This was designed to be used as a resource outside of class that students could use whenever they want, as often as they want. In this sense it is a “flipped or blended” resource [3]. The nature of a jupyter notebook (it’s a fancy text file) means that students necessarily must download their own version of it and edit it as they go. This encourages them to personalize the material and take ownership of it. They can even modify the examples in the text and instantly see how it changes the outcome.
In the SAMR model [2] this is firmly in the “Redefinition” category, as the Jupyter notebook simultaneously acts as information, scratch pad, code compiler, notebook, and authentic outcome. In addition, completing the exercises in a Jupyter notebook is an authentic use of the skill being trained.
The main driver of solid pedagogical underpinnings is the Universal Design guidelines [1]. There I used all nine principles to make the resource more robust and drive student learning in many different, but natural, ways. In particular, the resource is low in passive learning, and high in active learning. Each section is a couple of sentences, some examples, then right into activities. The activities are of use to a student in a first-year math class, and many additional, alternative explanations are offered. See my activity design post for more discussion about this.
]]>
Since I have a somewhat broad base of knowledge in Ramsey theory, I tried my best to give a short description of each of the speakers in language that makes sense to me. My view is biased, and my intent is always to show off the amazing work everyone is doing. I hope nothing comes across as negative or critical; that is not my intent.
You can find all the abstracts here, and all the videos of their talks here.
Here are four talks I picked out that I think are important and worth watching for an outsider to Ramsey Theory.
To kick off the conference, Jaroslav Nesetril gave a nice overview of the types of Ramsey theorems, problems and perspectives we are likely to see this week. He told us about how Mendel (the biologist) was the first to use the notation [LINK]. Of the many important questions related to Ramsey theory, Jarik mentioned two:
The second talk was supposed to be given by Michael Kompatscher, but he was unable to travel to Canada because of illness. So I (Mike Pawliuk) gave the second talk in his place, about connections to data science, big data, statistics and machine learning. Funny enough, I could have given Michael’s talk since it was about joint work that we did.
Matej Konecny gave the final talk of the morning, showcasing new results that capture the completion algorithms of many different classes of metric spaces. Completing large cycles to a complete metric graph is an important step in showing that a class has a Ramsey expansion. This technique has become more and more explicit in recent years.
Natasha Dobrinen started the afternoon with a chalkboard talk showcasing new results about big Ramsey degrees. She showed how she overcame the difficulties with the random graph to find the big Ramsey degrees for graphs that forbid small complete graphs. At its core she was able to sidestep a difficulty that the Sauer construction presented by starting where the Sauer construction ends.
Jan Hubicka highlighted exciting work that he’s been doing on general methods of proving EPPA (the Hrushovski property) for various classes of metric spaces. He was able to breathe life into the idea of valuations first presented elsewhere.
Finally, Marcin Sabok presented work showing that the class of hypertournaments does not have EPPA. It uses generalizations of the ideas present in Herwig-Lascar, together with other ideas from algebraic topology. It does not answer the (very hard) prize problem “Does the class of tournaments have EPPA?”, but it does prove EPPA for a specific class of isomorphisms.
The day ended with a small group of us going on a fast hike of Tunnel mountain.
The morning session focused on the fruitful connections that topological dynamics has had with Ramsey theory through the KPT correspondence.
Friedrich Martin Schneider gave the first talk which presented the “Gromov-Milman” perspective on the KPT correspondence. We saw a Gromov result that is a concentration of measure result, which corresponds to a Ramsey result through the KPT correspondence. Martin then showed us a stronger version of Gromov’s result, answering a question of Pestov.
Colin Jahel, a graduate student, gave an impressive talk about the semigeneric directed graph. It was especially interesting to me since it answered an open problem from my thesis that I was unable to solve. Colin presented results that reified techniques for proving unique ergodicity, and sidestepped using probabilistic arguments.
Andy Zucker rounded out the morning by giving an alternate perspective on the work of Colin (who is a coauthor). He discussed amenability and metrizable flows from the dynamical perspective. This work is another step in Andy giving a very clear picture of what is happening with universal minimal flows. This is one of the clearest, most straightforward talks about this topic I’ve ever seen.
From the audience, Stevo Todorcevic mentioned a nice (classical) result that if the product of two spaces contains a copy of , then one of the factors must contain a copy of it. In this way, is an “irreducible” space.
The afternoon session featured subtle uses of Ramsey theory underlying key theorems. This made many of the results possible, even if they didn’t directly invoke Ramsey results.
Wieslaw Kubis started the afternoon by presenting results about uniform homogeneity, Katetov functors and mixed sums of Fraisse classes. The mixed sum construction is a type of “bipartite” construction where each part is a Fraisse structure. While he used this construction to provide a counterexample, it is also a broadly useful construction.
Milos Kurilic followed up with a proof of Vaught’s conjecture in the monomorphic case. This result quietly relies on Ramsey-type results, via the notion of chainability. It was a remarkably understandable talk (to me, a non-expert in model theory), despite the technical nature of the material.
In his talk Milos produced one of the most beautiful diagrams I’ve ever seen in a math talk.
The final talk of the afternoon was given by David Hartman, who ended the day with a high-energy punch. David separated two closely related embedding properties with lots of examples and constructions. The final payoff for the day was a nice construction of the Rado graph partitioned into finitely many pieces.
After the presentations we had a problem session, where participants in the conference shared problems of interest. Here are the mostly-complete notes I took.
Ending the day, Wieslaw Kubis led us in a problem session about the weak amalgamation property. About half the participants showed up and many people contributed to the discussion. Finally, at 8:30 (almost 12 hours after starting) we called it a day.
Wednesday contained only talks in the morning.
Martin Balko explained the project of ordered Ramsey numbers. He started by surveying the classical results about (usual) Ramsey numbers and contrasting them with the ordered versions. In many cases the bounds on the ordered/unordered Ramsey numbers are very different, even in the case of paths.
Lionel Nguyen Van Thé followed up by reminding us about Erdos-Rado type results about canonical colourings and equivalence relations. Big Ramsey results would become a recurring theme in this workshop. This talk stirred the most discussion about how it relates to canonical functions in other areas (like algebra).
The final talk of the morning was Sam Braunfeld, who gave an overview of the classification of homogeneous finite-dimensional permutation structures. Sam compared and contrasted his results with Cameron’s 2002 classification. This type of work is of particular interest to people in structural Ramsey theory, who use these catalogues as a source of interesting examples.
In the afternoon, some of us hiked up to Sundance Canyon.
Thursday’s schedule was anticipated to be rather heavy, but the speakers were very gentle to the audience and it ended up being one of the best days of the conference.
Martino Lupini started us off by explaining his intuition for his recent results about the generalized Tetris operations relating to Gowers’ theorem. In particular he showed us how he used a perspective from non-standard analysis and the idempotent ultrafilter proof of Hindman’s theorem.
Francisco Guevara Parra gave a talk about Tukey orders and local Ramsey theory. This was one of the few talks to show the application of Ramsey theory to topological groups and infinite combinatorics.
David Chodounsky ended the morning with a problem motivated by set theoretic forcing, but of independent interest to those studying Ramsey theory. David stirred up interest in his question about Halpern-Läuchli ideals. He also gave us a survey of the landscape of HL ideals, including a very nice map of the known implications.
Jordi Lopez-Abad gave us a very nice presentation of various approximate Ramsey properties. He put special effort into giving examples and pictures, and it was much appreciated. We saw that some of the studied objects were “shapes” where the boundary is kind of blurry, but you can still tell the difference between a square and a hexagon. This context is “almost Euclidean”. There was a lot of motivating intuition here.
Michael Pinsker described canonical functions but, funny enough, not the “canonical functions” he originally wanted to talk about! The motivation for his talk is a nearly complete paper he wrote in 2002 that contained a critical false lemma. This lemma is an (infinite) Ramsey theory problem, and Michael was hoping to spark some renewed interest in it.
Aleksandra (Ola) Kwiatkowska gave the final talk for the day, and it was one of the best of the conference. Ola showed that an obscure (but natural!) Fraisse class (the Ważewski dendrites) negatively answers two fundamental (and related) questions in the field (Does every omega-categorical structure have a precompact Ramsey expansion?). David Evans had given an answer to this in 2016, but Ola’s example is more natural, and provides a counterexample to some other conjectures as well.
After dinner, Michael Hrusak led a working session about Ramsey-type problems on Borel Ideals. You can watch the video of it.
The final day, like Wednesday, only had talks in the morning. The theme of the morning was new frameworks and directions for Ramsey theory.
Noé de Rancourt opened the talks with a discussion about Ramsey spaces, and specifically, what kind of results you can get if you don’t have a pigeonhole principle in that space. Ramsey spaces are very nice combinatorial objects that capture the essential Ramsey behaviour of many geometric objects; it is related to combinatorial forcing. Noé showed us how to relate these to games and local Ramsey theory.
Dragan Masulovic showed us how the tools of category theory can be used to view and prove dual results in Ramsey theory. He showed us the value of the right type of isomorphism of categories in the Ramsey context. There was special attention to making the results usable and practical for the non-category theorist.
Stevo Todorcevic gave concluding remarks for the conference. To be honest, I was so caught up in his talk that I didn’t take notes. I highly recommend watching his talk.
And with that, we wrapped up this iteration of the Ramsey theory meeting.
]]>
In another problem there, coming from joint work with David Asperó, we asked whether an \(\omega_2\)-closed forcing must preserve the property of being proper. Yasuo Yoshinobu provided us with a negative answer based on Shelah's "Proper and Improper Forcing", XVII Observation 2.12 (p. 826). Take \(\kappa\) to be uncountable; by forcing with \(\Add(\omega,1)\ast\Col(\omega_1,2^\kappa)\) and appealing to the gap lemma, \(2^{<\kappa}\) is a tree with only \(\aleph_1\) branches. It can therefore be specialized by a ccc forcing in that model. The iteration of these three forcings (Cohen real, collapse, specialize) is clearly proper. But now, by forcing with \(\Add(\kappa,1)\), we must in fact violate the properness of this forcing, which was defined in the ground model, since the new branch is also generic for the tree and will therefore collapse \(\omega_1\).
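Schematically, the proper iteration looks as follows (the underbrace annotations and the name \(\mathbb{S}(T)\) for a ccc forcing specializing the tree \(T\) are my shorthand, not notation from the post):

```latex
% the three-step, provably proper iteration over the ground model
\underbrace{\Add(\omega,1)}_{\text{Cohen real}}
\;\ast\;
\underbrace{\Col(\omega_1,2^\kappa)}_{\text{collapse}}
\;\ast\;
\underbrace{\mathbb{S}\bigl(2^{<\kappa}\bigr)}_{\text{specialize the tree}}
```

Forcing with \(\Add(\kappa,1)\) afterwards adds a new branch through \(2^{<\kappa}\), which is generic for the specialized tree, and hence collapses \(\omega_1\).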
Continue reading...]]>So naturally I could not skip on this video.
Continue reading...]]>It's all in the people.
Of course, the very best part of this workshop was the people who attended it. It's amazing to get people from NVDA, JAWS and ChromeVox into a room for a few days. It's even better when you have people from MathJax, MathLive and Desmos in the same room. It gets better still when you have publishing experts from Wiley and Pearson on board, and better yet with the vast expertise of people such as T.V. Raman and Joanie Diggs there. But for me, the most thrilling part was the educators in the room. They are the key, and without them we are all lost. And I'm the first to admit the workshop didn't serve them well enough. Even more importantly, at future workshops we need to get students in the room as well. Because what the hell are we doing without them.
By extension, this is a compliment to AIM's workshop design. Providing funding not only for the workshop but for everyone's travel and accommodation was not just excellent but crucial. We would never have been able to get all these people in a room otherwise. This is the right way to hold workshops, especially when inclusiveness is a huge issue.
There's a particular limitation of today's accessibility landscape: we cannot specify separate textual alternatives for voice and Braille.
Generally, not having separate streams for voice and Braille does not seem like a huge problem. As long as all accessibility needs are covered by a fixed set of standard elements that are designed for both aural and tactile interfaces, assistive technologies can reliably implement a split in the stream, i.e., present separate voice and Braille streams from it. For example, if you have a dedicated button element, it can be represented as a btn contraction on a Braille display and voiced as Button.
As usual, not all things can be covered by standards. Say your button is used as a control in a game; then you may want to augment the button's accessible name to include the action. So if the button opens a selection of planets to travel to in your game, then you may want to have this voiced as planet. You can do that, of course, and then you might get a voicing of planet; button and something akin to pln btn on a Braille display.
Unfortunately, you might find yourself in a situation where you need to prevent the addition of button in the voicing because it may be problematic for your aural users, e.g., users with learning disabilities may find it to be distracting noise. But now how do you identify the button on a Braille display?
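As an illustration, a minimal sketch (the exact voicing and contractions depend on the screenreader and the Braille table in use):

```html
<!-- a game control with an augmented accessible name.
     A screenreader might voice this as "planet; button", while a
     Braille display might show something like "pln btn". -->
<button aria-label="planet">Choose destination</button>

<!-- what we cannot express today: "voice only planet, but keep the
     btn role indicator on the Braille display" -->
```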
For equation layout, the situation is much like that final situation. In many countries, specialized Braille formats such as Nemeth, UEB, or Marburg have been developed to represent equation layout in Braille. These are well established and not too difficult to create. But they differ considerably from what you might want to render aurally (and visually). In fact, since most precede the web, they try to capture (simplified) visual layout, including all the ambiguities we face there.
For me, the greatest positive experience of the workshop was to see the group assess the problem, come to an agreement that it needs a solution, add it to the ARIA tracker, build demos and even see NVDA whip up a first implementation that we could explore by the end of the week.
This was huge.
And yet, it is the easy part. Now the long road towards a proper standard with widespread implementations lies ahead.
I brought my favorite problem to the workshop - deep aria-labels - and I was not disappointed.
Assistive technologies for equation layout (in particular for MathML) have to apply heuristics (read: guess) to determine the semantics of an expression so that they can generate meaningful non-visual representations. This is a problem because heuristics that are hard-coded into an external tool such as a screenreader cannot be altered by standard means, leaving authors without adequate means to ensure the quality of their content. (If a screenreader voices every superscripted 2 as squared and you have no way of changing that, then you're screwed.)
More importantly, since equation layout is, ultimately, only visual, a perfectly correct representation in HTML is as spans, i.e., there are no semantics. Finally, ARIA (naturally) does not have a dictionary of equation layout terminology (let alone mathematical or scientific terminology) to use - a) because all past dictionary-based approaches have failed and b) such a dictionary would have to be extensible (read: infinite), which ARIA, so far, does not really want to be (role-description notwithstanding).
So the pragmatic answer is: you'll just have to do it yourself and use deep aria-labels: you override every single accessible name computation by slapping a fixed label on things. Because, ultimately, this is how we read equational content - with words.
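For instance, a span-based fraction with deep aria-labels might look like the following (a hand-written sketch, not the output of any particular tool; the class names are made up):

```html
<!-- every node gets a fixed label; AT should read words, not layout.
     Note that without suitable roles, ARIA does not guarantee these
     leaf labels are exposed - which is exactly the problem. -->
<span role="math" aria-label="one half">
  <span class="fraction">
    <span class="numerator" aria-label="1">1</span>
    <span class="denominator" aria-label="2">2</span>
  </span>
</span>
```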
The trouble is that it's easy to add a single aria-label at the root but it is hard to provide an explorable structure that provides a decent user experience. You'll want to provide labels at the leaf level but the state of ARIA prevents those from adequately building up an explorable tree. (And we're not even talking about refinements such as providing summaries and structural and positional information during exploration.)
At the workshop, David Tseng from Google's ChromeVox team, Volker Sorge from Speech Rule Engine and Davide Cervone from MathJax sat down and built a first demo that tries two things:

- the aria-owns attribute
- aria-active-descendant manipulations

This is, simply put, a fantastic step forward.
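I did not take down the demo's exact markup, but the idea can be sketched roughly like this (element structure and labels are made up for illustration):

```html
<!-- aria-owns declares the exploration tree over span-based layout;
     aria-activedescendant is updated by a small keyboard handler to
     move the assistive technology's focus through that tree -->
<div role="application" tabindex="0" aria-activedescendant="frac">
  <span role="group" id="frac" aria-label="fraction one half"
        aria-owns="num den">
    <span role="group" id="num" aria-label="numerator 1">1</span>
    <span role="group" id="den" aria-label="denominator 2">2</span>
  </span>
</div>
```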
The approach builds on existing parts of ARIA and identifies reasonable, incremental improvements to it. It raises important questions on general exploration, e.g., how is there a generic aria-table walker in every screenreader but not some basic aria-tree walker (such as breadth or depth first)? And yet it pragmatically builds an unobtrusive solution anyway that works at 60FPS. It works with any markup, in particular any approach using CSS or SVG markup. And to top it off, it leverages existing open-source tools to enrich pre-existing content. And while it shows just how far ahead MathJax and Speech Rule Engine are, this approach is transparent and easily used by any other equation layout library.
In terms of UX, this is also a critical step forward. The approach should be able to provide a seamless interaction for visual and non-visual users alike, in synchronization. Effectively, it pushes MathJax's Accessibility Extensions from client- to server-side, requiring minimal JavaScript (just a keyboard event listener) to expose the content, without live region hacks, and with a solid non-JS fallback. It provides a clear path for making even that bit of JS obsolete through natural improvements to ARIA. It opens a path to finally get rid of horrible hackery such as what JAWS did back in the day, manipulating IE's DOM to manipulate MathJax, or what Texthelp is doing today by injecting JS on the client (yuck! and also badly failing when content security is in place).
Even better, if you combine it with the previous part (exposing specialized Braille, which SRE can soon produce), then this would immediately become by far the best, universal rendering of equation layout on the web: robust, high-quality, customizable, precise. And it is a solution that will only get better as standards evolve while leaving full control with the author (with or without the aid of heuristics at authoring time).
I'll dig into this more some other time but admittedly, I'm pretty excited.
You can find the organizers' report at aimath.org but you can take one thing away: It's looking very good for accessible equation layout on the web these days. And it will only get better.
If we can continue these workshops, things will move faster for everyone. And maybe, just maybe, we can even finally move on to actual mathematics (and other STEM content) on the web.
]]>Of course it is more complicated than that.
Isn't it always?
Don't talk about "math accessibility" when you mean equation layout accessibility.
Mathematics is an ancient domain of human knowledge; formula or equation layout is a visual layout technique developed primarily in the 19th and 20th centuries.
Mathematical content is far more than content that needs equation layout and equation layout appears in many more domains than just mathematics.
We will fail to make equation layout accessible if we think we can treat equation layout identically across mathematics, physics, chemistry, computer science, biology etc (and their various sub-fields). For example, voicing a superscript 2 as "squared" may be a reasonable heuristic for middle school mathematics but a miserable heuristic for chemistry.
We will fail to make mathematical content accessible if we only make equation layout accessible and vice versa.
Manually overriding accessible name calculations (e.g., via aria-label) on text(-like) content is generally considered a last resort and it is clearly not a long term strategy for accessibility.
But we have aria-labels because we know from experience that there's always a situation where you need them.
Currently, it is extremely hard to augment equation layout with aria-labels let alone anything beyond simple overrides. This is a problem of authoring but much more of rendering and assistive technologies. MathML-based solutions in browsers and AT are particularly bad at this and to some degree this cannot be fixed (we should get to that later).
Whatever solutions might arise in the future of equation layout, like everything else on the web, they must be able to work together with interspersed aria-labels, together with potentially many interspersed aria-labels, and ultimately with only aria-labels.
In other words, deep aria labels aka aria labels all the way down must work as well.
This is a problem as ARIA has limitations when it comes to exposing custom tree structures and making them explorable.
More general author-driven augmentation must also work. Equation layout may have a natural tree structure derived from the DOM but we must be able to modify that. Aria-owns might be a solution here but right now it appears to be too limited (either by spec or by implementations).
If equation layout is visually inadequate, it cannot be considered accessible. The problem is: we have no solid basis for measuring this.
While TeX-style layout is generally considered the highest quality among heavy users of equation layout, there seems to be no research evaluating this from an accessibility point of view. For example, TeX layout is largely unconcerned with K12 content and (as a print technology) has no concept of the kind of dynamic modifications we can realize on the web. While there are minor variations, e.g., elementary education preferring sans-serif fonts and requiring fonts with an open glyph for 4, it's unclear to me to what extent these preferences are evidence-based (pointers very welcome); in any case, they are also deeply rooted in print and might be moot on the web, e.g., there might be better ways to get whatever effect such variations are supposed to achieve.
On the web, it probably means that equation layout must be flexible enough to allow all kinds of (user- or author-enabled) customizations. Some obvious questions: What should happen when the line-height changes? The letter spacing? When a transform or animation is applied to a descendant? When a color changes on something with a specific color? When a font (style) changes on a mathvariant? These all might be useful for accessibility purposes (e.g., for users with visual impairments or learning disabilities). And there are likely many more things we cannot imagine yet.
I think we greatly lack research as to what features in visual layout might be important for accessibility on the web. Without such research, we have no adequate basis on which to discuss improvements to equation layout and the standards that enable it on the web.
Heuristics are important across the assistive technology chain to recover from bad input (content). However, heuristics should not be necessary with good markup, e.g., markup that is ideal according to specs.
Equation layout is tremendously ambiguous, primarily due to its history and the limitations of print technology. For example, it is near impossible to differentiate the typical vertical fraction layout from the various other notations that share a similar vertical stack (2-3 children, depending on a line in between).
In other words, even high quality markup for equation layout requires heavy use of heuristics to guess the semantics.
Heuristics for non-visual representation of equation layout have existed for a very long time (and before the web). Today, some assistive technologies integrate heuristics for equation layout when done as MathML markup.
This is a problem because most of these heuristics are fairly bad; heuristics for equation layout are largely of low quality at scale. This is unsurprising given that (e.g.) Nemeth's math speak rules were invented for one person reading to another; such limitations could easily be overcome in that situation. At the scale of the web, it is much easier to run into edge cases where heuristics are too coarse or too fine-grained. An almost universal limitation is the restriction to individual fragments of equation layout, ignoring the context (both equational and otherwise).
Relying on heuristics in AT for good content poses a serious issue (hinted at earlier): heuristics interfere with augmentation. For example, if a heuristic dictates that every superscript 2 is voiced squared, then no override may be possible; and even if it is, will an override replace only the phrasing (e.g., to the power of two) or also account for the superscript position? Do you need two overrides to clarify? If a heuristic identifies (1+2+3) as a summation and provides summary information (e.g., sum with three summands), what happens when an author augments one of the + signs (say, with aria-label="times")? We would need to augment both the content and the heuristics. As the saying goes: and now we have two problems.
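To make the conflict concrete, a contrived sketch (the labels here are illustrative, not from any real tool):

```html
<!-- a heuristic summarized the expression as a sum, but the author
     overrides the second operator; the summary is now wrong, and both
     the content and the heuristics would need updating -->
<span role="math" aria-label="sum with three summands">
  (1 <span aria-label="plus">+</span> 2
     <span aria-label="times">+</span> 3)
</span>
```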
Much like with textual descriptions of other visual content (e.g., image alt text, video captions, SVG descriptions), heuristic tools (human- or machine-guided) should be used at authoring time.
There is a position that I encounter every once in a while: just give non-visual users access to the visual layout and stop. That's a very appealing proposition: instead of trying to make sense of visual layout semantically, we just provide information about what letter/word is where on a two-dimensional canvas. It also appeals to a basic idea of equality: visual users only have visual access, why would non-visual users get additional information?
Unfortunately, it is a red herring: it is neither easy nor helpful nor what anyone has ever done.
It is obviously not easy on the web because layout is dynamic; if two users read a document on two different devices, they might have a very different idea as to what is where. Even within equation layout, automatic linebreaking/reflow can shift parts around, more advanced methods (e.g., MathJax's collapsing feature) can vary even more greatly.
But ignoring this larger problem, traditional equation layout has a few odd concepts that make this even more difficult. For example, movable limits can move elements from sub/superscript positions to under/overscript positions based on the context of a subexpression, without any change in the markup; conversely, this can happen when changing text content (thanks to things like the operator dictionary). Another example is the concept of embellished operators, which makes it difficult to identify reasonable layout blocks to describe; similarly, brackets may or may not be marked up in a way that groups opening and closing brackets. Essentially, you will still need to analyze the layout semantically to identify what really belongs where and together with what, because that is ultimately a question of why.
As a consequence, the idea of just describing layout is not helpful to anyone, no matter what assistive technologies they might use (even if it's just a screen). Even more so when you have to do the same analysis as you would to identify the semantics of an expression.
What's more important is that describing layout is not what anyone has ever done (so I would surmise nobody wanted to do it). Some layout information is always ignored, other layout is always inferred as semantic. As far back as Nemeth's math speak rules for print, we have had heuristics that will read any superscript 2 as squared, inferring semantics where there are none. Conversely, no assistive technology for equation layout will tell you about the dimensions of a stretchy character (neither directly nor transformatively, e.g., via audio cues). Again, a good example is a movable limit, where rule sets get around describing (unreliable) layout in favor of heuristically determined semantics; e.g., in a sum they might voice sum from [subscript] to [superscript], neatly avoiding the layout. Above all, human beings do not speak equation layout as layout. Nobody says vertical bar A vertical bar; they say absolute value or determinant or something completely different. Nobody says C O subscript 2; they say CO2 or carbon dioxide or possibly some more precise wording if it appears inside a chemical equation.
Of course, describing visual layout is nevertheless a decent fallback mechanism, e.g., when semantic heuristics fail but you can still recover layout information, and it is important to enable users to explore the layout if they need to (e.g., so as to reproduce it in their own work). But edge cases should not limit the expressiveness of accessible equation layout in general.
(An independent issue is to expose layout so that a user can guess how something was authored; e.g., when voicing gives you determinant A, the question may be whether it was represented visually as det A or as |A|.)
Accessibility is not a one-way street, equation layout even more so. Accessibility must handle directionality on many axes.
Accessibility means to improve access to content no matter whether a user can access it with all theoretically possible human capacity or only using a small fragment thereof at a time. Due to its highly compressed form, equation layout requires more back and forth across a particular equation fragment as well as the entire document than most other forms of content. This is both a bug and a feature but either way it won't go away any time soon.
Accessibility means to improve interaction with content so as to allow all users to transfer knowledge better. Equation layout has a huge discrepancy between authoring formats and rendering. We must strive to improve this.
Accessibility means improving the interaction between human beings. If two students explore a document, they should be able to do so together so that they can engage each other. Therefore, the effects of exploring content should be equivalent between different exploration methods. At the AIM workshop earlier this year, Sam Dooley told the story of a young blind person joyously celebrating that their parent could read my math as they used an accessible authoring and rendering environment together.
Interaction in these multiple directions will provide more information to more people, enabling wider accessibility, whether people identify as AT users or not. More importantly, it will show the path towards what the web can really do for the knowledge traditionally represented in equation layout.
]]>Fraenkel's construction does not affect sets of ordinals; in particular, the real numbers can still be well-ordered in his models. Cohen's work, however, directly breaks that: the Dedekind-finite set added is a set of reals. In particular, the reals can no longer be well-ordered.
Continue reading...]]>We learn in class that a circle or sphere of radius r has curvature inversely proportional to its radius; that is, it has curvature 1/r.
In this class we used baking cookies to illustrate how the curvature of an object can change over time. Seen from over top, a ball of cookie dough flattens out as it bakes.
This got me thinking: how exactly is the size of the ball of cookie dough related to the size of the cookie you get in the end? So I did some science.
A student in my class provided me with the following recipe:
Here’s a simple peanut butter recipe that’s safe for people with gluten and/or dairy allergies:
If change in curvature is desired:
- 1 cup peanut butter
- 3/4 – 1 cup sugar
- 1 egg
- 1 tsp baking soda
- tiny splash of vinegar
- tiny pinch of salt
Preheat oven to 350 degrees. Roll dough into balls and place on cookie sheet. Bake for ~10-12 minutes.
For those who desire nearly constant curvature:
- 1 cup peanut butter
- 1 cup sugar
- 1 egg
- pinch of salt
Preheat oven to 350 degrees. Roll dough into balls, place on cookie sheet and flatten to desired curvature with fork. Bake for ~10-12 minutes.
After mixing everything together (using 3/4 cup sugar), the dough was too goopy to form into balls, since I used natural peanut butter. So I made the following additions:
For my first batch I planned to take them out after 11 minutes, but they needed additional time, so I left them in the oven for an additional 5 minutes. This could potentially introduce some p-value hacking because I changed my experiment in the middle of it. I don’t think the additional time changed the shape of the cookies, just how gooey they were on the inside.
I got the following results for Batch 1:
Diameter of dough ball (cm) | Diameter of cookie (cm) |
2 | 4.5 |
2 | 4.5 |
2.5 | 5 |
3 | 6 |
3 | 6.5 |
3 | 6 |
3.5 | 6.5 |
3.5 | 7 |
4 | 8 |
3.5 | 8 |
4 | 8 |
4.5 | 9.5 |
5.5 | 11 |
The three biggest cookies pushed into each other and didn’t spread out completely. This made them a little more square than they should have been.
Batch 2 was in for a full 16 minutes, but it needed even more time! I put them in for an additional 4 minutes.
Here are those results:
Diameter of dough ball (cm) | Diameter of cookie (cm) |
5.5 | 11 |
6.5 | 14.5 |
The biggest cookie was pretty unstable at first, but after leaving it on the pan a little longer it firmed up.
Here’s what the cookies look like stacked from largest diameter to smallest.
Of course I had to plot this data, so I did and got the following line of best fit:
In English:
The diameter of a cookie is twice the diameter of the ball of dough used to make it.
In terms of radius, since the radius is half the diameter on both ends, the factor carries over unchanged:

The radius of the cookie is twice the radius of the ball of dough.

Since curvature is inversely proportional to radius, we get:

The curvature of the cookie is half the curvature of the ball of dough.
I think we can actually draw some interesting conclusions from this.
What size cookies should I make to avoid wasted space on my cookie sheet?
It turns out that, by using this relationship between the size of the dough ball and the size of the cookie, if you have a fixed amount of dough V and a fixed cookie sheet area A, you should make:

n = π²A³/(2304V²) cookies of diameter D = 48V/(πA).
I hope you had as much fun as I did! Thanks for reading.
Thanks to Robert Fajber for improvements to the graph, and Jessie Lamontagne for further directions and questions.
If you’re interested in the nitty-gritty details about how I came up with those formulas, here they are.
Fix the total volume of dough V and the cookie sheet area A. Assume we want n cookies of diameter D (that start as dough balls of diameter d). We know D = 2d. We want to space out the cookies so that their bounding squares do not overlap. These squares give us nD² = A.
The volume of the dough gives us V = n · (π/6)d³.
This is two equations in two unknowns. Solving them gives us the desired formulas for D and n. Then we relate D back to d.
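The omitted algebra can be reconstructed under the stated assumptions (each cookie gets a bounding square of side D on a sheet of area A, total dough volume V, and D = 2d); a sketch:

```latex
% sheet area: n bounding squares of side D
n D^2 = A
% dough volume: n balls of diameter d = D/2
V = n \cdot \tfrac{\pi}{6} d^3 = n \cdot \tfrac{\pi}{48} D^3
% substituting n = A/D^2:
V = \frac{\pi A D}{48}
\;\Longrightarrow\;
D = \frac{48V}{\pi A},
\qquad
n = \frac{A}{D^2} = \frac{\pi^2 A^3}{2304\,V^2}
```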
]]>So often times, it seems, it is very tempting to talk about theorems whose proofs you haven't finished writing in full. Usually, we put "work in progress" to indicate that this is something not fully verified, not fully vetted (at the very least by ourselves).
Continue reading...]]>This is an in-depth description of the basic combinatorial and geometric techniques in graph theory. It is a very thorough and helpful document with many Olympiad level problems for each topic. (No solutions are given.)
Topics include:
A large collection of problems and topics almost all of which have solutions or hints.
Topics include:
Contains a concise list of important results together with a guided discussion to five example problems that use graph theory.
An introduction to the probabilistic method in graph theory along with 10 problems.
A list of about 30 problems and solutions in graph theory.
Topics:
This is a 4 page article that introduces Ramsey Theory for graphs and arithmetic progressions and its historical relation to the IMO.
A collection of 12 topics about coloring graphs and planes. There are many problems with solutions.
This series of slides states 7 results in extremal combinatorics that are really the same.
Topics:
Well, obviously, if choice failed then the answer is no, just by taking \(x=x\). But what if we remove that option? Namely, if the inner model is not the entire universe, then choice holds.
Continue reading...]]>My regular meetings with Sam are one of my great pleasures. Since our friendship is almost exclusively virtual, it's surprising that we have kept it alive for quite a while now. Almost weekly, we get on video to work on new ideas and projects - or just chitchat about life, work and being young parents (or parents of young kids, anyway).
Perhaps unsurprisingly, given that we met at a Young Set Theory Workshop, it all started with us setting up Set Theory Talks, which grew into settheory.mathtalks.org (and you can get a subdomain with the same semi-automatic features if you like). Nowadays, Assaf is handling the real work of grooming the site while Sam and I continue the little bits of technical support as needed. Later, I pulled Sam into maintaining mathblogging.org with me after Fred and Felix dropped out.
I suppose it was inevitable ever since I left research 6 years ago. But, really, life just got busier and hosting more complex, so late last year Sam and I decided that we no longer have the time (or the abilities) to continue hosting more and more sites. We let everyone know what was happening and helped them with their transitions.
Yesterday, we pulled the plug on all the WordPress goodness we had built over the years. And thus, Booles' Rings has passed - in its original form, a literal network of WordPress sites for academics.
Of course, none of the sites have disappeared. In many ways we're now where I wanted to get everyone to: not just researchers taking the web seriously as a fixed point of a research career, building a stable presence of one's research persona, overcoming the cacophony of ever-changing, dead-or-dying department pages where Google page rank inevitably yields the outdated ones.
No, I always wanted more: get researchers to take this platform seriously, embrace it as a medium with new (and old) idiosyncrasies. Take it seriously as a tool that you should wield confidently and, in need, wield independently, no matter what. No more lock in.
And this is where Booles' Rings is now. The people are still here; the site now merely works as a lightweight connection (and an aggregator). And no matter how we all approach this medium, it's fine. Whether it's self-hosted WordPress installations or statically generated sites, whether slow-churning long form or near-daily activity, whether research-only or life's breadth. The point is not that one thing is better than the other. The point is that we are on the web, our shared and world wide web - and that we're here to stay.
Even though I left research years ago, I still love to follow this community. I look forward to the next 7 years of Booles' Rings.
]]>This morning I came back to something I had drafted after Joanmarie Diggs proposed a session on a particular hack (but the group didn't end up focusing on this in the unconference-style workshop setting).
One of my go-to examples when explaining that Presentation MathML is devoid of semantics is the <mfrac> element. While it clearly hints at being a fraction, the spec itself states that it is not, semantically, a fraction but that it may be used for completely different things that visually look like fractions, e.g., binomial coefficients or the Legendre symbol; in fact, you can find many even less fraction-like examples (such as logical deductions) in the wild because a vertical stack with a properly aligned line is simply a neat layout feature.
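As a concrete illustration (my own sketch, not an example taken from the spec), here is the same <mfrac> pattern used once as an actual fraction and once as a binomial coefficient; the spec's own linethickness attribute suppresses the bar in the second case, and the "meaning" lives only in the reader's head:

```html
<!-- an actual fraction: 22/7 -->
<math>
  <mfrac><mn>22</mn><mn>7</mn></mfrac>
</math>

<!-- a binomial coefficient "n choose k": same element, no fraction in sight;
     linethickness="0" removes the bar -->
<math>
  <mrow>
    <mo>(</mo>
    <mfrac linethickness="0"><mi>n</mi><mi>k</mi></mfrac>
    <mo>)</mo>
  </mrow>
</math>
```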
Since Presentation MathML never specifies semantics, let's look at how Content MathML encodes fractions. The spec would have you write something like <cn type="rational">22<sep/>7</cn>. It's a terribly good example of how Content MathML is a bit too strong in its abstraction for human communication (also, check the transcription to Presentation MathML). As an aside, if you need more examples of why <mfrac> is not meaningful, just search that section.
Anyway, at the workshop Joanie had proposed the following. It turns out, Firefox is too lazy ahem too performance-oriented to sanitize invalid ARIA roles. This allows you to experiment with made-up properties fairly easily (assuming you can modify your screenreader of choice).
So for example, you could slap an aria-math attribute onto your markup and this would show up in OS-level accessibility inspectors such as aViewer or accerciser. What Joanie had in mind (I believe) is that we could have tried to expose additional information this way so that Joanie could hack something into ORCA and then get Mick and Reef to modify NVDA or David to modify ChromeVox (and maybe even hear what Glen thinks of it from a JAWS perspective). And yes, all these incredible people were actually there in person.
Since an idea that I had proposed to the group (exploring web components for mathematical documents) also didn't stick, I thought I'd combine the two when I get the chance. Luckily, I had a long flight home.
Et voilà, a custom element fraction that adds aria-math roles to itself magically (using fraction, numerator, denominator and fraction-line).
See the Pen AIM Workshop custom element: fraction by Peter Krautzberger (@pkra) on CodePen.
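The pen itself isn't reproduced here, but a minimal sketch of the idea might look like the following. Everything below is my reconstruction, not the actual pen: the tag name math-fraction is made up (custom element names need a hyphen, so a bare fraction element wouldn't register), and aria-math is precisely the kind of invalid attribute Firefox passes through:

```html
<math-fraction>
  <span>22</span>
  <span>7</span>
</math-fraction>

<script>
  class MathFraction extends HTMLElement {
    connectedCallback() {
      // made-up ARIA: invalid, but surfaced by Firefox to OS-level
      // inspectors such as aViewer or accerciser
      this.setAttribute("aria-math", "fraction");
      const [num, den] = this.children;
      if (num) num.setAttribute("aria-math", "numerator");
      if (den) den.setAttribute("aria-math", "denominator");
      // insert a visual bar that doubles as the "fraction-line"
      const line = document.createElement("div");
      line.setAttribute("aria-math", "fraction-line");
      line.style.borderTop = "1px solid";
      if (num) num.after(line);
    }
  }
  customElements.define("math-fraction", MathFraction);
</script>
```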
It's not much and not really the "document"-level element I was thinking about (then again, Joanie had hoped for improving an <mfrac> directly) but it's a nice (non-functional) concept, and perhaps helpful when thinking about ARIA role-description.
On the other hand, I find the act of reading the scholarship of math education to be dreadful and unpleasant. It is filled with jargon and hero-worship.
That being said, I’ve been extremely lucky to have great mentors and colleagues to bounce ideas off of. I’ve collected some of this advice in a Reddit post, which I’ll recreate here.
Here is some vocabulary that is commonly used when discussing math pedagogy, or pedagogy in general. In general the literature is pretty annoying and frustrating; there's lots of jargon and lots of stuff is too high-level.
So, this one has been on the back burner for a while. And it actually started as two separate projects that merged and separated and merged again.
Continue reading...]]>So I recently wrote about a fragment of mathematical content and a big part of it was the problem of stretchy braces. After building the "plain" HTML+CSS example at the end (re-using an extremely clever solution from the upcoming MathJax v3), I kept thinking: this should be easier. Luckily, this year I'm dedicating a chunk of my spare time to the MathOnWeb Community Group's new task force focused on CSS, looking for (old and new) ideas that might help simplify equation layout using CSS.
So one thing led to another and I found myself coming back to an old thought of mine.
Stretchy characters like those braces, what are they really? Like, really really?
Let's look at what they are called. As a matter of fact, they are called various things but the most generic term is possibly bracket. However, in the context of equation layout, the more common terminology might be delimiter and fence. In particular, MathML provides an <mfenced> tag (though for various reasons the equivalent <mrow>+<mo> constructions tend to be preferred by most tools).
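For reference, the two equivalent encodings look like this (a standard correspondence between the <mfenced> shorthand and the expanded form that most tools emit):

```html
<!-- the <mfenced> shorthand -->
<math>
  <mfenced open="{" close="}">
    <mi>x</mi>
  </mfenced>
</math>

<!-- the equivalent, more common <mrow>+<mo> construction -->
<math>
  <mrow>
    <mo>{</mo>
    <mi>x</mi>
    <mo>}</mo>
  </mrow>
</math>
```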
Now brackets, fences and delimiters all sound awfully similar to a very common concept. Where do you usually put up a fence? Where do you delimit something? At a border. It's a small idea, obviously, but what if we could solve the problem of stretchy constructions using borders?
What if somebody else already has?
Well, you could go visit codepen and simply search for brace and, lo and behold, you find 4 perfectly fine specimens in CSS. Turns out, designers love pretty things, who'd have thunk.
If you dig a little deeper, you'll end up with basically three approaches.
The first one (with several interesting forks) is by Lauren Herda.
See the Pen Single-Element Curly Brace by Lauren Herda (@lrenhrda) on CodePen.
It is really pretty -- look Ma, a single div! (Except that it doesn't quite work on Chrome since an <hr> gets overflow:hidden from the user agent style sheet.)
That was fun. Let's do two more: one from Jakob Christoffersen
See the Pen curly braces css by Jakob Christoffersen (@MasterThrasher) on CodePen.
and one from @mexn:
See the Pen CSS Curly Brace by Markus (@mexn) on CodePen.
Both are slightly more complicated than the first one. Instead of the radial gradient for the middle piece, they both use 6 elements with border-radius (though the last one has only two elements with pseudo-elements). If you dive into their forks, you'll find lots of interesting variations, too.
The point is: this problem has in a very real sense actually been solved in CSS and you can do lots of fun variations yourself.
Such as this one
See the Pen stretchy brace by Peter Krautzberger (@pkra) on CodePen.
or this one
See the Pen stretchy brace, single-div by Peter Krautzberger (@pkra) on CodePen.
(Fun fact: using percentages in the border radius leads to some really cute behavior across sizes.)
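To make the idea tangible, here is a bare-bones sketch of my own (not one of the pens above): a stretchy parenthesis drawn with nothing but a left border and percentage corner radii, so it scales with whatever height you give it:

```html
<style>
  /* hypothetical minimal stretchy "(" — one empty element */
  .paren {
    display: inline-block;
    width: 0.35em;
    border-left: 0.12em solid currentColor;
    /* percentage radii scale with the box, hence the "stretch" */
    border-top-left-radius: 100% 15%;
    border-bottom-left-radius: 100% 15%;
  }
</style>

<span class="paren" style="height: 2em;"></span>
<span class="paren" style="height: 6em;"></span>
```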
Now you might say it hasn't solved the real problem. Here are a couple of counterarguments:
It has no character! Gasp! It's true that in typical print equation layout engines you'll still have a character there. Well, you could just add a hidden one, no?
It doesn't work well on small sizes! In typical print equation layout, you'll see several sizes of a brace being used for smaller heights (with possibly slight design variations for readability) after which the layout would switch over to a stretchy construction (made up of several glyphs stitched together). This is a very interesting problem to solve. And you know what? This touches on one of the hottest topics of CSS discussions in the past few years: it is a perfect use case for container queries. Go add a use case and push the web forward for everyone!
But perhaps current CSS is sufficient and someone will find a clever approach to achieve a similar effect. As I mentioned above, percentages in border radius have a neat effect; there is a lot of room to play with once you stop thinking about everything in terms of print traditions.
It's not semantic! Gosh. What exactly does a (stretched) brace represent, semantically speaking? And, should you have decided to imbue it with such rich meaning yourself, are you really unable to expose the relevant information using the web platform's rich accessibility stack? No? Excellent - you should file a bug with ARIA and help push the web forward for everyone!
It can't look like font x! Some fonts have a really tricky curly brace with basically an S shape in each half. I admit my CSS-foo is not good enough to do that. But besides the fact that a better designer might find a solution, I find the trade-off acceptable. And if there's a limitation in CSS, please file a bug with the CSS WG and help push the web forward for everyone!
It can't do delimiter y! There are quite a few brackets, some more complex than others (Mathematical left white tortoise shell bracket anyone?) but few of those are used in stretchy ways and fewer still occur often (for comparison, the STIX-2 fonts support ~30 delimiters). I really don't have a problem with such edge cases remaining difficult for the time being if we can solve a practical problem for 99% of use cases. And if you do, ... you know what to do.
So let's do two more, the most important ones:
Parentheses,
See the Pen Stretchy parenthesis by Peter Krautzberger (@pkra) on CodePen.
and square brackets
See the Pen Stretchy brackets by Peter Krautzberger (@pkra) on CodePen.
See now, that wasn't so hard?
I suspect that if we work a bit harder to unstick ourselves from the traditions of (print) equation layout engines, then we might just find a lot of interesting solutions like this; solutions that help make equation layout on the web as easy as designing a good page layout with CSS; solutions that work with the grain of the web; solutions that perhaps fall short but help identify (and resolve) shortcomings in the web platform that affect a much wider community; solutions that help move the web forward.
PS: I've started a little collection on codepen. Ping me if you see something that might fit!
]]>But I want my own problems page, and it's my site. So to celebrate the new website, I created just that. For the first couple of problems, I've chosen to focus on the axiom of choice. And I don't think that I have much choice but to keep that interest running. But I can promise that this is not the only type of problems that I will add there.
Continue reading...]]>It is a static website, because I have been tired of the WordPress format for a long, long time now. So for the occasion, I also got a new domain, karagila.org. Isn't this nice? The old domain and all the links should work, at least for the foreseeable future. So there's no need to worry about link rot for now. But please do update your links!
Continue reading...]]>This is part of a series of posts aimed at helping my mom, who is not a scientist, understand what I’m up to as a mathematician.
Lately, Artificial Intelligence (AI) has reached some remarkable milestones. There are computers that are better than humans at the strategy board game GO and at Poker. Computers can turn pictures into short moving clips and can “enhance” blurry pictures as in television crime shows. They can also produce new music in the style of Bach or customized to your tastes. It’s all very exciting, and it feels pretty surreal; remember back when Skype video calling felt like the future?
I’m going to give you a broad overview of how these types of AI work, and how they learn. There won’t be any equations or algebra.
Before we jump into the computer stuff, let’s make our very first AI. Well, this will be more “I” than “AI”, because I want you to play a game. You are going to be the “AI” that’s going to learn a task!
I want you to play Zrist for about 5 minutes (or longer if you like it). It’s a fun little platform game. See how far you can get. My best score was 37 400. We’ll use this experience to help describe how AI works. Okay, go play now!
Welcome back! I hope you had fun playing that game.
I want you to think about these questions, and give an answer to each of them. (It’s not a test, there are no wrong answers.)
We’ll come back to your answers in a moment. For now, I want you to watch a bit of a video of an AI (called Mar I/O) learning to play the original 1985 Super Mario Brothers. Watch maybe the first 4 or 5 minutes, and then skip to the middle of the video. You only need to watch a little bit to get the sense of what’s going on.
(If you like this, you can watch a livestream of Mar I/O’s attempts to beat the game level by level.)
First of all, this program starts off only knowing a couple of things:
Here are some things it doesn’t know:
If you’re interested, Mar I/O is a Recurrent Neural Net. There are other types of AI, but this is the one we’ll look at today.
So at first it tries random stuff to increase its fitness score: jumping, standing still, ducking, running left, and none of these seem to increase its fitness. Then, when it presses right, mario starts progressing in the level and its fitness score goes up.
This is called training the AI. It measures its progress against a fitness score, and it reinforces behaviour that increases that score. i.e. It starts to favour pressing right because that seems to increase its fitness score.
This works great until it gets to the first enemy and mario runs right into it and dies. After a couple more tries, it starts to experiment some more (just like it was trying random things at the beginning of the level). Around the 2:20 mark of the video, Mar I/O presses the jump button right before the enemy and successfully clears it, allowing mario to move further right and increase its fitness score.
To recap:
Let’s go back to the platform game you played and look at how you learned to play the game.
What was the goal of the game?
How did you know you were doing well at the game?
I asked you to get as far in the level as you could; that was your goal. The game kept track of it by telling you your current high score. That was your fitness score!
How did you adapt to the rules changes? Did you get them on the first try?
If you’re anything like me, when the rules changed for the first time you thought, “Oh crap, what’s this?”, and then promptly died when the screen said “Mode: lag”. What were you supposed to do?! No one told you what to do!
When my character turned invisible, the screen stopped scrolling and I wasn’t sure what to do. At that point I just pressed buttons until it started to scroll again; i.e. I tried random things when I got stuck. As I continued to get stuck and unstuck, I recognized that I was getting stuck at the short walls, and that jumping over them saved me then. Trying the same trick saved me again when I was invisible. i.e. I was training on the short walls.
This is very similar to how Mar I/O trains and learns.
For comparison, here’s a video of one of the best Mario players in the world, CarlSagan42, taking 18 hours to beat an extremely difficult fan-made level. (Warning: there are a bunch of swear words.)
Notice a couple things:
These are all in common with Mar I/O.
How did you make decisions about what to do next? (What did you look for, and what did you ignore?)
In Zrist, you were probably looking for gaps (to jump over), those horrible red death blocks, and big walls to slide under. For each of these you developed a reaction: “When I see a gap, then I press C (to jump over it)”.
For each of these you had to remember a task: If I see a gap, then I jump over it.
For AI like Mar I/O, it stores these tasks by associating visual cues and inputs with button presses. For example, when it sees a wide open space it learns to press the right button. When it sees a gap in the ground it learns to press the A button (to jump).
Now Mar I/O doesn’t have any extra code which tells it “this is what a pit looks like” or “this is what a pipe is” or anything like that, (although it can see enemies as black tiles, it doesn’t know what an enemy is).
Each time it succeeds at increasing its fitness score it strengthens the connections between the visual cues and the sequence of button presses that got it there. Each connection like this is stored in the AI as an “artificial neuron”. So when you were playing Zrist, you probably developed a neuron relating to gaps (“If gap, then jump”), one for tall walls (“If tall wall, then slide”), and many others.
The very cool thing about modern AI is that you typically don’t need to tell it what or how many artificial neurons to make ahead of time; Mar I/O adds neurons as it learns. It’s just like how you didn’t need to know how many types of obstacles you would face in Zrist: you built up a list as you went. This is very powerful!
The flip side to this is that after Mar I/O learns to beat a level, we humans will have a hard time understanding what it’s using to make its decisions. It won’t always be clear to us what visual elements (called “features”) it’s using to make its decisions.
Hopefully you see some of the parallels between the way AIs learn things and the way humans learn things. There are a lot of similarities. Mimicking human learning has been very useful for creating AIs.
I’m going to point out a couple other ways that humans learn that help illustrate ways in which AI can learn.
Have you ever driven somewhere familiar and then forgotten how you got there? You were on autopilot. Similarly, have you ever been doing something with your hands, like playing the piano, but when you stop to think about what you’re actually doing, the task suddenly becomes much harder. This sort of muscle memory is very similar to what Mar I/O is doing. It learns sequences of moves and button presses, but there is no underlying reasoning.
I skipped over a big part of Mar I/O’s learning, which is that it actually contains many different “styles” of players (called species); it’s not just a single mario learning. After each species completes about 10 attempts at beating the level, we rank the species by which achieved the highest fitness. We then delete the bottom 10% of the species and replace them by blending some of the best species (in a process called breeding). This ensures that if one of the mediocre species discovers something useful (like shooting fireballs can kill enemies) it still has a chance to give that idea to the best performers. Similarly, the best performers get to share their ideas with the mediocre performers.
One of these processes is called a generation. For easy levels, Mar I/O only needed 40 or so generations. For difficult levels, Mar I/O needed over 250 generations! It can take a long time for these random mutations to produce helpful effects.
If this feels a lot like evolution, well that’s because it is! These AI learn by evolving and refining their strategies. This is a very deep and powerful idea, but I’ve already gone on long enough, so I’ll save it for another time.
The advancement of AI evokes many feelings: Awe and wonder, but also fear and skepticism. So I’ll end this post talking about what the future might look like.
AI are machines. The term artificial intelligence might better be described as artificial skill. Mar I/O is only able to maximize a fitness score. It’s quite good at that, but that’s the only thing it can do. This AI is highly specialized to Super Mario Brothers. While it’s possible that the underlying Mar I/O code can be adapted to other games (like Mario Kart), it requires human knowledge, judgement and skill to adapt it to other settings.
We don’t expect that Mar I/O will ever turn into a killer robot. At its core, Mar I/O is a (complicated) machine that presses buttons and is good at increasing a number (its fitness score).
If you want to learn more about AI, here are some good resources based on your background.
I just want nice pictures and videos. NO MATH!:
I am comfortable with the topics described here, but want a bit more substance:
I have a degree in math or computer science and want all the details. Leave no stone unturned:
Well. Actually no. When I was a dewy-eyed freshman, I had taken all my classes with 300 students from computer science and software engineering (Ben-Gurion University has changed that since then). Our discrete mathematics professor was renowned as somewhat careless when it comes to details in questions and stuff like this (my older brother took calculus with the same professor about ten years before; one day he didn't show up to class, and when my brother and two others went to see if he was at his office, he was surprised to find out that it was Tuesday).
Continue reading...]]>In the meantime, here’s a nice graph. It answers a question posed on Reddit that uses chromatic numbers to solve a real-life problem!
Here’s another irrelevant picture.
It's also difficult because most people in this field like this confusion, especially if they have a stake in it. It's obviously a better sales pitch to say you're helping all of STEM even if you're actually working on a set of (arguably tricky) visual/print layout techniques. I don't want to sound too cynical here; for many people it does come from the heart, they think they are helping STEM this way and it is what drives them. Besides, as they say, you cannot change others, only yourself.
These days I spent much more time on the document level and, mostly, on mathematical documents. That brings up a slew of interesting problems but many are too ephemeral to share. The other day I had a particularly interesting piece of content as it highlights some aspects of the problem of this identification.
In this paper you find the following
The layout captured in this image combines a label (5.4) with an ordered list of three mathematical statements, one of which includes a sublist of two items. Of course, these statements include quite a few bits of equational content but those aren't that important here. Instead, what's interesting is that a stretchy brace is used as a visual cue that connects the single label with the list of statements, aligning its center with the label and extending to the height of the list.
How do you realize this kind of layout on the web? (And, for that matter, in LaTeX?) Before answering that, it's worth diving a little deeper.
There are two conflicting details here. On the one hand, the label (as per source and context) is actually an equation label. This means the authors intended this list of statements (each being a self-contained sentence with several equational elements interspersed) to be treated as a single piece of equational content. Much like tables, images, or (since we're in a math paper) theorem environments, this is an important piece of structural information and should not be lost.
On the other hand, the list is a (nested) ordered (text) list, and it is encoded as such by the authors. This is obviously an important piece of structural information and should not be lost.
And that's a bit of a problem both for the web and for LaTeX: there's no system for equation layout with a concept of ordered lists built in. And there's no text layout system with stretchy braces.
If you look in the TeX source of the paper, you'll see how this was hacked using \parbox. On the web, you have a harder time since in practical terms you can't really do this kind of hack, switching from equation layout to text layout. In theory (i.e., HTML5 spec dream land), you could try something like this:
<math>
<mtable side="left">
<mlabeledtr>
<mtd>
<mtext>(5.4)</mtext>
</mtd>
<mtd>
<mo>{</mo>
<mtext>
<ol>
...
</ol>
</mtext>
</mtd>
</mlabeledtr>
</mtable>
</math>
Now this won't work that well in real life. But the real question for me is: is that even correct (and in which sense)? This is a <math> element consisting really only of text, while the purely visual brace is the only element with "semantic" markup. Hm...
I find this one interesting because the problem is a case of visual layout clouding one's judgement. You want to use stretchy braces, so in TeX you need math mode and the rest follows pretty "rationally", no matter the hackiness. After all, it's print; no need to care about anything but the looks.
On the one hand, there's the gut reaction to say that authors should not do things like this. This may be based on the simple principle that, when you need to hack around a lot, you're probably doing something wrong.
A less toxic response may be to criticize the content structure: should this really be an equation label? Isn't it more like a theorem-environment anyway? If not, should this enumeration not be numbered as sub-equations? And isn't the brace a legacy from organizing content on a blackboard rather than something for print layout to mimic (let alone web layout)?
If I was one of the authors, I'd probably respond grumpily: how dare you question that this is the best (perhaps not good but best) way to represent this particular piece of mathematical content that I arrived at after years of study of a deep and complex research topic?
And they'd be right because this really only evades the two actual problems: the confusion of "equation" and "mathematical fragment" and the problem of stretchy characters.
On the one hand, it's clear that this is a (complicated) unit of mathematical information. It must be treated as one. And while I would argue it is not an equation/formula (and certainly not in the sense of "equational layout", let alone MathML's idea of it), if the authors want to count it as such, there should be a way. But on the web we're severely limited when it comes to marking anything "an equation", especially when structures like regular lists come into play.
From a layout perspective, however, the only notable problem is the stretched brace. It has no meaning here (if it ever has); it's merely a stylistic element to help visually connect a list with a label. It is not "mathematics" or even "equational" in any sense of the word. And yet with the current state of web technology, the only way to realize it is by using tools specialized for precisely equation layout (and usually with misleading "semantics" to boot).
But we should be able to do this, no?
Here's an example (using a technique of pure CSS stretchy braces developed by Davide Cervone for MathJax v3).
See the Pen case study: arxiv.org/1412.8106 by Peter Krautzberger (@pkra) on CodePen.
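The pen isn't reproduced here, but the skeleton of such a layout might look like this. This is my own sketch, not the pen's code: the class names are made up, and the brace is the simple border-and-radius trick rather than MathJax's technique; the point is only that flexbox gives you the centered label and a brace that stretches to the list's height for free:

```html
<style>
  /* hypothetical: label | brace | ordered list */
  .labeled-list { display: flex; align-items: center; }
  .labeled-list .label { margin-right: 0.5em; }
  .labeled-list .brace {
    align-self: stretch;          /* stretch to the list's height */
    width: 0.4em;
    border-left: 2px solid currentColor;
    border-top-left-radius: 100% 15%;
    border-bottom-left-radius: 100% 15%;
  }
</style>

<div class="labeled-list">
  <span class="label">(5.4)</span>
  <div class="brace"></div>
  <ol>
    <li>first statement…</li>
    <li>second statement…</li>
    <li>third statement…</li>
  </ol>
</div>
```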
I read up on the changes in the HTML 5.3 working draft and realized that my HTML5-ish example above (using an ordered list inside MathML) is not even valid HTML - oh my! As it turns out, the integration of MathML into HTML states that only phrasing content is allowed inside MathML token elements (and lists are not phrasing content). Well, one more reason never to use MathML on the web - but you already knew that.
]]>]]>Pass on what you have learned. Strength, mastery. But weakness, folly, failure also. Yes, failure most of all. The greatest teacher, failure is.
]]>
First off, there's Equations ≠ Math (Or: Equation layout as a print artifact) (archive.org). This is somewhat of a continuation (and hopefully a refinement) of #196.
You should also totally register for my upcoming workshop on equation rendering in ebooks at Ebookcraft in March!
]]>I've never been one for looking back at the end of a year. But since the last year was complex (and this one is set up to be equally so) I thought maybe I should motivate myself by looking ahead to the things I want to write about this year (including things in my actual schedule for 2018).
Ok, maybe stop here; it's a lot already.
]]>Suppose that f is a transcendental entire function. In 2014, Rippon and Stallard showed that the union of the escaping set with infinity is always connected. In this paper we consider the related question of whether the union with infinity of the bounded orbit set, or the bungee set, can also be connected. We give sufficient conditions for these sets to be connected, and an example of a transcendental entire function for which all three sets are simultaneously connected. This function lies, in fact, in the Speiser class.
It is known that for many transcendental entire functions the escaping set has a topological structure known as a spider’s web. We use our results to give a large class of functions in the Eremenko-Lyubich class for which the escaping set is not a spider’s web. Finally we give a novel topological criterion for certain sets to be a spider’s web.
]]>
The Fatou-Julia iteration theory of rational and transcendental entire functions has recently been extended to quasiregular maps in more than two real dimensions. Our goal in this paper is similar; we extend the iteration theory of analytic self-maps of the punctured plane to quasiregular self-maps of punctured space.
We define the Julia set as the set of points for which the complement of the forward orbit of any neighbourhood of the point is a finite set. We show that the Julia set is non-empty, and shares many properties with the classical Julia set of an analytic function. These properties are stronger than those known to hold for the Julia set of a general quasiregular map of space.
We define the quasi-Fatou set as the complement of the Julia set, and generalise a result of Baker concerning the topological properties of the components of this set. A key tool in the proof of these results is a version of the fast escaping set. We generalise various results of Marti-Pete concerning this set, for example showing that the Julia set is equal to the boundary of the fast escaping set.
]]>
]]>The impediment to action advances action. What stands in the way becomes the way.
Don’t fear failure. Not failure, but low aim, is the crime. In great attempts it is glorious even to fail.
This quote appears on page 121 of Striking Thoughts: Bruce Lee’s Wisdom for Daily Living. For more great quotes, check out the Wikiquote page for Bruce Lee.
]]>]]>The mathematician does not study pure mathematics because it is useful; he studies it because he delights in it, and he delights in it because it is beautiful.
Six months after I had turned in my dissertation, I have finally received the approval on the damn thing.
Continue reading...]]>We survey the dynamics of functions in the Eremenko-Lyubich class. Among transcendental entire functions, those in this class have properties that make their dynamics markedly accessible to study. Many authors have worked in this field, and the dynamics of functions in this class is now particularly well-understood and well-developed. There are many striking and unexpected results. Several powerful tools and techniques have been developed to help progress this work. We consider the fundamentals of this field, review some of the most important results, techniques and ideas, and give stepping-stones to deeper inquiry.
]]>
As I wrote last time, the usual way to describe MathML's double-spec is this: Presentation MathML is for layout and Content MathML is for semantics.
Last time I wrote about how semantics are effectively absent from MathML on the web. Unfortunately, layout does not fare much better.
So at first the spec will tell you that's absolutely not true:
Presentation markup [...] is used to display mathematical expressions; and Content markup [...] is used to convey mathematical meaning.
So you will naturally start by thinking Presentation MathML is what you're after regarding equation layout (not mathematics).
The spec, however, throws you a curveball:
MathML presentation elements only recommend (i.e., do not require) specific ways of rendering; this is in order to allow for medium-dependent rendering and for individual preferences of style.
So the Presentation MathML spec is about layout but does not actually specify how that layout should work.
This is obviously a problem when you want to see standards-compliant implementations in all major web browsers (even if it's just 4 engines). Usually (say, with CSS or SVG), a standard assures developers that they are able to get consistent results across systems. Of course any standard will have gaps and edge cases but then, at least, specs can be clarified and either fixed in both standards and implementations or a standard can be identified as problematic (and ideally a less inconsistent standard can replace it).
However, this is not some kind of accident and you can easily find many statements in the same vein throughout the spec. For example, the section for <mfrac>
says effectively nothing about the spacing between numerator, fraction line, and denominator.
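To make this concrete, here is a minimal fraction in Presentation MathML (a standard-conforming fragment, not taken from the spec itself); the spec tells you the numerator goes above the denominator but says nothing about how much vertical space a renderer should leave around the fraction line:

```xml
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <!-- first child is the numerator, second the denominator -->
  <mfrac>
    <mn>1</mn>
    <mrow><mi>x</mi><mo>+</mo><mn>1</mn></mrow>
  </mfrac>
</math>
```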
Or you get gems like this one from <mscarries>
This means that the second row, even if it does not draw, visually uses some (undefined by this specification) amount of space when displayed.
In contrast, start with any random part of contemporary CSS, e.g., flex containers, and you will quickly find yourself down the rabbit hole of the quite meticulous discussions of layout specifics that produced it.
In other words, Presentation MathML does not even want to give you the same (messy) path to improvements as we're used to on the web (and we're still ignoring the practical problem that the Working Group is dead in the water so no fixes can be made).
At this point you might be wondering how that could be possible. After all, there are plenty of equation rendering engines out there that handle MathML. How do you reconcile this?
I think it is fairly simple (yet no less problematic). Presentation MathML assumes an implementor already knows how equation layout is supposed to work; in fact, reading the spec, you will get the feeling that it assumes you already have an equation layout engine at your disposal and are merely adding MathML support, interpreting it in your engine.
In other words, Presentation MathML does not specify layout but is an abstraction layer, an exchange format for equation layout engines, a format that a rendering engine can (easily) make sense of within its already existing system.
(And yes, you could troll MathML enthusiasts by saying that Chrome and Edge support all layout requirements of the MathML spec. But please don't.)
Since I considered the value of Presentation MathML's semantics in the previous post, it's only prudent to double-check the value of Content MathML for layout. Unsurprisingly, Content MathML really does not want to help either. The spec speaks quite clearly:
[...] encoding the underlying mathematical structure explicitly, without regard to how it is presented aurally or visually,
So no visual layout anywhere.
By the way, it seems easy to misunderstand this point in the spec. Of course we can render MathML content - lots of tools do. But what no tool can rely on is the MathML spec when it comes to deciding on how to render Content MathML content visually. As I already mentioned, few rendering engines are "MathML-based" because they literally cannot be, they need to base their layout decisions on a more reliable source.
The other side of that coin is that you might disagree about how to visually render Content MathML. In real life (at MathJax), we've actually had one or two complaints over the years about how our Content-to-Presentation conversion is wrong.
This is really just the core, the fundamental issue around MathML layout on the web. Even if you make the assumption that an equation layout engine should be added to browsers, there are more problems. And then we're still not talking about the problems of the shoddy implementations in Gecko and WebKit. Let's see if I'll get around to that. For now, let's continue the 10,000 ft view a bit longer.
]]>Here is the video:
Continue reading...]]>Yet, the fact that mathematical objects are real is the daily experience of mathematicians (though few would ever claim this, because they are much too cautious). I’d like to try to explain this experience. Since I am not a philosopher, there will be no robust philosophical arguments. I will not discuss ontology. Try not to be disappointed.
Imagine you were an astronomer. (No, go on. Give it a go.) You point your telescope up in the air and – lo – a new star appears. You call a friend, and tell her the news. She points her telescope in the same place and – lo – the same star. You write up your discovery, and a team of astronomers in Belgium train their more powerful telescopes on the same spot, and describe the colour and size of the star. You have another look, and see they are correct. An international team in Chile use radioastronomy to discover that your star is actually two stars, orbiting around each other. It is later discovered that there is a large exoplanet orbiting one of these stars.
Now – I guess – it could be argued that there is no star. It could be argued that you invented it, and then let everyone else know how to do the same. The star is some sort of socially constructed illusion. In my view this is the purest nonsense. There is a real star, it is really out there. That, after all, is the belief of (most) astronomers. Otherwise, we might as well give up the whole astronomy thing altogether.
So I am getting to my point. Thanks for being patient.
My point is that this is also the daily experience of mathematicians. Let’s suppose I am studying transcendental dynamics (as I do), and I study a new set which seems of interest (well, you never know). I email a colleague, and they confirm the set looks as I said, and maybe they spot something else; perhaps it has dimension one, or is dense in the plane, or something technical like that. We write a paper. A team of Belgian mathematicians read our paper, and note that, in fact, our set has other interesting properties. They email us and we find that this is indeed the case. More papers follow, and then someone (in Chile, perhaps) observes that our set is actually the union of two interesting sets, and gives some further properties of each. When we look into it, we see that this is indeed the case. This is how (pure) maths is done.
Essentially this story (for it is a story; I have not discovered any sets of interest to Belgians) is no different to the story about the star. And it is very difficult not to believe the punchline is the same; the set exists ‘outside our heads’, just as the star exists ‘outside the heads of the astronomers’. (I’m not trying to claim mathematical proof here; I’m just trying to communicate how it feels to do mathematics).
A real-life example of this story is the famous Mandelbrot set. This was first discovered in the 1970s, when it was very difficult to draw a picture of it. But mathematician talked unto mathematician, and more and more properties were discovered. Technology has moved on, and now highly detailed pictures exist. It is a remarkable object: for example, the set is so intricate that if you try to draw a line around the edge, you will find that your ‘line’ is actually two-dimensional. It is even more intricate than the coast of Norway. Nonetheless, all mathematicians would agree they have been studying ‘the same thing’ all this time.
So it seems undoubtedly true that mathematical objects exist. I am as confident in the existence of the Mandelbrot set, or the sine function, or Riemann surfaces of genus zero as I am in the existence of Belgium. When we study mathematical objects, we discover them – we do not invent them. There are things that exist that are not material objects.
You may feel that this is silly, because if they exist, then where is their home? (It is probably not Belgium). How do we see them? What are they made of? These are good questions.
]]>
This one's slightly tricky. And I also have a confession to make. In the first two parts I pretended I've written about MathML when I really only wrote with half of it in mind.
One problem of the MathML spec in general is that it's really two, quite distinct specs: Presentation MathML and Content MathML.
Now the common description is: Presentation describes layout and Content describes semantics. I think one of the problems for MathML in general is that it is not that easy.
So obviously that's wrong. After all, there is Content MathML and it specifies an enormous amount of semantics. Such an enormous amount, actually, that you can express lambda calculus. You also get a whole bunch of fantastic elements (such as <reals>
) and on top of that built-in, infinite extensibility via content dictionaries. So you can do quite literally everything in Content MathML.
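For a flavor of what this looks like, here is sin(x) in standard Content MathML; the markup encodes the application of the sine function to a variable, with no commitment to any particular layout:

```xml
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <!-- <apply> applies the operator (<sin/>) to its arguments (<ci>x</ci>) -->
  <apply>
    <sin/>
    <ci>x</ci>
  </apply>
</math>
```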
So what's the problem?
It's the simplest and most practical problem: Content MathML plays no significant role in real-world documents. You can find it in niche projects (such as NIST's hand-crafted DLMF), you can find it hidden in commercial enclosures (such as Pearson's assessment system, where I wonder why you'd need its expressiveness), and you can get it by exporting from computational tools (Maple, Mathematica, etc.). But in real-world documents, it's non-existent.
I can't really tell you why that is. Perhaps, like most formal abstractions of mathematical knowledge, it ignores the practicalities of humans communicating knowledge. Perhaps, when it comes to its computational prowess, it fails on the web because it cannot compete with the practicality of JavaScript or server-based computation (à la Jupyter Notebooks).
I also have heard repeatedly that it's simply too difficult to create. And from my limited experience with MathJax users it doesn't help that the spec itself warns people that it encodes structure without regard to how it is presented aurally or visually
, i.e., it's sometimes not clear how Content MathML should be rendered.
Ultimately, lack of content (pardon the pun) makes Content MathML of little relevance on the web. (An interesting but separate question might be whether the way Content MathML expresses semantics fits into the style that HTML has adopted in recent years; another time perhaps.)
But there's actually a second problem for MathML and semantics on the web here: Presentation MathML.
It's easy to think that Presentation MathML specifies at least some semantics. And if it specifies some, maybe it's a good basis to build upon. After all, how semantic was HTML really, back in the day?
For example, there's the <mfrac>
element and you might think it specifies a fraction. Unfortunately, you'd be wrong. The spec itself speaks of fraction-like objects such as binomial coefficients and Legendre symbol
which are about as far from fractions as you can think of. Of course you can find even more egregious examples in the wild such as plain vectors encoded with mfrac
. Similarly, <msqrt>
does not represent square root but root without index and it is used accordingly in the wild (while <mroot><mrow>...</mrow><none/></mroot>
constructions are practically unheard of).
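To illustrate the spec's own "fraction-like" reading of <mfrac>, here is the way the MathML spec suggests marking up a binomial coefficient: the same element, with the fraction line suppressed via the linethickness attribute:

```xml
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mrow>
    <mo>(</mo>
    <!-- linethickness="0" hides the bar: layout of a fraction, semantics of who-knows-what -->
    <mfrac linethickness="0">
      <mi>n</mi>
      <mi>k</mi>
    </mfrac>
    <mo>)</mo>
  </mrow>
</math>
```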
The point is that you can't complain about this kind of abuse of markup, because Presentation MathML simply does not make this kind of distinction.
Now for a long time, I thought there might just be enough semantics in Presentation MathML to get away with it. Working with Volker Sorge and his speech-rule-engine, and integrating SRE's semantic analysis into MathJax, meant a deep dive into what kind of structure you can find in Presentation MathML. And as amazing as its heuristics are, it becomes clear how brittle they remain and how quickly you find (real-world) examples that break things. This isn't to say you can't guess the meaning of a large selection of real-world content. It just makes it clear that you are working with a format void of semantic information. (And we're not talking about tricking machine learning models here, just run-of-the-mill content.)
When you get down to it, I would say that there are effectively only two elements in Presentation MathML that appear reliably semantic in the real world: <mn>
and <mroot>
. And even these examples are stretching it. For the former, the spec suggests that <mn>twenty one</mn>
is sensible markup. For the latter, it seems to be mostly accidental that roots simply haven't been sufficiently abused in the literature (yet) and thereby retain a unique place of being a visual layout feature that is used consistently to describe (many different concepts of) "rootness". (For the record, there's also <merror>
which is pretty solid, semantically speaking; just not very mathematical.)
There are other, more indirect signs of the failure of MathML to specify semantics. For example the absence of typical benefits of semantic content such as usable search engines or knowledge management tools. But that's a very different problem to discuss.
Anyway, so MathML that specifies semantics could exist but does not. On to layout.
]]>One advantage of MathML on the web is that it's XML, i.e., it looks a lot like HTML and SVG and does not require a lot of extra tooling (e.g., parsers). In addition, since you can preserve its structure when converting to HTML or SVG, you can hack MathML markup to improve the result on the web, e.g., by adding CSS or ARIA.
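A minimal sketch of that kind of hack (the class name and label text here are invented for illustration; aria-label is standard ARIA, usable on the math element):

```html
<!-- hypothetical markup: "display-equation" and the label are made up for this example -->
<math class="display-equation" aria-label="one half">
  <mfrac>
    <mn>1</mn>
    <mn>2</mn>
  </mfrac>
</math>
```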
Still, being XML is obviously not enough to make anything a good web standard.
Obviously this depends a lot on what qualities you are after but I've found it to be a common misconception that MathML is somehow universally superior to other ways of marking up equations. That misconception is getting it backwards.
Like any exchange format, MathML's design is more that of a least common denominator between document systems and, in particular, between visual rendering engines for equational content. By definition, this means it is the least expressive, least flexible, and least powerful format.
A good exchange format would of course be a great thing to have, and it can still be very powerful if the ecosystem's diversity is not too great. Unfortunately, that's not the case for MathML, where rendering engines for equational content range from ancients like troff or TeX to modern word processors, computer algebra systems, and more.
So while it is easy to create MathML from other equation input formats, it is effectively dumbed down in the process. Conversely, it is not easily interpreted in another system without significant loss of information. This is of course nothing special; just look at binary image formats or text processing. But this is a problem for MathML because it is designed for exactly this purpose; yet it neither reaches the quality of, say, SVG as an exchange format for vector graphics, nor does it provide real-life advantages over, say, subsets of LaTeX notation (e.g., in jats4reuse) or even ASCII-style notation.
A particular example of this loss of information is that importing MathML into other systems, while often possible, is rarely re-usable. This is a bit like importing a binary image format into another editor; yes it works, but there are limits to how well you can edit the import without re-doing the whole thing. To give a simple example, David Carlisle's pmml2tex provides perfectly nice visual output in print but rather unusual TeX markup.
The fact that after 20 years there are virtually no rendering systems out there that use MathML internally indicates that MathML fails to provide a decent solution for another basic use case.
After these basic, to some degree social problems, let's talk about core problems of the spec itself next.
]]>And that was fine. All three options are roughly equivalent, in the sense that they present you the material in a very structured way (or they at least intend to). You don't reach the definition of \(\aleph_0\) before you define equipotency and cardinality. You don't reach the definition of a derivative before you have some semblance of a notion of continuity. Knowledge was built in a very structured way. Sometimes you use crutches (e.g. some naive understanding of the natural numbers before you formally introduce them later on as finite ordinals), but for the most part there is a method to the madness.
Continue reading...]]>After finishing MathML as a failed web standard last year, I've been meaning to write a follow-up to discuss fundamental issues I see with MathML as a web standard. I found it very difficult, even painful to do so. Over the past few years I realized that most people simply don't know much about both MathML and modern web technology. I don't claim I'm a great expert myself but running MathJax for the past 5 years has given me some ideas.
Caveat Emptor. The problems I hope to outline may seem to be a general rejection of MathML as a whole; that's not what I'm after. It'd actually be silly to try to bash MathML because it is simply too successful. I also actually kind of like MathML, despite its many horrors; I think it was a great idea 20 years ago and it's still useful to hack it to get to better things.
Primarily, what follows is the result of me trying to understand why MathML failed on the web. I think there are a few key reasons for its failure. My motivation is to form an opinion on whether MathML is salvageable as a web standard or fundamentally unfit to be part of today's web technology (and should then best be deprecated).
The success outside of the web is an important factor as it limits how much MathML can realistically change. So let's start there.
MathML is the dominant format for storing equations in XML document workflows today. It's a reasonable assumption that the vast majority of equational content today is available in (or ready to convert to) MathML: virtually all STEM publishers use MathML in their workflows, major tools like Microsoft Word (favored throughout education) use formats intentionally close to MathML, and most other forms of equation input can be converted more or less easily.
MathML has a long history as a W3C standard and it's natural to think that MathML's success is somehow connected to the web's success.
However, that's not the case (except perhaps by making an ultimately empty promise). The <math>
tag was first proposed in HTML 3.0 in 1995 but was removed from HTML 3.2 in 1997. It was transformed into one of the first XML applications, and MathML was born in 1998 and lived in XML/XHTML limbo for the next decade. Finally, MathML returned to HTML proper with HTML5 in 2014.
It should seem obvious that because MathML was not part of HTML (or any other web standard implemented by browsers), it could not have succeeded because of the web's success. Instead, it was MathML's success outside of the web that allowed it to survive and eventually make it back into HTML5.
So there naturally was a disconnect. Unfortunately, even when MathML came back in HTML5, that disconnect remained effectively unchanged. A simple example is the timeline. MathML 3's first public working draft was published in 2007, the year the HTML WG was just being re-chartered to bring together HTML5 (which took 7 years). The difference between the early working drafts of MathML 3 and the eventual REC (in 2010) seems to include little fundamental change (lots of details being hashed out, but the core seems in place pretty early on). Only a handful of changes were made between 2010 and 2016 (when the Math Working Group shut down). It seems only mild hyperbole to say that MathML 3 was effectively done before HTML5 was really getting started.
Overall, it seems clear from the various specs that the return to HTML5 had not much influence on MathML — or vice versa. For example, there is no hint of giving MathML the "CSS treatment" that HTML got (e.g., clarifying HTML layouts like tables via CSS) nor is there a sign that HTML and CSS ever considered what MathML brought to the table in terms of semantics and layout. This disconnect (and the lack of interest in overcoming it later on) is likely the root cause for MathML's failure.
I think one of the reasons why this disconnect was not overcome is the success of MathML and where that success occurred.
If you speak to early adopters of MathML, you will notice that MathML's success was due to its efficacy in print workflows (with rendering to binary images perhaps being a nice extra in the pitch). That's what XML workflows were producing and while the web was a nice thing to hope for, if MathML hadn't done a good job in print, it would not have gone anywhere in XML-land. This also means that MathML suffers from the general problem of equational content (shameless self-plug).
I suspect this success made the MathML community a bit blind to the fact that the web platform was moving away from any common ancestry there may have been, especially on the implementation level but perhaps more importantly in terms of being a rapidly growing technology practiced by a similarly growing group of specialists (aka web developers).
A sign of this effect is that (especially among non-experts) many people seem to have confused the hopes for MathML in HTML5 with a promise and, in extreme cases, some sort of moral obligation for browser vendors to implement MathML support natively. In retrospect, I think there may have been a short window where things could have turned out differently (and I hope I'll get to that idea later on). More likely, my brain is playing tricks on me because I shared that hope.
In any case I find the history to be rather odd, overall. A failed web standard became successful in print production and that success was so significant that it was reintroduced to HTML.
What I think is often missed when discussing MathML is how its success outside the web took its toll on the MathML specification. Its development focused almost entirely on legacy (print) content and was completely detached from the random twists and turns of the more successful web standards (first and foremost HTML and CSS). MathML neither tried to align its own direction with the platform nor did it try to take inspiration from or to influence those developments.
Finally, I think the particulars of print (and image) rendering of MathML have produced a crucial misconception about MathML: the fact that MathML works well in those settings does not imply that it works well as a web technology.
Next I'll try to step a bit back and maybe talk about some of the basics of the spec.
]]>There is still much work to be done. The main work to be done has to do with comments. Currently, there is no way to comment on posts here. Indeed, since this is a static site, dynamic features like comments are not available straight out of the box. An indirect consequence of this is that old comments haven’t been migrated from WordPress. I’m keeping the old WordPress site alive in its current state until comments have been migrated.
Another item on the to-do list is to integrate the new site into Boole’s Rings. Currently, most of the Boole’s Rings sites are WordPress-based. Peter Krautzberger moved to GitHub Pages two years ago; it seems from my current experience that migrating is much easier now. I don’t know how well integration will work, but the experiment itself is well within the spirit of Boole’s Rings.
These next steps will happen over the next weeks (… or months! … or years?). In the meantime, there are a few small intermediate steps like setting up a custom domain name. Anyway, you can track my progress on GitHub…
]]>When I look back at some of the proofs I wrote when I started work on my PhD, I realise how much I have learned. My supervisors – who were very gracious, very helpful, and very dedicated – used to cover my early work in red ink. I then learned how to write a proof through an iterative (and very painful) process, in which I would write something, receive the red ink, fix those problems, receive further red ink, and so on. I became very familiar with red ink. Very, very familiar.
In this note I’d like to comment on how one might spot problems oneself, rather than depend on one’s supervisors in this way. This is not a trivial task, but a really important one. Perhaps I can offer a few pointers which might be of help.
Let’s suppose you have proved a result. You’ve written it all up to your own satisfaction, and wish to share your achievement with your fellows. I began to make a list of the things you should do, but it was very long, exceedingly tedious, and all boiled down to the word check. Which is a bit boring. So let’s try the following, which is less prescriptive if possibly less all-encompassing. It’s just three words. How hard could that be?
First forget. In developing your proof you, no doubt, came up with all sorts of ideas and intuitions and implications and pictures. You have to (somehow) now lay these all to one side. Your reader will not have any of this in front of them, so you have to be sure that none of your work now depends on or uses anything other than the words in front of you. (Incidentally, the best way to do this is to put your proof to one side for a few months, and then come back to it. You’ll be astonished how terrible it will look.)
Second focus. Focus on the words in front of you, and what they say. This is easier said than done; because you expect your words to say one thing, you will tend to interpret them in that way. Try not to. Look at what is written and nothing else.
Third check. Read what you have written, word by word, sentence by sentence, and ask yourself the question “why on earth does that follow?” Notice the negation; if you expect things to be wrong you are more likely to spot mistakes than if you expect them to be correct. In my personal experience they are probably incorrect.
I could probably make a list of common mistakes, but it really is hard to make that interesting. So I will highlight just three (three is a useful number here):
The word “clearly”: It is very easy to make the mistake of writing “clearly XYZ” when what you mean is “XYZ seems pretty darned obvious to me but I can’t quite work out why”. If you can’t work out why XYZ is true, chances are that it isn’t.
Things that are true but don’t actually follow: This is a very easy mistake to make; you write something like “Since X, then Y” and assume it is OK because Y really is true. But the claim here is not merely that Y is true, and that is not what you need to check. You need to check that Y follows from X and nothing else!
Failure to satisfy all necessary conditions: If you use another result (maybe a book result, or a lemma of your own from earlier) you need to be sure that all the conditions are checked. This is especially true of a book result – if that says something like “If A, B, C, D and E, then F”, then there is no chance to use this result if only A, B, C and D are true.
Yes, this is all amazingly tedious. Yes, this is a very lengthy process. No, there is no alternative (apart from asking a friend to check). Yes, you will be a better mathematician when you can do all this. No, I do not claim to be able to do this all the time myself. Yes, I welcome feedback and other suggestions.
]]>Hamkins' multiverse is essentially taking a very ill-founded model and closing it to forcing extensions, thus obtaining a multiverse which is more of a philosophical justification, for example every model is a countable model in another one, and every model is ill-founded by the view of another model. The problem with this multiverse is that if we remove the requirement for genericity, then everything else can be satisfied by the same model. Namely, \(\{(M,E)\}\) would be an entire multiverse. That's quite silly. Moreover, we sort of give up on a concrete notion of natural numbers that way, and this seems a bit... off putting.
Continue reading...]]>When people speak about math content in the context of the web they usually mean equational content (or simply equations). That is, they don't mean content in a mathematical field (which often enough does not qualify as equations), they simply mean something that looks like an equation.
Now you might argue that an equation in physics is still basically mathematical content but in reality both mathematician and physicist will frequently disagree with you (and each other, possibly explosively so). You quickly get to the edge when considering chemical equations and if you want to classify the nonsense notations in the life sciences you might question your sanity.
It's not hard to understand why this is. For example, most typesetting tools with support for equations will have some kind of math mode for them. But I think it's worthwhile differentiating the two, so I'll try my best to stick to equational content. On the one hand, the importance of math on the web is often exaggerated because it is really non-mathematical equational content that makes up the majority (and even that is a blip on the radar). On the other hand, it does not help to confuse a field of study with what effectively comes down to a layout tradition.
Also, sorry-not-sorry for misleading you with the title here.
The fundamental problem of equational content is that, well, it's simply pretty terrible all around. It's convoluted, extremely compressed, archaic, and generally undecipherable. It destroys academic careers by the millions, and it can often only be understood when you can see it written live (i.e., animated). At its best, equations are like good abstract drawings; at worst (usually?), they're deafening gibberish.
Stray thoughts.
One. I always thought Bret Victor's (in)famous Kill math was largely wrong about the specifics of his criticism (for one, he seems to dismiss the incredible power of compression that differential equations exhibit - along with the obvious problems that stem from compression). But he is of course utterly right with his incredible work exploring how modern media like the web allow for a much richer expression of human thought, one that opens the content up to more people, often by adding means of interacting with it, especially means for untrained people (like tiny humans).
Two. Every once in a while I've wondered: what if Tim Berners-Lee had given the web some basic building blocks for equations. Just a fraction and a square root; maybe instead of image renditions of print equations we'd have immediately seen the same creativity applied to equations as there was with hacking general layout (1px GIF anyone?). Of course, that's hopelessly romanticizing the evolution of the web. Why can't I stop wondering.
Three. On and off (and I've come full circle on this several times) I've wondered whether math is ahead of other sciences on the web. I mean the <math>
tag was proposed in fricking HTML 3. So is math ahead? Maybe. But then why is scientific content so much more vibrant and transformative on the web compared to math?
The most obvious flaw of equational content is that it's deeply rooted in print. Given the limitations of print technology, equational content has needed to adopt bad practices for such a long time that many people consider them good.
I'm not (just) thinking about the problem of general comprehension as it is too tainted by poorly trained practitioners on all levels. Sure, equational content is often more difficult to parse than necessary but that's not different from poorly phrased prose.
The main problem is the tradition of abusing print technology to get more and more variations of notation squeezed into the medium. The constant abuse of sub- and superscripts is a great example; if you need to add a variant of an object you've already introduced in your notation, just slap some sub/superscripts around it, et voilà, a new object.
The abuse of letters with different fonts is another horror in equational content. If you have ever run into a paper where a dozen variations of G
appear, denoting a convoluted set of somewhat related concepts, you'll know this horror well. Unbelievably enough, Unicode has deemed this abuse of notation important enough that we now have such wonders as the code point MATHEMATICAL BOLD ITALIC CAPITAL G in the Mathematical Alphanumeric Symbols
Block.
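You can verify that this code point really exists with a couple of lines of Python, using only the standard library:

```python
import unicodedata

# U+1D46E lives in the Mathematical Alphanumeric Symbols block (U+1D400-U+1D7FF)
ch = "\U0001D46E"
print(ch)                    # 𝑮
print(unicodedata.name(ch))  # MATHEMATICAL BOLD ITALIC CAPITAL G
```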
Another historic accident are stylistic separations. For example, in print it's abhorred to make math content bold when the surrounding content is bold (e.g., in a heading) yet on the web people complain that an equation in a link doesn't get the correct text decoration (what would that be??).
Obviously, there's little point in criticizing the historic development of equational content. Given that print was mostly limited to (at best) grayscale with a limited character set, naturally people had to be creative. It is amazing what this accomplished.
The real problem comes up when pretending that this tradition should do more than vaguely inform a medium such as the web. The web has so far developed without much influence from equational content. It has adopted a rather different approach to separating content and presentation, and the traditions of equational content are essentially incompatible with the web's approach.
I can find no argument for why the web stack should bend over backwards to accommodate these mostly quite bad traditions of equational content for print. This is perhaps similar to the situation of CSS paged media.
Obviously, it's not like you shouldn't be able to put traditional equational content on the web - you should (and you can very well today). But I've come to think it's perfectly fine, in fact, it is appropriate that this continues to be a difficult problem. For example, traditional equational content is almost always inaccessible (without heuristic algorithms, i.e., guessing around); it's basically a bunch of glyphs placed in weird 2D patterns (like above and below a line which in turn is magically centered on some baseline and may or may not indicate that it corresponds to the notion of a mathematical fraction). Pretending that this is a basis for accessible rendering on the web strikes me as foolish (or ridiculously zealous).
If you think that all equational content should be limited to the traditions of the print era, fine. I think humanity can do better on the web. Though I think we would need to acknowledge that the (print) traditions enshrined in equational content are flawed and should (and invariably will) be replaced with better concepts and narratives that are appropriate for this medium.
There is a nontrivial percentage of the population with some sort of color vision deficiency. Myself included. Statistically, I believe, if you have 20 male participants, then one of them is likely to have some sort of color vision issue. Add this to the fairly imperfect color fidelity of most projectors, and you get something that can be problematic.
Still, the things you can do well, you obviously should. And yet, every once in a while, somebody throws you a curveball and you just have to shout: This is why we can't have good things!
The other day on a client project, the QA specialist pointed out that the content was consistently using <em>
where it should be using <i>
. Can we fix that?
The semantics of these and related HTML5 tags is a bit subtle, but there is a difference and it should be easy to just replace one with the other, right? Right? Famous last words.
At first sight, this was easy. The HTML came out of some JATS-like XML, which was using <italic>
elements. So map to <i>
, right? But hold on, you'll say, HTML5 reinterpreted <i>
to no longer indicate layout but semantics; it now indicates a change of voice. Unfortunately, JATS's <italic>
is focused on the typographic aspects, so it does not really help. Then again, it could help a little bit more because <italic> allows for a toggle attribute to indicate emphasis. Sadly, the actual XML did not provide that information.
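To make the mapping concrete, here is a minimal sketch of such a JATS-to-HTML pass, honoring the `toggle` attribute when present and falling back to `<i>` otherwise. This is an illustration using Python's standard library, not the actual toolchain from the project; the element and attribute names are the real JATS/HTML ones.

```python
import xml.etree.ElementTree as ET

def map_italic(jats_fragment):
    # JATS <italic toggle="yes"> signals emphasis -> <em>;
    # plain <italic> is merely typographic -> <i>.
    root = ET.fromstring(jats_fragment)
    for el in list(root.iter('italic')):
        el.tag = 'em' if el.get('toggle') == 'yes' else 'i'
        el.attrib.pop('toggle', None)
    return ET.tostring(root, encoding='unicode')
```

Of course, as noted above, the real-world XML rarely carries `toggle` at all, which is precisely the problem.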
Since the piece of the tool chain that turned <italic>
into <em>
was actually my doing, I was clearly at fault. However, I had my reasons. Namely, that all of this came from a LaTeX source and in this real world LaTeX content, \emph{}
and its brethren were the dominant source for <italic>
. So clearly that should be <em>
in the end?
Now of course, almost all LaTeX authors don't give a damn beyond getting that PDF to look how they want it, so while they mostly use \emph{}
-like macros, they mix it freely (and inconsistently) with \textit{}
and its brethren. So the conversion (written by an absolute expert) rightly says "screw it, all I can say is it wants italics here", thus merging them both together.
It's my job to dig deeper than that so I took the time to look through the actual content available. Not the TeX, not the XML but the actual writing.
Lo and behold, the actual text use is pretty different: by far, most occurrences of <em>
happened in the context of quick, inline definitions. Invariably, you find these in introductions of mathematical research articles where you include commonly known definitions from a field so as not to cause bloat (because publishers and editorial boards continue to care more about page numbers than well documented research results).
A definition does not really fit either <i>
or <em>
. The closest you get in the spec is an example of using <i> to reference a past definition.
<p>The term <i>prose content</i> is defined above.</p>
To make matters worse, there is of course an entirely different element that fits perfectly:
The <dfn> element represents the defining instance of a term.
Perfect match for the vast majority of the content in question. So we should switch everything over, right?
The answer is, of course, no. Not because some content would end up with the wrong semantics (scroll to top) but because that was not the only use I found: almost without exception, the samples include the use as a definition alongside the use as <em>
or <i>
.
And that is why we can't have good things.
All of this is about as surprising as finding a handwritten table of contents in a Word document. TeX is for print layout and font styles are used for all manners of cruelty. The question I had to answer with my client was: can we do anything about it?
In the end, beauty lies in the eye of the beholder and semantics in the eyes of the reader. We did, in fact, switch to <i>
with the plan to expose more information from the original source regarding emphasis so we can gather more data on its usage. Fundamentally, this won't help because it doesn't solve the problem of inline definitions. Still, some analysis might reveal pragmatic improvements down the line.
In the end, it's not hard to argue that a definition that is well known in the field and that is done inline in the introduction of an article is more like the kind of reference to a definition as in the above example from the spec (in fact, often enough it is done in the vicinity of a bibliographic reference). Of course, we're still conflating \emph
and \textit
.
Now idealists will argue that authors "just" have to learn to use semantic macros in TeX. After all, there are plenty of "semantic" LaTeX packages out there; just start writing good markup already!
Besides the lack of pragmatism, the only viable solution I can see would be a LaTeX package matching specifically HTML5 markup. After all, we have the tags and they have established definitions; any "semantics" beyond that will only cause issues down the line (what if a tag is introduced to HTML but with a slightly different meaning?). Even then, it doesn't solve the social problem at the heart of so many publishing technology issues: who would make the effort and use it? It's extra work and does nothing for print; why would an author do extra work when they think print rules?
I think only someone interested in creating HTML output would make the effort. And at that point you have to ask: why would those authors bother with an archaic programming language like TeX to write HTML? They will find it invariably easier to just write HTML or their favorite lightweight markup for creating HTML, especially given the speed at which HTML-to-PDF solutions are improving. Building tools for LaTeX to solve this would just create extra work but help nobody. Just build better tools for writing HTML.
But that is another story and shall be told another time.
This past semester I taught the course for the second time. You can find the syllabus, list of problems, etc. for the Spring 2017 semester by going here. On the students’ final exam, I asked them which problem was their favorite from the semester. Below is the list of problems that they mentioned including the number of votes that each received. The level of difficulty of the problems covers the spectrum. Some of these are not easy. Have fun playing!
A while back I wrote a similar post that highlighted 15 fun problems from the first time I taught the course. You’ll notice that there is some overlap between the two lists.
As tradition decrees, we shall begin our show by taking a closer look at our number.
146 is an octahedral number (and thus a figurate number).
Even more amazingly, 146 is an untouchable number, which means it cannot be expressed as the sum of the proper divisors of any positive integer. Can you guess how many untouchable numbers there are? Of course, infinitely many and, of course, this was first proved by Paul Erdős. But did you know that the only known proof that 5 is the only odd untouchable number depends on a stronger version of the Goldbach conjecture? Amazing!
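You can check the small untouchable numbers yourself: for composite n, the largest proper divisor is at least √n, so an aliquot sum of at most L can only come from n ≤ L² (every prime contributes 1, and 1 contributes 0). A brute-force sketch of mine, not anything from the carnival posts:

```python
def untouchable_numbers(limit):
    # Sieve aliquot sums s(n) (sum of proper divisors) for all
    # n up to limit**2; no larger composite n can produce a sum
    # <= limit, and primes only ever produce 1.
    bound = limit * limit
    s = [0] * (bound + 1)
    for d in range(1, bound // 2 + 1):
        for m in range(2 * d, bound + 1, d):
            s[m] += d
    reachable = set(s[1:])
    return [n for n in range(2, limit + 1) if n not in reachable]
```

Running `untouchable_numbers(146)` reproduces the familiar start of the sequence, 2, 5, 52, 88, 96, …, with 146 indeed on the list.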
Now that you've warmed up, let us enter the magnificent, magnetic madness of the mathematical blogging carnival.
If you have any affinity to football (the real kind, not the funny American stuff), then start off with Nira Chamberlain who reviews the mathematical simulation model he built for his favorite team - you know, like any normal awesome football fan would do.
Next, follow Sean and Jamidi to the depths of the chalkdust magazine where they spoke with one of the great mathematical storytellers, Marcus du Sautoy.
Beware now, lest you be pulled into the enchanted world of The Mathemactivist who can draw a Hilbert Curve by hand.
Come now, and follow us to the trickster's lair where Tom rocks math takes a closer look at three fun numbers to tell you things you didn't realize you ever wanted to know. From here, follow us to the depth of the mathvault and let Scott Hartshorn lure you with an introduction to statistical significance after which all your paper-nerd needs will be met by Nick Higham, who looks at the benefits of dot grid paper (including, of course, a LaTeX template).
Before you leave, be sure to witness the spectacle of John Cook taming the Weibull distribution and connecting it with Benford’s law. And as an encore, John will take you far from the equation systems you solved in algebra when you were a kid to the "simple" generalization that can be solved using a Gröbner basis (which, as so many things in mathematics, were not actually discovered by Gröbner).
And if you still can't get enough, be sure to check out the many fabulous results of Christian Lawson-Perfect's call for proof-in-a-toot.
That’s it for the beautiful month of May!
Be sure to stop by next month’s Carnival, hosted by Lucy at Cambridge Mathematics. You should submit your favorite blog posts/videos/content from the month of June. If you’d like to host an upcoming show, please get in touch with Katie.
Title: Euclidean Ramsey Theory 2 (of 3).
Lecturer: David Conlon.
Date: November 25, 2016.
Main Topics: Ramsey implies spherical, an algebraic condition for spherical, partition regular equations, an analogous result for edge Ramsey.
Definitions: Spherical, partition regular.
Lecture 1 – Lecture 2 – Lecture 3
Ramsey DocCourse Prague 2016 Index of lectures.
In the first lecture we defined the relevant terms and then established that all (non-degenerate) triangles are Ramsey. In this lecture we compare the property of being spherical with that of being Ramsey, and show that Ramsey implies spherical (or more precisely, that non-spherical sets cannot be Ramsey).
Definition. A set \(X\subseteq\mathbb{R}^n\) is spherical if there is a point \(c\) and a radius \(r\) such that \(\lVert x-c\rVert=r\) for all \(x\in X\).
Typically the set will be finite, but this is not formally required.
The proofs are those of Erdős et al., and go by establishing a tight algebraic condition for a set being spherical.
Let where and ; it is a line segment with three points equally spaced.
“The reason is you can take a `spherical shell’ colouring.” These shell colourings are very important.
This doesn’t work for `cube colourings’ (i.e. using a different norm) since by Dvoretsky’s Theorem, hyperplane slices of cubes basically look spherical.
Proof. Fix . Define the colouring by . (You’re taking spherical shells of radii .)
[Picture]
By the Cosine rule we get and . So we get .
Suppose that have the same colour. This means that there is an such that and and , where each .
Putting this into our cosine law info gives
which is a contradiction since the left is but the right is strictly between and .
Eventually we will relate the condition of a set being spherical with a tight algebraic condition. With this in mind, we examine when algebraic conditions can yield Ramsey witnesses. We start with a general discussion of partition regular equations.
For example,
Exercise. If the equation is translation invariant then you get a corresponding density result.
Use this to show that you always get a non-trivial solution.
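As a concrete aside (my illustration, not part of the lecture): Schur's equation x + y = z is the classic partition regular equation, and the 2-colour Schur number is 4, so every 2-colouring of {1,…,5} contains a monochromatic solution while {1,…,4} admits a colouring with none. This is small enough to check exhaustively:

```python
from itertools import product

def has_mono_solution(colour, n):
    # Is there a monochromatic x + y = z with x, y, z in {1..n}?
    return any(colour[x] == colour[y] == colour[x + y]
               for x in range(1, n + 1)
               for y in range(x, n + 1)
               if x + y <= n)

def forced(n):
    # Does every 2-colouring of {1..n} contain such a solution?
    return all(has_mono_solution(dict(zip(range(1, n + 1), c)), n)
               for c in product((0, 1), repeat=n))
```

Here `forced(5)` holds while `forced(4)` does not, matching the Schur number.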
First an example.
Example. .
We can homogenize this equation by replacing the variables. Use and . This gives the equation .
Basically, these are the only types of partition regular equations.
The number of colours is equal to the number of variables.
This is a strong result of the equation not being partition regular. You can’t have a monochromatic solution, you can’t even have all the paired variables agree!
The idea is to colour whether you are in a certain interval.
Proof. Fix . Colour with if for some integer .
If , then where .
So
Here the first sum is an even number, and the second is , a contradiction.
Now we increase the number of colours to deal with a more general equation.
Proof. Fix . By dividing by it suffices to consider .
Let be the () colouring from Lemma 1.
Define .
Now if , then .
So where .
If this happens for all , then we have a contradiction identical to the one in Lemma 1.
In the original paper there was a similar lemma but it had a worse bound on the number of colours. This improvement was observed by Strauss a little later.
Note that these equations are not susceptible to the “translation trick” since .
The following is the main technical lemma. The proof is purely algebraic.
For readability, we will write instead of . We will make use of the following useful fact:
Proof of . Assume that is spherical and satisfies the first equation. We will show the second equality fails.
Say has centre and radius .
For each we have:
So we must have for each . So by multiplying by and adding up we get
By using the special case of the useful identity, we get:
We know the first sum is by our above calculations, and by assumption we know
a contradiction.
Proof of . Assume is not spherical, and moreover that it is minimal (in the sense that removing any one point makes it spherical). In particular, is not a non-degenerate simplex. So there is a linear relation
Assume that . By minimality, is spherical, and is on a sphere with centre and radius .
Thus
So
here the second sum is , and the first, by minimality, is
which isn’t since the distances of and to are different.
We are now in a position to put everything together.
Proof. Assume is not spherical. So there are constants and a vector such that
and
Technical exercise. Any congruent copy of satisfies the same equations.
(Use the fact that congruence is formed by rotations and translations. The translations will spit out terms like .)
In every non-zero coordinate of use the colouring from Lemma 2, and set . This will give no monochromatic solution to
This is the end of this lecture’s material on point-Ramsey. We shift gears a little now.
Instead of colouring points, we can colour pairs of points. This leads to the notion of edge Ramsey. We mention two results in this area.
Proof. Suppose the vertex set is not spherical. Colour the points, using , so that no copy of has a monochromatic vertex set.
Now colour the edge with .
Each edge has the same colour and must contain two distinct vertex colours. So the edge set is bi-partite.
This gives us an analogous theorem to the theorem that Ramsey implies spherical.
The proof is a variation on what we’ve seen.
See lecture 1 for references.
This morning I woke up to see that my paper about the Bristol model was announced on arXiv. But unbeknownst to the common arXiv follower, this also marks the end of my thesis. The Hebrew University is kind enough to allow you to just stitch a bunch of your papers (along with an added introduction) and call it a thesis. And by "stitch" I mean literally. If they were published, you're even allowed to use the published .pdf (on the condition that no copyright infringement occurs).
In 1970 G. R. MacLane asked if it is possible for a locally univalent function in the class to have an arc tract, and this question remains open despite several partial results. Here we significantly strengthen these results by introducing new techniques associated with the Eremenko-Lyubich class for the disc. Also, we adapt a recent powerful technique of C. J. Bishop in order to show that there is a function in the Eremenko-Lyubich class for the disc that is not in the class .
A surprisingly large number of open-source software (OSS) projects are run by volunteers. And I don't mean that "hello world" code you pushed to GitHub (which probably makes up 99% of all OSS repositories), I mean the many successful open-source projects that provide the fertile soil other (small and large) software projects are built on.
In other words, the majority of OSS is run by people privileged enough to spend hours on end producing something that they then give away for free. Whether or not OSS developers do it out of conviction, it's often a problem when people end up using privilege-based OSS without realizing it.
The most obvious problem is that privilege-based OSS can essentially go away at any moment. You don't have to look to extreme cases (left-pad, anyone?) to see this happen; projects simply slowly die. You might praise OSS for the fact that anyone can pick up the code and fork it if need be, but in reality dead, privilege-based OSS is more like an unfinished construction site; it's easier to start from scratch and thus the cycle repeats.
However, this is so obvious, it's not really a problem, I think. In any case it's not what I mean.
There's a lot to be said in favor of developing OSS out of conviction. It frequently helps people and adds diversity to the ecosystem. The trouble is that privilege-based OSS can be highly toxic.
One toxic variant is "Silicon Valley style OSS" where developers do not act out of conviction but more out of necessity to get ahead in a questionable job market ("GitHub is your resume"-kind-of-thing). If your hipster company hires people only due to their volunteer OSS credentials, then you are effectively hiring them by their privilege, creating a toxic environment and reducing diversity.
Conversely, you have the toxicity of people relying on OSS software not being willing to contribute to the development of OSS because privileged people make it work. Just the other day I was talking with a potential client who described how they use pandoc in production. If you do this at scale, then you're basing the integrity of your production workflow on how much John MacFarlane could procrastinate over the years.
For OSS developers, this can turn into a toxic reality because users often think they deserve access to the developers' privilege. That is, they can become highly aggressive when they find a bug in the OSS software they're using, especially when it impacts them. This gets extreme when we're talking about companies and the use of privilege-based OSS in production. Company employees quickly try to exert pressure on OSS projects to fix things -- yet refuse to contribute to development in any way or even to acknowledge the work that went into a piece of software that they themselves chose to build upon.
Obviously, there are other ways of doing OSS software development. There's transparency-driven OSS (e.g., security related tools, browsers), there's shared-burden OSS (e.g., joining forces to lower costs), there's donation-based, crowd-sourced, and bounty-driven OSS and many others -- Nadia Eghbal lists a few in her lemonade-stand on GitHub. Also ask about governance models.
Long story short, if you're using open-source software, especially in a professional context, make sure to check what model it's based on. Also, don't be toxic.
These thoughts were far from original.
This is where I have an issue with the "hire people for their side projects" mentality.
— Stewart Scott-Curran (@stewartsc) May 25 2016
Wider scope
Overall, the mathematical community does not value open source mathematical software in proportion to its value, and doesn't understand its importance to mathematical research and education. I would like to say that things have got a lot better over the last decade, but I don't think they have. My personal experience is that much of the "next generation" of mathematicians who would have changed how the math community approaches open source software are now in industry, or soon will be, and hence they have no impact on academic mathematical culture. Every one of my Ph.D. students are now at Google/Facebook/etc.
Organisations in “the open space” are often community driven. Groups come together to solve a problem, and in a few cases they succeed. Most fail, and most fail pretty early. Those that survive the initial phase often experience massive growth, sometimes beyond the wildest dreams of those who started them. This brings some challenges.
Sustainability is a big one: too many of these organisations lurch from grant to grant, depending on the largesse of philanthropists or government funders. Most of these eventually fail or stagnate. Some negotiate this transition by turning private and obtaining VC or Angel funds. Eventually most of these are sold off to incumbent players, and gradually lose the central thread of openness and just becoming part of the service background in their space. Nothing wrong with that but they’re no longer really part of the open community at the end of this process.
But some organisations succeed and find a model: donations, memberships, advertising, fee for service have all been successful in different spaces. These can grow to be sizeable companies, ones that need professional staff and business discipline to manage complex operations, significant infrastructures, and substantial financial flows and reporting. No multi-million dollar a year organisation is going to run for very long on volunteer labour, at least not where those volunteers need to work for a living.
Passion can also be a problem, as well as being a driver. Without that passion and without that community nothing gets done. Indeed without the passion many not-for-profit organisations wouldn’t be able to attract staff at the rates that they can reasonably pay. The community is a core asset.
Still, there's now a small but clear core within the CG together with a useful group of "lurkers". I think this year we're entering the productive stage for this community group.
The dominant interest of the core group (i.e., the people actually doing work) is accessibility. What surprised me somewhat was that the core group seems to be in agreement that MathML is not suitable for accessibility, not just because it is effectively deprecated on the web but also because of its inherent limitations. (If you care for nuance and read on, this doesn't mean MathML isn't a decent intermediary for creating accessible web content.)
My own focus has been on "deep labels" which will now tie nicely into our work at MathJax for our recent grant from the Simons Foundation. The idea is quite simple, really.
Thus I've been building and testing demos that work with what we've got -- HTML and SVG enriched via ARIA.
While I'm currently building manual prototypes, obviously one eye is on our work on the speech-rule-engine, i.e., keeping automation of the process in mind. Similarly, I've been trying to think about potential improvements to standards that might give us much larger improvements / simplifications (but that's for another post).
At the same time, while automated analysis of content will only improve, I think manual overrides will continue to be critical. Whether it's to fix a poor result from the heuristics or whether it is to customize content (e.g., to match author preferences).
Obviously, I didn't want to enrich the output but the input. Given that these demos work with MathJax, the natural starting point is MathML (since that's MathJax's internal format). But MathML isn't really special or better than any other format; whatever input format your favorite tool uses, the same methods should be applicable (though some things will undoubtedly be harder/easier to do in other formats).
MathML in itself lacks the means to provide meaningful information to the accessibility tree; at most, it can present (pretty vague) layout information, combined with some misleading information on semantics (e.g., thinking that <mfrac>
always indicates some kind of fraction). But MathML has the benefit of being XML so we can easily add ARIA attributes without running into practical issues.
Here's a very simple but typical example: a common notation for the derivative of a function is a dot above it. In MathML, this is usually realized as an <mover>
.
<math>
<mover>
<mi>x</mi>
<mo>˙<!-- ˙ --></mo>
</mover>
</math>
You might be tempted to think that the "real" solution would be some kind of semantic markup (e.g., using <diff>
) but in the real world, the content is what it is and you want to enhance it.
Now even the simplest MathML accessibility tool should have the sense to voice the Unicode content ("x, dot above") but it might also try to convey the layout information of an mover
("x with dot over it"). But it shouldn't try anything beyond that because the markup does not provide more information than that. In reality, those few tools with decent heuristics will easily cause issues, e.g., any superscripted 2 is read as "squared".
Unfortunately, a dot above can mean other things besides "derivative of", depending on the context and content -- if you ever run into a dot above an equal sign or a digit you'll probably guess that the dot does not represent the concept of a derivative of (then again someone probably used it that way so have fun figuring that one out).
So it's a mess.
Let's use what ARIA has given us to make it less of a mess: a simple and efficient means of providing meaningful textual alternatives for visual presentation:
<math>
<mover aria-label="derivative of x">
<mi>x</mi>
<mo>˙<!-- ˙ --></mo>
</mover>
</math>
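To show that the automation side is at least plausible, here is a deliberately naive sketch of mine (toy code, not the speech-rule-engine) that pattern-matches dotted <mover> bases and attaches the label. Namespaces and all the contextual caveats above, such as a dot meaning something other than a derivative, are ignored:

```python
import xml.etree.ElementTree as ET

DOT_ABOVE = '\u02d9'

def label_dotted_derivatives(mathml):
    # Attach aria-label to <mover> nodes that look like dotted
    # derivatives: an <mi> base with a DOT ABOVE operator on top.
    # Exactly the kind of guessable-but-unreliable heuristic
    # discussed above.
    root = ET.fromstring(mathml)
    for mover in root.iter('mover'):
        kids = list(mover)
        if (len(kids) == 2 and kids[0].tag == 'mi' and kids[1].tag == 'mo'
                and (kids[1].text or '').strip() == DOT_ABOVE):
            mover.set('aria-label', 'derivative of ' + (kids[0].text or ''))
    return ET.tostring(root, encoding='unicode')
```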
This is obviously a very simple example. The most immediate questions are probably:
I believe the answer to both is yes.
The main demo I built is work in progress. It is available on Codepen and I recently started versioning it as a gist.
The demo covers several examples that hopefully already cover many common situations and I'll continue to work on them.
A lot of tweaking happened once I started to test this in screenreaders in earnest.
One of the first problems I ran into is what James Teh described in Woe-ARIA: it's not always clear what AT should expose when we muck about by aria-labeling things like this.
Inevitably, I also needed a common accessibility hack, "off-screen" rendering of content. As a simple but extremely important example, you need this when facing the fact that, in MathML's <mfrac>
the fraction bar is only implicit and thus lacks a node we could attach a label to (arguably the biggest WTF collision between traditional math rendering aka print and web markup).
I currently favor a somewhat convoluted solution:
<mrow aria-label="screen-reader only"><mpadded width="-1em"><mphantom><mtext>M</mtext></mphantom></mpadded></mrow>
The main advantage is backward compatibility and re-usability because this should render in any MathML renderer without (many) side-effects. It also (in part) gets us around the "ARIA-woe" or the fact that an empty <span>
with aria-label
should be ignored.
So far I've tested NVDA, JAWS, VoiceOver, Orca, and ChromeVox in several browsers. Some recordings are already available in a dedicated playlist on MathJax's YouTube channel. Since I didn't want to add commentary, they are a bit difficult to follow, so the summary below should be helpful.
Tested configurations:
- OS X El Capitan
- Orca 3.20, Ubuntu 16.10
- JAWS 17, Windows 7
- ChromeVox v53
As you can see, the results are mixed. For each combination of AT+browser+OS, there's some combination that works roughly as expected but that's about it. SVG seems a clear winner despite VO's reluctance; I need to explore title/desc a bit further (which have different support levels).
Still, I think the situation is already better than what MathML can give you today, in particular because the few significant issues are nothing particular to MathML or math, they're just annoying SVG or HTML accessibility issues, many of which can be easily fixed (as opposed to implementing good math support based on MathML). The fact that MathML accessibility tools fail to support aria-labels is not surprising, of course, and a typical example of how MathML support (as little as it is) continues to fall further and further behind HTML and SVG. And that's a good thing.
Now some might see this "fixed" enrichment as a step back compared to MathJax's Accessibility Extensions (using speech-rule-engine on the client) because the extensions can provide numerous speech rules and verbosity settings as well as summary information. I would disagree. I've never been a fan of varying speech rules (just like I wouldn't be a fan of AT re-arranging a sentence). Also, speech rules mostly differ by newer ones being more refined than older ones.
Verbosity is simply a general accessibility problem and it should be dealt with in generality (as it already is, e.g., for punctuation). Summary information is a great problem but really a limitation of current web technology and something that's just as needed for infographics or data visualization as it is for mathematics. We do not need isolated solutions here either.
Simple: more testing.
On the one hand, testing more AT combinations and evaluating other approaches. On the other hand, creating more and complex samples.
Others on the MathOnWeb CG have tried different approaches and so we will also work on getting feedback from the accessibility community in general, in particular figuring out how improved standards might help us.
For me personally, the goal is to develop a strategy for next year's work at MathJax where we want the speech-rule-engine to add deep labels directly. I think that would solve the last major piece of the puzzle for math on the web in its current form. Then we can finally leave the legacy approaches with isolated standards and tools behind to focus on moving the web forward as a whole.
Theorem. Suppose that \(\kappa\) is regular and uncountable, and \(\pi\colon\kappa\to\kappa\) is a bijection mapping stationary sets to stationary sets. Then there is a club \(C\subseteq\kappa\) such that \(\pi\restriction C=\operatorname{id}\).
I was reminded of this old note yesterday. This snippet goes back to JMM 2016 when I had coffee with Izabella Łaba. Of course, Izabella is one of my favorite bloggers (starting all the way back when procrastination made us launch mathblogging.org -- shout out to Felix, Fred, and Sam!) but she is also a kick-ass researcher who amongst the many great things she does happens to sit on the editorial board of the (then newly fandangled) arXiv overlay journal Discrete Analysis otherwise known as "that Tim Gowers journal thing".
Discrete Analysis is probably the most relevant arXiv overlay journal in mathematics (ok, I admit I didn't search around much for other noteworthy ones) and the gut reaction when it comes to arXiv overlay journals (and Discrete Analysis in particular) seems to be: "What if it fails?". But like jumping in the Matrix, failure really wouldn't mean anything.
Instead, I've been wondering more about "What if it succeeds?". Of course that's because I expect it to succeed, but either way, I don't think people think much about that. Arguably, I'm not awfully qualified, but then again anyone can go through Kent Anderson's list of 96 things Publishers Do. Most of these, I'm guessing, you don't care about as an arXiv overlay journal, so perhaps Cameron Neylon's shorter list is more on point. Ultimately, I think, it is simple: what does a journal need to succeed? High-quality papers.
Quality comes in many forms but basically there are two areas: scientific quality and production quality. Scientific quality includes, at least, attracting papers the community will approve of, attracting authors that impress the community, and an editorial board that can spot the former and attract the latter. Of course, those are not at all separate but papers make journals influential, journals make authors influential etc pp. (And no, merit does not come into play, don't be silly.) I can't really judge it (not being a research mathematician anymore, let alone a discrete analysis person) but the editorial board looks to be full of influential, high-profile people and the first paper was Terry Tao's solution to Erdős's discrepancy problem; so it seems likely that part will work.
Production quality includes, at least, typography, copy-editing, archiving, and marketing. Discrete Analysis can probably make that work as well as they care to because, as Gowers pointed out, they expect they won't have to. That might seem arrogant to anyone with even a bit of knowledge from the trenches of academic publishing, but I think they're probably right in expecting they won't have to. I admit that is in part speculation, but I would expect that a high-profile math journal can expect both its authors to have spent more time on their manuscripts (more pre-submission review from peers, more iterations of their own as the result is "big", etc.) and its editors to work harder (they actually give a damn about the paper they read b/c the result is interesting, they themselves have higher expectations and thus provide more detailed reports, they simply have more experience and relevant skills, etc.). And marketing, well, it's that Gowers journal thing, remember?
So this all looks great. Got the goods, can compete.
Except there are a few things that I think are terrible flaws; in no way fatal flaws (quite possibly the opposite) but ones with negative side effects that worry me.
To start with, overlay journals do the silly extreme libertarian thing of pretending the infrastructure they use doesn't cost anything. Even if the costs of the current technology might be very small, overlay journals will have to stick to the cheapest available tech, ignoring (let alone helping) the transformation of scientific communication.
A more important problem is: can this scale? I don't think it can (not much anyway). Research quality obviously doesn't scale well -- if everyone is a top journal, nobody is. Regarding production quality in "lesser" journals, I don't think authors will invest much in their manuscripts and reviewers will be less likely to have the skills or invest extra time. It still might work if journals started to rely on a more iterative process where post-publication feedback leads to revisions. (I mean, traditionally published journal articles can be awful piles of unedited crap, why expect more from an overlay journal, amiright?) But on the one hand, the community would have to accept that, i.e., it would require a much more significant change in scientific culture, and on the other hand people would have to, well, read papers and give feedback -- where the average number of readers for a math research paper is probably less than 1. Seems unlikely. So we might get elite journals that can get away with this model commercially but anyone else is screwed; not a fan.
The third problem I see is more severe as it relates to the structure of scientific communities: who watches the watchers? Years ago I wrote that my biggest problem with academic communities (and the greatest strength of its publishing system) lies in its power structure: the key to power lies with editorial boards which are predominantly aristocratic. Society-driven journals actually have democratic oversight for their editorial boards (as mild as its effect might be) and even commercial publishers have shareholder oversight, as "unscientific" as their interest may be. But overlay journals have nobody watching them. You might argue the free market will take care of it but it might just be that journals are clubs and that scholarly communication is more like general taxation.
And that combination worries me. The unique ability of elite overlay journals to succeed commercially (as in: providing a valuable product) combined with a lack of checks and balances might lead to an imbalance that cannot be corrected.
But what do I know. Maybe such journals will realize the risk associated with their success and take responsibility for their actions and their effect on the community at large. And then maybe they will focus on innovation and on reproducibility of their model for the average ("mediocre") journals that the majority of researchers publish in. I've seen crazier things.
]]>So the next order of business is finding a position for next year. So far nothing has come up. But I'm open to hearing from the few readers of my blog if they know about something, or have offers that might be suitable for me.
Continue reading...]]>Registration for the 2017 Southwestern Undergraduate Mathematics Research Conference (aka SUnMaRC) is now open! Northern Arizona University is hosting this year’s conference on March 31-April 2, 2017. We are excited to announce Kathryn Bryant (Colorado College), Henry Segerman (Oklahoma State University), and Steve Wilson (NAU, emeritus) as our invited speakers.
The goal of the conference is to welcome undergraduates to the wonderful world of mathematics research, to develop and foster a rich social network between the mathematics students and faculty throughout the great Southwest, and to celebrate the accomplishments of our undergraduate students. We encourage undergraduate students from all years of study to participate and give presentations in any area of mathematics, including applications to other disciplines. However, while we do recommend giving a talk, it is not a requirement for conference participation. To register for the conference and to submit a title and abstract for a student presentation, visit the 2017 SUnMaRC Registration page.
The conference began in 2004 as the Arizona Mathematics Undergraduate Conference. In 2008, the conference changed to SUnMaRC to recognize the participation of institutions throughout the southwest.
If you have any questions about this year’s SUnMaRC, please contact one of the conference organizers:
]]>Matti was a kind teacher, even if sometimes over-pedantic.
Continue reading...]]>One of my former students, Andrew Lebovitz, recently posted a link on Facebook to a Nature article that summarizes a paper, titled The classical origin of modern mathematics, which completed a comprehensive analysis of the MGP database. One of the interesting findings was that the individuals in the database fall into 84 distinct family trees with two-thirds of the world’s mathematicians concentrated in just 24 of them.
After reading the Nature article, I was motivated to see if I could figure out whether I belonged to one of the 24 families. It wasn’t obvious to me how I would do this without manually clicking on my advisor (Richard M. Green), then my advisor’s advisor, etc. This was slightly more complicated than I expected because there were quite a few ancestors with 2 advisors, so I had to navigate down multiple paths. As I clicked around, I drew out my family tree in a notebook.
Here is what I discovered. My longest branch goes back to Nicolo Fontana Tartaglia (currently 14,428 descendants). My tree includes Isaac Newton, Galileo Galilei, and Marin Mersenne (after whom Mersenne primes are named). Interestingly, no one on this path belongs to one of the 24 families mentioned in The classical origin of modern mathematics. Also, I was disappointed to find out that I wasn’t related to Leonhard Euler. However, I am a descendant of Henry Bracken, who is the head of one of the 24 families.
I posted some of this information on Facebook and asked if anyone knew how to automatically create a nice visualization of the directed graph corresponding to my family tree. Chris Drupieski replied and pointed out a program called Geneagrapher, which was built to do exactly what I was looking for. In particular, Geneagrapher gathers information for building math genealogy trees from the MGP, which is then stored in dot file format. This data can then be passed to Graphviz to generate a directed graph.
Here are the steps that I completed to get Geneagrapher up and running on my computer running MacOS 10.11. The Geneagrapher website suggests using easy_install via Terminal, but this didn't immediately work for me. It often seems that doing anything with Python on my Mac requires a few extra steps. After doing a little searching around, I found a post on Stack Overflow that solved my issue. At the command line, I typed the following:
sudo chown -R <your_user>:wheel /Library/Python/2.7/site-packages/
Of course, you should replace <your_user> with your username. Note that using sudo requires you to enter your password. Next, I installed Geneagrapher using the following:
easy_install http://www.davidalber.net/dist/geneagrapher/Geneagrapher-0.2.1-r2.tar.gz
In order to use Geneagrapher, you need to input a record number from MGP. Mine is 125763. At the command line, I typed:
ggrapher -f ernst.dot -a 125763
You can replace ernst with whatever you’d like the output file to be called. The next step is to pass the dot file to Graphviz. If you don’t already have Graphviz installed, you can do so using Homebrew (which is also easy to install):
brew install graphviz
Following the Geneagrapher instructions, I typed the following to generate my family tree:
dot -Tpng ernst.dot > ernst.png
Maybe it is worth mentioning that unless you specify otherwise, the dot and png files will be stored in your home directory. Below is my mathematical family tree created using Geneagrapher. As you can see, it took a while for my ancestors to leave the University of Cambridge.
]]>Non-mathematicians often tend to be Platonists "by default", so they will assume that every question has an answer and sometimes it's just that we don't know that answer. But it's out there. It's a fine approach, but it can somewhat fly in the face of independence if you are not trained to think about the difference between true and provable.
Continue reading...]]>Title: Dual Ramsey, the Gurarij space and the Poulsen simplex 1 (of 3).
Lecturer: Dana Bartošová.
Date: December 12, 2016.
Main Topics: Comparison of various Fraïssé settings, metric Fraïssé definitions and properties, KPT of metric structures, Thick sets
Definitions: continuous logic, metric Fraïssé properties, NAP (near amalgamation property), PP (Polish Property), ARP (Approximate Ramsey Property), Thick, Thick partition regular.
Lecture 1 – Lecture 2 – Lecture 3
Ramsey DocCourse Prague 2016 Index of lectures.
Throughout the DocCourse we have primarily focused on Fraïssé limits of finite structures. As we saw in Solecki’s first lecture (not posted yet), it makes sense, and is useful, to consider Fraïssé limits in a broader context. Today we will discuss those other contexts.
Solecki’s first lecture discussed how to take projective Fraïssé limits. Panagiotopoulos’ lecture (not posted yet) looked at a specific application of these projective limits. We will see how to take metric (direct) Fraïssé limits.
Discrete | Compact | Metric Structure | |
---|---|---|---|
Size | Countable | Separable | Separable, complete |
Limit | Fraïssé limit | Quotient of the projective limit | (direct or projective) Metric Fraïssé limit |
Homogeneity | (ultra)homogeneity | Projective approximate homogeneity | Approximate homogeneity (*) |
Automorphism group | non-archimedean groups (closed subgroups of \(S_\infty\)) | homeomorphism groups | Polish Groups |
KPT, extremely amenable iff | RP | Dual Ramsey | Approximate RP (**) |
Metrizability of UMF iff | finite Ramsey degree | (***) | (Open) Compact RP? |
Where we’ve seen these | Classical | Solecki’s lectures | These lectures |
(*) – Exact homogeneity is often not possible.
(**) – In the projective setting this is fairly unexplored. These proofs are usually via direct (discrete) Ramsey, or through concentration of measure.
(***) – You have KPT before you take the quotient, but lose it after taking the quotient. e.g. UMF(pre-pseudo arc) is not metrizable (through RP). A question of Uspenskij asks about the UMF(pseudo arc).
In the context of Banach spaces, it makes sense to use continuous logic. Here, instead of the usual two-valued logic, we allow sentences to take values in the interval \([0,1]\). We also suitably adjust the logical connectives.
Classical logic | Continuous logic |
---|---|
True | 0 |
False | 1 |
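With this convention (0 playing the role of "true"), one standard choice of connectives and quantifiers is the following; other equivalent systems exist, so take this as a sketch of the usual setup:

```latex
\neg\varphi = 1-\varphi, \qquad
\varphi \wedge \psi = \max(\varphi,\psi), \qquad
\varphi \vee \psi = \min(\varphi,\psi),
\]
\[
\forall x\,\varphi = \sup_x \varphi, \qquad
\exists x\,\varphi = \inf_x \varphi.
```

Note that a conjunction evaluates to 0 ("true") exactly when both conjuncts do, and a universally quantified sentence is true exactly when the supremum over all instances is 0.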
Now we define functions and relations. Let \(A\) be a complete metric space; the product \(A^n\) will be given the sup metric.
Then functions and relations must satisfy the usual things that functions and relations satisfy in classical logic.
Finitely generated substructures | Limit | maps | Language | |
---|---|---|---|---|
Separable metric spaces | finite metric spaces | Separable Urysohn space | isometric embedding | just the distance |
Separable Banach spaces | finite dimensional Banach spaces (**) | Gurarij space | isometric linear embedding | the norm |
Separable Choquet spaces | finite dimensional simplices | Poulsen simplex | affine homeomorphisms (*) | Something that captures the convex structure |
(*) – An affine homeomorphism sends extreme points to extreme points, and is then extended affinely to the rest of the simplex. The metric here is not canonical.
(**) – Similar to the discrete case, to take a limit you only need a cofinal sequence. In this case we take .
In continuous logic the maps between models are isometric embeddings that preserves functions and relations.
In the classical Fraïssé setting we looked at homogeneity, HP, JEP and AP. These notions have suitable generalizations in the metric Fraïssé setting.
We say that \(M\) is approximately ultrahomogeneous (AUH) if for every \(A \in \operatorname{Age}(M)\), every pair of morphisms \(\phi, \psi \colon A \to M\), and every \(\epsilon > 0\), there is a \(g \in \operatorname{Aut}(M)\) such that \(d(g \circ \phi, \psi) < \epsilon\).
Here \(\operatorname{Age}(M)\) is the collection of finitely generated substructures of \(M\).
We now explain NAP and PP. The NAP is a straightforward generalization of AP.
such that
The PP measures how closely you can embed two metric spaces.
We say satisfies the Polish Property (PP) if is separable for all .
This gives us the following Fraïssé theorem for metric structures.
Recall that is the separable Urysohn space. It is the (unique) complete, separable metric space, universal for separable metric spaces and (exactly) ultrahomogeneous with respect to finite metric spaces.
Its age is the collection of finite metric spaces. It is a metric Fraïssé class.
Its automorphism group has a similar universal property.
See these notes for more information.
Recall the following fact about (classical) Fraïssé structures.
The following observation of Melleray is the corresponding fact for metric structures. It has a similar proof to the classical fact.
For every orbit closure in of a point add a relational symbol called .
The first relevant result is the following:
This proof uses the finite Ramsey theorem and concentration of measure.
The KPT theorem for metric structures is given by the following.
We define the approximate Ramsey Property.
(ARP):
there is a such that
such that
Here , and the -fattening is using the embedding distance (which we haven't defined).
Recall that in the infinite case, rigidity was needed to have the embedding RP. That is why in finite metric spaces we added linear orders to get the RP. However, metric spaces do satisfy the ARP (by Pestov, from extreme amenability of \(\operatorname{Iso}(\mathbb{U})\)), without needing to add linear orders.
Also, by using the usual compactness arguments, we can assume that the witness to ARP is the full Fraïssé limit.
In the KPT correspondence, we saw a useful connection between the stabilizer of a set and collections of finite structures. See Lionel Nguyen Van Thé’s first DocCourse lecture.
Here we mention an analogous connection.
So we can reword the ARP for finite metric spaces, by transferring the colouring to a colouring .
Thickness is a group property that captures some Ramsey properties. This is desirable because we would like to be able to detect Ramsey type phenomena from the group itself, without having to know the underlying Fraïssé limit.
is thick partition regular iff there is a that is thick.
This is really just unwinding definitions. Then by general topological dynamics abstract nonsense we get:
Note that this is a theorem just about groups. This doesn’t use much of the structure of . Our goal is to prove extreme amenability without having to first prove Ramsey theorems.
In the next lectures we will examine the Gurarij space and prove the ARP for (i.e. Banach spaces).
(This is incomplete – Mike)
Yesterday, however, I spent most of my day thinking about how we---as a collective of set theorists---teach axiomatic set theory. About that usual course: axioms, ordinals, induction, well-founded sets, reflection, \(V=L\) and the consistency of \(\mathrm{GCH}\) and \(\mathrm{AC}\), some basic combinatorics (clubs, Fodor's lemma, maybe Solovay or even Silver's theorem). Up to some rudimentary permutation.
Continue reading...]]>I’ve been really enjoying my new job at Time Service in Toledo. I’m about to finish my third month here, and I expect I’ll be staying with this job for quite a while. I find that working in business gives me a variety of interesting problems to solve, and although they’re not deep and abstract in the same way as math research problems, they still require a lot of creative thinking and give me challenges to work on over time and puzzles to chew on as I drift off to sleep, in my morning shower, etc., just like math research did. The whole operation of helping to run a business feels like a big optimization problem — how do I figure out the best way to use all of our company’s resources to the greatest effect?
I hope all my friends in the New York Logic community are doing well. Please keep in touch!
]]>I was very happy when the professor, Matania Ben-Artzi, allowed me to write a final paper about the usage of the axiom of choice in the course, instead of taking an exam.
Continue reading...]]>Below are 15 problems from the course. Originally I was only going to list 5, but it was hard enough to only pick 15. I attempted to showcase a variety of problems that utilize different ways of thinking. I’m intentionally not providing any solutions. Some of these problems are classics or variations on classics. Have fun playing!
If you want to see more problems from the course, go here.
Note: The #loveyourmath 5-day campaign is sponsored by the Mathematical Association of America. The goal of the campaign is to engage a general audience across a broad representation of mathematics, whether it is biology, patterns, textbooks, art, or puzzles.
]]>It turns out that up to isomorphism, there are exactly 5 groups of order 8. Below are representatives from each isomorphism class:
The first three groups listed above are abelian while the last two are not. It’s a fairly straightforward exercise to prove that none of these groups are isomorphic to each other. It’s a bit more work to prove that the list is complete. The Fundamental Theorem of Finitely Generated Abelian Groups guarantees that we haven’t omitted any abelian groups of order 8. Handling the non-abelian case is trickier. If you want to know more about how to prove that the classification above is correct, check out the Mathematics Stack Exchange post here, the GroupProps wiki page about groups of order 8, and the nice classification of all groups of order less than or equal to 8 that is located here.
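If you would like a computer to check part of this, here is a sketch (the helper names are my own, written for this post) that builds all five groups and separates them by two crude invariants: whether the group is abelian, and the multiset of element orders. In general these invariants do not determine a group up to isomorphism, but for order 8 they happen to distinguish all five classes:

```python
from itertools import product

def closure(gens, op, identity):
    """Close a set of generators under the group operation."""
    elems, frontier = {identity}, [identity]
    while frontier:
        x = frontier.pop()
        for g in gens:
            y = op(x, g)
            if y not in elems:
                elems.add(y)
                frontier.append(y)
    return sorted(elems)

def order(x, op, identity):
    n, y = 1, x
    while y != identity:
        y, n = op(y, x), n + 1
    return n

def signature(elems, op, identity):
    """(is abelian?, sorted element orders) -- enough to tell the five apart."""
    abelian = all(op(a, b) == op(b, a) for a in elems for b in elems)
    return abelian, tuple(sorted(order(x, op, identity) for x in elems))

def modular(mods):
    """Direct product of cyclic groups under componentwise addition."""
    op = lambda a, b: tuple((x + y) % m for x, y, m in zip(a, b, mods))
    return list(product(*(range(m) for m in mods))), op, tuple(0 for _ in mods)

# D4 as permutations of the square's corners: a rotation and a reflection.
perm_op = lambda p, q: tuple(p[i] for i in q)  # composition: p after q
d4 = closure([(1, 2, 3, 0), (1, 0, 3, 2)], perm_op, (0, 1, 2, 3))

# Q8 as signed quaternion units {+-1, +-i, +-j, +-k}.
def unit_mul(x, y):
    if x == 'e': return 1, y
    if y == 'e': return 1, x
    if x == y:   return -1, 'e'          # i^2 = j^2 = k^2 = -1
    cyc = {'ij': 'k', 'jk': 'i', 'ki': 'j'}
    return (1, cyc[x + y]) if x + y in cyc else (-1, cyc[y + x])

def q8_op(a, b):
    s, u = unit_mul(a[1], b[1])
    return a[0] * b[0] * s, u

q8 = [(s, u) for s in (1, -1) for u in 'eijk']

groups = {
    'Z8':      signature(*modular((8,))),
    'Z4 x Z2': signature(*modular((4, 2))),
    'Z2^3':    signature(*modular((2, 2, 2))),
    'D4':      signature(d4, perm_op, (0, 1, 2, 3)),
    'Q8':      signature(q8, q8_op, (1, 'e')),
}
# All five signatures are distinct, so no two of the groups are isomorphic.
```

For instance, D4 comes out as non-abelian with element orders (1, 2, 2, 2, 2, 2, 4, 4), while Q8 is non-abelian with a single element of order 2 and six of order 4.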
Since groups have binary operations at their core, we can represent a finite group using a table, called a group table. In order to help our minds recognize patterns in the table, we can color the entries in the table according to which group element occurs. Of course, if we rearrange the column and row headings of the table, we have to rearrange or recolor the entries of the table accordingly. Doing so may make some patterns more or less visually recognizable. Similar to the book Visual Group Theory by Nathan Carter (Bentley University), I utilize colored group tables in several chapters of An Inquiry-Based Approach to Abstract Algebra, which is an open-source abstract algebra book that I wrote to be used with an IBL approach to the subject.
While I was teaching out of Carter’s book during the summer of 2009, one of my students (Michelle Reagan) made five quilts that correspond to colored group tables for the five groups of order 8. Here are pictures of the quilts.
It’s a fun exercise to figure out which quilt corresponds to which group. I’ll leave it to you to think about.
Note: The #loveyourmath 5-day campaign is sponsored by the Mathematical Association of America. The goal of the campaign is to engage a general audience across a broad representation of mathematics, whether it is biology, patterns, textbooks, art, or puzzles.
]]>As a side project, I hope to find some time to do a bit of research for MIRI. I’ve discussed MIRI research in a couple of recent posts here. I plan to continue updating this blog with stuff on MIRI research and other updates on my life. I’ll miss my colleagues in New York, and I hope we keep in touch. My students are welcome to keep in touch as well.
]]>Quantilization is a form of mild optimization where you tell an AI to choose something at random from (for instance) the top 10% of best solutions, rather than taking the best solution. This helps to get around the problem of an agent whose values are mostly aligned with yours but that does pathological things when it takes its values to the extreme. In this paper, we examine a similar process, but involving two (or more) agents rather than one.
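As a toy illustration (my own sketch; the uniform-over-top-decile rule here is just the simplest instantiation, not the paper's formalism), a quantilizer over a finite set of options fits in a few lines:

```python
import random

def quantilize(options, utility, q=0.10, rng=random):
    """Pick uniformly at random from the top q-fraction of options (by utility),
    instead of always taking the single best one."""
    ranked = sorted(options, key=utility, reverse=True)
    k = max(1, int(len(ranked) * q))  # size of the top-q slice (at least one)
    return rng.choice(ranked[:k])

# A hard optimizer would always return 99 here; a 10% quantilizer
# returns some option from the top decile, i.e., from 90 to 99.
pick = quantilize(range(100), utility=lambda x: x, q=0.10)
```

The point of randomizing is that a pathological, extreme-seeming "best" solution is no longer chosen with certainty, only with probability at most 1/k.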
For those of you who were also at the MSFP, you can read some additional discussion of the paper here. The main idea is that Connor is working on a simulation to help test the ideas in the paper. If you’re interested in helping with the simulation but don’t have access to the forum post linked above, get in touch with me.
]]>Their research has a fair amount of overlap with mathematical logic. I’d encourage any logicians who are interested in these sorts of things to get involved. It’s a very good and important cause; the future of humanity is at stake. Unaligned artificial intelligence could destroy us all in a way that makes nuclear war and global warming seem tame in comparison.
Their technical research agenda is a good place to start for a technical perspective. The book Superintelligence by Nick Bostrom is a good starting point for a less technical introduction and to help understand why MIRI’s agenda is important and nontrivial.
One area of MIRI research that I find particularly interesting has to do with a version of Prisoner’s Dilemma played by computer programs that are allowed to read each others’ source code. This work makes use of a bounded version of Löb’s theorem. Actually, a fair bit of MIRI research relates to Löb’s theorem. Here is a good introduction.
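To give a flavour of the setup, here is a toy sketch of my own (much simpler than the actual work, which uses provability logic and bounded proof search rather than literal string comparison): the simplest source-reading strategy, often called CliqueBot, cooperates exactly with verbatim copies of itself.

```python
import inspect

C, D = "cooperate", "defect"

def clique_bot(opponent_source):
    # Cooperate exactly when the opponent is a syntactic copy of myself;
    # two CliqueBots therefore cooperate, and everyone else gets defection.
    return C if opponent_source == inspect.getsource(clique_bot) else D

def defect_bot(opponent_source):
    # Ignores the opponent's code and always defects.
    return D

def play(a, b):
    """One round of the open-source Prisoner's Dilemma: each player is handed
    the other's source code before choosing its move."""
    return a(inspect.getsource(b)), b(inspect.getsource(a))
```

Two CliqueBots cooperate, while CliqueBot safely defects against DefectBot. The interesting research problem is achieving robust cooperation without the brittleness of exact syntactic matching, which is where Löb's theorem enters the picture.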
Feel free to contact me if you’d like to know more about how to get involved with MIRI research. Or you can contact MIRI directly.
]]>I recently had a chat with James Cummings about teaching. He said something that I knew long before, that being a good teacher requires a bit of theatricality. My best teacher from undergrad, Uri Onn, had told me when I started teaching that being a good teacher is the same as being a good storyteller: you need to be able to mesmerize your audience and keep them on the edge of their seats, wanting more.
Continue reading...]]>However, we have a proof, a constructive proof that large cardinals are consistent. And they exist in an inner model of our universe.
Continue reading...]]>So, I'm fashionably late to the party (with some good excuse, see my previous post), but after the recent 200 terabytes proof for the coloring of Pythagorean triples, the same old questions are raised about whether or not at some point computers will be better than us in finding new theorems, and proving them too.
Continue reading...]]>If you don't follow arXiv very closely, I have posted a paper titled "Iterating Symmetric Extensions". This is going to be the first part of my dissertation. The paper is concerned with developing a general framework for iterating symmetric extensions, which oddly enough, is something that we didn't really know how to do until now. There is a refinement of the general framework to something I call "productive iterations" which impose some additional requirements, but allow greater freedom in the choice of filters used to interpret the names. There is an example of a class-length iteration, which effectively takes everything that was done in the paper and uses it to produce a class-length iteration—and thus a class length sequence of models—where slowly, but surely, Kinna-Wagner Principles fail more and more. This means that we are forcing "diagonally" away from the ordinals. So the models produced there will not be defined by their set of ordinals, and sets of sets of ordinals, and so on.
Continue reading...]]>But those people forget that \(0=1\) is also very true in the ring with a single element; or you know, just in any structure for a language including the two constant symbols \(0\) and \(1\), where both constants are interpreted to be the same object. And hey, who even said that \(0\) and \(1\) have to denote constants? Why not ternary relations, or some other thing?
Continue reading...]]>Anyway.
There was a joint meeting of CSSWG and DPUB IG on Monday and I was running late (discussing math-on-web things with Daniel Marques actually), so I missed the first 15 minutes. My mind was blown when, within 2 minutes of me sitting down, a motion was accepted to task Florian with spec'ing (specing? speccing?) out a media query for MathML support (as well as an API to flip it). I didn't feel I was in a position to speak up, so I just sat there wondering what just happened.
The motivation seems rather natural, I suppose. As long as there's no universal browser support for MathML, people are still stuck with providing fallbacks. In situations where they cannot load a JS library themselves (e.g., in ebooks), they have to use a fallback even if they could provide MathML.
If there were a media query, people could add both fallbacks and MathML in a standardized fashion, hiding one or the other depending on the result of that media query. In addition, an API would enable JS libraries to leverage a universal way to progressively enhance content; it wasn't quite clear in the end, but some people seemed to hope that such an API could additionally be triggered by assistive technology.
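Concretely, and purely hypothetically (none of this syntax is specified, let alone implemented; the feature name is my guess at what was floated), the idea would allow something like:

```css
/* hypothetical syntax -- not specified, not implemented anywhere */
@media (mathml) {
  .math-fallback { display: none; } /* browser renders MathML: hide the fallback */
}
@media not (mathml) {
  math { display: none; }           /* no MathML support: hide the markup instead */
}
```

As I argue below, the hard part is not this syntax but deciding when a browser could truthfully answer such a query.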
This discussion started (I think) on the epub3 end, where the IDPF is currently discussing epub 3.1 and best practices; as usual, MathML features in a painfully prominent role. In epub land, the dream seems to be: you create an epub3 file once and some day down the line, when a user's reading system finally picks up MathML support, the old content will magically improve -- progressive enhancements so to speak.
Naturally, @supports is already very helpful in all sorts of ways today, which probably made it a no-brainer (and thus the quick decision). Unfortunately, I think a "media query for MathML" does not solve a single problem.
I was so late to the meeting that when the question of "any objections" came up, I did not feel I was in a position to raise any. Still, in a breakout meeting later that day (about epub specifically), I voiced my criticisms to epub, accessibility, and CSS people alike.
So this is, if you will, the written version of my opinion. (In case you missed that you are on my personal website, please note the use of "my" here.)
A single media query for MathML won't help me as content provider (author, publisher, technology specialist); I also find it generally unhelpful for the web as a whole.
The problem with a single query is simple: when would a browser respond positively? When should a browser legitimately claim to have MathML support? I honestly don't know.
MathML is a huge (and pretty vague) spec. There's not a single browser or library that could claim complete support. MathJax is the top scorer with 85% on the MathML test suite (since MathPlayer was kicked out by IE) but that's not saying much since the test suite is highly biased -- whoever feels like it can submit the data, and in MathJax's case that was me (who is obviously biased).
I can't see how a single media query for all of MathML could provide people with any kind of reliable information on the front-end. Most likely, Gecko and WebKit implementations will immediately turn it on which does not help one bit -- people will still have to test their content on those engines in detail.
Personally, I have already done that too many times (and keep a close eye on them) and I always come to the same conclusion: I cannot recommend using them to anyone since they are too unreliable. So I'm still stuck the same way I was before. Similarly, any publicly available crawler data I've seen indicates that nobody is using native MathML on Gecko and WebKit in the wild, so my position does not seem to be unique.
Of course, the CSSWG might spec out a whole set of individual media queries for every single MathML feature. As unlikely as that seems, we'd just end up deeper in the rabbit hole: MathML is still extremely vague, so few features are specified in enough detail (compared to CSS or SVG anyway). To take a simple example, while Gecko and WebKit might claim support for mfrac (fractions), it's not helping me if those fractions are laid out badly as soon as I put something mildly complex in them. So again, I'll end up not using it.
In terms of accessibility, it seemed an API that assistive technology could trigger would not be as easy to implement in browsers (yet "easy" seemed a prerequisite, given the comments from browser reps in the CSSWG). Since AT tends not to inject scripts (JAWS craziness notwithstanding), they'd need a more sophisticated feature (which is, I think, also being discussed by the CSSWG, but considered much harder, i.e., unlikely).
Besides, this assumes that MathML significantly benefits accessibility. After MathJax getting deeply involved in building a suitable tool, I find this argument questionable. Talking to a11y folks, it usually comes down to "but MathPlayer!" and while MathPlayer was pretty good (albeit dead in the water now) it didn't actually use MathML but a proprietary internal format representing the result of semantic heuristics; this makes it kind of hard to use it as an example for how great MathML is for accessibility.
I think it's unrealistic to expect every single assistive technology to invest as much in a niche like math. I'd estimate that, at any one point in time over the past 18 years, the number of actively maintained accessibility tools with MathML support was 1 (no, neither JAWS nor VoiceOver count as "maintained" when it comes to MathML).
Further, not a single tool has ever used MathML as an internal format because it is simply insufficient -- it is a XML document language for layout and is grossly unsemantic (and don't say "but ContentMathML" now).
If people feel like exposing MathML to AT, then they can use one of the many standard tricks to ARIA-hide the fallback content and visually hide the MathML. Again, in my opinion, that's a disservice for your readers, but nobody stops you.
For me, the weirdest thing about this whole decision was its speed: that the CSSWG signed off on this idea in under 20 minutes just makes me a teeny tiny bit skeptical. It feels a lot like one big "whatevs" -- browsers don't really care but, hey, a media query is little work and if it keeps these math people off our backs, all the more reason.
The real problem remains with or without a media query: where is MathML going? As Romain commented on twitter:
1999 → hope MathML gets implemented
2016 → hope a declaration of non-implementation gets implemented
@pkrautz it's real progress going on
— Romain Deltour (@rdeltour) September 19, 2016
Browser vendors have never worked on MathML support, MathML is no longer maintained as a spec, the MathWG is no more (did you notice?), and MathML is a bad web standard for both layout (another post) and accessibility.
I think it's time to realize that after 18 years of not succeeding on the web, the problem might just lie with MathML itself. We don't need it on the web (CSS and SVG are better for layout and ARIA better for accessibility) and we should stop giving browser vendors an excuse not to do anything that actually helps those developers who realize math on the web in its myriad forms today. (And the XML document world, where MathML succeeded, would be better off as well.)
Don't get me wrong, there are many problems left for math on the web but MathML is not a silver bullet; in fact, it solves none of them. Even if it were implemented widely, we'd still need CSS and ARIA features to match. Instead of waiting for others (i.e., browsers) to solve their problems by magic, the few people with an interest (and the resources to match) will have to solve these niche problems on their own and in a way that moves the web forward as a whole.
Either way, a media query for MathML is pointless.
Somebody asked on the MathJax user group:
To my understanding MathJax supports these input formats: LaTeX, MathML, and AsciiMath. If I'm making a website and I can choose to use any of the three formats, what are some advantages of choosing each?
Since I've answered this so many times, I thought it might be worth copying here:
"That's a tricky (trick?) question.
MathML is MathJax's internal format (essentially anyway) so anything that can be done in MathJax is done through our MathML support, cf http://docs.mathjax.org/en/latest/mathml.html. While MathML is quite good for such an internal purpose, it can be difficult to create. It's rarely written manually (much like HTML or CSS) and tools can have trouble producing high-quality MathML (converters can fail, editors might produce overcomplicated MathML). MathML is the dominant format used in professional publishing workflows and thus comes with a rich toolchain out of XML-land.
MathJax's LaTeX-like input provides a faithful implementation of the most common math-mode LaTeX commands as well as other standard packages and a few non-standard features, cf. http://docs.mathjax.org/en/latest/tex.html. LaTeX is much easier to author by hand than MathML and provides the typical LaTeX advantages such as custom macros (for even easier authoring). It also has the benefit of a large community thanks to the wide adoption of TeX as a programming language for print layout in academic writing. LaTeX is probably the most popular format when people have a choice, so MathJax's TeX-like input has a wide community out there. From a real TeX perspective, MathJax restricts LaTeX input to math-mode since it converts internally into MathML. Due to LaTeX's print heritage, some constructions are hard to do (e.g., equal-width columns are trivial in MathML but not doable with the default LaTeX macros).
AsciiMath is a lightweight markup language designed to convert well to MathML. I sometimes like comparing it to markdown -- not as powerful but much more sensible to write. It does not have the expressive power of MathML but it is very easy to learn because it was designed by Peter Jipsen specifically for high-school- and college-level students. It is less frequently used but if its expressive power is sufficient, I tend to recommend it.
In summary, MathML is MathJax's internal format so anything you can do with MathJax you can do with its MathML input. LaTeX is virtually as powerful (with some edge cases), is easier to author by hand, and has a large community both from real TeX-land and MathJax's community. AsciiMath is the little brother of both MathML and LaTeX and provides a good compromise between expressive strength and human readability.
If you look beyond MathJax there are even more options, of course."
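To make the comparison concrete, here is one and the same fraction in each of the three input formats -- a minimal sketch; real-world markup, especially converter-generated MathML, is usually far more verbose.

```javascript
// The same fraction, (a+b)/2, in MathJax's three input formats.
const tex = '\\frac{a+b}{2}';      // LaTeX (math mode)
const asciimath = '(a+b)/2';       // AsciiMath
const mathml =                     // Presentation MathML
  '<math>' +
    '<mfrac>' +
      '<mrow><mi>a</mi><mo>+</mo><mi>b</mi></mrow>' +
      '<mn>2</mn>' +
    '</mfrac>' +
  '</math>';
```

The difference in verbosity is the point: the AsciiMath version is what a student would type, the LaTeX version is what an academic would type, and the MathML version is what a tool would generate.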
Moving on.
On the "Getting Math Onto Web Pages" community group, Tzivya raised a big topic regarding accessibility:
I would love it the world would come to understand that accessibility is a subset of machine readability. Accessibility APIs are a specialized kind of machine. If we are working on machine readable math, we need to make sure that those specialized machines can read the math too. Otherwise we will do the work twice.
I found myself disagreeing with Tzivya (which means I'm probably wrong because she is awesome). This disagreement is mostly influenced by our work at MathJax over the past year or so, making math rendering accessible. But the point is an important one to me because, as I expected (feared?), a few discussions on the Community Group have already brought up the problem of looking for the right™ solution instead of the realistic one.
For me, what we have now is the right solution: HTML, CSS, ARIA, SVG etc, several competing math rendering/computation/etc implementations based on these, lots of tools tangential to them. An excellent kind of mess without standards beyond what works ok for each project out there. It's not the right™ solution but it has the potential of becoming better and better. It's really just another part of web development; nothing else needed.
Anyway, so I wrote:
"I do dream that eventually (maybe 10 years from now?) we'll have a thorough a11y API mapping for mathematics. At the moment, I don't think mathematics (as a culture / language) is ready for this (though web technology probably would be).
Regarding general machine readability vs accessibility, one important difference I see is that machine readability can benefit from partial results whereas accessibility cannot.
A typical example for this might be units. If we can find a way to make units machine readable, I think we'd have a major improvement for STEM on the web. But it won't help accessibility (much) to know that there are units in an expression if it is otherwise unintelligible.
Of course, we currently don't have any standard or best practice for exposing units on the web. The MathWG had a very old note on units (from 2003) which suggested class='MathML-Unit' on MathML elements; I don't think that's a viable approach today. Perhaps schema.org is a better starting point, considering how successfully search engines leverage units in recipes (I could imagine lab protocols and engineering might benefit from similar methods).
For some tools it's extremely easy to generate markup for units, e.g., Jos de Jong's MathJS has a rich interface for handling units and could probably easily expose them in a visual output. TeX has a rich history with the physics and siunitx packages (which are, for example, partially available in MathJax as third party extensions) and heuristics seem feasible to enrich formats in general (again, MathJax can do some of that via the speechruleengine).
I think for humans we have to change our expectations. Otherwise, we'll just end up repeating the mistakes of the past. I'll post some thoughts on the accessibility thread later."
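For illustration, here is one (entirely non-standard) way a rendering tool could expose a unit in its output. The data-* attribute names below are invented for this example; the point is only that ordinary HTML, not a dedicated math format, can carry the machine-readable information.

```javascript
// Hypothetical sketch: expose a unit in ordinary HTML output via data-*
// attributes so that machines can pick it up. The attribute names are
// invented for this example -- there is no standard here.
function markupQuantity(value, unit) {
  return '<span class="quantity" data-value="' + value +
         '" data-unit="' + unit + '">' + value + ' ' + unit + '</span>';
}

console.log(markupQuantity(5, 'cm'));
// → <span class="quantity" data-value="5" data-unit="cm">5 cm</span>
```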
And I then also wrote on the related thread:
"Today the most reliable method is still to use binary images with alt text: static images are the most reliable in terms of cross browser/platform/network conditions for visual rendering, and alt text is the only way to guarantee at least some alternative rendering (e.g., aural and braille) -- no matter how poor the results may be.

Don't get me wrong, in many specific situations there will be better ways. If you have simple content, then you can get decent visual results with HTML tags with nested aria-labels. If you know you can rely on webfonts (e.g., many ebook situations) then you can use CSS with webfonts for rendering (and again nested labels). If you don't need IE8 (sigh) then you can use SVG, etc.

But in general, binary images with alt text are still the most robust way -- and that's an extremely sorry state. I'm pretty sure we can do better but we need to identify what users need and what tools can realistically achieve today.

My first guess would be: some form of speech text, potentially enabling some level of exploration through nesting (and perhaps full exploration via JS). That's not as bad as it sounds. SVGs with aria-labels are already a close second in terms of usability (pending the ultimate demise of IE8), and like HTML they open up the opportunity of deep labels and thus already provide a certain level of exploration.

But there are other aspects. For example, in the US, MathSpeak has a long history and many users of aural rendering are trained to its way of describing the visual structure of an equation. I've heard enough anecdotal evidence to take this very seriously -- after all, that's how visual users do it. Still, a few months ago I learned that in Germany, on the other hand, blind students might learn TeX syntax early in school (most likely because there is no tradition like MathSpeak, which, after all, precedes the web by decades).

I also expect much overlap with SVG accessibility, where the challenges of summary information at a top level and exploration of details are very similar to mathematics."
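The "nested labels" idea from the quote above can be sketched like this: every layer of the rendered output carries its own aria-label, so a reader can hear a summary and then drill into the parts. The speech strings and structure are illustrative only; in practice the elements also need suitable roles for the labels to be exposed reliably.

```javascript
// Sketch of "deep labels": each layer gets an aria-label, giving AT users
// a summary first and the parts on exploration. Speech strings are
// illustrative; real elements also need appropriate roles for the labels
// to be exposed reliably by assistive technology.
function labelled(label, inner) {
  return '<span aria-label="' + label + '">' + inner + '</span>';
}

// a/b with an overall label and labels on numerator and denominator
const fraction =
  labelled('fraction a over b',
    labelled('numerator a', 'a') + labelled('denominator b', 'b'));
```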
Oh, I gave a talk for Global Accessibility Awareness Day 2016 at the FernUni Hagen -- in German (it's been a while). The slides are on GitHub Pages. It's already somewhat outdated because Wikipedia now serves mainly SVGs (generated with mathjax-node).
Anyway, the core summary stays true:
Why is it difficult to make formulas accessible?
- Formulas compress information (extremely)
- Formulas are often visual
- Formulas are context-dependent
- Formulas are poorly authored
In other words, math accessibility sucks bad. And no solution will really help you there. But MathJax now does its best to make it suck less.
Oh, speaking of accessibility, I'm extremely disappointed that I won't make it to role=drinks after all -- but if you're close by, why don't you drop by?
Enjoy!
I'm visiting David Asperó in Norwich at the moment, and on Sunday, the 12th, I will return home. It seems that the pattern is that you work most of the day, then head for a few drinks and dinner. Mathematics is eligible for the first two beers, philosophy of mathematics for the next two, and mathematical education for the fifth beer. Then it's probably a good idea to stop. Also it is usually last call, so you kinda have to stop.
We consider the iteration of quasiregular maps of transcendental type from to . In particular we study quasi-Fatou components, which are defined as the connected components of the complement of the Julia set.
Many authors have studied the components of the Fatou set of a transcendental entire function, and our goal in this paper is to generalise some of these results to quasi-Fatou components. First, we study the number of complementary components of quasi-Fatou components, generalising, and slightly strengthening, a result of Kisaka and Shishikura. Second, we study the size of quasi-Fatou components that are bounded and have a bounded complementary component. We obtain results analogous to those of Zheng, and of Bergweiler, Rippon and Stallard. These are obtained using techniques which may be of interest even in the case of transcendental entire functions.
Our objective is to determine which subsets of arise as escaping sets of continuous functions from to itself. We obtain partial answers to this problem, particularly in one dimension, and in the case of open sets. We give a number of examples to show that the situation in one dimension is quite different from the situation in higher dimensions. Our results demonstrate that this problem is both interesting and perhaps surprisingly complicated.
We study the class of functions meromorphic outside a countable closed set of essential singularities. We show that if a function in , with at least one essential singularity, permutes with a non-constant rational map , then is a Möbius map that is not conjugate to an irrational rotation. For a given function which is not a Möbius map, we show that the set of functions in that permute with is countably infinite. Finally, we show that there exist transcendental meromorphic functions such that, among functions meromorphic in the plane, permutes only with itself and with the identity map.
The obvious problem is: how should that work? How do we get this small, disparate, and sometimes divided community of math tools for the web to inform web standards and, ultimately, browser development?
Well, it's time to find out.
A couple of people have been working towards a new effort and we've now formed a W3C Community Group. The name may sound funny but it's what this group is after: Getting Math onto Web Pages. No fuss, no drama, no limitations. The focus is on how we do this today and how we can make it easier.
So now it's up to us.
If you're a developer of a tool that makes math work on the web today and want to help shape the future, then it's time to step up. I know your resources are probably tight, in fact most projects out there are run by idealists, as side-projects or chronically under-funded. I hear you.
But you built a tool because nothing was getting the job done. Standards? Same thing. We need to learn about the process, understand what we want to do and what we can do, and ultimately, help build standards that work for everyone. Otherwise, the job won't get done.
So join the Community Group and work together to move the web forward for mathematics and beyond.
Need more information? Here's the initial description from the CG homepage:
There are many technical issues in presenting mathematics in today's Open Web Platform, which has led to the poor access to mathematics in web pages. This is in spite of the existing de jure or de facto standards for authoring mathematics, like MathML, LaTeX, or asciimath, which have been around for a very long time and are widely used by the mathematical and technical communities.

While MathML was supposed to solve the problem of rendering mathematics on the web, it lacks in both implementations and general interest from browser vendors.

However, in the past decade, many math rendering tools have been pushing math on the web forward using HTML/CSS and SVG.

One of the identified issues is that, while browser manufacturers have continually improved and extended their HTML and CSS layout engines, the approaches to render mathematics have not been able to align with these improvements. In fact, the current approaches to math layout could be considered to be largely disjoint from the other technologies of the OWP.

Another key issue is that exposing (and thus leveraging) semantic information of mathematical and scientific content on the web needs to move towards modern practices and standards instead of being limited to a single solution (MathML). Such information is critical for accessibility, machine readability, and re-use of mathematical content.

This Community Group intends to look at the problems of math on the web in a very bottom-up manner.

Experts in this group should identify how the core OWP layout engines, centered around HTML, SVG, and CSS, can be re-used for the purpose of mathematical layout by mapping mathematical entities on top of these, thereby ensuring a much more efficient result, and making use of current and future OWP optimization possibilities. Similarly, experts should work to identify best practices for semantics from the point of view of today's successful solutions.

This work should also reveal where the shortcomings are, from the mathematical layout point of view, in the details of these OWP technologies, and propose improvements and possible additions to these, with the ultimate goal of reaching out to the responsible W3C Working Groups to make these changes. This work may also reveal new technology areas that should be specified and standardized in their own right, for example in the area of Web Accessibility.

The ultimate goal is to pave the way for a standard, highly optimized implementation architecture, on top of which mathematical syntaxes, like LaTeX or MathML, may be mapped to provide an efficient display of mathematical formulae.

Note that, although this community group will concentrate on mathematics, many other areas, e.g., science and engineering, will benefit from (and factor into) the approach and from the core architecture.
PS: We've also applied for a CG slot at TPAC 2016 in Lisbon for a face-to-face of the CG as well as the opportunity to talk to other groups. Fingers crossed!
You can find the video here:
I don't have any analytics on this site beyond what CloudFlare collects passively. There was a spike of ~800 unique visitors and then higher-than-usual traffic afterwards, so it might not be completely unreasonable to guess that 1,000 people opened the post back then -- until somebody posted it to Hacker News today (no link, to save your sanity from the HN comments), so now it's more like 20,000 people who have clicked a link to that piece. Of course, few of those will have read it, fewer still will have carefully read it. My best guess is: 3 people have read it. Does that sound about right?
Most responses were basically "meh" (especially on the twitters). Steve Faulkner is, of course, to blame for much of that twitter attention (thanks Steve!). I also received a few kind emails with responses, thanks for those. Elsewhere, Jesse McKeown wrote a short tumblr; as a former mathematician I'll formally (get it?) object to the use of Gödel's work.
Paul Topping's "response" was mostly focused on his own ideas which have little to do with what I wrote. Let me respond to those few points that were about my piece. Let's do this inline.
The first thing to note in his post is that he says that MathML is a failed web standard. By this, I believe he is only saying that it has failed as a language supported by browsers.
I had hoped my glorious <s> tag was making the point clear. But I guess not.
He acknowledges that it is in heavy use in education, publishing, and elsewhere but I wish he’d made this distinction a bit more strongly.
Ignoring the point that I didn't actually mention education (or "elsewhere"), I thought I had fulfilled this "wish" when I wrote: "It's also clearly a success in the XML publishing world, serving an important role in standards such as JATS and BITS. The problem is: MathML has failed on the web."
I'm not sure how much clearer I can make that distinction -- success here, failure there.
The browser makers ignore MathML so getting rid of it won’t affect them much. Perhaps Peter is directing his message to the MathML community itself.
For what it's worth (and before anyone needs to speculate), my piece was very broadly directed at the web community. I was probably looking for readers who follow current trends in browser standards and their development. (Shout out to Chaals!)
This one’s easy. MathML isn’t implemented in most browsers so its not used.
That argument seems rather simplistic to me. Looking at any successful new web standard out there today (e.g., picture, flexbox, grid, animation), even a partial, behind-a-flag implementation does not mean the standard is not being used. Instead, there's a positive feedback loop with (often seemingly small groups of) developers. Even at the best of times (e.g., Dave Barton pushing WebKit forward for a year, Fred Wang's crowd-funded months), developer feedback for MathML was (and is) non-existent (cf. my example of serious bugs not even being reported).
Sure but imagine if MathML specified layout to the level that TeX does.
This is a) ignoring how poorly Presentation MathML specifies layout (in particular, compared to CSS) and b) a red herring (TeX).
This might well be the case but what’s the point here? If CSS now has what math layout needs, we’re done, right?
Yes. That's the main point, actually.
Perhaps, but even if Presentation MathML provided sufficient semantics, most authors wouldn’t add them. The fact is MathML already provides recommended markup patterns for expressing a lot of math semantics but authors aren’t interested in adding such patterns to their math. Authors generally stop tweaking their math as soon as it looks right and can be read by a fellow human. I don’t think this will change. Even publishers are less and less interested in spending resources on marking up math with the required level of semantics. This won’t change even if MathML added missing semantics constructs and the necessary editing tools were available. Instead, everyone is moving in the opposite direction, spending less and less time and money on careful authoring.
An elegant straw man argument is still a straw man argument. I did not write about authoring or extending MathML. Good points though.
Peter acknowledges Neil Soiffer’s work on algorithmically extracting semantic information from Presentation MathML but seems to think it has hit a brick wall.
Another case of putting words in my mouth. A bit far-fetched this time, since MathJax is actively doing research in this area.
In technology, when someone has a better idea how to do something they should just do it and let the market decide whether their solution is really an improvement.
To quote myself:
Today, lots of tools will let you render mathematics using CSS.
It’s possible to generate HTML+CSS or SVG that renders any MathML content [...] on the server.[And obviously on the client as well.]
Since layout is practically solved [...].
I tried to make a point that CSS and SVG already provide various solutions today. I also tried to make a point that MathML is not used significantly in the wild (except by conversion to HTML/CSS or SVG of course). So it seems to me that I argued that "the market" has chosen these solutions over MathML.
But I guess I wasn't clear enough. Oh well.
No problem but a lot of work needs to be done first.
No, see above.
Peter claims MathML’s mere existence is blocking discussions. What discussions did it block?
That's a good point even though Paul's piece is a nice example of the point I was trying to make. Calling on the community (who is that again?) to magically fix MathML after 10 years without development instead of making room for successful solutions? That is an elegant block.
Anyway, one problem for me is that many of the discussions I have in mind happened privately, especially with web standards experts. But that's no excuse for not spending a few minutes thinking of public examples; for some reason, the discussion on mozilla.dev.platform is the first to come to mind (man, I was feeling righteous back then).
Other examples are the specs themselves. The ARIA spec basically has a big glaring hole where math should be. Similarly, take a look at the suggestions in the ARIA best practices spec and the epub3 spec. All of them focus entirely on MathML-based solutions without any reflection on whether these actually work in real life. The ARIA practices spec even discourages working solutions like HTML-math using dubious arguments about the semantics of Presentation MathML. Moving on.
Paul goes on to write about generating semantic information. It's not quite a straw man but nevertheless has little to do with my concerns about exposing semantic information on the web.
To wrap up.
Of course, Peter doesn’t believe automated semantics recognition can do the job.
See above.
Do we want that math to look identically in every browser? I believe the answer is generally “no”.
I have the impression people generally expect consistent rendering across browsers. But anecdotal evidence is, well, anecdotal.
And that's all, folks. I'll add more responses as they come along -- stay tuned.
Comments
Don Stolee, 2016-04-14
Totally agree with your points raised, though I must admit I don't understand all of it.
We are XML publishers out of Australia and use MathML within our markup. We then publish the XML content to our HTML5 eReader (tekReader) and use MathJax to assist with the rendering.
Example here: http://tekreader.eglootech.com/book/tekReader-Guide#part22#pt22-1-1-h3
It seems to work well on modern browsers found on desktops, tablets and smartphones and we have a University in Canada using our reader.
I would hope the XML world does not drop the standard and browsers continue to support, somewhat.
Peter, 2016-04-14
Thanks for your comment. Tekreader looks very nice.
MathML is clearly a success in the XML world so I don't see it disappearing. I'm not suggesting that anyone should drop MathML if it works for them.
The point I was trying to make was entirely about its role on the web where other tools have made it obsolete (in the sense that it is no longer necessary to have native MathML browser implementations). Since most XML markup is converted to HTML for web delivery (e.g. OASIS tables), I don't see a huge problem in converting MathML to HTML as well.
Does that make sense?
Don 2016-04-14
All good Peter. Thanks for getting back to me. If I may add. I've been providing XML publishing systems since the early 90's (SGML back then). All very monolithic and complex. With the advent of tablets and smartphones I see a trend in marking down XML (I call it dummy down) to HTML5. In fact my business now advocates markup using HTML5 (now with semantics) and do away with all the complexity downstream. Most of the rich markup is never used anyway (aka S1000D).
Peter 2016-04-14
Thanks for the additional comment, Don. I'm far from your level of experience obviously, but I've also heard about this trend. In that context, I often point to John Maxwell's BiB 2012 talk.
Don 2016-04-14
Awesome! Thanks for sharing. At least I know I am not crazy!
In particular a successful mathematical idea is polished with the dust of the many failed ideas that preceded it.
I recently posted a terse -- uhm, shall we say summary? -- of my thoughts on MathML on a11ySlackers; and I promised a blog post. There's now a 6000-word thingie sitting in my drafts which would take months to whip into shape. So I tried again and it now feels both too long and too short; oh well, maybe it leads somewhere, maybe it doesn't.
Needless to say, opinions posted on my personal website are my personal opinions (funny how that works). In particular, they do not reflect the opinions of any of my clients, let alone the team at MathJax. I think they don't particularly help anything or anyone specifically except, perhaps, in encouraging a more open and realistic discussion.
MathML is a failed web standard.^{*}
We can do better, we deserve better.
MathML-in-HTML5 is in the way of that.
^{*}Some people might prefer "browser standard", as in "a web standard to be implemented natively in the browser" since some web standards do not rely on browser implementations. Also, "natively" as opposed to some web-components hack shipped in a browser.
It doesn't matter whether or not MathML is a good XML language. Personally, I think it's quite alright. It's also clearly a success in the XML publishing world, serving an important role in standards such as JATS and BITS.
The problem is: MathML has failed on the web.
Luckily, many technologies have succeeded, and today MathML is neither necessary nor sufficient for math on the web. Instead of one monolithic solution, we have many. We should acknowledge this and move forward towards several newer and smaller standards that actually help developers.
Here are a few reasons that make me say these things.
You might easily think they do (Office! ChromeVox! VoiceOver!) but the browser vendors actually don't. The partial MathML implementations in Gecko and WebKit are entirely the work of volunteers. Largely unpaid, largely unsupervised, largely unaccountable.
Not a single browser vendor has stated an intent to work on the code, not a single browser developer has been seen on the MathWG. After 18 years, not a single browser vendor is willing to dedicate even a small percentage of a developer to MathML.
This is where the story should end, really. But sadly it doesn't. MathML's success in the XML world has kept it alive, but not for the benefit of anyone on the web.
MathML is a poor web standard and it would be better to remove it from HTML 5.
If you look at publicly available crawler data, you'll notice that it's hard to find examples of MathML that aren't behind paywalls. If you look further, you'll hardly find an example where people providing MathML content rely on native MathML implementations; even on Gecko and WebKit they use MathML-to-HTML5 converters. Another indicator is that, despite implementations having subtly deteriorated in the past two years, people aren't even complaining (I mean, WebKit stopped drawing surds (try this in Safari 8) but apparently nobody cared enough to even file a bug). Actual developer problems are so extreme you can't seriously develop anything slightly advanced with MathML (e.g., Gecko has non-existent or incomplete support for basic APIs such as style, dataset, or event handlers for MathML elements).
Ok, truth be told, I don't know. The problem is: it's nearly impossible to generate good Content MathML (except with massive manual labor). As far as I know there is not a single significant collection of mathematics encoded in Content MathML out there. It's mainly ephemeral research projects and some hand-crafted projects. That's fine, we need research after all, but that is not a standard fit for the web.
Now <mstyle>, <mspace>, <mpadded>, <mphantom>, <menclose>, <mfenced>, <mtable>, <mstack> might sound funny to a web developer but it's a serious problem. The web has found a productive separation of concerns. MathML is incompatible with this approach.
MathML assumes an implementor would know or care about the intricacies and traditions of math layout. How do you draw a surd? Not specified. How do you draw a fraction? Not specified. How do you space things? Not specified. [But yes, dear implementor, you should support arcane mathematical layout features like movable limits, operator dictionaries, the subtle spacing and layout difference of inline- and display-style and so forth; you know why they're important, right? RIGHT? And also make sure to implement 5 different approaches to vertical stacking, because, reasons -- kthx, xxo.]
Today, lots of tools will let you render mathematics using CSS. It's messy but it works everywhere (ok, dear IE7 user, not for you, I'm sorry). The time when MathML implementations would have significantly enhanced web layout features is past.
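As a taste of what "rendering mathematics using CSS" means in practice, here is a minimal span-soup sketch for a fraction. The class names are my own and real tools (MathJax, KaTeX, etc.) emit far more markup to get baselines and spacing right; this only shows the basic idea: an inline-block stack whose top cell's bottom border serves as the fraction bar.

```javascript
// Minimal "span soup" for a fraction a/b. Deliberately simplified: real
// renderers add baseline corrections, spacing, and font metrics on top.
function cssFraction(num, den) {
  return '<span class="frac">' +
           '<span class="num">' + num + '</span>' +
           '<span class="den">' + den + '</span>' +
         '</span>';
}

// Companion CSS, kept as a string only so the sketch is self-contained:
const fractionCss =
  '.frac { display: inline-block; text-align: center; vertical-align: middle; }' +
  ' .num { display: block; border-bottom: 1px solid currentColor; }' +
  ' .den { display: block; }';
```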
Neil Soiffer wrote ingenious heuristics for MathPlayer, which make most people think that Presentation MathML makes mathematics accessible. That's about as accurate as saying OCR means all images with text are actually accessible.
The reality is that even for school-level math you need both high-quality Presentation MathML (which is rare in itself) combined with powerful (but inevitably fallible) heuristics to extract meaningful semantic information; that's acceptable in the short run but not a real solution for mathematical semantics on the web.
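To illustrate why such heuristics are both possible and fallible, here is a deliberately naive toy version -- nothing like the sophistication of MathPlayer's actual rules. It maps an msup base/exponent pair to speech, reading every exponent 2 as "squared", even though the very same markup could be an index, a label, or a derivative order, which is exactly where heuristics break down.

```javascript
// Toy heuristic: turn an <msup> base/exponent pair into speech text.
// Deliberately naive -- it assumes every exponent is really an exponent,
// although identical Presentation MathML can encode indices or labels.
function speakMsup(base, exponent) {
  if (exponent === '2') return base + ' squared';
  if (exponent === '3') return base + ' cubed';
  return base + ' to the power of ' + exponent;
}

console.log(speakMsup('x', '2')); // → x squared
console.log(speakMsup('x', 'n')); // → x to the power of n
```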
MathML has seen no significant activity in almost a decade. In the industrial XML world, MathML is a success and people want more features but improvements are not even brought up. It seems nobody wants to jeopardize an adoption on the web. MathML being a web standard is negatively affecting even those users who actually embrace it because MathML is stuck in maintenance mode.
Did you know the MathWG's charter is running out this month? Would you notice if it wasn't renewed and the WG ceased to exist? Would you notice if WebKit and Gecko ripped out their MathML implementations tomorrow? I'm not sure many people would.
Many people I've met have the mistaken impression that browser manufacturers have declared an intent to implement everything in the set of standards usually called HTML 5. They have not (even if HTML 5 as a "spec" may strive for that).
I think as long as MathML is in that set of standards, the lame duck argument ("it's a standard!") will continue to prevent alternative developments that would help the solutions for mathematics on the web that actually work.
At this point, MathML is effectively preventing mathematics from aligning with today's and tomorrow's web. This is hurting everyone. We need to drop MathML to make room for better standards.
It's possible to generate HTML+CSS or SVG that renders any MathML content -- on the server, mind you, no client-side JS required (but of course possible). The resulting markup is arguably crap -- it's span soup at its worst and some use cases are difficult to realize. But we've been there with HTML and CSS; people know how to solve this. It got us standards like flexbox and css-grid; it's worth pursuing improvements to those standards that work instead of waiting for Godot.
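To make "span soup" concrete, here's what a hand-written HTML+CSS rendering of the fraction a/2 might look like. All class names and style values here are invented for illustration; real converters emit considerably more (and messier) markup than this:

```html
<!-- Hypothetical span soup for the fraction a/2 (invented classes and values) -->
<span class="math">
  <span class="mfrac" style="display: inline-block; vertical-align: -0.5em; text-align: center;">
    <span class="num" style="display: block; padding: 0 0.15em;">a</span>
    <span class="den" style="display: block; border-top: 0.06em solid; padding: 0 0.15em;">2</span>
  </span>
</span>
```

Even this toy case needs fiddly vertical-align and border tricks; multiply that by scripts, radicals, and stretchy delimiters and you get the soup.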
It's also difficult to write your own math rendering tool. But we need more ideas, not fewer! It shouldn't be harder to write a simple math renderer in CSS or SVG than it is to write an RWD framework or a vector graphics library.
We don't need Presentation MathML for this even if many projects (like MathJax) use it as an internal format. MathML's failure as a web standard is hurting the web because it is blocking discussions about improving existing standards to help existing mathematics tools on the promise that eventually "MathML will solve everything (tm)".
I can't see a native MathML approach help to fill these final gaps. What existing rendering solutions need has little to do with what MathML implementations need. We don't need underspecified layout features tied to MathML elements, we need flexible CSS features that are integrated into existing CSS. Most importantly, existing solutions can iterate on partial improvements to ensure that these help layout on the web more generally, not just the needs of one specific mathematical markup language.
We don't need one true approach to math layout, we need flexibility for developers to be innovative and pursue new ways of solving layout problems and expressing mathematical thought on the web.
We need to get together with CSSWG/Houdini TF/etc to work out solutions that help those developers who actually solve the problem of math on the web.
To give a rough idea -- From a MathJax point of view, three areas are difficult in CSS right now (and probably universally for math layout tools on the web):
Stretchy things are by far the biggest layout question, if only because they once led Ojan Vafai to call math layout fundamentally incompatible with CSS layout. As much as I respect his expertise, that cannot be the answer. It seems unlikely that we can't incrementally reduce the complexity for existing rendering solutions; in any case, it has little to do with MathML.
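To illustrate why stretchy constructions are painful in plain CSS, here's a naive (and typographically wrong) sketch of my own; scaling a glyph distorts its stroke weight, which is exactly why real renderers assemble stretched delimiters from glyph pieces instead:

```html
<!-- Naively stretching parentheses around tall content with a CSS transform;
     scaleY distorts the stroke weight, so this is the kind of hack to avoid -->
<span style="display: inline-flex; align-items: center;">
  <span style="transform: scaleY(2.5); transform-origin: center;">(</span>
  <span style="display: inline-block;">tall<br>content</span>
  <span style="transform: scaleY(2.5); transform-origin: center;">)</span>
</span>
```

A CSS primitive for assembling and stretching such constructions cleanly is the kind of incremental improvement worth pushing for.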
Since layout is practically solved (or at least achievable), we really need to solve the semantics. Presentation MathML is not sufficient, Content MathML is just not relevant.
We need to look where the web handles semantics today -- that's ARIA and HTML but also microdata, rdfa etc. Especially ARIA is an extremely urgent problem because it currently ties mathematics entirely to Presentation MathML elements (where it fails) instead of providing a way to enrich all mathematical rendering on the web.
We also need to look beyond the semantics of mathematics into the semantics of mathematics in its applications, e.g., mathematical notation out of physics (units etc), chemistry (isotopes, reactions etc) and biology (trees, graphs etc). We need to find ways to expose this information to assistive technologies, search and other tools.
You can find the article on the ESTS' website "Resources" page, or in the Papers section of my website.
If you happen to be a student and a member of the Association for Symbolic Logic, you can apply for an ASL travel award. For more information as to how, please see here. There's just enough time to still submit your request!
Some of you may have known him through MathOverflow as "Avshalom" where he often appeared in the comments with generally useful references, and some of you may have known him in real life as a teacher or a colleague, or a student. Some of you may even have known him as Eoin Coleman.
You can find that video right here:
Not assuming the axiom of choice the definition of cofinality remains the same, if we restrict ourselves to ordinals and \(\aleph\) numbers. But why should we? There is a rich world out there, new colors that were not on the choice-y rainbow from before. So anything which is inherently based on the ordering properties of the ordinals should not be considered as the definition of cofinality. So first let's recall the two ways we can order cardinals without choice.
MathML is often presented as the single solution to all math accessibility problems. For example, the ARIA spec says "Browsers that support native implementations of MathML are able to provide a more robust, accessible math experience than can be accomplished with plain text approximations of math", the IDPF accessibility guidelines say "[...] a benefit of native MathML support [...] is the ability to provide voicing based on the markup [...]" (ok, they do suggest fallback speech text later only to go on and tell you that annotation-xml will work without, you know, some level of MathML support), even PDF/UA suggests MathML.
While this might seem plausible for authors, I can't shake the feeling that saying "just use MathML" is a bit of a cheat, especially on the web.
On the one hand, there's the reality of the technology landscape. I'm not going to criticize browsers yet again but accessibility happens to include visual rendering (duh!); without it, accessibility of mathematics on the web is fundamentally broken. Even more so since ARIA falls short in terms of enabling HTML or SVG rendering of mathematics to be accessible.
On the other, while a growing number of screenreaders happily tout MathML support, there are (please correct me) really just three solutions out there: The new kids are VoiceOver and ChromeVox whose quality might be summarized with "meh" (not terrible but really not yet great in terms of math support or, for that matter, active development of math support). The grand old lady of math accessibility is of course MathPlayer which, I'm guessing, is the origin of the "just use MathML" ("just use MathPlayer"?) attitude for accessibility both because of its quality and because it is what many screenreaders leverage (JAWS, NVDA, Texthelp etc). However, with MathPlayer being pushed out of IE and into the status of a third party library (and integration into screenreaders sometimes lacking) that line of argument is a thing of the past. Practically speaking, there is no real, productive competition today and thus no resources for improvements.
Anyway, the question I've been pondering is: why do most screenreaders rely on external tools rather than implement MathML support themselves?
I suspect the answer is the same as with browsers: because it is too hard to render MathML accessibly. That is, while building on MathML is much better than alternatives (I'm looking at you, TeX), it's still an awful lot of trouble to write a decent (let alone good) MathML accessibility solution. Too much work, too much of a niche, too many other things to do, yadayadayada.
Of course with MathML I mean Presentation MathML since Content MathML is too rare in the wild. Presentation MathML is a very good XML format to canonically represent most traditional (read: print) formula layout and is universally appreciated as an archival format. But Presentation MathML is not "trivially" accessible. Unlike, say, ARIA roles, there is no straightforward process that will tell you how to, e.g., voice, sensibly explore or highlight a well-written MathML expression (let alone a shoddily-written one). Instead, existing tools end up guessing both the mathematical structure of an expression as well as its semantics.
On the one hand, there's the fundamental problem of context (e.g., to tell whether (a,b) describes an open interval, a point in the plane, or an inner product) and of compression (Kill Math anyone?). But what's even more confusing about "just use MathML" is that, in fact, Presentation MathML can be pretty semantic -- with elements like mfrac, mroot, or mlongdiv, and things like menclose notation, fences, or the operator dictionary, all of which carry semantics despite Presentation MathML being "just" about layout.
So you might think that's not so bad after all. However, that's only half true. Besides the obvious problem of virtually everything missing in terms of notation, Presentation MathML is somewhat lacking in genuinely neutral layout features. So as an author, you'll have to use those semantic-but-really-layout elements. This way you end up finding suggestions in the spec itself to use mfrac with linethickness="0" to represent a binomial coefficient. Which is visually rather similar to doing a construction using an mtable (which might in turn be used to convey a vector/matrix). And then you could also hack something together using mstack, which might sound like a fundamental math layout element (a vertical stack) but unfortunately is designed only for written addition, multiplication, and division.
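For illustration, the two visually similar constructions just mentioned might be written like this (markup sketched by me, not copied verbatim from the spec):

```xml
<!-- Binomial coefficient via mfrac with linethickness="0" -->
<mrow>
  <mo>(</mo>
  <mfrac linethickness="0"><mi>n</mi><mi>k</mi></mfrac>
  <mo>)</mo>
</mrow>

<!-- A visually similar column via mtable, which could equally encode a vector -->
<mrow>
  <mo>(</mo>
  <mtable>
    <mtr><mtd><mi>n</mi></mtd></mtr>
    <mtr><mtd><mi>k</mi></mtd></mtr>
  </mtable>
  <mo>)</mo>
</mrow>
```

Both render as two stacked items in parentheses, yet one is "really" a fraction and the other a table; an accessibility tool has to guess which reading was intended.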
As an accessibility tool you need to build in something that allows you to guess the semantic structure. And just to stress this again: not for the horribly broken markup you'll inevitably run into but for high quality, spec-suggested markup.
Don't get me wrong. It's great that such heuristics are actually not impossible for Presentation MathML (as opposed to handling a programming language like TeX) so you can at least cover the educational use cases pretty well. But we're a long way from making math accessibility an average task for screenreaders (which is what it should be, just like visual rendering should be a simple task for a browser). MathML is a step forward for math accessibility but it is, ultimately, a tiny step given the practical problems, especially on the web. Endlessly repeating "just use MathML" is not helping.
I feel like I should add a Disclaimer to this one. We're currently building an accessibility solution for MathJax based on improvements to ChromeVox's math engine so obviously I'm terribly biased and a horrible person. But you already knew that.
So I raised a question in the comment, and got replies from two other people who kept repeating the age old silly arguments of what are the elements of \(\RR\times\RR\) or what are these or that elements. And supposedly the correct pedagogical answer is "It does not matter what are the elements of \(\RR\times\RR\)." With that I strongly agree, and when I taught my students about ordered pairs on the very first class of the semester, I made it very clear that there are other ways to define ordered pairs and that we only do that because we want to show that there is at least one way in which ordered pairs can be realized as sets; but ultimately we couldn't care less about what way they encode ordered pairs into sets, as long as it is a "legal" way.
So here is how I read a paper, and I'd like to ask you to think about how you read a paper, and why you read it this way.
Around the time when I first came to grips with the part of my job for MathJax which can only be called something horrible like "technology evangelist", webplatform.org launched. For a newbie like me this seemed like a big thing. All the big companies involved, supposedly working together, pushing the Open Web Platform, bringing together the best of existing developer docs (Mozilla, Google, Microsoft etc), creating documentation hackathons etc. This is huge! (No it wasn't.)
So as a new MathJax and thus MathML "evangelist" I was dismayed that MathML was not mentioned in the "hot topics" list on the frontpage (cf. the Wayback Machine). I remember trying to raise the issue and getting a response literally years later (2014) pointing me to where I should have tried to start a discussion. Recently, I visited the site again, and since its redesign last year, it's a bit clearer where things stand, but still MathML is hard to find.
In fact, I can't find any link to MathML while browsing webplatform.org. Only the search finally yields a link to the base page for MathML (and the content you'll find starting from there seems to be copied from MDN (which is obviously fine)). But don't worry, even here you'll find a little bit of MathML bashing.
So as I came upon webplatform.org again recently, I started to wonder why I had given up on approaching such sites. And it's pretty simple: if you look around, it's pretty much the same thing everywhere.
Whether it's the html5-is-cool sites like html5rocks or html5please, MathML just doesn't show up. General web development sites? Oh look, Smashing Magazine has no mention since 2009 and A List Apart has one comment in 2013, and even that 2009 article comes with snark.
I'd give you that caniuse lists MathML but even if you can bear the pain of looking at all that red, take a look at its frontpage which lists MathML under "other", a miraculous category with anything from EOT to strict mode to ShadowDOM; not exactly prime real estate.
Then you cast your net wider and go to Google Web Alerts and you register to get an alert for MathML, you set it to its widest setting – and what you'll get is almost exclusively long lists of MathML snippets produced by Springer OA journals, with maybe some MathJax or StackOverflow sprinkled in. Speaking of which, don't go search for mathml on StackOverflow because you will only see questions that have next to nothing to do with the web (except that really nice and difficult one that obviously has to have negative votes – yay SO community...).
But maybe you are also interested in other things. Like regular web technologies (you know, the ones that get implemented by browser vendors) or other niche web ecosystems. And then you might just notice some really cool resources in those areas. Can you even imagine something like flexbugs or an awesome-style GitHub list or the incredible 99problems for MathML? I admit I can't.
Let's stop here.
A while back Tim Arnold, the awesome person behind projects like plastex and mathjax-server, asked the following question on the MathJax User Group.
I am trying to decide what font to use for MathJax. The TeX font is the default, but I think I remember that the STIX-Web fonts have the best glyph coverage.
I have a lot of math to support on all kinds of browsers. What factors should I consider when choosing the best font to use in MathJax?
Soon thereafter, fellow Booles' Ringer Dana Ernst asked me the very same thing. At that point, I was hooked and started this post. It only took me a month to actually get around to finishing it because I wanted to include a basic demo.
tl;dr. Font pairing is an art, is a pain, is an art. I've cooked up a small example on CodePen that allows you to test Google fonts with various MathJax fonts. Just grab a font name from Google Fonts, paste it in and check out how the available math fonts pair up. For screen-real-estate reasons you might want to head over to CodePen. Easy as that.
See the Pen MathJax Font lab by Peter Krautzberger (@pkra) on CodePen.
It is a complex question because, essentially, font pairing is an art. If you simply look at existing sites that try to help you with this, it's clear that many people are looking for solutions while fully realizing that this is highly arcane design knowledge. Alas, I have no such knowledge. What I can add is that it's also a compromise between overall design and the effects on MathJax functionality. So let me summarize some of the important details.
The biggest limitation is obviously that MathJax only supports a handful of fonts. That's a bummer and we hope to add support for more fonts so if you're savvy and interested in helping out, reach out!
The next thing worth pointing out is that MathJax already goes a long way by matching the ex-height and em-width of the surrounding font, that is, the height of x and the width of m. That's simply best practice but more work on the web.
However, it's usually still important to pair the math font with the surrounding font carefully to avoid disrupting the reader's flow between math and non-math (because ex-height/em-width are often not enough matching, especially for upper case letters). Of course, you could use the math font for the surrounding text to avoid that but most people strongly favor their options for text more (and rightly so, mathematics should always serve the text in my opinion).
(Edit, 2015-10-01: Davide Cervone had to correct me there. Originally this had em-height, the height of x and m, and ex/em-height; D'oh...)
The next important thing is usually another piece of font functionality. That is, most people like to weigh their options with respect to font coverage, i.e., which Unicode points are covered by glyphs in the fonts. For that it's important to consider what happens if MathJax encounters a Unicode point that's not in the glyphs of the configured fonts.
For the default MathJax "TeX" fonts (for historic reasons), there's an additional feature: MathJax supports a wider range of Unicode than the fonts themselves might tell you upon inspection of their glyphs. That's because MathJax builds some characters on the fly (e.g., the TeX fonts do not include a quadruple integral but build it out of two double integrals; similarly for "negated" characters). If I recall correctly, we only do this for the "TeX" fonts (the release that added the additional webfonts was simply sub-par for various unfortunate reasons, I'm afraid, and we never got around fixing it).
Next, MathJax will run through a (pretty complex) fallback chain within the configured fonts (e.g., upright Greek will be substituted with italic Greek because we think that's better).
Next, MathJax will ask the browser for a glyph, i.e., fall back to system fonts. Side fact: this also triggers reflows as MathJax has to measure the actual glyph as best it can (for the configured fonts, we generate the relevant data during production and load them on the fly but there are no browser APIs to get the relevant metrics for unknown fonts/glyphs).
The lack of exact information about an unknown glyph means that the layout can't be as precise as it is with our supported fonts. However, in many situations this is not a huge issue as such glyphs are usually rare and not part of complex layout situations. Then again, e.g., placing sub/supscripts can be affected so your mileage may vary.
The bigger issue (speaking from the complaints we get) is the randomness of the system font. You can control that via the undefinedFamily configuration option of each MathJax output processor. You might then also add a separate webfont for that fallback (well, if you can find one that covers your content and pairs with both your math and text fonts; a tall order usually).
Finally, by testing / pre-processing your content via MathJax-node (for QA or for actual output), you can gather up the information on the missing glyphs for your content.
In the future, we are hoping to find the resources to expand the fallback cascade. The idea is to enable you to specify other supported fonts before the system fonts are used (e.g., use TeX fonts but then be able to fallback to Latin Modern or STIX). This would resolve the problem of measurements / layout quality but adds load (both webfonts and fontmetric data). In that context, we would probably work on simplifying our dev tools so that developers can build their own cascade. Finally, we would also hope to simplify our tools for generating the fontmetrics data, i.e., enable developers to modify a copy of MathJax to use their own in-house fonts. But there are some technical requirements to the fonts and considerations for a smart fallback chain so that's highly non-trivial to set up.
In any case, you can play around with the pen and let me know what you think, either here or on CodePen.
So I figured, why not use this for explaining mathematical theorems.
We find a similar concept in Zelda's poem "Every man has a name" (לכל איש יש שם), which in Israel is closely associated with the Holocaust and with assigning numbers to people. But alas, we are all numbers in some database. Our ID numbers, employer number, the index under which you appear in the database. You are your phone number, and your bank account number. You are the aggregation of all these numbers. And more.
Do not worry about your difficulties with MathML; I can assure you that mine are still greater.
I have written about why I care about MathML and why I care about Native MathML. Time to talk about some of the problems I see.
This piece reflects my personal opinion and is not indicative of the position of any project I might work on. It is meant as a conversation starter.
I care very much about MathML and in particular the mission of the W3C Math Working Group to facilitate and promote the use of the Web for mathematical and scientific communication. Yet, while MathML has succeeded everywhere else, it struggles on the web. That worries me.
MathML did not start out as an XML language but as the <math> tag in HTML3. It was the browser vendors (Microsoft, Netscape) who rejected it; as a result, <math> went into XML "exile" (where it was immensely successful) and returned to HTML in HTML5.
Still, all OWP technologies stand and fall with the support and adoption from browser vendors. It does not matter how good (or bad) a web standard is or how well it works elsewhere. Browser vendor adoption is the only relevant measure.
It's been two years since I started to write "MathML forges on".
Back then, native browser support seemed to be on the tipping point. Dave Barton had done amazing work on improving Alex Milowski's WebKit code, and the deactivation in Chrome seemed to be a hiccup due to one single bug (that already had a fix). It seemed, with a little luck, Gecko/Firefox and WebKit/Safari would have made it to the 80/20 point within a year, hopefully in turn getting the Blink/Chrome team to re-enable MathML; then we'd watch as Trident/IE (now Edge) would hurry to integrate the math support from LineServices.
Two years later, Gecko has moved sideways, WebKit has barely moved, Trident/Edge remains a mystery, and Blink is "the villain" (for dropping the WebKit MathML code). MathML is still the only viable markup language for mathematics (recently reaffirmed by its ISO standardization), and yet, native browser support seems just as far away as ever.
Why?
Gecko's and WebKit's (still quite partial) support has been almost exclusively implemented by volunteer contributors (and mostly unpaid volunteers at that).
Effectively, no browser vendor has ever worked on MathML support in their browser. (Yes, that's a bit unfair to Mozilla devs who are great -- sorry. There are also good people at Apple, Google, Microsoft; still, the companies all fail to invest in MathML browser support.)
The volunteers, on the other hand, come and go. Nobody is able to find significant funding and development is, once again, effectively dead.
At this point, I don't see how we can ever get sufficient native MathML support in browsers; the volunteer method does not work and the vendors remain uninterested.
The fact that browser vendors do not implement MathML says virtually nothing about MathML. Studying past discussions, it's clear that there isn't a lot of knowledge about the spec or the requirements of mathematical layout. (Again, this is a little unfair to some Mozilla devs.)
So I see no reason to give up on MathML, let alone math and science notation on the web. Because one thing has not changed: it's still the best markup for math -- and education, industry, and research need a good markup that works on the web.
While I don't think native browser implementations is a realistic goal at this point, I think MathML can still be a trail blazer, especially for scientific notation. It is, after all, a long standing W3C standard and we know it works very well in a browser context (even if you need polyfills).
I think there are two problems we can focus on that are just as useful to move scientific markup (and the web in general) forward:
As opposed to native browser support for MathML, both of these are extremely feasible.
For the first, it recently became clear to me that modern browsers (IE9+) are actually good enough for layout; that is, you can write converters from MathML to HTML or SVG markup so that the result is stable, i.e., provides the same layout on all browsers (comparable to TeX quality but naturally integrated into the page context). To be clear, this is not (just) about client-side rendering like MathJax (in fact, MathJax does not provide this yet).
The biggest problem is that the necessary markup itself is messy, making it hard to generate (just look at the spans-spans-spans that MathJax currently generates).
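To get a feel for what such a converter does, here is a deliberately tiny sketch: it handles only a well-formed subset of Presentation MathML (mi, mn, mo, mrow, mfrac) and emits spans with class names I invented. Real converters (LaTeXML, MathJax's server-side tooling, etc.) do vastly more work than this toy:

```javascript
// Toy MathML-to-spans converter; assumes well-formed input restricted to
// mi, mn, mo, mrow, mfrac with no attributes. Class names are invented.
function mmlToSpans(src) {
  let pos = 0;
  const skipWs = () => { while (/\s/.test(src[pos] || "")) pos++; };
  function parseNode() {
    skipWs();
    const close = src.indexOf(">", pos);          // end of "<tag>"
    const tag = src.slice(pos + 1, close);
    pos = close + 1;
    if (tag === "mi" || tag === "mn" || tag === "mo") {
      const end = src.indexOf("</" + tag + ">", pos);
      const text = src.slice(pos, end);
      pos = end + tag.length + 3;                 // skip "</tag>"
      return '<span class="' + tag + '">' + text + "</span>";
    }
    const children = [];                          // container: parse until close tag
    for (skipWs(); !src.startsWith("</" + tag + ">", pos); skipWs()) {
      children.push(parseNode());
    }
    pos += tag.length + 3;
    if (tag === "mfrac") {                        // numerator stacked over denominator
      return '<span class="mfrac"><span class="num">' + children[0] +
             '</span><span class="den">' + children[1] + "</span></span>";
    }
    return '<span class="' + tag + '">' + children.join("") + "</span>";
  }
  return parseNode();
}
```

Even this toy shows the shape of the problem: the output is already nested span soup, and it would still need the CSS to make the mfrac spans actually stack and draw a bar.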
But is this unusual? I think this situation is not unlike how grids using Bootstrap or Foundation are overly complicated compared to grids using css-grid layout. Or how doing flexbox-like layout is horribly complicated without flexbox.
I think we should focus on widely implemented standards and work on improving them so that the markup you need for good math layout becomes cleaner and thus easier to generate (both in terms of structure/semantics and performance).
For the second point, looking at the developments of the semantic web, it's obviously not being realized in terms of mandated HTML tags or CSS properties. It is being realized via ARIA roles, RDFa, microdata etc. I'm not saying these approaches work for the semantic structure of MathML (let alone STEM in general) but something along those lines seems achievable.
Frankly, I'm a bit tired of waiting for Godot, that is, native browser support for MathML. MathML is frozen because we're all waiting for browsers to catch up. It is simply not happening. Let's look for ways to move forward.
Comments
gimsieke, 2015-08-10
Nobody is able to find significant funding
MathJax has managed to attract a significant network of donors. Why don’t they either encourage their “investors” to also invest in native math rendering, or why don’t they use the proceeds to fund native development directly? This shouldn’t be beyond their bylaws.
Peter, 2015-08-10
Why don’t they either encourage their “investors” to also invest in native math rendering,
The "nobody" includes MathJax. Of course, the fact that we failed does not say much. The fact that everybody failed so far, might.
why don’t they use the proceeds to fund native development directly
Because then we wouldn't be able to develop MathJax itself.
This shouldn’t be beyond their bylaws.
Sure. Neither would be curing cancer.
Bruce Miller, 2015-08-10
Interesting blog post! I've two comments to make.
Easy one first: I feel like you are unnecessarily harsh on the quality of Gecko's MathML support. While I understand your pride in MathJax, I'd still put Gecko at 90/10 or better rather than 80/20 or below. It certainly can use improvement and is more variable, depending on system fonts, etc, and I'd definitely appreciate more official support from Mozilla. But it gets all the essentials and with the right fonts looks virtually as good as MathJax --- and it's blindingly fast in comparison.
This is more than a fanboy stance: I think there's a psychological factor to this as well, when the message seems to be that no matter what is done, it's never good enough.
The second issue is a bit more subtle. On the one hand, you advocate strongly for MathML; on the other, you propose to focus on "widely implemented standards" for doing mathematical layout. There seems a big ambiguity there: Are you suggesting that authors should create & serve MathML in their web pages and that the way forward is in improving and using the better supported standards as a way of rendering the MathML? Or are you suggesting using whatever technology is available to render something that looks like math, whether or not the representation is MathML? I suspect the former, hope for it, but whichever stance you take, I'd prefer to see it more explicit. The ambiguity just feeds the suspicions about MathJax in some and provides an excuse to abandon MathML in others.
Thanks for the thought provoking article;
bruce
Peter, 2015-08-11
While I understand your pride in MathJax, [...]
This post is really not about MathJax. In many ways, the opposite. But the only ones who could claim pride in MathJax would be Davide and Robert; certainly not me.
I'd still put Gecko at 90/10 or better rather than 80/20 or below
I've often described Gecko as the baseline for MathML feature support. If you can't make your MathML work in Gecko, you probably shouldn't be using it.
But I also understand why people disagree with that. In my experience, you need to be quite knowledgeable about Gecko's implementation (at least from the outside) to avoid running into layout quirks or missing features; watching the MediaWiki math extension feedback is a good example for this.
Of course, this is nothing special, the same is true about MathJax. But the problem is that no large scale MathML adopter I've ever talked to is willing or able to spend the resource on optimizing their content for Firefox.
[...] depending on system fonts [...]
That's not a minor issue though. The switch to MATH tables has brought quite a few problems in terms of layout and more importantly developer burden.
While MATH tables seem to be the way to go, they can only be leveraged by native implementations (and there aren't exactly many fonts with MATH tables, nor would I expect that expensive niche to grow much soon).
This adds to the burden of front end developers who would have to provide two sets of webfonts -- one for Gecko and one for everyone else (i.e., polyfills). It's another case of a good standard being useless because it's not widely implemented. But it's made worse because polyfills cannot leverage it so there's no positive feedback loop.
it's blindingly fast in comparison.
Sure. That's why I'm not talking about client-side rendering here but for the generation of HTML with CSS. This includes tools like LaTeXML or pandoc or any XML workflow tool.
(But fun fact: we've seen edge cases of client-side MathJax outperforming Firefox by a clear margin. I got lucky and was able to mention it to Rob O'Callahan personally and Gecko got improvements.)
I think there's a psychological factor to this as well, when the message seems to be that no matter what is done, it's never good enough.
I don't think the problem is "never good enough". MathJax is certainly not "good enough" for many people (in particular in terms of performance, but also layout, feature support etc).
I think the problem is rather "no chance of getting better". There is no interest from the browser companies; that's what would have to change.
I think even a limited implementation would be interesting if developers had the promise that bugs will get fixed and implementations moved forward. This is not some kind of chicken-and-egg problem, it's simply a failure of browser vendors (and just to repeat myself: not of individual browser developers!).
Are you suggesting that authors should create & serve MathML in their web pages and that the way forward is in improving and using the better supported standards as a way of rendering the MathML?
Or are you suggesting using whatever technology is available to render something that looks like math, whether or not the representation is MathML?
Neither and both. Authors should use whatever works for them. If that's asciimath during authoring or even in the final page, that's fine; I don't lose sleep over it. (Just like nobody loses sleep over somebody converting markdown in the page.) I do think that authoring tools and converters should not stop at MathML but think further because waiting for MathML support to come around is not helping.
I would like to see those tools move MathML forward by making it the best markup for rendering math on the OWP. But I'm not thinking of something that "just" looks like math but about HTML or SVG markup that is enriched to be just as powerful as its underlying MathML. That's currently not possible for lack of, e.g., aria roles. But I think we could quickly get to a point where a fully equivalent "interpretation" (or "transpilation" to use a fashionable term) in HTML or SVG does not require client-side rendering.
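To give a rough idea of what such an enriched "transpilation" might look like, here is a hypothetical sketch for a simple fraction. The class names are made up; `role="math"` and `aria-label` do exist in WAI-ARIA 1.0, but, as noted above, finer-grained math roles do not, which is exactly the gap being described.

```html
<!-- hypothetical HTML rendering of <mfrac><mi>a</mi><mi>b</mi></mfrac>;
     role="math" / aria-label are real ARIA 1.0, the class names are invented -->
<span role="math" aria-label="a over b" class="mfrac">
  <span class="mi">a</span>
  <span class="mi">b</span>
</span>
```

With per-token roles (rather than one flat label on the container), assistive technology could in principle explore such markup the way it can explore native MathML.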
The ambiguity just feeds the suspicions about MathJax in some and provides an excuse to abandon MathML in others.
This reads like FUD to me.
My piece opens with "This piece reflects my personal opinion and is not indicative of the position of any project I might work on." This obviously includes MathJax.
MathJax is a MathML rendering engine. I'm proposing something based on MathML and my hope is to move MathML forward despite the lack of interest from browser vendors.
But if somebody needs an excuse to "abandon" MathML, I'd prefer to convince them by showing them how great MathML is rather than saying "oh, just wait a few more years and browser vendors will finally get it and implement it". MathML deserves better!
Bruce, 2015-08-12
This post is really not about MathJax.
Understood. But really my point was that both Gecko & MathJax, while both imperfect, do pretty decent math typography, at least by the measure of web typography generally.
Sure. That's why I'm not talking about client-side rendering here but for the generation of HTML with CSS. This includes tools like LaTeXML or pandoc or any XML workflow tool.
... and ...
I would like to see those tools move MathML forward by making it the best markup for rendering math on the OWP. But I'm not thinking of something that "just" looks like math but about HTML or SVG markup that is enriched to be just as powerful as its underlying MathML. That's currently not possible for lack of, e.g., aria roles. But I think we could quickly get to a point where a fully equivalent "interpretation" (or "transpilation" to use a fashionable term) in HTML or SVG does not require client-side rendering.
...
The ambiguity just feeds the suspicions about MathJax in some and provides an excuse to abandon MathML in others.
This reads like FUD to me.
FUD? Perhaps, but the fact that I'm paranoid doesn't mean that I'm not being followed. :>
MathML offers a representation of math in such a form as to enable: high-quality rendering; accessibility; reuse (especially content). One would have hoped for gradual adoption & implementation of MathML, starting with the aspects that are both "easiest" and most in demand: rendering first; increasing support for accessibility; and eventually support for reuse. That seems to me a critical evolutionary path if true accessibility and reuse of mathematics will ever be achieved.
Alas, math is a niche; generating good MathML and rendering it is non-trivial, content MathML even more so. And, as you point out, browser support seems stalled, at best.
While your proposed solution of improving HTML+CSS, RDF and aria seems practical and innocent, without a strong and simultaneous call for continued improvement of native MathML support in browsers and its generation by authors as well as the actual serving of MathML, there's the danger of undermining that evolutionary path of MathML support. I don't believe that's your intention, but the implication that authors need only serve HTML+CSS for rendering, imagining they'll someday add aria annotation, eliminates the most pressing reasons for wanting MathML in the first place. Reuse of mathematics, or even truly useful accessibility remain mere pipe-dreams.
...
But if somebody needs an excuse to "abandon" MathML, I'd prefer to convince them by showing them how great MathML is rather than saying "oh, just wait a few more years and browser vendors will finally get it and implement it". MathML deserves better!
Thanks; That's what I was hoping to hear. I just want to make sure that message doesn't get lost in the shuffle. If we give the impression that rendering and a modicum of accessibility is "good enough", we may as well just leverage the browser's improvements in image rescaling, attach little tape-recordings to the images, and call it done.
Peter, 2015-08-12
Thanks, Bruce.
While your proposed solution of improving HTML+CSS, RDF and aria seems practical and innocent, without a strong and simultaneous call for continued improvement of native MathML support in browsers and its generation by authors as well as the actual serving of MathML, there's the danger of undermining that evolutionary path of MathML support.
I disagree. As I wrote, I don't see any practical interest from vendors towards implementing MathML. So calling for improvements is pointless -- they are not doing anything.
I'd be thrilled to be wrong and see browser vendors dedicate the necessary resources to MathML development (and maybe join the MathWG to help move the spec forward).
But if I'm not wrong, then "Waiting for Improvements" will be worthy of Beckett.
As much as I care about MathML, I care even more about mathematics on the web. Since native MathML support is not happening, I think MathML needs to evolve into something that can be native. My suggestion voiced here is that it should evolve towards HTML and CSS.
but the implication that authors need only serve HTML+CSS for rendering, imagining they'll someday add aria annotation, eliminates the most pressing reasons for wanting MathML in the first place.
I think "eliminates" is misleading. First, I disagree because you cannot "eliminate" what's not there. MathML is not usable on the web (without polyfills) because browser vendors are not supporting it.
Secondly, I disagree because I believe that only MathML will allow us to move towards "HTML as powerful as MathML".
That's the whole point of this piece, really: imho browser support will not happen, so let's think about ways the spec (and maybe even the MathWG) can evolve to fulfill its mission.
And I obviously and strongly believe that MathML is the best basis for doing so.
But unless somebody can get browser vendors to dedicate the necessary resources, then I find it unhelpful to sit around and pretend like MathML is working out on the web. Instead, we should think hard about how it can be made to help math and science on the web (without native MathML implementations).
Reuse of mathematics, or even truly useful accessibility remain mere pipe-dreams.
Again, I disagree. On the one hand, it is really pretty easy to achieve exposure of the underlying MathML -- just look at what ChromeVox did already years ago with MathJax, leveraging the internal MathML to enable fully accessible exploration of the visual output.
On the other hand, I think "reuse of mathematics" is too broad. I think quite a few use cases that people hope for are unrealistic (e.g., copy&paste has so many challenges on the web, with or without MathML). And the realistic ones (e.g., accessibility, search) can be achieved in HTMLified-MathML (pretty easily, I think).
As much as I care about MathML, I care even more about mathematics on the web. Since native MathML support is not happening, I think MathML needs to evolve into something that can be native. My suggestion voiced here is that it should evolve towards HTML and CSS.
Bruce, 2015-08-12
As much as I care about MathML, I care even more about mathematics on the web. Since native MathML support is not happening, I think MathML needs to evolve into something that can be native. My suggestion voiced here is that it should evolve towards HTML and CSS.
Just to be sure I understand, you're suggesting that rather than
<mfrac><mi>a</mi><mi>b</mi></mfrac>
the "New MathML" would be
<span class="mfrac"><span class="mi">a</span><span class="mi">b</span></span>
with perhaps a few `style="..."` attributes thrown in?
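Purely for illustration (and not something either commenter endorses below), class-based fraction markup like the above would also need accompanying CSS; a rough sketch might look like this:

```html
<style>
  /* rough sketch: stack numerator over denominator, rule between them */
  .mfrac { display: inline-block; text-align: center; vertical-align: middle; }
  .mfrac > span { display: block; }
  .mfrac > span:first-child { border-bottom: 1px solid currentColor; }
</style>
<span class="mfrac"><span class="mi">a</span><span class="mi">b</span></span>
```

Even this toy example hints at the hard parts a real implementation faces: baseline alignment, rule thickness relative to font metrics, and nested fractions.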
thanks;
bruce
Peter, 2015-08-02
No. But probably for very different reasons than you might think.
But I'm not very interested in discussing technical details here. This is a conversation starter, not a technical document. If MathWG wants to consider this direction, then I think we need to bring together practitioners first. And that's practitioners who deal with rendering MathML in a web context; that's not exactly a strong suit of the WG today.
I also need to slightly correct (or extend) my previous comment to include what I mentioned in the post: I might prefer HTML but I also think SVG should be an equal target. For example, your LaTeXML can already generate SVG from MathML; why make it less useful than it could be if you already have good underlying data in the form of MathML?
In general, while I do find it entertaining to think about god, afterlife, or a concrete mathematical universe, I find more comfort in the uncertainty of existence than I do in the likelihood that my belief is wrong, or in the terrifying conviction that comes along with believing in something (and everyone else is wrong).
Oh, permalinks. The name is so clean and yet so misleading. WordPress is so forgiving to admins, authors, and visitors alike. But leaving that paradise is fun, too. At one point I had renamed all posts in a way which led to a site with zero posts; hilarious.
I've switched to the simplest permalink structure -- enumeration. But then the question was: how many digits (I like my numbers to be the same string length)? I ended up with four digits. This is No.181 after 5 years of writing on the web, so it seems rather unlikely I'll reach 9999 in my life time. And if I do, I'd be happy to revisit this (@future self: sorry! it'll be a pain!).
I've been discussing the changes with Sam over the past few months. The biggest point of disagreement has been comments. Jekyll can't provide comments (obviously) and I am not interested in going back to Disqus (for various reasons). I also had the impression that comments were not doing it for me anymore. The ratio of spam to useful comments was about 1000 to 1. Sure, Akismet took care of this and Disqus could, too. In addition, I'd get comments from other places (twitter, g+ or plain email) and, since I'm not a cool indieweb dev and it's never that many, I manually added them to posts.
In other words, I started to feel like comments are just not that useful anymore (caveat lector: see below) and that having a special technology for it seems overkill.
So for now, I'm going with comments-by-email, with a simple link at the end of each post, prepped with a subject line for you. Comments will then be added by myself. I'm hoping anybody willing to comment is willing to send me an email (anonymous or not). Maybe I'm wrong. I'll also pull in comments from other places (e.g., twitter). There's currently a hypothes.is opt-in as well. Not sure if I'll keep it though. Feedback would be nice.
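A link "prepped with a subject line" is just a `mailto:` URL with an encoded `subject` parameter; something along these lines (the address and subject here are placeholders, not the actual setup):

```html
<!-- hypothetical comments-by-email link; address and subject are made up -->
<a href="mailto:comments@example.com?subject=Re%3A%20switching-to-jekyll">
  comment via email
</a>
```

The subject line lets the recipient sort incoming comments by post without any server-side machinery, which is the whole point of dropping the comment technology.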
As always, xkcd.com/1357 applies. If you really feel the need to comment, please do it on your own site.
Having to do a lot of manual editing of my own work was a healthy experience. Yes, it drained time and varied from cringeworthy to depressing. But it also showed me that, once in a while, I still like my old writing. It also showed me some horrible crap, including one troll post which I'm keeping to remind me never to troll again. I hope it's the only one 😞.
I was surprised about the number of comments. One reason to go with email comments was the general lack of comments. Why keep extra technology on the site when I only get spam comments? But I admit I was surprised by the many (real) comments I have from my postdoc days and especially from other Booles' Ringers and mathblogging folks. You people are the best!
Oh, and it took me a while to realize that I had actually been on Jekyll before moving to WordPress. Guess that means I'm going back to WordPress in a few years. (@future self: again, very sorry! Let's wait until we hit 9999 posts, ok?) One thing I regret losing is the post-specific history from WordPress; couldn't get that to survive this migration (but will back the database up for myself). Hopefully git will improve this (with some auto-committing).
With Jekyll I switched on some basic CI (thanks, Travis CI!), including html-proofer. With ~1000 links right now, it's no surprise that some of them are dead. Fixing the internal ones along the way of my review was easy enough. And for the rest (but not that many), I used the Wayback Machine; a handful are actually lost forever.
What was surprising to me was which links needed the Wayback Machine. It's not surprising that some random app on appspot goes down. But something on Harvard.edu or publishers.org? That's somewhat funny (and painful). Small niche blogs? They were solid. You are all awesome!
Having been on such a long hiatus (also caused by other writing projects that bled me out), I want to get back into writing here. Since this site is now a git repo, you can file bugs on the GitHub copy but also find ideas for posts I put down as issues.
I was thinking about some technical posts on math on the web. And there's one post that's been in the works for months; I should finish that one. Or give it a few more months maybe; you know how these things go.
I always preferred to be the master of my domain. The king of my castle. But literally, not in the Seinfeld-euphemism sense. In any case. I've been thinking about a page where I can post short thoughts about math, life and otherwise. The blog is not suitable, since I'm not going to add a post each time I have a new thought. So instead I've started a blurbs page. Each blurb has a number, and an anchored link that you can use in case you want to share it.
If you are not on this list, you better hurry up to this application form and register! Come on, what are you waiting for???
But without the axiom of choice the world is indeed a strange place. This was posted as an answer on math.SE earlier today.
We have verified, in the meantime, that the same person impersonating me on Quora is the one who used Isa's name in those comments.
But if I want to be sure that I can finish next year, I should probably omit one of the problems I originally wanted to solve; and keep that for later, unless it turns out to be particularly simple when I finish the rest.
Mathematics will often dangle in front of you some ideas, and you will work them out, to find a mistake. Then you will go back to the beginning, find new ideas that she had in store, work those out and proceed only to find a mistake much later. Then you go back to the beginning, and you find yet another minor idea that was missing, and now when everything works you continue. But then you find another gap, and you have to go back to the beginning and hope to find yet another idea. And don't get me started on those ideas that you find not to work during all these searches.
Switching back to a static site generator: Jekyll, which took me a while to decide on. In the end, Ian Mulvany rang true. Jekyll is trivial to set up (I'm using Poole/Lanyon, hosting on GitHub pages, with some simple CI via Travis).
I thought about exploring other static-site generators (in particular JS-based ones) but, in the end, Jekyll is the static-site generator so it's easy to switch to and from if I need to.
I'm not yet going to switch the old site over since I have yet to properly import the older content, set up redirects etc.
What struck me about the conversation was the nature of the discussion. I suppose a good example was an ever so slightly sharp turn in the conversation when it came to the translation of a Lewis Carroll poem which, in its new translation, featured a Porsche -- an anachronism that met criticism from the host.
What caught my ear was how these two talked eye-to-eye, the host displaying in-depth knowledge not only of literature in general but the guest's work in particular. This allowed them to discuss how the writer worked, the real essence of her work, the challenges, the modus operandi. (What also made me wonder was the precarity of the writer; the collection came out of her PhD work, the first book seemed only a success in so far as it landed her some prize/stipend that allowed her to write the second book. Literary careers always sound like scientific research careers, yet we keep things separated.)
I've always yearned for the equivalent of an art critic (which the host evidently provided) but for math and science. One of my first blog posts ever was about mediocrity and, in many ways, critics are the perfect example of why mediocrity is [pun not averted] critical. Instead of pretending to pursue "high" art/science/math a critic is helping their field by providing constructive (and when necessary destructive) review. In public. We do not have this for STEM. Yet the discussion between those two was as esoteric to me as a discussion about forcing axioms or JavaScript libraries would be to them. Of course, German Feuilleton (oh my, I had no idea about its contemporary meaning in French) assumes none of its work is esoteric but features 0% of real science criticism (let alone math).
Skip back a few years. My only comment left on Carta.info (no link because I can't find it and because carta has become quite strange) was a foolish, troll-like comment (confirming Hanlon's razor, it was out of stupidity) where I wondered why DLF's Presseschau never included quotes from blogs, since I clearly had (and have) the impression political bloggers are on par with those strange, small-town newspapers that make it into that selection of op eds. (IIRC, there's now some minor tech segment on DLF that features some blog posts; oh well.)
Over the past year I started to listen to more and more podcasts, primarily about web technology, i.e., work (it all started with the excellent Shoptalkshow). Listening to the conversation on DLF, I realized two things. First, technology podcasts provide just that criticism for web technology. While it's often infantile, it's equally often profoundly useful. As usual, web tech is trying to skip an old medium; a loss for both sides.
Still, during the DLF conversation yesterday I realized that I need to look for another kind of technology podcast: one about actual code. That is, where developers talk about their approach to programming, problem solving, how various tools do their job, and who knows, maybe even actively review code. In other words, a podcast that does for web tech what the DLF piece yesterday might do for writers. Maybe streaming things like twitch.tv (and perhaps livecoding.tv if it ever goes [pun not averted] live) will fill the gap naturally. Still, I'll have to hunt around some time.
Thinking back to mathematics, the podcasts I tried do not fill that gap. There are really good ones out there but they are not on the level of that DLF conversation or on the level of technology podcasts. They always seemed to be more interested in news, puzzles etc rather than challenging the listeners and the experts alike. Which reminds me, I should try to pick up Villani's book.
Later it smelled like Sommerregen (summer rain). And everything was well.
It occurred to me today that this is a very Kurtzian story, if we take the Brando interpretation of Mistah Kurtz (he dead) in Apocalypse Now! (the Redux version is one of my favorite movies, I guess). In the movie Harrison Ford plays a tape where Kurtz is describing a snail crawling along the straight edge of a razor, crawling, slithering; this is his dream, this is his nightmare.
Yesterday was the first day where you could say that the weather is characteristically spring; and today (as well as tomorrow) we are expecting a daytime heatwave and nighttime cold weather (e.g. Beer-Sheva is expecting a whopping 31 degrees centigrade during the day, and 13 during the night).
So I am happy that I have only one course each day this semester. I am teaching two courses this semester. Precalculus (Math 200) meets on Tuesdays and Thursdays at 8AM, and Elementary Algebra (Math 96) meets on Mondays and Wednesdays at 9:15 AM. (Each class meets with me a total of five hours per week.) Then on Fridays I have the set theory seminar at 10AM at the Graduate Center, or occasionally a faculty seminar at LaGuardia at 9AM where we will prepare to teach a seminar for first year LaGuardia students. I think that will be cool, because I really enjoyed my first year seminar as an undergraduate student at Grinnell.
This morning schedule is a big change for me; I have been a total night owl for the last seven years at least, rarely getting up much before noon. But I think it will be good for my health to wake up more with the sun. It might be a rough adjustment period, but it will be worthwhile. As a bonus, if all goes well, I can leave work by mid to late afternoon most days and be able to go out in the city some weekday evenings for dinner or a show. (If all doesn’t go well, I’ll be buried in grading, course preparation, administrative work, etc. and rarely get out of here until late anyway. But I am optimistic that it will be better than that.) Another nice benefit to the schedule is that I can conveniently make myself available for 45 minutes worth of office hours four days per week, so that students have a better opportunity to see me.
The elementary algebra students seem like a good group. They really seemed to appreciate the activity of sharing their feelings towards math and their expectations for the course. The videos didn’t seem to be as effective; only a few students commented on them, but the initial discussion before the videos was quite fruitful. A few students told me that they hate math, but many, I think a majority though I didn’t count, came in with positive attitudes towards math. Now it is my responsibility to help them to maintain these positive attitudes and to work hard and succeed in the class. I’m up for the challenge.
For posterity, here's the version I submitted, including typos.
Without mathematics, there's nothing you can do. Everything around you is mathematics.
Shakuntala Devi
It has always surprised me a little that the web -- created at CERN by a trained physicist turned computer scientist -- was born without much consideration for math and science. Of course, it isn't all that surprising since the original HTML lacked more basic things (such as support for tables or images). Either way, people did see the need early on and in 1995 the draft of HTML 3 proposed a <math> tag, adding basic math support in HTML. Unfortunately, HTML 3 was rejected by browser vendors, and its more fortunate successor, HTML 3.2, dropped the <math> tag (among other things). As was the fashion of the time, the <math> tag was turned into a separate XML specification and within a year MathML was born. Problem solved? Not quite.
MathML did turn out to be hugely successful in the XML world. Authoring and conversion tools quickly made MathML easy to create and edit while publishers adopted MathML in their XML workflows. The main reason was that MathML provided a robust, exchangeable, and reusable format for rendering and archiving equational content. However, XML did not succeed as much on the open web and the XML legacy made it difficult to use MathML in HTML itself. This meant that mathematics (and by extension scientific notation) remained a second-class citizen. Surprisingly, MathML did not simply fade away like other web standards but made a comeback in HTML 5, where we can now use <math> like any other tag. Problem solved? Not quite.
Despite its success, its rich ecosystem, and its importance for research and education, MathML continues to struggle on the most critical front: browser adoption. So far, not a single browser vendor has actively developed their MathML implementation. While Internet Explorer and Chrome lack MathML support entirely, Firefox and Safari at least accepted code contributed by volunteers (and in Mozilla's case actively supported the code base). To compensate, the MathJax project (disclaimer: which I work for) developed an open-source JavaScript solution that authors and publishers can easily drop into their content. MathJax renders MathML on the fly, providing high-quality output that works everywhere out of the box, using only web standards such as HTML and CSS. A joint venture of the American Mathematical Society and the Society for Industrial and Applied Mathematics with the support from numerous sponsors, including Wiley, MathJax has become the gold standard for math on the web with our free CDN service alone registering 35 million daily visitors. Problem solved? Not quite.
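The "drop into their content" claim above is quite literal: in the MathJax v2 era this meant adding a single script tag pointing at the project's CDN, with a combined configuration file selecting the input and output processors. Roughly like this (configuration name as in the v2 documentation of the time):

```html
<!-- MathJax v2 CDN loader; TeX-AMS-MML_HTMLorMML enables TeX and MathML
     input with HTML-CSS or native-MathML output -->
<script
  src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
```

After that, any `<math>` element (or TeX delimited by `\( ... \)`) in the page is typeset on the fly, which is what made the library such an easy drop-in for publishers.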
While we are proud of our accomplishments at MathJax, we know that we can only provide half the solution: native browser support must be the goal. Only native browser support can make MathML universal, helping everyone and allowing people to push the envelope for math and science on the web further. I believe a crucial role lies with publishers. Taking a cue from Forbes, now every publishing company is a web technology company. Not being involved in the development and implementation of web standards is a bit like printing books but not caring about literacy rates -- if you build it, they still can't come! When it comes to the development of the web, scientific publishers can become the bridge between authors and standards bodies and they can be instrumental in supporting the development of tools and processes that push everyone forward. Problem solved? Not quite but if you build this...
The re-integration of MathML into HTML5 was a huge step towards math and science becoming first class citizens on the web. MathML is not only a fully accessible exchange format for mathematics but it is also part of other scientific markup such as the Chemistry Markup Language and the Cell Markup Language. The future of MathML in browsers will determine the future of scientific markup on the open web. In the end, a chemical reaction or a data plot has no more reason to be a binary image than an equation -- we need markup that is alive in the page and can adapt to the needs of the users. Only this will allow us to develop new forms of expressing scientific thought, forms that are leveraging the full breadth of the open web platform, that are truly native to this amazing medium called the web. And that would be an exciting problem to have.
The comments were also interesting.
Thank you Peter for your accurate and witty post, and thank you for MathJax which has served as a beautiful solution to math on the web. The lack of support from browsers has been pathetic and shameful, and you are right that the only real solution is that MathML (and other MLs) are supported natively as the definitive content. We should not really have to resort to "tricks" such as MathJax, however well executed those tricks might be!
Thanks, Kaveh. As I wrote, Firefox and Safari do have some support for MathML and of course MathJax is also not yet complete in its implementation (there's only so much room in a non-technical post).
In my humble opinion, it's an achievable goal for a publisher to produce MathML that renders fine on Firefox's native support (while I don't think the same can be said about Safari at this point).
Of course in practice we simply cannot restrict users to browsers these days, so until there is native support of MathML in all popular browsers, we'll continue with MathJax which does work on all. ;-)
Here's hoping that one day, we won't have to. Wouldn't that be a nice problem to have?
The best way to get all browsers to support MathML natively is to push math users to use Firefox for its native MathML support. That will get the attention of the other browser vendors. Unfortunately, even in this post you didn't clearly commend Firefox for being the only browser with native MathML.
Thanks for the comment, Robert. I'm not sure who you have in mind with "users". I would agree that authors should ensure that their MathML renders well on Firefox natively.
I wouldn't quite agree to call Gecko/Firefox the only browser with MathML support. WebKit/Safari made a lot of progress last year thanks to Fred Wang's work even if it's behind Firefox in its implementation.
By "users" I mean people producing and viewing math content.
I'm glad Safari is making progress. Feel free to recommend it too. The important thing is to create market pressure for browser vendors to implement native MathML, and that means users/developers choosing one browser over another because of MathML.
As you probably realize, MathJax being so good has actually reduced that pressure; it's easy for browser vendors to say "hey, MathJax works fine in our browser, so why bother investing in native MathML". Even in this post, you haven't clearly identified reasons why native MathML is better than MathJax fallback.
I fully agree that users should choose browsers for their features and Firefox's MathML support is, to me, a huge factor, especially in an educational setting. (In fact, I just recently had an interesting situation where I helped a student struggling with a school project about HTML that required some math -- and naturally he chose MathML since they were using Firefox and he wasn't even aware of browser support issues -- bliss ;-) ).
I've encountered the "MathJax is holding back browser implementations" argument a couple of times now and it feels like a Catch-22 to me. Without MathJax (I think) there wouldn't be significant amounts of MathML on the open web and thus no incentive to implement MathML support natively. Now, with MathJax, there's lots of MathML, yet there's still no incentive. I suspect the reasons lie elsewhere. (And from speaking to Gecko, WebKit and Blink developers it does not lie with the developers).
The reason why I didn't go into technical details about why native support is so important is that it didn't fit in this forum (both in length and audience). But you're right that perhaps I should have tried better. Some basic notes can be found on my personal blog at http://boolesrings.org/krautzb...
You and others here say MathJax isn't an adequate solution. But you don't explain why? It seems like a very successful project, and a far better approach from an software engineering perspective than native browser support.
Adding MathML support into every browser requires duplication of development effort and places responsibility for maintenance in the hands of browser vendor employees for whom MathML is neither a priority nor an area of expertise. Each implementation will vary in its performance, bugs and feature-set, and authors will need to know these differences in order to produce content that is compatible across all browsers. Future versions of the MathML spec will require development and deployment across all browsers, increasing the cost and delay in making new features available.
In contrast, keeping MathML support within a library allows development to proceed at its own pace, handled by those for whom it is both a priority and an area of expertise, and removes the cross-browser compatibility burden. MathML users then only have to deal with a single set of features and bugs, and can upgrade to newer versions of the library as and when they need to, instead of being beholden to browser development and upgrade cycles.
It seems like browser vendors would be better off concentrating on providing powerful, general low-level APIs for things like parsing, layout and rendering, in order to help the implementation and use of libraries like MathJax. That way, the web can scale to support custom rendering of not just MathML, but also Chemistry ML, Cell ML, and the many other useful markup languages and formats, while reducing the centralisation of effort and complexity within the browsers themselves.
But you don't explain why?
See my other comments on this.
As for the other points you raise, they seem to apply to any newer web standard, so I don't quite see how they're relevant to MathML specifically.
But yes, certain low-level APIs could make MathML polyfilling much easier; no surprise there. However, their implementation seems even less likely -- especially since some of them have been rejected in the past.
Besides, MathML is not rocket science. It adds a few basic constructs to HTML/CSS such as multiscripts, stretchy characters and better table alignments. If you look at Gecko and WebKit it's clear that it's not a huge burden to maintain.
I don't see any concrete technical reasons in any of your other comments for the inadequacy of MathJax. Could you be more specific about which comments you mean?
As for other new web standards, you are absolutely right that the same points apply to them. The web has to get away from the situation where features must be implemented natively in the browser in order to avoid feeling second-class. That is the only way the web will be able to regain competitiveness with native platforms like iOS and Android. As it stands, the web is losing ground quickly to these platforms, because the need to implement features natively results in an unacceptable bottleneck in innovation.
Fortunately, while there may have been resistance in the past to making low-level capabilities available to library authors, that is changing. For example, the W3C CSS Working Group has recently created the Houdini Task Force [1], which aims to design low-level APIs for parsing, layout, content fragmentation and font metrics. I am certain that they and others would be very interested in hearing what APIs would help in implementing MathJax and equivalent libraries. The Extensible Web Manifesto [2] also covers similar ground.
[1] https://wiki.css-houdini.org/
[2] https://extensiblewebmanifesto...
Thanks, I'm well aware of Houdini and the extensible web manifesto and these are great initiatives with excellent people involved (such as Rob who commented here as well).
Personally, I think reports of the imminent death of the web are exaggerated. But even so, I'd argue that abandoning important and established web standards will do nothing but speed that up.
As for the technical issues, they are (again) nothing particular to MathML. Polyfilling a textual rendering component -- be it math or bidi or linebreaking -- always happens too late in the game, i.e., after the page renders because good layout will depend deeply on the surrounding context. Similarly, inserting large amounts of content fragments (easily in the thousands) into the DOM will always come with issues, especially performance.
More importantly, relying on a polyfill will prevent universal use. Developers will always have to make a conscious decision to add support, adding complexity and risking instability. In reality, we could never expect to be able to use mathematics in something as basic as a webmailer or a social network.
The thing is: the "if" is not even the problem. When you ask actual browser developers (be it Mozilla or Google or Apple or Microsoft), they are in favor of MathML. The problem lies much more on the management side.
Ultimately, it comes down to how important mathematics is. (Why not kick bidi? SVG? flexbox? tables?)
The web is the most important medium for human communication and mathematics is one of the oldest and most universal forms of expression. Every school kid (worldwide) engages in mathematics (often for many years) and soon will do so in an HTML context. In particular, students will have to actively communicate (author, share, digest) mathematics and this will primarily happen on the web. To me, that makes it pretty important to have math natively on the web.
Here are a few terminological ideas that I doubt are going to be developed by anyone. But if you plan on doing something similar (or if my terminology inspires some proof) feel free to use these terms, and please let me know!
It didn't really work out but perhaps in a good way. Yes, I posted 7 out of 8 weeks, so that's close (still, Mike gets to name a charity of his choice). No, I most definitely did not spend just 30min per post (more like 1h, sometimes way more...). But those were means to an end, which included a) trying out something that gets me to write more regularly and b) making it interesting for my two readers, maybe adding a third reader (crazy, I know!).
At the end of the year I was exhausted (so I had to take January off -- well, be kind enough to pretend I did that intentionally and not simply failed to write that one last post for week 8). In part this was due to me writing on a couple of other, work-related places. I suppose one could say the blogging challenge helped there; e.g., it motivated me to finish a couple of outstanding blog posts on mathjax.org. But I think in reality it was the holidays and I had enough opportunity to write for a couple of hours (or sleep in to compensate).
As for the means, this exhaustion leaves me in doubt for the first one. Did I simply overdo it? Maybe I just need to pace myself better. We'll see (thanks to Asaf for bugging me to get back on the wagon).
As for the second one, I think that was a bit of a miss. At least in the sense that my posts seemed to cause a lot of confusion and irritation. Then again, that was somewhat intentional, I just wasn't happy with the kind of confusion, perhaps.
As for 2015, I will try to pace myself better. First target: finish that post from the original list of the tiny blogging challenge.
Comments
Last week I wrote about why I care about MathML in general. Given that I work for a project that serves as a MathML polyfill, it's worthwhile to point out why native implementations matter; they matter an entire alot of mattering.
A while back, Alex Miłowski asked me for some quotes about how native MathML implementations are important so luckily I can copy myself here.
Some people say, "few people on the web need MathML support." This is true. Just like saying "few people need children's clothing".
Why is MathML important? Education, education, and education. Mathematics is a core skill and a vast amount of educational time and effort is spent on teaching children and adults to understand and apply math & science. Very soon, HTML will be the dominating delivery method for educational content across the world. This means mathematics must be HTML, viz. MathML.
Where should HTML rendering be implemented? In the browser!
MathML has been HTML from its inception and after a (forced) XML-detour, MathML is back where it belongs: a part of HTML5. MathML layout is core HTML functionality, widely used in everything from web communities to professional publishers to educational startups. HTML and thus MathML rendering belongs in the browser.
While browser vendors show great interest in enabling polyfills to behave like native implementations, polyfills implementing layout standards (MathML, Flexbox etc), in the end, will not achieve native performance. The reason is simple: layout polyfills simply enter too late in the game -- after the browser layout is done, at a point where the user expects content, not additional rendering delays. Moore's Law helps a little but, ultimately, performance issues will prevent math and science from fulfilling their potential on the web.
Even the most advanced polyfilling technology will remain a JavaScript solution. This increases the risk of problematic interactions with regular scripts for design, user interaction, and styling. Native support will always be more robust for web developers and consumers.
Even the most ideal polyfill will require a conscious choice by the web developer to load it. This poses a grave restriction for end users and for the emergence of new platforms for math and science on the web. From webmailers to web-based authoring to social networks, all of these could turn out to be highly productive platforms -- but it's unlikely their developers will consider adding a polyfill for a perceived niche. With native support, MathML rendering would be universal.
The web has revolutionized how we communicate. Not by magic but because thought leaders continually push the envelope, building new tools and platforms that transform how we work, speak, and think. These innovations feed back into standards development, enabling everyone to benefit and restarting the process, pushing us further.
MathML 3 captures traditional mathematical typography. Thanks to polyfills, we get a glimpse of how MathML might develop, how it can revolutionize the communication and dissemination of scientific knowledge. Yet without native implementations of MathML 3, we will never see MathML 4, 5, or 10, and the opportunities this will open up.
It took 50 years from Gutenberg's printing press to the first typeset mathematics book. We're 25 years into the web. Do we wait another 25 years or can browser vendors finally invest 1-2 developer years to get us there?
Update.
First, I changed the embedded video; it was previously this one.
Second, over on Google+, Harald Hanche-Olsen asked about the claim that MathML is a huge success. Here's what I responded with.
Re success of MathML. Today, almost all equational content is stored as MathML. This is because almost all scientific (including mathematical) publishers have switched to XML workflows for production and archival where MathML fits in very naturally; similarly most technical writing (e.g., aerospace) is done in XML workflows.
For authoring, it's a bit more complicated. It is similar to, e.g., vector graphics where applications such as Adobe Illustrator have their own formats but when you save vector graphics for re-use you'll most likely export to SVG.
As I mentioned, there's definitely the need for a professional-grade, open source pure MathML editor (ideally HTML5). The only one I know of is MathFlow. But if you have ever used MathJax then you have authored MathML -- it's how MathJax works: convert any input to MathML and then leverage our MathML rendering engine.
Similarly, lots of other tools are able to output MathML -- besides converters from TeX (such as LaTeXML or tex4ht), Microsoft Word Equation editor can export to MathML, as does Open Office Math editor, MathType, MathMagic, the Windows Math Input Panel (handwriting recognition), MyScript (ditto), Maple, Mathematica and virtually any other tool you might have authored serious equational content in. (Oh well, I should've simply linked to http://www.w3.org/Math/wiki/Tools#Authoring_tools which I recently set up.)
Of course, Word is the big reason why most scientific and educational content ends up providing MathML. I don't claim (or believe) that people are aware of most of this which was one of the reasons I wrote about it.
When I started this writing challenge, I had listed a couple of potential blog post titles. One of them was "Why you should care about MathML". I realized later that I really didn't want to pretend I could even try to tell my two readers what they should or should not care about. Instead, I want to jot down (remember: 30min time limit) a few reasons why I started to care about MathML, alot.
Unsurprisingly, it was in many ways a story of my education. Here are two quotes from yours truly.
I think MathML is so far the best solution to present mathematical content on the web
-- actually me, Dec. 2009
Actually, more stuff wrong on my post; also, referencing Terry Tao's blog, weird.
But mathml sucks [...]
-- also actually me, Feb. 2011
(In my defence, I probably meant authoring tools and browser support.)
So as you can see, I flip-flopped a bit there (and, in a fundamentally different way, I still do). So here are five short reasons why I care about MathML.
When I started using MathJax on a personal blog (thanks to the above quote I realize I started blogging 5 years ago this month, (local copy), although I think I started to blog a year earlier on scivee.tv (though this seems lost)), I was first annoyed and then very happy to not use macros. Obviously, you can use macros with MathJax, but I started to avoid personalized macros at all costs. Ultimately, they prevented me from writing mathematics elsewhere and they limited re-use of my writing by other people (well, ok, that's more hope than reality I suppose).
MathML does not suffer any of these complications (well, technically Content MathML could if anyone used it). Instead, MathML provides a truly stable format for storing equational content while still allowing for re-use. Granted, it's not exactly easy to write by hand but neither are SVG or HTML/CSS (certainly not as soon as you want to express something more complex). Still, I'd encourage anyone to spend some time with it (e.g., try copy-editing a random piece of MathML and compare that to copy-editing some macro-filled LaTeX horror). In any case, creating MathML is straightforward, especially for those knowing LaTeX syntax (even if we could use a good open-source MathML editor). Ultimately, MathML is more readable in isolation because it is actually a markup language, not a programming language.
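To make the comparison concrete, here is a small illustration of my own (not taken from any particular tool's output): the LaTeX fragment `\frac{a+b}{2}` and one way to express the same thing in Presentation MathML.

```xml
<!-- LaTeX source: \frac{a+b}{2} -->
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mfrac>
    <mrow><mi>a</mi><mo>+</mo><mi>b</mi></mrow>
    <mn>2</mn>
  </mfrac>
</math>
```

Verbose, yes -- but every token is explicitly typed (identifier, operator, number), which is exactly what makes copy-editing, styling and machine processing tractable.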
What struck me early on was how successful MathML was outside of research. Research mathematicians (and scientists) tend to think their habits are vital for the longevity of mathematical writing. However, technical writing (such as industrial (think aerospace) documentation), engineering, and most importantly school-level mathematics are arguably more important -- and have benefited enormously from a mathematical markup that is easily handled by researchers and non-researchers alike. MathML has brought high quality rendering together with easy authoring to an incredibly wide and diverse community; a huge accomplishment.
What I also learned early on (in stark contrast to my 2009 self above) was that MathML has turned out to be critical for having truly accessible mathematics.
Of course, TV Raman's AsTeR voiced TeX/LaTeX long before MathPlayer, ChromeVox or VoiceOver voiced MathML. But besides the refinements (which later tools could so easily provide), the notion of accessibility stretches far beyond voicing and visually impaired users. Features like synchronized highlighting would be much harder in TeX (just think about identifying subexpressions in a complex TeX macro, let alone in poorly authored TeX) but they are critical for helping people with learning or physical disabilities. Even more advanced features like summarization and semantic analysis are much more straightforward in a markup language like MathML than in TeX. And so is search, whose importance can hardly be overstated in times of ever increasing publication pressure; without search, mathematical knowledge won't be accessible to us in the long run.
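A minimal sketch of why markup makes this kind of processing easy (my own illustration, not any existing accessibility tool): with nothing but a stock XML parser you can pull every identifier out of a MathML expression, the sort of subexpression extraction that search and semantic analysis build on, and which is much harder for arbitrary TeX.

```python
# Illustrative sketch: extracting identifiers from Presentation MathML
# using only the Python standard library.
import xml.etree.ElementTree as ET

MATHML_NS = "http://www.w3.org/1998/Math/MathML"

# Presentation MathML for: a + b^2
snippet = """
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mi>a</mi><mo>+</mo>
  <msup><mi>b</mi><mn>2</mn></msup>
</math>
"""

def identifiers(mathml: str) -> list[str]:
    """Return every <mi> identifier, in document order."""
    root = ET.fromstring(mathml)
    return [el.text for el in root.iter(f"{{{MATHML_NS}}}mi")]

print(identifiers(snippet))  # ['a', 'b']
```

The equivalent task for TeX would require a full macro-expanding parser; here it is three lines over a well-defined tree.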
The main reason why MathML is irreplaceable on the web is its compatibility with the DOM. This allows web developers to apply the full breadth of their tools to make mathematical content truly native instead of copying print-based layout. We cannot re-invent everything as Knuth did because web "typography" is far from finished and communicating on the web will probably change drastically every couple of years for the foreseeable future (just like communicating using the printing press did in another age). Having a naturally fitting technology allows mathematics to continually evolve its expression alongside other forms of expression on the web -- an incredible benefit (and challenge!).
This leads me straight to the last and probably main reason why I care for MathML. What the web has already done for regular language (all over the world), it can do for the language of mathematics: transform the way we communicate; expand, enhance, deepen, and lighten the way we express mathematical thought. You don't have to be Bret Victor to believe that in 30 years we will have developed new forms of expressions that truly leverage web technology and eliminate baroque limitations of black-and-white, print layout. We should strive to do so much better and I believe MathML is an important step in this direction.
]]>Today I only have ~15 min. This week, I happen to be in Chicago for dotAstronomy 6. This might be odd since I'm not an astronomer (nowhere near in fact). It is actually an immense privilege, though, since I'm part of a small group of invited interdisciplinary participants (also including biologists, climate scientists and library scientists). So my perspective is that of an outsider and I hate to admit it: it's what I suspected all along.
That is, ever since running into the dotAstronomy website a few years ago, I have been a little envious. I kept thinking "This sounds incredibly fantastic. How could we do something like this for mathematics?" Until today I could at least pretend that it couldn't actually be as great as it appears. Because nothing is, right?
My two readers won't be surprised to hear it: I was wrong. dotAstro is every bit as exciting, enlightening, creative, and savvy as I had hoped. A fantastic group of scientists from all walks of scientific life, including "recovering" researchers who have been led to non-standard careers while retaining a deep, nay fierce enthusiasm for their field as well as for the untapped potential offered to scientific communities by the web. This first day has been a perfect mix, starting with excellent talks, switching to amazing lightning talks, followed by exhausting-because-engaging unconference sessions, and finally some great conversation at the pub (including perfectly greasy US bar food).
Luckily, I don't have to bore you with my notes but can simply point you to the live-blogging of the first day by @vrooje. In case my notes go up in flames, I could probably reconstruct half of it from the Twitter hashtags of the unconference sessions I attended, i.e.,
Now I'm exhausted but excited for tomorrow's hack day.
Comments
As you know, this blogging challenge of mine is based on the observation that I would like to write more. And then Jeff Atwood reminds me in this interesting piece that
we badly need to incentivize listening
which makes me wonder if my natural tendency to let things brew for ages might not be a good thing. This blogging challenge will invariably show if I'm actually able to write in decent quality under tighter constraints. (Right now, I'm not so sure.) So perhaps I will have to realize that silence is golden.
On a related note, in recent months, I was forced to think about my comment "policy". This hadn't really come up before since I get very few comments and even fewer from strangers. But I think I should point out that nobody leaving a comment should expect said comment to be posted. Similarly, nobody should expect a comment that has been posted to stay up (especially if it gets posted automatically after I've allowed a comment in the past). Finally, nobody should expect me to reply to a comment even if I've replied to other comments and even if that happened in the same thread.
This policy has very little to do with trolling, actually, but more with off-topic comments and comments on ancient posts documenting how things have changed (I'm so surprised! not). It's also related to a different point: I'm probably switching off automated comments at some point next year (ooooooh, something will change, hint hint).
The number of worthwhile comments I get is roughly 1 per month (vs 5-10K of spam). So instead of a comment system, I'll figure out some way you can quickly send me a comment and then I will add it manually. This move is not just laziness about dealing with spam (it will be slightly more work, I suspect) but also reflects the fact that I consider your comments to be additions to the content, not separate from it. This does not mean that a comment needs to be serious, of course -- silly comments are just as (more?) (more!) relevant to me, so I hope people will keep 'em coming.
Comments
Darth Vader/Stewie: Oh, come on, Luke, come join the Dark Side! It's really cool!
Luke/Chris: Well I don't know. Who's on it?
Darth Vader/Stewie: Well um... there's me, the Emperor, this guy Scott. You'll like him, he's awesome...
Where my previous post was more about TeX-like syntax, this is about TeX/LaTeX proper. If you're a TeX/LaTeX enthusiast, don't go all crazy on me (I mean, have you seen my thesis?). This is about me feeling a growing awkwardness towards TeX/LaTeX. And this has little to do with TeX/LaTeX itself.
TeX/LaTeX is a tool. It is a tool designed by Knuth to solve a problem in print layout. The trouble is: print is becoming less and less relevant and I think this holds for most TeX users (when was the last time you went to a library to look at the printed copy of a current journal issue?). What is not obsolete is PDF and TeX is, of course, very good when it comes to generating PDF.
However, this "Portable Document Format" is really quite useless in the one place where people consume more and more information: the web. (I admit I'm of the conviction that the web won't go away; crazy talk, I know.) And for the web, TeX/LaTeX is the wrong tool. Yes, there are about a gazillion projects out there that try to bridge that gap, try to create HTML out of LaTeX. But if you try them out you'll soon notice that you'll have to restrict yourself quite a bit to make conversion work.
Turn this around and you'll realize that the community as a whole has a serious problem: almost nobody writes TeX/LaTeX that way, which means almost all TeX/LaTeX will never convert to web formats well. To put it differently, there's a reason for a large market of blackbox vendors that specialize in TeX to XML/HTML conversion for professional publishers (and this often involves re-keying).
This is, of course, in no way a fault of TeX/LaTeX itself which was designed for print, in 1978. But it is a problem we are facing today.
Now TeX is Turing complete and this means we can do everything with TeX (even toast). So a universal output for the web is theoretically possible. However, everything is nothing if we can't make it practical. Perhaps one day, we'll be lucky to find another Leslie Lamport who will give us "HTMLTeX", i.e., a set of macros that work and rapidly become the de-facto standard for authors. I doubt it. (And not just because I know mathematicians who don't upload to the arXiv because their ancient TeX template won't compile there.)
I doubt it because there's no problem to solve here. Where Knuth (and Lamport) solved imminent problems, there is no problem when it comes to authoring for the web -- a gazillion tools do it, on every level of professionalism. TeX is neither needed for this nor does it help.
"The best minds of my generation are thinking about how to write TeX packages."
-- not Jeff Hammerbacher.
Another part of my awkwardness towards TeX/LaTeX these days lies in the resources the community invests in it. It feels like every day, my filter bubble gives me a new post about somebody teaching their students LaTeX. These make me wonder. How many students will need LaTeX after leaving academia? How many would benefit from learning how to author for the web?
And then there's actual development. How many packages on CTAN are younger than 1/2/5 years? How many of those imitate the web by using computational software in the background or proprietary features such as JS-in-PDF (and who on earth writes a package like that)?
To me, this seems like an unfortunate waste of resources because we need people to move the web forward. If we remain stuck in PDF-first LaTeX-land, we miss a chance to create a web where math & science are first class citizens, not just by name but by technology and adoption from its community.
If only a part of the TeX/LaTeX community would spend an effort on web technologies like IPython Notebook, BioJS (or even MathJax) it would make a huge impact.
This brings me to my last awkward feeling about LaTeX for today which comes on strongly whenever somebody points out that LaTeX output is typographically superior.
I understand why somebody would say it but once again LaTeX is merely a tool. The reality of publishing is that almost all LaTeX documents are poorly authored, leading to poor typesetting. In addition, actual typographers will easily point out that good typography is not limited to Knuth's preferences enshrined in TeX.
So while I can understand why somebody would claim that their documents are well typeset, this is not very relevant. As long as we cannot enforce good practices (let alone best ones), the body of TeX/LaTeX documents will remain a barely usable mess (for anything but PDF generation).
On the other hand, publishers demonstrate every day that you can create beautiful print rendering out of XML workflows, no matter if you give them TeX or MS Word documents. Even MS Word has made huge progress in terms of rendering quality and nowadays ships with a very neat math input language, very decent handwriting recognition and other useful tools.
The web is typographically different. On the one hand, many of its standards (let alone browser implementations) are not at the level of established print practices. On the other hand, its typographic needs are very different from print for many reasons (reflow, reading on screens etc). And even though some of print's advantages will eventually be integrated, I suspect we will develop a different form of communication for STEM content on the web than we have in print because we have a much more powerful platform.
Comments
\label{...} in the 4th example, and then \ref{...} that label in Section 4. This would also improve the PDF by allowing a link to the reference.

\newcommand is available, but its definitions must fully resolve in the vocabulary of the profile. The language of the profile should be sensible both for classical print and for HTML5. See my talk at the TUG meeting in 2010, http://www.albany.edu/~hammond/presentations/Tug2010/

LaTeX is the path to the dark side. LaTeX leads to TeX. TeX leads to DVI. DVI leads to suffering.
-- not Yoda.
Ever since joining MathJax, MathML has been a major part of my professional life. It's a slightly unhealthy relationship: wide-eyed enthusiasm and bottomless despair are frequent companions (although, I think, I'm becoming slightly more stable). Among the web standards of the W3C, MathML is, I think, unique and this is both good and bad (and topic for another post).
One thing that comes up regularly in discussions is how the use of LaTeX notation on the web is somehow evil. I believe this is a phantom menace.
You might say that comparing full TeX/LaTeX and MathML is comparing apples and oranges -- at most, I should be comparing math-mode TeX/LaTeX to MathML. But the problem is that the distinction is tricky, since mixing math and text mode is all too common in the real world. Since TeX is a programming language and lacks enforceable best practices, there will never be a "good" subset of TeX/LaTeX that could provide reasonable markup constraints. The reality of how people use TeX/LaTeX is just too messy.
Quite literally, there is no such thing as "LaTeX" on the web. What is really being compared is a bunch of TeX-like input languages. If you think Markdown is bad off (yay CommonMark!) take a look at the number of mutually incompatible TeX-like inputs on the web. MathJax's TeX input vs Wikipedia's texvc vs itex2MML vs pandoc vs ... -- they are all different on some level.
And even if you think: oh well, one day there'll be one standard LaTeX subset for the web (right?), then there's still no threat here. Markdown, wikitext etc have never threatened HTML; raphaeljs, d3.js etc have never threatened SVG; threejs, pixi.js etc have never threatened WebGL. Instead, these tools pushed the use and thereby the standards forward. Pretending that TeX-like input (or asciimath or jqmath) has any other effect is a phantom.
While you might still wish to speculate that LaTeX could somehow be coaxed into playing nice with HTML, CSS etc, the story really ends at the DOM. LaTeX does not fit in the DOM; period.
There is a reason why MathML is so damn good for mathematics -- it was created by people with a huge amount of experience, in particular in TeX and CAS. So in many ways, MathML is the natural continuation of the insights gained from TeX, applied to the web.
While at first sight MathML appears verbose (just like HTML or SVG might appear), it ultimately has one huge advantage over TeX: it is clean, self-contained, and stable. MathML provides a clear-cut presentation of equational content. It is infinitely easier to understand someone else's MathML than it is to understand someone else's TeX. (And you also cannot redefine \relax in MathML...)
Fun fact: for roughly a decade, almost all new mathematics has been stored as MathML. Mathematicians are usually surprised by this -- doesn't every math journal accept TeX submissions? That's true, and nobody would claim that the majority of mathematics is authored in MathML (come to think of it, that one probably goes to MS Word). But unless you publish with a very math-specific publisher (e.g., the AMS), your content is invariably converted into XML and your equational content into MathML. So even in pure math research (which is a minuscule amount of mathematics published compared to STEM in general) the authoritative format is MathML.
So LaTeX as a web standard is just not practical. Which brings me to my final point. If MathML fails because of a bunch of math-mode LaTeX-like input thingies, then I think we deserve to fail. These are such weak contenders that MathML would have to be a truly miserable standard to lose out. By contraposition, the fact that MathML is far from miserable (as its success demonstrates every day) means it will not fail no matter how many web pages include TeX/LaTeX in their HTML.
The more interesting question for me is where this phantom originates from. I suspect this is really about the lack of browser implementations. It's always easier to look for a scapegoat. Making up a phantom like TeX will distract us from the important discussion: what's really holding back browser implementation? It's definitely not the math end where MathML simply rocks. And then the really interesting question can be: what could MathML 4 and MathML 5 look like?
Comments
25 years ago today, the wall fell in Berlin, opening up Germany, opening up Europe. Admittedly, I don't remember much about that night; of course, I technically remember (and reconstructive memory is grand) but the event held little significance to 10-year-old me. (Though arguably not zero significance, since I had actually visited Berlin for the first time that year, and I remember standing on a platform near the Brandenburger Tor, looking over the death strip to that iconic landmark and not really understanding things).
As you may have noticed, I recently moved back to Germany and most recently to Göttingen. This meant, after some 8+ years, I'm living in West Germany again. Admittedly, when I lived in Berlin while working on my PhD, I lived in a heavily gentrified (i.e., West-ernized) East Berlin quarter (fun fact 1: at the time, the percentage of foreign citizens in Prenzlauer Berg was precisely the city average, with the "slight" difference of almost all of those being from G8 countries...). Still, even that part of East Berlin (let alone other parts) remained structurally very different from, e.g., Bonn and Munich. A particular aspect for me was always the absence of the typical West German "infrastructure" of small shops and businesses (or ATMs for that matter). In any case, Prenzlberg still felt incredibly different from anything West German (though not as much as it did in the 90s or even early 2000s when I first fell in love with Berlin).
It has struck me how Göttingen, to me, seems like a perfect example of a West German city. I can't quite pinpoint this particular feeling. Maybe it is the beautiful 18th century city center (fun fact 2: supposedly Laplace urged Napoleon to spare Göttingen because Gauss might get hurt), maybe the lovely ring of late 19th century quarters surrounding it, perhaps the 50s Karstadt, the 70s Neues Rathaus, and the 90s malls. Certainly all of that a little. The city has also seen the typical post-WWII re-design towards cars as primary mobility solutions, which makes it a mess for the large number of bikes, pushing them onto the sidewalks to collide with pedestrians (fun fact 3: I couldn't remember when I had last seen an atomkraft-nein-danke flag but I did see one on my first trip to Göttingen). Göttingen has this feel of wealthy-but-reluctant-to-admit-it (as so much of West Germany). It's filled with students, making it appear modern and young, and yet its history weighs heavily in places (bizarro Bismarck adoration in the Bismarckturm does not compute). Göttingen is also surrounded by a beautiful countryside with a gazillion potential destinations for the weekend, many having been popular retreats at some point in the city's long history. Of course, for a mathematician, Göttingen is a particular attraction, and yet it's hard to ignore the great purge in 1933.
Göttingen has this particular, everything-is-finished vibe (with a no-room-for-change beat) which I find so typical of West German cities. It's oddly appealing (especially after returning from SoCal) and yet slightly suffocating. If you want to live in a perfect example of West Germany, come stay in Göttingen. At the very least, you can stop by Gauss's grave and since it's a 5 minute walk from my home I expect you to stop by for a coffee after.
Today, celebrations of the peaceful revolution of 1989 may be in focus. But on November 9 we always remember more. 1918, 1923, 1938, 1989; I can't remember one without the other.
]]>Still, I miss writing. So I'm setting myself a tiny blogging challenge for the few weeks remaining in 2014.
Yes. I don't want to take on a 1-post-a-day challenge because, well, I'd simply fail. I'm no Cathy O'Neill. So most likely I'll write these posts on the weekend or possibly late at night.
One post per week seems reasonable. It's realistic, I've done it in the past yet it's far from what I'm currently able to do.
Not a lot, I admit, but about the average napping time of a certain person. Given that my usual writing includes a procrastination phase of 5-6 months, I'm expecting a change in quality. I'm just hoping for an improvement, given that writing more regularly should mean more practice.
I thought it might be prudent to have a couple of topics ready so that when I sit down (not unlikely on a Sunday at 11:30pm to make the deadline) I have a last resort for a topic to babble about.
Note that these are not actually related to drafts or even proper ideas. They are just ideas I jotted down over the past few weeks.
Comments
And it seems my new neighborhood is trying to tell me something.
Oh well. I suppose that's what you get for moving to the town where these folks spent very productive years.
Comments
Definition. Let \(V\) be a model of \(\ZFC\), and \(\PP\in V\) be a notion of forcing. We say that a cardinal \(\kappa\) is "colloopsed" by \(\PP\) (to \(\mu\)) if every \(V\)-generic filter \(G\) adds a bijection from \(\mu\) onto \(\kappa\), but there is an intermediate model \(N\subseteq V[G]\) satisfying \(\ZF\) in which there is no such bijection, but there is one for each \(\lambda\lt\kappa\).
Continue reading...]]>In case you forgot, \(\kappa\) is a huge cardinal if there is an elementary embedding \(j\colon V\to M\), where \(M\) is a transitive class containing all the ordinals, with \(\kappa\) critical, and \(M\) is closed under sequences of length \(j(\kappa)\).
Continue reading...]]>People often like to cite the paradoxical decomposition of the unit sphere given by Banach-Tarski. "Yes, it doesn't make any sense, therefore the axiom of choice needs to be omitted".
Continue reading...]]>But we can clearly see some various degrees of largeness by how much structure the existence of the cardinal imposes. Inaccessible cardinals prove there is a model for second-order \(\ZFC\), and Ramsey cardinals imply \(V\neq L\). Strongly compact cardinals even imply that \(\forall A(V\neq L[A])\).
Continue reading...]]>Forcing is horrible. If you can think about it, you can encode it into generic objects. If you can't think about it, you can encode it into generic objects. If you think that you can't encode it into generic objects, then you are probably wrong, and you can still encode it into generic objects.
Continue reading...]]>Continue reading...]]>
As you can see, that text file has some beautiful ascii-art mathematics. Of course, Doug wanted to code this up properly for the web which means using MathML and the question was: what's the easiest way to do so?
It's not hard to see why I suggested ASCIIMathML (or asciimath). Asciimath was written by Peter Jipsen with whom I happen to have two lucky personal connections -- first, I luckily shared a room with Peter at BLAST 2010 (way before I got involved with MathJax, see these posts), second I was lucky enough to enjoy his hospitality a couple of times while we lived in LA, including Peter taking me surfing for the first time in my life -- good times.
If I remember correctly, asciimath was born out of pure necessity -- finding a way for college students to write mathematics on the web. These kids were accustomed to graphing calculator style input, and Peter, of course, believed that MathML was the right way for an output on the web -- so in 2004 he started to write this beautiful JavaScript library to convert from one to the other.
Later on, David Lippman wrote a nice MathJax addon, which was ultimately re-written by David Cervone, and so nowadays you can use asciimathml in any browser by combining it with MathJax.
First off, if you know some TeX I would probably describe asciimath as "TeX without backslashes". Because, really, why not write alpha or phi for %alpha, phi%? Similarly, why not just write sin for %sin%? (Oh, and let's have a fun discussion about phi vs varphi, Unicode vs TeX. But not a problem, you can switch to whichever convention you like using MathJax.)
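To make the "TeX without backslashes" idea concrete, here are a few side-by-side pairs; the asciimath column reflects my reading of the syntax, so double-check against asciimath.org before relying on it:

```
TeX                  asciimath
\alpha + \beta       alpha + beta
\sin(x)              sin(x)
\frac{a}{b}          a/b
\sqrt{x}             sqrt(x)
A \to B              A -> B
```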
Second, if you know markdown, then I might describe asciimath as "markdown for math". It's not TeX in all its (infamous) glory or even MathJax's TeX-like input with its many advantages for the web. It's much more restricted and that's by design -- much like markdown is.
Given its target (MathML) and its general webbiness, asciimath works smoothly with Unicode, which adds to its readability and usability (and internationalization). Everyone will probably appreciate that -> and → work interchangeably (both of which seem much saner to me than anything LaTeX would suggest). So f: A -> B and f: A → B produce identical MathML: %f: A -> B% and %f: A → B%. Similarly, asciimath's minimal approach does not need TeX's cumbersome \begin{} \end{} environments, but many important tools are available in much simpler ascii/computing notation, e.g., ((a,b),(c,d)) for matrices: %((a,b),(c,d))%.
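For the record, wiring asciimath into a page via MathJax takes roughly one script tag; the combined config name below (AM_HTMLorMML) and the default backtick delimiters are what I recall from the MathJax v2 docs, so verify before copying:

```html
<!-- asciimath between backticks gets converted to MathML by MathJax -->
<script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=AM_HTMLorMML"></script>
<p>The matrix `((a,b),(c,d))` and the map `f: A -> B` just work.</p>
```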
Personally, I think asciimath probably deserves the title "markdown for math" although I think the title will go to TeX-like input after all (but that's another post).
What I'd really love to see is more people pushing asciimath further. The official ASCIIMathML repository is now hosted on MathJax's GitHub account and we even grabbed a nice domain at www.asciimath.org to have an open page using Github pages for people to easily contribute enhancements to.
There's a lot of low hanging fruit in the form of improving the quality of the MathML (e.g., a\\b should probably produce <mfrac bevelled="true"><mi>a</mi><mi>b</mi></mfrac> instead of the problematic <mi>a</mi><mo>/</mo><mi>b</mi>) and of course asciimath by design should probably not strive to be feature complete (i.e., to generate any kind of MathML), which means there should be situations where asciimath will simply fail and, much like markdown with HTML, it could perhaps gracefully mix MathML and asciimath.
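To spell out that example, the improved output for a\\b would be a proper bevelled fraction (hand-written here for illustration):

```html
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mfrac bevelled="true">
    <mi>a</mi>
    <mi>b</mi>
  </mfrac>
</math>
```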
But in any case, it's great to have this alternative to TeX-like inputs because TeX is ultimately holding math on the web back (but that's another post, for another time).
Comments
The lack of \begin{align} ... \end{align}-like environments is also a downside of ASCIIMath. I use align environments all the time. I agree that ASCIIMathML's aim shouldn't be to be feature complete. But I do think it should be able to generate 90 percent of what a mathematician uses -- in other words, 90 percent of what is used at math.stackexchange, for example. Typing \(W^{3\beta}_{\delta_1\rho_1\sigma_2}\) as W_(delta_1rho_1sigma_2)^(3beta) in this comment box was almost trivial using that tool (which can be used as a bookmarklet script).

I think there is a world market for maybe five browsers
-- not Thomas J. Watson
As my one or two regular readers know, I work for a project that's all about cross-browser support. It might, therefore, not come as a surprise that I use three browsers when working. That's mostly because I love incognito-modes; not just for the slightly increased privacy beyond ghostery/disconnect/abp but for the convenience of a clean, nowhere-logged-in browsing experience. However, a sense of realism forces me out of incognito-mode, so I spread things out.
On the desktop, I use Chrome for all Googly things (email, docs etc) and all social things (social networks, feed readers etc), Firefox (in privacy mode) for work things (Github etc), and Chromium (in incognito mode) for other things (aka bouncing around the intertubez). I guess I also sometimes use "Web" (the Gnome browser; weird name) because it's WebKit, and every so often I spin up one of Microsoft's testing VMs for IE. On my Android devices I use Chrome and Firefox mobile (I tried Opera and Dolphin as well but never felt like switching). "Manual" browsing I usually do in Chrome incognito tabs; links from other apps get opened in Firefox (because, trust). Maybe I should add the Wikipedia-beta app (which is so much better now) but I'm lucky to be on KitKat on all devices (no more horribly-ancient-WebKit in apps) so browser-wise, I'm ok. And I feel the need to mention duckduckgo, which is simply awesome (w00t! I just found out there's an Android app. Gotta try that.)
But then there's an iPad in my home (where I'm only a guest). And of course, there are no choices for browsers: Safari is all you get. (In case you didn't know it already, all browsers on iOS have to use the underlying mobile Safari as a rendering engine because Apple's TOS forbid all browser engines in the app store). I think this needs to change.
Somebody recently pointed out to me that after the convergence following the browser wars, we seem to be in a phase of (massive) divergence. And it's not going too well. Browser vendors are doing crazy stuff all over the place. Chrome gets a lot of heat (pulling MathML, CSS regions, threatening XSLT), though I find myself defending them more often than not because they are, at least, transparent (and they're also doing cool stuff like the earliest web components implementation, the CSS font loading API, the (failed) WebIntents etc). IE is like Chrome, just without the positive transparency. How crazy is it to read over at Murray Sargent's blog that IE is using a MathML-capable rendering system yet MathML is "not planned" in the IE dashboard? Then there's Apple which does things like happily touting MathML support when a) it's still enormously limited and b) it was all done by 3-4 volunteers (not together, mind you, all fighting by themselves, one following when the other burned out); or using the (non-standard) Pages engine in iBooks (only for iBooks Author books but still a heck of a bad practice).
Don't get me wrong, fundamentally, I think that's ok -- divergence needs to follow convergence. But I think it might take the same level of regulation that we saw in the browser wars to ensure we'll see convergence again. Currently, browsers are more like utilities, yet essentially unregulated. While desktop statistics are slightly better (but not actually good), mobile is an alarming monopoly. Safari on iOS, Chrome on Android; that's it. Sure, you can get Firefox for Android etc. but those browsers are at a massive disadvantage. Back in the day, Microsoft was forced to actively help users to install non-IE browsers (well, in the EU at least). The same should be done for all OSs, including mobile, and possibly even more in terms of apps/webview etc. (Granted, for FirefoxOS, this seems impossible; but just because Mozilla is mostly a positive force doesn't mean they can get a free pass.)
In the long run, I think, we need browsers to become commodities. For this they need to become easy to develop, to modify and recombine -- and with regulations to prevent abuse like we saw on Windows and we see on iOS. We need hundreds, maybe thousands of browsers, dozens of layout engines, modular, recombinable etc. I would love to be able to "compile" my own browser -- take some MathML support from one place, CSS modules from another, accessibility features from a third etc. pp. Not in the days-gone-by XML-dreams of modularization but in the "hey, code-for-kids teaches you to write an HTML9 layout engine" or in a breach-but-for-real way (i.e., not just on top of Chromium), or (let's go crazy) write an HTML rendering engine in TeX or lolcode (what's the difference, really?).
Really, there simply has to be room for more than 5 browsers in the world.
PS: Yes, this is mostly about "layout engines", not "browsers". To most people the distinction is meaningless.
Comments
Well, of course that the answer is negative. If \(\cal U\) is a free ultrafilter on \(\omega\) then \(\{X\subseteq\mathcal P(\omega)\mid X\cap\omega\in\cal U\}\) is a free ultrafilter on \(\mathcal P(\omega)\). But that doesn't mean that the question should be trivialized. What Yair asked was actually slightly subtler than that: is it consistent that there are free ultrafilters on \(\omega\), but no uniform ultrafilters on the real numbers?
Continue reading...]]>Neil deGrasse Tyson pressed hard on the point that we are really pushing the planet to its limits, and we might be close to the point of no return, from which there is only a terrible Venus-like fate for this planet. And that is an important issue, no doubt.
Continue reading...]]>Almost always, the problems are easy to track down (e.g., the infamous 15s delay if a custom configuration/extension/etc has an incorrect loadComplete
call), sometimes they are bugs (e.g., the recent Chrome/WebKit webfont loading bug), but of course every so often they hit on the subtleties that make what MathJax does so hard (ex/em matching, webfont detection etc.).
A surprising recent example for the latter revolves around the use of display:none. It usually comes up in reports of broken layout, but the other day there was an interesting performance issue. To understand the second, it helps to understand the first.
display:none
The rendering issues sometimes seen for content that starts off with CSS display:none and is later made visible stem from a simple problem: browser engines won't actually lay out elements with display:none. MathJax, on the other hand, needs to take a few vital measurements (basically widths and heights) to produce a correct layout -- and these measurements are not available when the content wasn't laid out by the browser.
To work around this predicament, we could just leave it to the author to treat content with display:none as dynamically loaded content -- and force them to trigger a manual typeset when the content is revealed. But that's silly because the content is there; we should damn well use it.
So to work around display:none, MathJax does something quite simple: it moves the content into an invisible element that does get laid out -- using visibility:hidden with zero dimensions. Then MathJax can take the measurements, produce good rendering, and put the rendered output back in the original location.
Now there's an obvious problem with that approach: where would you move the content to do the rendering? After all, just because something is display:none doesn't mean it has no context. It might be in a completely different CSS context (think: hints to a homework problem, sidebar content, menus), or the context might change once it becomes visible (think: popup footnotes/references, knowls). In other words, MathJax output produced in some other context might get screwed up when put back into the original context (e.g., matching font sizes correctly, dealing with inherited CSS). Of course, more often than not, this will work well, but it is a general problem and should be avoided.
(Another way might be to use mutation observers. Besides support being limited, I think there's an argument to be made that layout should happen right away if possible. But it should probably become an option via an extension.)
Recently, we saw a sample where all this magic had a very different side effect: serious performance issues. In that sample, hundreds of equations were hidden away with display:none. This meant that MathJax had to shift those around in the DOM -- and mobile browsers especially did not like that at all. What made matters worse was that the MathJax status messages gave no useful indication of what was going on, instead hanging at unrelated points -- because MathJax currently doesn't have a signal to catch a delay for such a "simple" action as laying out display:none content. In the end, the sample (with 2000+ equations) left the user with the impression that their mobile browsers were hanging/crashing -- just because of all these necessary layout shenanigans! Darn!
The moral of the story is: use visibility:hidden (e.g., position: absolute; top: 0; left: 0; width: 0; height: 0; overflow: hidden; visibility: hidden;), or tell MathJax to skip the content and manually queue a typesetting call when you reveal the hidden content. If you want to put in some extra work, use visibility:hidden, let MathJax skip the hidden content, and then queue a typesetting call for the hidden content after MathJax is loaded; that way the hidden content will be typeset only after the visible content is done (on MathJax's initial pass).
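A minimal sketch of the manual-typeset route (the MathJax.Hub.Queue/Typeset calls are the standard MathJax v2 API; the id and the button are made up for illustration):

```html
<div id="hints" style="display: none">
  Hidden hint with math: \(x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}\)
</div>
<button onclick="revealHints()">Show hints</button>
<script>
  function revealHints() {
    var hints = document.getElementById("hints");
    hints.style.display = "block";
    // Typeset just this element, now that the browser can lay it out.
    MathJax.Hub.Queue(["Typeset", MathJax.Hub, hints]);
  }
</script>
```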
Any which way, don't get caught in bad layout or performance issues related to display:none!
And so I'm honored to join the un-secret society of Carnival of Mathematics hosts. Indeed, the list of former and future hosts over at The Aperiodical (who took over the organizational stress two years ago, stepping into the tremendously large footsteps of Mike Croucher of Walking Randomly) reads like a who-is-who of true math bloggers (the kind that cares for blogging as a community and art form). If you're not on it, do yourself a favor and volunteer right now. I'll wait. Honestly, I will. This post will still be here when you get back; I promise.
In the time-honored tradition, let us remember that 111 has many marvellous properties. However, if I were forced to name a favorite, I could not decide between the fact that the smallest magic square containing 1 and otherwise prime numbers, has a magical constant of 111, as well as the simple beauty of being a palindromic number.
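For the skeptical, here's a quick check of one such square (this particular arrangement is a classic example with magic constant 111, containing 1 and otherwise only primes; the layout is quoted from memory, which the code below at least verifies internally):

```javascript
// A 3x3 magic square containing 1 and otherwise only primes,
// with magic constant 111 (arrangement quoted from memory).
const square = [
  [31, 73,  7],
  [13, 37, 61],
  [67,  1, 43],
];

const sum = (xs) => xs.reduce((a, b) => a + b, 0);
const isPrime = (n) => {
  if (n < 2) return false;
  for (let d = 2; d * d <= n; d++) if (n % d === 0) return false;
  return true;
};

// Collect all rows, columns, and both diagonals.
const lines = [
  ...square,
  ...[0, 1, 2].map((c) => square.map((row) => row[c])),
  [0, 1, 2].map((i) => square[i][i]),
  [0, 1, 2].map((i) => square[i][2 - i]),
];

console.log(lines.every((line) => sum(line) === 111)); // true
console.log(square.flat().every((n) => n === 1 || isPrime(n))); // true
```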
When you enter our attractions, you are almost unnaturally drawn to an oldie-but-goldie, an attraction worth a visit every time the carnival is in town: John Baez's Beauty of Roots. As Vincent Pantaloni (@panlepan) put it: "The best math I stumbled upon this month is this visualisation of polynomial roots".
Then stop by Antonio Sanchez Chinchon, who shares with us his mnemoneitoR, translating numbers into easy-to-remember phrases inspired by books to generate funny mnemonic rules.
But don't stop there, wonders await as AP Goucher gives us the elliptic curve calculator, a fixed page paper slide rule using elliptic curves.
And while you take a break, make sure to sit down and listen in on Alexandre Borovik's Math under the Microscope pointing us to this New York Times article on a simple one-time exercise that might prevent community college students from dropping out of math classes.
But throw yourself back into the crowds of the carnival because when Colin Beveridge (of Flying Colours Math) was asked why he loves math, he wrote a short and sweet post to make his answer public. As luck will have it, a student asked Stephen Cavadino of cavmaths the very same, and so we can enjoy another answer that might inspire future students to grow their own and personal passion for mathematics.
By now you're hungry and rightfully so. However, if you ever wondered where to place a hot dog stand, and how to adapt when the best customer moves into a motorhome, then fear not -- David Orden at Mapping Ignorance will fill your stomach with a great post, taking you from Sylvester's original question back in 1857 all the way to today's cutting edge research.
With a full belly, let's head over to the Aperiodical, where Paul Taylor tackled the mind-bending and subtle hidden maths of the Eurovision song contest while Katie Steckles provides us with a recap of Matt Parker's appearance on the BBC's consumer moanfest, Watchdog, where Matt helped everyone get their percentages right.
And while you leave, why not trust him when the Aperiodical's Christian Perfect points you in the direction of Nick Berry's excellent blog DataGenetics, with a post that will introduce you to the wonders of Amidakuji, bringing together braid theory and a very old arcade game.
Then go on and follow Katie Steckles to visit Goading the IT geek's post on the deceivingly simple problem of calculating averages.
And if you find yourself in a part of the Carnival you have already visited, why not take a chance and run into Stephen Cavadino's posts on mathematics on children's playgrounds and a small puzzle on the number 71?
Back on the main road through the carnival, you'll see in the far end Alex's adventures in numberland, where Alex Bellos has learned from Joseph Mazur how surprisingly new mathematical notation is.
And if you like to gamble, dear friend, worry not. The BBC's Janet Ball can tell you a story that might encourage you, how the (in)famous MIT blackjack team won enormous amounts of money tackling the odds with their mathematical prowess.
Behold, the Mechanical Turk is nothing against our next mind-bending adventure as Andrea Hawksley takes you on a dive into Non-Euclidean Chess!
After this shocker, cool down a little and enjoy the talented Shecky Riemann sharing with us his interview with passionate math-ed blogger Fawn Nguyen.
Next, step into the ghost house at Google+, where Richard Green took everyone on a journey from simply squaring prime numbers to monsters and moonshine and some of the most complex and arduous mathematics of the 20th century.
In our version of the house of mirrors, behold: Nim and Fractals -- what could go better together? The amazing Tanya Khovanova provides us with the background on her latest paper with one of her high school students in MIT's PRIMES project.
But you obviously cannot get enough! Well, then, we dare you to follow The Aperiodical's Peter Rowlett into the long, long list of podcasts for university math students. Only the bravest have listened to them all!
As immortal challenges go, Chris Burke of (x,why?) celebrates overcoming one: the 30 posts in 30 days blogging challenge with a fine post on the struggle students face with piecewise functions.
For the craziest ride of this carnival, be sure to stop by Matifutbol where, just in time for the start of the World Cup 2014 in Brazil, Herminio's post on trees and googols at the World Cup will take you on a wild trip to all possible competitions, the Wedderburn-Etherington number and the very edge of the known universe, making you appreciate how simple life will be over the next 4 weeks.
If this is too wild, get your dose of World Cup math blogging at Matt Scroggs's who will tell you how many Panini packages you really need to buy to complete that Panini book you've been hiding under your bed all these years.
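Matt's post is, at heart, a coupon-collector calculation. As a naive back-of-the-envelope version (assuming every sticker is equally likely, and ignoring swaps and the no-duplicates-within-a-pack rule -- simplifications mine, Matt's post is the real deal), the expected number of stickers needed to complete an album of n is n times the n-th harmonic number:

```javascript
// Expected number of stickers needed to complete an album of n distinct
// stickers, assuming uniformly random stickers: E = n * (1/1 + 1/2 + ... + 1/n).
function expectedStickers(n) {
  let harmonic = 0;
  for (let k = 1; k <= n; k++) harmonic += 1 / k;
  return n * harmonic;
}

// The 2014 album had on the order of 640 stickers, sold 5 to a packet.
const packets = Math.ceil(expectedStickers(640) / 5);
console.log(packets); // on the order of 900 packets
```

Even this idealized estimate runs to hundreds of packets; see Matt's post for the careful version.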
The strongest man in the world cannot resist the powers of set theoretical forcing. And Asaf Karagila will make sure you won't wrongly use the analogy of field extensions to explain forcing.
And as you leave this Carnival behind, excited, exhausted, and content, you might still turn back for that one last ride, that one last attraction. So head over to Patrick Honner / Mr Honner as he takes on the The Grant Wiggins Conceptual Understanding Challenge, allowing us, through his response, a peek into that insightful brain of his.
And so the Carnival comes to an end and we move on. As we must. Always.
Comments
But both these analogies would be wrong. They only take you so far, and not further. And if you wish to give a proper explanation to your listener, there will be no escape from the eventual logic and set theory of it all. I stopped, or at least I'm doing my best to stop, using these analogies. I do, however, use the analogy of "How many roots does \(x^{42}-2\) have?" as an example for everyday independence (none in \(\mathbb Q\), two in \(\mathbb R\) and many in \(\mathbb C\)). But this is to motivate a different part of the explanation: the use of models of set theory (e.g. "How can you add a real number??", well how can you add a root to a polynomial?) and the fact that we don't consider the universe per se. Of course, in a model of \(\ZFC\) we can always construct the rest of mathematics internally, but this is not the issue now. Just like we have a model of one theory, we can have a model for another.
Continue reading...]]>But I found myself brushing past even those few postings, so that yesterday I thought it was time to move on and remove them from my feed reader, de facto closing the "math" section of my feed reader, where all my research related feeds ended up. And then, just as I am about to, I see this question and answer which, while neither spectacular nor particular, reminded me why I once fell in love with set theory.
So, Asaf, I will call you Joey Zasa from now on.
Comments
INCONCEIVABLE!
The kind overlords of the math blogosphere to whom we are all but humble servants, yes, the one true and ever-so-periodical team at The Aperiodical have called upon yours truly to host a carnival. Not any carnival, but a blogging carnival of math.
So I invite to step into our tiny realm of mathblogging and help me host a grand show for all the world to see.
Bring me your posts, your rants, your poems.
Share idle idiosyncrasies, deranged derivations, cool calculations, and rash remarks.
Give it your best and your worst and your all. For the Carnival of Math is here and all creatures are welcome on its arena's floor.
]]>We're getting to some serious results here. The "tree characterization" of centrality is, I think, not known (or not appreciated) widely enough. It might be a lot to wrap your mind around as a student but this might be one of the better ways of providing some insights into the notion of cwpws sets.
This page is very amusing. The random note on destroying strongly summable ultrafilters is what occupied a large part of my postdoctoral research. Apparently it took me a while to realize this is an interesting question. Come to think of it, Francois and I also spent quite a bit of time on the tree characterization; makes me want to skip ahead to a postdoc notebook...
]]>It's a short little proof that the classic downward Löwenheim-Skolem theorem is equivalent to \(\DC\), and that for a well-ordered \(\kappa\), the downward Löwenheim-Skolem asserting the existence of models of cardinality \(\leq\kappa\) is in fact equivalent to the conjunction of \(\DC\) and \(\AC_\kappa\).
Continue reading...]]>Clearly, the theme is different now. I also changed the content of the Papers page. I removed the abstracts (for some reason I thought this is going to be a cool thing to have, but with time it grew to annoy me greatly). I will definitely post a few things there in the coming time, some notes and eventually some nice papers -- I hope!
Continue reading...]]>This page contains the proof of Theorem 3.4 of the previous part (I guess I should've included that yesterday). I can't really make much of it. It's the dull work of writing up a new notion. But if you look closer, you might stumble over a few details (as I did when I took these notes). Writing this up just now, I find the choice of \(w\) quite striking.
]]>We're back to Sabine Koppelberg's talks about basic \(\beta S\) results (with four more pages to come). This time, tackling the not-so-basic notions of collectionwise thick/pws sets. These notions are critical for analysing sets in the minimal ideal -- and equally elusive.
I'm not very happy with the notation here; it seems to sacrifice accessibility for correctness. A sloppier notation might be helpful. In addition, "collectionwise" is a cumbersome prefix. I'd go for "uniformly" or "coherently" as they are often used in the context of filters (and this is what "collectionwise" is all about). But it probably wouldn't help to add yet another terminology.
Funny thing. I actually spent my last few weeks in Michigan thinking about these notions.
This finishes the attempts to solve 4.1.7 from Hindman&Strauss (successfully). Given the nice write up of the solution, I'm guessing I worked the proof out someplace else (blackboard, separate piece of paper etc). This reminds me that in the office I was working in at the time I found this wonderful stack of thick letter size paper (letter size! in Germany!). I loved writing on the paper for rough drafts, preparing talks/lecture notes etc. But it clashed with my desire to keep notebooks.
This page is extremely fascinating for me because of the final question. It's always easy to ask yourself if the reverse of a proposition holds; that's just standard. In this case, the answer should be a pretty straightforward "no"; however, I don't think I ever worked out a counterexample.
But that's not what makes this so fascinating for me. What is fascinating is that I spent a lot of time during my postdoc to solve a very similar problem (and failed) which I consider one of the most interesting questions about idempotent filters. Unfortunately, I was unable to solve the question. I don't want to go into detail here and it will take months until we get to that (a teaser never hurts, right?). It's fascinating to see that I was very nearly thinking about the very same problem this early in my PhD (and, not surprisingly, missed the actually interesting question at this point).
I find this double page (and the one following it) quite interesting. Mathematically speaking, there's very little going on. If I recall correctly, it was Stefan Geschke (or else Sabine Koppelberg) who had mentioned the fact to me that sets in ultrafilters that are sums have too many small gaps, i.e., the size of gaps in their enumeration does not have an (improper) limit. So I found the exercise in Hindman&Strauss and tried to solve it.
What's interesting is how I went about solving it. I would call this the "formalist approach", i.e., by manipulation of symbols following simple logic since I have no intuition of the subject. Of course, I fail, repeatedly; the solution will be found on the next page.
By the way, the first two lines are about the grading of a set theory course (the previous page contains more but I did not reproduce it). I will skip a rant about how PhD students are often forced into TA duties without being paid; in a logical twist, they often "cannot" be paid because they are on grant money and most grants directly prohibit teaching duties.
]]>The previous page is followed by another attempt of research ideas.
First there's a note on a basic but important observation about \(\beta \mathbb{N}\) -- it contains lots of copies of \(\mathbb{Z}\). I remember trying to figure this out and ending up asking Sabine Koppelberg -- and the solution took two seconds, leaving me miserably disappointed by my failure.
To understand the second part of the note, I should explain that my Diplom thesis was about large cardinals and reflection principles. The first few things I tried that summer came out of that perspective -- looking at cardinals (as a semigroup with ordinal addition/multiplication), hoping to connect with large cardinal theory. Nothing ever came of it, but perhaps we'll encounter that later in this workbook.
]]>Finally, a first note that is not some lecture note but (almost) a note on research. Not that it's particularly meaningful or even sensible. In fact, it's rather mysterious to me. At first I thought the background lies at TOPOSYM (which I visited during the summer), where Jana Flašková talked about P-points. But looking back at my notes on her talk (in the red workbook but not published here), I don't think this really fits (but I might be wrong).
More wonderful stuff about thick, piecewise syndet