## Some new pages on Laver tables.

I have added the following new pages on the classical Laver tables.

The classical Laver tables can, in ZFC, be endowed with a linear ordering under which the endomorphisms are monotone functions. When we reorder the classical Laver tables according to this compatible linear ordering, we get the tables at the following link.
http://boolesrings.org/jvanname/lavertables-database-classical-fullcompatibletabletoa5/

The following link gives the compressed versions of the multiplication table for the classical Laver tables reordered according to the compatible linear ordering.
http://boolesrings.org/jvanname/lavertables-database-classical-alltablescompatibletoa10/

The compatible linear ordering on the classical Laver tables produces fractal-like patterns that converge to a compact subset of the unit square. These images are found at the following link.
http://boolesrings.org/jvanname/lavertables-visualization-classical-imagesofcompatibletables/

And this page gives images of the fractal pattern that comes from the classical Laver tables.
http://boolesrings.org/jvanname/lavertables-visualization-classical-imagesoftables/

Hopefully I will be able to finish my long paper on the classical Laver tables over the next couple of months.

## Why complete regularity rather than Hausdorff is the correct cutoff point between lower and higher separation axioms

Disclaimer: By regular (completely regular) we mean regular (completely regular) and $T_{0}$.

In general topology, there are two different kinds of topological spaces. There are the topological spaces that satisfy higher separation axioms, such as the 3-dimensional space that we live in; when most people think of general topology (especially analysts and algebraic topologists), they usually think of spaces which satisfy higher separation axioms. On the other hand, there are topological spaces which only satisfy lower separation axioms; these spaces at first glance appear very strange since sequences can converge to multiple points. They feel much different from spaces which satisfy higher separation axioms. These spaces include the Zariski topology, finite non-discrete topologies, and the cofinite topology. Even spaces that set theorists consider, such as the ordinal topology on a cardinal $\kappa$ or the Stone-Cech compactification $\beta\omega$, satisfy higher separation axioms; after all, $\beta\omega$ is the maximal ideal space of $\ell^{\infty}$. The general topology of lower separation axioms is a different field of mathematics from the general topology of higher separation axioms.

However, can we in good conscience formally draw the line between the lower separation axioms and the higher separation axioms or is the notion of a higher separation axiom simply an informal notion? If there is a line, then where do we draw the line between these two kinds of topological spaces?

As the sole owner of a silver badge in general topology on mathoverflow, I declare that the axiom of complete regularity is where we should draw the line between the lower separation axioms and the higher separation axioms. I can also argue that complete regularity is the correct cutoff point by appealing to an authority greater than myself: the American Mathematical Society’s MSC classification (the authority on classifying mathematics subjects) also delineates the lower separation axioms and the higher separation axioms at around complete regularity:
54D10-Lower separation axioms ($T_0$–$T_3$, etc.)
54D15-Higher separation axioms (completely regular, normal, perfectly or collectionwise normal, etc.)

Let me now give a few reasons why complete regularity is the pivotal separation axiom.

Hausdorffness is not enough. We need at least regularity.

Hausdorff spaces are appealing to mathematicians because Hausdorff spaces are precisely the spaces where each net (or filter) converges to at most one point. However, the condition that every net converges to at most one point should not be enough for a space to feel like it satisfies higher separation axioms. Not only do I want filters to converge to at most one point, but I also want the closures of the sets in a convergent filter to converge to the same point. This condition is equivalent to regularity.

$\mathbf{Proposition}:$ Let $X$ be a Hausdorff space. Then $X$ is regular if and only if whenever $\mathcal{F}$ is a filter that converges to a point $x$, the filterbase $\{\overline{R}\mid R\in\mathcal{F}\}$ also converges to the point $x$.

The next proposition formulates regularity in terms of the convergence of nets. The intuition behind the condition in the following proposition is that for spaces that satisfy higher separation axioms, if $(x_{d})_{d\in D},(y_{d})_{d\in D}$ are nets such that $x_{d}$ and $y_{d}$ get closer and closer together as $d\rightarrow\infty$, and if $(y_{d})_{d\in D}$ converges to a point $x$, then $(x_{d})_{d\in D}$ should also converge to the same point $x$.

$\mathbf{Proposition}$ Let $X$ be a Hausdorff space. Then $X$ is regular if and only if whenever $(x_{d})_{d\in D}$ is a net that does not converge to a point $x$, there are open neighborhoods $U_{d}$ of $x_{d}$ such that whenever $y_{d}\in U_{d}$ for $d\in D$, the net $(y_{d})_{d\in D}$ does not converge to the point $x$ either.

$\mathbf{Proof:}$ $\rightarrow$ Suppose that $(x_{d})_{d\in D}$ does not converge to $x$. Then there is an open neighborhood $U$ of $x$ where $\{d\in D\mid x_{d}\not\in U\}$ is cofinal in $D$. By regularity, there is some open set $V$ with $x\in V\subseteq\overline{V}\subseteq U$. Let $U_{d}=(\overline{V})^{c}$ whenever $x_{d}\not\in U$, and let $U_{d}$ be an arbitrary open neighborhood of $x_{d}$ otherwise; note that $(\overline{V})^{c}$ is indeed an open neighborhood of $x_{d}$ whenever $x_{d}\not\in U$. Then whenever $y_{d}\in U_{d}$ for each $d\in D$, the set $\{d\in D\mid y_{d}\not\in V\}$ is cofinal in $D$. Therefore, $(y_{d})_{d\in D}$ does not converge to $x$ either.

$\leftarrow$ Suppose now that $X$ is not regular. Then there is an $x\in X$ and an open neighborhood $U$ of $x$ such that if $V$ is an open set with $x\in V$, then $\overline{V}\not\subseteq U$. Let $D$ be the set of open neighborhoods of $x$ ordered by reverse inclusion, and let $U_{d}=d$ for each $d\in D$; then for every open neighborhood $V$ of $x$ there is a $d\in D$ so that if $e\geq d$, then $U_{e}\subseteq V$. Let $x_{d}\in\overline{U_{d}}\setminus U$ for each $d\in D$; such points exist since $\overline{U_{d}}\not\subseteq U$. Then $(x_{d})_{d\in D}$ does not converge to $x$. Now suppose that $V_{d}$ is a neighborhood of $x_{d}$ for each $d\in D$. Then for each $d\in D$, we have $V_{d}\cap U_{d}\neq\emptyset$ since $x_{d}\in\overline{U_{d}}$. Therefore, let $y_{d}\in V_{d}\cap U_{d}$. Then $(y_{d})_{d\in D}$ does converge to $x$. $\mathbf{QED}$.

Complete regularity is closed under most reasonable constructions

If there is a main separation axiom that draws the line between higher separation axioms and lower separation axioms, then this main separation axiom should be closed under constructions such as taking subspaces and taking arbitrary products. Since every completely regular space is homeomorphic to a subspace of a cube $[0,1]^{I}$, the cutoff point between lower and higher separation axioms should be no higher than complete regularity.

Not only are the completely regular spaces closed under taking products and subspaces, but the completely regular spaces are also closed under taking ultraproducts, the $P$-space coreflection, box products and other types of products, and various other constructions. Since we want our main separation axiom to be closed under most reasonable standard constructions and no lower than regularity, regularity and complete regularity are the only two candidates for our main separation axiom. We shall now find out why complete regularity is a better candidate than regularity for such a separation axiom.

Completely regular spaces can be endowed with richer structure

The completely regular spaces are precisely the spaces which can be given extra structure that one should expect to have in a topological space.

While a topological space gives one the notion of whether a point is touching a set, a proximity gives one the notion of whether two sets are touching each other. Every proximity space has an underlying topological space. Proximity spaces are defined in terms of points and sets with no mention of the real numbers, but proximity spaces are always completely regular. Furthermore, the compatible proximities on a completely regular space are in a one-to-one correspondence with the Hausdorff compactifications of the space.

$\mathbf{Theorem:}$ A topological space is completely regular if and only if it can be endowed with a compatible proximity.

The notion of a uniform space is a generalization of the notion of a metric space so that one can talk about concepts such as completeness, Cauchy nets, and uniform continuity in a more abstract setting. A uniform space gives one the notion of uniform continuity in the same way that a topological space gives one the notion of continuity. The definition of a uniform space is also very set theoretic, but it turns out that every uniform space is induced by a set of pseudometrics and hence completely regular.

$\mathbf{Theorem:}$ A topological space is completely regular if and only if it can be endowed with a compatible uniformity.

For example, it is easy to show that every $T_{0}$-topological group can be given a compatible uniformity. Therefore, since the topological groups can always be given compatible uniformities, every topological group (and hence every topological vector space) is automatically completely regular.

Complete regularity is the proper line of demarcation between low and high separation axioms since the notions of a proximity and uniformity (which capture intuitive notions related to topological spaces without referring to the real numbers) induce precisely the completely regular spaces.

The Hausdorff separation axiom generalizes poorly to point-free topology

I realize that most of my readers probably have not yet been convinced of the deeper meaning behind point-free topology, but point-free topology gives additional reasons to prefer regularity or complete regularity over Hausdorffness.

Most concepts from general topology generalize to point-free topology seamlessly, including separation axioms (regularity, complete regularity, normality), connectedness axioms (connectedness, zero-dimensionality, components), covering properties (paracompactness, compactness, local compactness, the Stone-Cech compactification), and many other properties. The fact that pretty much all concepts from general topology extend without a problem to point-free topology indicates that point-free topology is an interesting and deep subject. However, the notion of a Hausdorff space does not generalize very well from point-set topology to point-free topology. There have been a couple of attempts to generalize the notion of a Hausdorff space to point-free topology. For example, John Isbell has defined an I-Hausdorff frame to be a frame $L$ such that the diagonal mapping $D:L\rightarrow L\oplus L$ is a closed localic mapping ($\oplus$ denotes the tensor product of frames). I-Hausdorffness is a generalization of Hausdorffness since it generalizes the condition “$\{(x,x)\mid x\in X\}$ is closed”, which is equivalent to Hausdorffness. Dowker and Strauss have also proposed several generalizations of Hausdorffness. You can read more about these point-free separation axioms in Karel Ha’s Bachelor’s thesis here. These many generalizations of the Hausdorff separation axiom are not equivalent. To make matters worse, I am not satisfied with any of these generalizations of Hausdorffness to point-free topology.

It is often the case that when an idea from general topology does not extend very well to point-free topology, then that idea relies fundamentally on points. For example, the axiom $T_{0}$ is completely irrelevant to point-free topology since the axiom $T_{0}$ is a pointed concept. Similarly, the axiom $T_{1}$ is not considered for point-free topology since the notion of a $T_{1}$-space is also fundamentally a pointed notion rather than a point-free notion. For a similar reason, Hausdorffness does not extend very well to point-free topology since the definition of Hausdorffness seems to fundamentally rely on points.

Just like in point-set topology, in point-free topology there is a major difference between the spaces which do not satisfy higher separation axioms and the spaces which do. The boundary between lower separation axioms and higher separation axioms in point-set topology should therefore also extend to a boundary between lower separation axioms and higher separation axioms in point-free topology. Almost all the arguments for why complete regularity is the correct boundary between lower and higher separation axioms that I gave here also hold for point-free topology. Since Hausdorffness does not have a canonical generalization to the point-free context, one should not regard Hausdorffness as the line of demarcation between lower separation axioms and higher separation axioms in either point-free topology or point-set topology.

Conclusion

Spaces that only satisfy lower separation axioms are good too.

While completely regular spaces feel much different from spaces which are not completely regular, spaces which satisfy only lower separation axioms are very nice in their own ways. For example, non-$T_{1}$-spaces have a close connection with ordered sets since every $T_{0}$-space carries a partial ordering known as the specialization ordering, and this ordering is non-trivial precisely when the space is not $T_{1}$. I do not know much about algebraic geometry, but algebraic geometers will probably agree that spaces which only satisfy the lower separation axioms are important. Frames (point-free topological spaces) which only satisfy lower separation axioms are also very nice from a lattice-theoretic point of view; after all, frames are precisely the complete Heyting algebras.

The underappreciation for complete regularity

The reason why Hausdorffness is often seen as a more important separation axiom than complete regularity is that Hausdorffness is easier to define than complete regularity. The definition of Hausdorffness only refers to points and sets, while complete regularity refers to points, sets, and continuous real-valued functions. Unfortunately, since the definition of complete regularity is slightly more complicated than the other separation axioms, complete regularity is not often given the credit it deserves. For example, in the hierarchy of separation axioms, complete regularity is denoted as $T_{3.5}$; it is not even given an integer. However, Hausdorffness is denoted as $T_{2}$, regularity is denoted as $T_{3}$, and normality is denoted as $T_{4}$. Furthermore, when people mention separation axioms they often fail to give complete regularity adequate attention. When discussing separation axioms in detail, one should always bring up and emphasize complete regularity.

In practice, the Hausdorff spaces that people naturally come across are always completely regular. I challenge anyone to give me a Hausdorff space which occurs in nature or has interest outside of general topology but which is not also completely regular. The only Hausdorff spaces which are not completely regular that I know of are counterexamples in general topology and nothing more. Since all Hausdorff spaces found in nature are completely regular, complete regularity should be given more consideration than it currently receives.

## The classical and generalized Laver tables can be computed quickly.

The Laver tables have the noteworthy distinction of being the only structures which originally arose in modern set theory from very large cardinals but which have been studied experimentally using computer calculations. Among other things, these computer calculations can be used to generate images of fractals that resemble the Sierpinski triangle.

Classical Laver tables computation

Recall that the classical Laver table is the unique algebra $A_{n}=(\{1,…,2^{n}\},*_{n})$ such that $x*_{n}(y*_{n}z)=(x*_{n}y)*_{n}(x*_{n}z)$, $x*_{n}1=x+1$ for $x<2^{n}$, and $2^{n}*_{n}1=1$. The operation $*_{n}$ is known as the application operation. Even though the definition of the classical Laver tables is quite simple, the classical Laver tables are combinatorially very complicated structures, and there does not appear to be an efficient algorithm for computing the classical Laver tables like there is for ordinary multiplication.

If $x,r$ are positive integers, then let $(x)_{r}$ denote the unique integer in $\{1,…,r\}$ such that $x\equiv(x)_{r}\,(\text{mod}\,r)$. The mappings $\phi:A_{n+1}\rightarrow A_{n},\iota:A_{n}\rightarrow A_{n+1}$ defined by $\phi(x)=(x)_{\text{Exp}(2,n)}$ and $\iota(x)=x+2^{n}$ are homomorphisms between self-distributive algebras. If one has an efficient algorithm for computing $A_{n+1}$, then these homomorphisms allow one to compute $A_{n}$ efficiently as well. Therefore, the problem of computing $A_{n}$ gets more difficult as $n$ gets larger.
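These homomorphisms are easy to verify by brute force on small tables. The following Python sketch (my own code with my own function names; the post itself works in GAP) builds the multiplication table of $A_{n}$ directly from the defining identities and checks that $\phi$ and $\iota$ are homomorphisms:

```python
# Sketch: verify that phi(x) = (x)_{2^n} and iota(x) = x + 2^n are
# homomorphisms A_{n+1} -> A_n and A_n -> A_{n+1} on small tables.

def laver_table(n):
    """Multiplication table of A_n as a dict: T[(x, y)] = x *_n y."""
    N = 2 ** n
    T = {(N, y): y for y in range(1, N + 1)}  # the row of 2^n: 2^n *_n y = y
    for x in range(N - 1, 0, -1):             # descending induction on x
        T[(x, 1)] = x + 1                     # x *_n 1 = x + 1
        for y in range(2, N + 1):             # x*y = (x*(y-1)) * (x*1)
            T[(x, y)] = T[(T[(x, y - 1)], x + 1)]
    return T

def phi(x, n):
    """(x)_{2^n}: the representative of x modulo 2^n lying in {1, ..., 2^n}."""
    return (x - 1) % (2 ** n) + 1

def check_homomorphisms(n):
    S, T = laver_table(n), laver_table(n + 1)
    M = 2 ** n
    for x in range(1, 2 * M + 1):
        for y in range(1, 2 * M + 1):
            # phi is a homomorphism A_{n+1} -> A_n
            assert phi(T[(x, y)], n) == S[(phi(x, n), phi(y, n))]
    for x in range(1, M + 1):
        for y in range(1, M + 1):
            # iota(x) = x + 2^n is a homomorphism A_n -> A_{n+1}
            assert T[(x + M, y + M)] == S[(x, y)] + M
    return True
```

For small $n$ both checks pass, which is a useful sanity test for any table-building code.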

Around 1995, Randall Dougherty wrote an algorithm that computes the application operation in $A_{48}$. This algorithm is outlined in his paper [1] and will be outlined in this post as well. So far no one has written an algorithm that improves upon Dougherty’s, nor has anyone been able to compute even $A_{49}$. However, with enough effort, it may be possible to compute in $A_{n}$ for $n\leq 96$ with today’s computational resources, but I do not think that anyone is willing to exert the effort to compute $A_{96}$ at this moment since in order to compute in $A_{96}$ one needs to construct a rather large lookup table.

The basic Laver table algorithms

We shall begin by outlining three algorithms for computing in classical Laver tables, and after discussing these three algorithms for classical Laver table computation, we shall explain Dougherty’s algorithm.

Algorithm 1:

The easiest algorithm to write for computing the classical Laver tables is the one that directly uses the definition of a classical Laver table. In other words, in this algorithm, we evaluate $x*1$ to $(x+1)_{\text{Exp}(2,n)}$, and we evaluate $x*y$ to $(x*(y-1))*(x+1)_{\text{Exp}(2,n)}$ whenever $y>1$.
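As a sketch, here is algorithm 1 in Python (my own translation of the recursion; the post's own snippets are in GAP):

```python
# Naive "algorithm 1": compute x *_n y directly from the definition,
# x *_n 1 = x + 1 (with 2^n *_n 1 = 1) and x *_n y = (x *_n (y-1)) *_n (x *_n 1).
# Hopelessly slow beyond very small n.

def star_slow(n, x, y):
    N = 2 ** n
    if y == 1:
        return x % N + 1          # x *_n 1 = x + 1, with 2^n wrapping to 1
    # x *_n y = (x *_n (y-1)) *_n (x+1)_{2^n}, by brute recursion
    return star_slow(n, star_slow(n, x, y - 1), x % N + 1)
```

Even single entries of $A_{4}$ already take noticeable time this way, since the recursion repeats an enormous amount of work.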

This algorithm is extremely inefficient. This algorithm works for $A_{4}$ on my computer, but I have not been able to compute in $A_{5}$ using this algorithm. Even though this algorithm is terrible for computing the application in classical Laver tables, a modification of this algorithm can be used to calculate the application operation in generalized Laver tables very efficiently.

Algorithm 2:

In this algorithm, we fill up the entire multiplication table for $A_{n}$ by computing $x*y$ by a double induction which is descending on $x$ and, for each $x$, ascending on $y$. Here is the code for constructing the multiplication table for algorithm 2 in GAP (the multiplication table for $A_{n}$ is implemented in GAP as a list of lists).

```
table := [];
table[2^n] := [1..2^n];
for i in Reversed([1..2^n-1]) do
  table[i] := [i+1];
  for j in [2..2^n] do
    table[i][j] := table[table[i][j-1]][i+1];
  od;
od;
```

I have been able to calculate $A_{13}$ using this algorithm before running out of memory.

The difference between algorithm 1 and algorithm 2 is analogous to two algorithms for computing the Fibonacci numbers. The following program fibslow in GAP for computing the Fibonacci numbers is analogous to algorithm 1.

```
fibslow := function(x)
  if x = 1 or x = 2 then
    return 1;
  else
    return fibslow(x-1) + fibslow(x-2);
  fi;
end;
```

This program takes about fibslow(x) many steps to compute fibslow(x), which is very inefficient. Algorithm 1 is inefficient for similar reasons. However, by computing the Fibonacci numbers in a sequence and storing the Fibonacci numbers in memory as a list, one obtains the following much more efficient algorithm fibfast for computing the Fibonacci numbers.

```
fibfast := function(x)
  local list, i;
  if x < 3 then
    return 1;
  else
    list := [1, 1];
    for i in [3..x] do
      list[i] := list[i-1] + list[i-2];
    od;
    return list[x];
  fi;
end;
```

One can calculate the classical Laver tables much more quickly using algorithm 2 instead of using algorithm 1 for reasons similar to why the Fibonacci numbers are more easily computable using fibfast than fibslow.

Algorithm 3:

One of the first things that one notices when one observes the classical Laver tables is that the rows in the classical Laver tables are periodic, and this periodicity allows the classical Laver tables to be more quickly computed. Click here for the full multiplication tables for the Laver tables up to $A_{5}$.

Algorithm 3 is similar to algorithm 2 except that one computes only one period for each row. For example, instead of computing the entire 2nd row $[3,12,15,16,3,12,15,16,3,12,15,16,3,12,15,16]$ in $A_{4}$, in algorithm 3 one simply computes $[3,12,15,16]$ once and observes that $2*_{4}x=2*_{4}(x)_{4}$ whenever $1\leq x\leq 16$.

Using this algorithm, I am able to calculate up to $A_{22}$ on a modern computer before running out of memory. If one compresses the data computed by this algorithm, then one should be able to calculate up to $A_{27}$ or $A_{28}$ before running out of memory.

The lengths of the periods of the rows in the classical Laver tables are all powers of 2, and they are usually quite small. Let $o_{n}(x)$ denote the least natural number such that $x*_{n}2^{o_{n}(x)}=2^{n}$. The motivation behind $o_{n}(x)$ lies in the fact that $x*_{n}y=x*_{n}z$ iff $y\equiv z\,(\text{mod}\,\text{Exp}(2,o_{n}(x)))$, so $2^{o_{n}(x)}$ is the period of the $x$-th row in $A_{n}$. The maximum value of $o_{n}(x)$ is $n$, but in general $o_{n}(x)$ is usually quite small. We have $E(o_{10}(x))=2.943$, $E(o_{20}(x))=3.042$, and $E(o_{48}(x))=3.038$ (here $E$ denotes the expected value; $E(o_{48}(x))$ has been estimated from a random sample from $A_{48}$ of size 1000000). Therefore, since $o_{n}(x)$ is usually small, one can calculate and store the $x$-th row in memory without using too much space or time even without compressing the computed data.
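Here is a Python sketch (my own code) of this period-based computation: each row is stored only up to the point where it first reaches $2^{n}$, which is exactly one period, and $o_{n}(x)$ is the base-2 logarithm of that period.

```python
# "Algorithm 3" sketch: store each row of A_n only up to its period.
# Row x is [x*1, x*2, ..., x*p], where p is least with x *_n p = 2^n;
# the full row is this block repeated, so x *_n y = row[(y-1) mod p].

def laver_rows(n):
    N = 2 ** n
    rows = {N: list(range(1, N + 1))}     # the row of 2^n is 1, 2, ..., 2^n
    for x in range(N - 1, 0, -1):         # descending induction on x
        row = [x + 1]                     # x *_n 1 = x + 1
        while row[-1] != N:               # extend until the period closes
            prev = rows[row[-1]]          # one period of the row of x *_n (y-1)
            # x *_n y = (x *_n (y-1)) *_n (x+1): entry x+1 of that row, cyclically
            row.append(prev[x % len(prev)])
        rows[x] = row
    return rows

def o(n, x):
    """o_n(x): the period of the x-th row of A_n is 2^{o_n(x)}."""
    return (len(laver_rows(n)[x])).bit_length() - 1
```

For example, `laver_rows(3)` stores the second row of $A_{3}$ as the single period $[3,4,7,8]$.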

Dougherty’s algorithm

Dougherty’s algorithm for computing in the classical Laver tables is based on the following lemmas.

$\mathbf{Lemma}$ Suppose that $t\leq n$ and $L:A_{n-t}\rightarrow A_{n}$ is the mapping defined by $L(x)=x\cdot 2^{t}$. If the mapping $L$ is a homomorphism, then

1. $2^{t}x*_{n}y=(y)_{\text{Exp}(2,t)}-2^{t}+2^{t}\cdot(x*_{n-t}(\frac{y-(y)_{\text{Exp}(2,t)}}{\text{Exp}(2,t)}+1))$ whenever $1\leq x\leq 2^{n-t}$ and $1\leq y\leq 2^{n}$ and
2. $2^{t}x\circ_{n}y=2^{t}x+y$ whenever $1\leq y<2^{t}$ and $1\leq x<2^{n-t}$ (here $\circ_{n}$ denotes the composition operation on $A_{n}$, which satisfies $(x\circ_{n}y)*_{n}z=x*_{n}(y*_{n}z)$).

The following result by Dougherty gives examples for when the premise of the above Lemma holds.

$\mathbf{Theorem}$ (Dougherty) Suppose that $t=2^{r}$ for some $r$ and $n\leq 3t$. Then the mapping $L:A_{n-t}\rightarrow A_{n}$ defined by $L(x)=x\cdot 2^{t}$ is a homomorphism.
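Both the lemma and this theorem can be checked by brute force on small tables. The following Python sketch (my own verification code, not Dougherty's) confirms, for small $n$ and $t$, that $L(x)=x\cdot 2^{t}$ is a homomorphism and that part 1 of the lemma holds:

```python
# Brute-force check of Dougherty's theorem and of part 1 of the lemma
# on small classical Laver tables.

def laver_table(n):
    """Naive multiplication table of A_n: T[x][y] = x *_n y (1-indexed)."""
    N = 2 ** n
    T = [[0] * (N + 1) for _ in range(N + 1)]
    for y in range(1, N + 1):
        T[N][y] = y                      # the row of 2^n: 2^n *_n y = y
    for x in range(N - 1, 0, -1):
        T[x][1] = x + 1
        for y in range(2, N + 1):
            T[x][y] = T[T[x][y - 1]][x + 1]
    return T

def rep(x, r):
    """(x)_r: the representative of x modulo r lying in {1, ..., r}."""
    return (x - 1) % r + 1

def check_dougherty(n, t):
    """Exhaustively check L(x) = x * 2^t : A_{n-t} -> A_n is a
    homomorphism, together with the formula in part 1 of the lemma."""
    big, small = laver_table(n), laver_table(n - t)
    P = 2 ** t
    for x in range(1, 2 ** (n - t) + 1):
        for y in range(1, 2 ** (n - t) + 1):
            assert small[x][y] * P == big[x * P][y * P]   # L is a homomorphism
        for y in range(1, 2 ** n + 1):
            inner = small[x][(y - rep(y, P)) // P + 1]
            assert big[x * P][y] == rep(y, P) - P + P * inner  # lemma, part 1
    return True
```

The check passes, for instance, for $(n,t)=(6,2)$, which satisfies $t=2^{r}$ and $n\leq 3t$.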

More generally, Dougherty has proven the following result.

$\mathbf{Theorem}$ (Dougherty) Assume that $s\leq n$ and $n-s\leq\text{Gcd}(2^{\infty},2s)$ (that is, $n-s$ is at most the largest power of 2 dividing $2s$). Then the mapping $L:A_{n-s}\rightarrow A_{n}$ defined by $L(x)=x\cdot 2^{s}$ is a homomorphism.

Now assume that $t=2^{r}$, $n\leq 3t$, and that $x*_{n}y$ has already been computed whenever $x\leq 2^{t},y\leq 2^{n}$. Then we may compute $x*_{n}y$ for any $x,y\in A_{n}$ by using the following algorithm:

1. If $x<2^{t}$ then $x*_{n}y$ has already been computed so there is no work to be done here.
2. If $x$ is a multiple of $2^{t}$, then we may use Dougherty’s theorem together with the above lemma to compute $x*_{n}y$, and we may use recursion on this same algorithm to compute the operation $*_{n-t}$.
3. Suppose that the above cases do not hold. Then by the above lemma we have $x=(x-(x)_{\text{Exp}(2,t)})\circ_{n}(x)_{\text{Exp}(2,t)}$. Therefore, we evaluate $x*_{n}y$ using the fact that
$x*_{n}y=((x-(x)_{\text{Exp}(2,t)})\circ_{n}(x)_{\text{Exp}(2,t)})*_{n}y=(x-(x)_{\text{Exp}(2,t)})*_{n}((x)_{\text{Exp}(2,t)}*_{n}y)$.
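The following Python sketch (a toy version of my own devising) implements these three steps for small $n$; the precomputed rows and the table for $A_{n-t}$ (which here stands in for the recursive call) are built naively, whereas Dougherty's actual implementation precomputes the rows $x\leq 2^{16}$ of $A_{48}$:

```python
# Toy version of Dougherty's algorithm: compute x *_n y from (1) the
# precomputed rows x <= 2^t of A_n, (2) the lemma for multiples of 2^t,
# and (3) the decomposition x = (x - (x)_{2^t}) o_n (x)_{2^t}.

def full_table(n):
    """Naive multiplication table of A_n (used for the small base tables)."""
    N = 2 ** n
    T = [[0] * (N + 1) for _ in range(N + 1)]
    for y in range(1, N + 1):
        T[N][y] = y
    for x in range(N - 1, 0, -1):
        T[x][1] = x + 1
        for y in range(2, N + 1):
            T[x][y] = T[T[x][y - 1]][x + 1]
    return T

def rep(x, r):
    """(x)_r: the representative of x modulo r lying in {1, ..., r}."""
    return (x - 1) % r + 1

def dougherty_star(n, t, x, y, low_rows, small):
    """x *_n y, where t = 2^r and n <= 3t, low_rows[x] is the x-th row of
    A_n for x <= 2^t, and small is a table of A_{n-t} standing in for the
    recursive call of this same algorithm."""
    P = 2 ** t
    if x <= P:                              # step 1: precomputed row
        return low_rows[x][y]
    if x % P == 0:                          # step 2: the lemma
        inner = small[x // P][(y - rep(y, P)) // P + 1]
        return rep(y, P) - P + P * inner
    x2 = rep(x, P)                          # step 3: split x = x1 o_n x2
    x1 = x - x2                             # x1 is a positive multiple of 2^t
    return dougherty_star(n, t, x1,
                          dougherty_star(n, t, x2, y, low_rows, small),
                          low_rows, small)
```

Checking it against a directly computed table of $A_{6}$ (with $t=2$) reproduces every entry.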

Using the above algorithm and the precomputed values $x*_{48}y$ for $x\leq 2^{16}$, I have been able to compute 1,200,000 random instances of $x*_{48}y$ in a minute on my computer. One could also easily compute in $A_{48}$ by hand using this algorithm with only the precomputed values $x*_{48}y$ where $x\leq 2^{16}$ for reference (this reference can fit into a book).

Dougherty’s algorithm for computing the classical Laver tables has been implemented here.

Precomputing the $x$-th row for $x\leq 2^{t}$.

In order for Dougherty’s algorithm to work for $A_{n}$, we must first compute the $x$-th row in $A_{n}$ for $x\leq 2^{t}$. However, one can compute this data by induction on $n$. In particular, if $n<3t$ and one has an algorithm for computing $A_{n}$, then one can use Dougherty's algorithm along with a suitable version of algorithm 3 to compute the $x$-th row in $A_{n+1}$ for $x\leq 2^{t}$. I was able to compute from scratch $x*_{48}y$ for $x\leq 2^{16}$ in 757 seconds.

Generalized Laver tables computation

Let $n$ be a natural number and let $A$ be a set. Let $A^{+}$ be the set of all non-empty strings over the alphabet $A$. Then let $(A^{\leq 2^{n}})^{+}=\{\mathbf{x}\in A^{+}:|\mathbf{x}|\leq 2^{n}\}$. Then there is a unique self-distributive operation $*$ on $(A^{\leq 2^{n}})^{+}$ such that $\mathbf{x}*a=\mathbf{x}a$ whenever $|\mathbf{x}|<2^{n},a\in A$ and $\mathbf{x}*\mathbf{y}=\mathbf{y}$ whenever $|\mathbf{x}|=2^{n}$. The algebra $((A^{\leq 2^{n}})^{+},*)$ is an example of a generalized Laver table. If $|A|=1$, then $(A^{\leq 2^{n}})^{+}$ is isomorphic to $A_{n}$. If $|A|>1$, then the algebra $(A^{\leq 2^{n}})^{+}$ is quite large since $|(A^{\leq 2^{n}})^{+}|=|A|\cdot\frac{|A|^{\text{Exp}(2,n)}-1}{|A|-1}$.
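The cardinality formula is just the geometric series $|A|+|A|^{2}+\dots+|A|^{\text{Exp}(2,n)}$, as a quick Python check (my own sketch) confirms:

```python
# The number of non-empty strings over A of length at most 2^n is the
# geometric sum |A| + |A|^2 + ... + |A|^{2^n} = |A|(|A|^{2^n} - 1)/(|A| - 1).

def size_by_counting(a, n):
    """Count the strings of each length 1, ..., 2^n over an alphabet of size a."""
    return sum(a ** k for k in range(1, 2 ** n + 1))

def size_by_formula(a, n):
    """Closed form from the geometric series (assumes a >= 2)."""
    return a * (a ** (2 ** n) - 1) // (a - 1)
```

With $|A|=2$ and $n=2$, both give 30.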

Let $\mathbf{x}[n]$ denote the $n$-th letter in the string $\mathbf{x}$ (we start off with the first letter instead of the zero-th letter). For example, $\text{martha}[5]=\text{h}$.

When computing the application operation in $(A^{\leq 2^{n}})^{+}$, we may want to compute the entire string $\mathbf{x}*\mathbf{y}$ or we may want to compute a particular position $(\mathbf{x}*\mathbf{y})[\ell]$. These two problems are separate since it is much easier to compute an individual position $(\mathbf{x}*\mathbf{y})[\ell]$ than it is to compute the entire string $\mathbf{x}*\mathbf{y}$, but computing $\mathbf{x}*\mathbf{y}$ by calculating each $(\mathbf{x}*\mathbf{y})[\ell]$ individually is quite inefficient.

We shall present several algorithms for computing generalized Laver tables starting with the most inefficient algorithm and then moving on to the better algorithms. These algorithms will all assume that one already has an efficient algorithm for computing the application operation in the classical Laver tables.

Algorithm A:

If $x,y\in\{1,…,2^{n}\}$, then let $FM_{n}^{+}(x,y)$ denote the integer such that, in $(A^{\leq 2^{n}})^{+}$, if $FM_{n}^{+}(x,y)>0$ then $(a_{1}…a_{x}*b_{1}…b_{2^{n}})[y]=b_{FM_{n}^{+}(x,y)}$, and if $FM_{n}^{+}(x,y)<0$ then $(a_{1}…a_{x}*b_{1}…b_{2^{n}})[y]=a_{-FM_{n}^{+}(x,y)}$. If one has an algorithm for computing $FM_{n}^{+}(x,y)$, then one can compute the application operation simply by referring to $FM_{n}^{+}(x,y)$. The function $FM_{n}^{+}$ can be computed using the same idea which we used in algorithm 2 to calculate the classical Laver tables. In particular, in this algorithm, we compute $FM_{n}^{+}(x,y)$ by a double induction on $x,y$ which is descending on $x$ and, for each $x$, ascending on $y$. I was able to calculate up to $FM_{13}^{+}$ using this algorithm. Using the Sierpinski triangle fractal structure of the final matrix, I could probably compute up to $FM_{17}^{+}$ or $FM_{18}^{+}$ before running out of memory.

Algorithm B:

Algorithm B for computing in the generalized Laver tables is a modified version of algorithm 1. Counterintuitively, even though algorithm 1 is very inefficient for calculating in the classical Laver tables, algorithm B is very efficient for computing the application operation in generalized Laver tables.

If $\mathbf{x}$ is a string, then let $|\mathbf{x}|$ denote the length of the string $\mathbf{x}$ (for example, $|\text{julia}|=5$). If $\mathbf{x}$ is a non-empty string and $n$ a natural number, then let $(\mathbf{x})_{n}$ denote the string where we remove the first $|\mathbf{x}|-(|\mathbf{x}|)_{n}$ elements of $\mathbf{x}$ and keep the last $(|\mathbf{x}|)_{n}$ elements in the string $\mathbf{x}$. For example, $(\text{elizabeth})_{5}=\text{beth}$ and $(\text{solianna})_{4}=\text{anna}$.
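In Python (a hypothetical helper of my own, not from the post), this truncation operation reads:

```python
# (x)_n for strings: keep only the last (|x|)_n letters, where (m)_n is
# the representative of m modulo n lying in {1, ..., n}.

def tail(s, n):
    k = (len(s) - 1) % n + 1   # (|s|)_n
    return s[-k:]
```

For example, `tail("elizabeth", 5)` gives `"beth"` and `tail("solianna", 4)` gives `"anna"`, matching the examples above.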

One calculates $\mathbf{x}*\mathbf{y}$ in $(A^{\leq 2^{n}})^{+}$ using the following procedure:

1. If $|\mathbf{x}|=2^{n}$ then $\mathbf{x}*\mathbf{y}=\mathbf{y}$.
2. If $|\mathbf{y}|>\text{Exp}(2,o_{n}(|\mathbf{x}|))$, then $\mathbf{x}*\mathbf{y}=\mathbf{x}*(\mathbf{y})_{\text{Exp}(2,o_{n}(|\mathbf{x}|))}$. In this case, one should therefore evaluate $\mathbf{x}*\mathbf{y}$ to $\mathbf{x}*(\mathbf{y})_{\text{Exp}(2,o_{n}(|\mathbf{x}|))}$.
3. If $|\mathbf{x}|*_{n}|\mathbf{y}|=|\mathbf{x}|+|\mathbf{y}|$ and $|\mathbf{y}|\leq \text{Exp}(2,o_{n}(|\mathbf{x}|))$, then
$\mathbf{x}*_{n}\mathbf{y}=\mathbf{x}\mathbf{y}$. Therefore, one should evaluate $\mathbf{x}*_{n}\mathbf{y}$ to $\mathbf{x}\mathbf{y}$ in this case.
4. If 1-3 do not hold and $\mathbf{y}=\mathbf{z}a$, then one should evaluate $\mathbf{x}*_{n}\mathbf{y}$ to $(\mathbf{x}*_{n}\mathbf{z})*_{n}(\mathbf{x}a)$.
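Here is a Python sketch of algorithm B over an arbitrary alphabet (my own illustration: strings are tuples, and the classical table is computed naively, so only small $n$ are feasible here; the real implementation instead uses the fast $A_{48}$ computation):

```python
# "Algorithm B" sketch: compute x * y in the generalized Laver table
# (A^{<=2^n})^+, using the classical table A_n to reason about lengths
# and periods.  Strings are represented as Python tuples.

def laver_table(n):
    """Multiplication table of A_n: T[x][y] = x *_n y (1-indexed)."""
    N = 2 ** n
    T = [[0] * (N + 1) for _ in range(N + 1)]
    for y in range(1, N + 1):
        T[N][y] = y                       # the row of 2^n is 2^n *_n y = y
    for x in range(N - 1, 0, -1):
        T[x][1] = x + 1                   # x *_n 1 = x + 1
        for y in range(2, N + 1):
            T[x][y] = T[T[x][y - 1]][x + 1]
    return T

def gen_star(n, xs, ys, T):
    """xs * ys in (A^{<=2^n})^+ for non-empty tuples xs, ys."""
    N = 2 ** n
    if len(xs) == N:                      # step 1: maximal strings
        return ys
    # period of row |xs| in A_n: the least p with |xs| *_n p = 2^n
    p = next(y for y in range(1, N + 1) if T[len(xs)][y] == N)
    if len(ys) > p:                       # step 2: periodicity in y
        k = (len(ys) - 1) % p + 1         # (|ys|)_p
        return gen_star(n, xs, ys[-k:], T)
    if T[len(xs)][len(ys)] == len(xs) + len(ys):
        return xs + ys                    # step 3: concatenation case
    # step 4: x * (z a) = (x * z) * (x a)
    return gen_star(n, gen_star(n, xs, ys[:-1], T), xs + ys[-1:], T)
```

For instance, with $n=2$ one gets `gen_star(2, ('a',), ('b','c'), laver_table(2)) == ('a','b','a','c')`, in accordance with the fact that the length of $\mathbf{x}*\mathbf{y}$ is $|\mathbf{x}|*_{n}|\mathbf{y}|$.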

It is not feasible to compute the entire string $\mathbf{x}*\mathbf{y}$ in $(A^{\leq 2^{n}})^{+}$ when $n$ is much larger than 20 due to the size of the outputs. Nevertheless, it is very feasible to compute the symbol $(\mathbf{x}*\mathbf{y})[\ell]$ in $(A^{\leq 2^{n}})^{+}$ whenever $n\leq 48$ by using a suitable modification of algorithm B. Using this modified version of algorithm B, I have been able to compute on my computer on average about 3000 random values of $(\mathbf{x}*\mathbf{y})[\ell]$ in $(A^{\leq 2^{48}})^{+}$ in a minute. To put this in perspective, it took on average about 400 times as long to compute a random instance of $(\mathbf{x}*\mathbf{y})[\ell]$ in $(A^{\leq 2^{48}})^{+}$ as it takes to compute a random instance of $x*y$ in $A_{48}$. I have also been able to compute on average 1500 values of $(\mathbf{x}*\mathbf{y})[\ell]$ in $(A^{\leq 2^{48}})^{+}$ per minute where the $\mathbf{x},\mathbf{y},\ell$ are chosen to make the calculation $(\mathbf{x}*\mathbf{y})[\ell]$ more difficult. Therefore, when $\mathbf{x},\mathbf{y},\ell$ are chosen to make the calculation more difficult, it takes approximately 800 times as long to calculate $(\mathbf{x}*\mathbf{y})[\ell]$ as it takes to calculate an entry in $A_{48}$. This is not bad for calculating $\mathbf{x}*\mathbf{y}$ to an arbitrary precision in an algebra of cardinality about $10^{10^{13.928}}$ when $|A|=2$.

Algorithm C:

Algorithm C is a modification of algorithm B that uses the same ideas as Dougherty’s method of computing in the classical Laver tables to compute in the generalized Laver tables $(A^{\leq 2^{n}})^{+}$ more efficiently.

$\mathbf{Theorem}$ Suppose that the mapping $L:A_{n-t}\rightarrow A_{n}$ defined by $L(x)=x\cdot 2^{t}$ is a homomorphism.
Define a mapping $\iota:((A^{2^{t}})^{\leq 2^{n-t}})^{+}\rightarrow(A^{\leq 2^{n}})^{+}$ by letting
$\iota((a_{1,1},…,a_{1,2^{t}})…(a_{r,1},…,a_{r,2^{t}}))=a_{1,1}…a_{1,2^{t}}…a_{r,1}…a_{r,2^{t}}$. Then $\iota$ is an injective homomorphism between generalized Laver tables.

More generally, we have the following result.

$\mathbf{Theorem}$ Suppose that the mapping $L:A_{n-t}\rightarrow A_{n}$ defined by $L(x)=x\cdot 2^{t}$ is a homomorphism. Then

1. Suppose that $|\mathbf{x}|$ is a multiple of $2^{t}$,
$\mathbf{x}=(x_{1,1}…x_{1,2^{t}})…(x_{u,1}…x_{u,2^{t}})$, and
$\mathbf{y}=(y_{1,1}…y_{1,2^{t}})…(y_{v-1,1}…y_{v-1,2^{t}})(y_{v,1}…y_{v,w})$.

Furthermore, suppose that
$$\langle x_{1,1},…,x_{1,\text{Exp}(2,t)}\rangle…\langle x_{u,1},…,x_{u,\text{Exp}(2,t)}\rangle*_{n-t}\langle y_{1,1},…,y_{1,\text{Exp}(2,t)}\rangle…\langle y_{v-1,1},…,y_{v-1,\text{Exp}(2,t)}\rangle\langle y_{v,1},…,y_{v,w}\rangle$$
$$=\langle z_{1,1},…,z_{1,\text{Exp}(2,t)}\rangle…\langle z_{p-1,1},…,z_{p-1,\text{Exp}(2,t)}\rangle\langle z_{p,1},…,z_{p,w}\rangle.$$
Then $\mathbf{x}*_{n}\mathbf{y}=(z_{1,1}…z_{1,\text{Exp}(2,t)})…(z_{p-1,1}…z_{p-1,\text{Exp}(2,t)})(z_{p,1}…z_{p,w}).$

2. Suppose that $|\mathbf{x}|$ is a multiple of $2^{t}$ and $|\mathbf{x}|<2^{n}$. Furthermore, suppose that $1<|\mathbf{y}|<2^{t}$. Then $\mathbf{x}\circ_{n}\mathbf{y}=\mathbf{x}\mathbf{y}.$

One can compute $\mathbf{x}*_{n}\mathbf{y}$ recursively with the following algorithm:

1. First we select a natural number $t$ with $t\leq n$ and $n-t\leq\text{Gcd}(2^{\infty},2t)$.
2. If $|\mathbf{y}| > \text{Exp}(2,o_{n}(|\mathbf{x}|))$, then evaluate $\mathbf{x}*_{n}\mathbf{y}$ to
$\mathbf{x}*_{n}(\mathbf{y})_{\text{Exp}(2,o_{n}(|\mathbf{x}|))}$.
3. If $|\mathbf{x}|=2^{n}$, then evaluate $\mathbf{x}*_{n}\mathbf{y}$ to $\mathbf{y}$.
4. If $|\mathbf{x}| < 2^{t}$, then evaluate $\mathbf{x}*_{n}\mathbf{y}a$ to $(\mathbf{x}*_{n}\mathbf{y})*_{n}\mathbf{x}a.$
5. Assume $|\mathbf{x}|$ is a multiple of $2^{t}$. Suppose that
$\mathbf{x}=x_{1,1}…x_{1,2^{t}}…x_{u,1}…x_{u,2^{t}}$ and $\mathbf{y}=y_{1,1}…y_{1,2^{t}}…y_{v-1,1}…y_{v-1,2^{t}}y_{v,1}…y_{v,w}$. Suppose also that

$\langle x_{1,1},…,x_{1,2^{t}}\rangle…\langle x_{u,1},…,x_{u,2^{t}}\rangle*_{n-t} \langle y_{1,1},…,y_{1,2^{t}}\rangle…\langle y_{v-1,1},…,y_{v-1,2^{t}}\rangle\langle y_{v,1},…,y_{v,w}\rangle$

$=\langle z_{1,1},…,z_{1,2^{t}}\rangle…\langle z_{p-1,1},…,z_{p-1,2^{t}}\rangle\langle z_{p,1},…,z_{p,w}\rangle$.

Then evaluate $\mathbf{x}*_{n}\mathbf{y}$ to $z_{1,1}…z_{1,2^{t}}…z_{p-1,1}…z_{p-1,2^{t}}z_{p,1}…z_{p,w}$.

6. Assume now that the conditions in steps 2-5 do not hold and $2^{t}<|\mathbf{x}|<2^{n}$. Then let $\mathbf{x}=\mathbf{x}_{1}\mathbf{x}_{2}$ where $\mathbf{x}_{2}=(\mathbf{x})_{\text{Exp}(2,t)}$. Then $\mathbf{x}*_{n}\mathbf{y}=(\mathbf{x}_{1}\circ_{n}\mathbf{x}_{2})*_{n}\mathbf{y}= \mathbf{x}_{1}*_{n}(\mathbf{x}_{2}*_{n}\mathbf{y})$ (the composition $\circ_{n}$ on $(A^{\leq 2^{n}})^{+}$ is a partial function). We therefore evaluate $\mathbf{x}*_{n}\mathbf{y}$ to $\mathbf{x}_{1}*_{n}(\mathbf{x}_{2}*_{n}\mathbf{y})$.

As with algorithms A and B, there is a local version of algorithm C that quickly computes the particular letter $(\mathbf{x}*\mathbf{y})[\ell]$. Both the local and the global versions of algorithm C are about 7 times faster than the corresponding versions of algorithm B. Algorithm C for computing generalized Laver tables has been implemented online here.

Conclusion

The simple fact that calculating $(\mathbf{x}*\mathbf{y})[\ell]$ is apparently hundreds of times more difficult than calculating in $A_{48}$ is rather encouraging, since this difficulty suggests that the generalized Laver tables have combinatorial intricacies that go far beyond the classical Laver tables. These combinatorial intricacies can be seen in the data computed from the generalized Laver tables.

Much knowledge and understanding can be gleaned from simply observing computer calculations. Dougherty’s result which allowed one to compute $A_{48}$ in the first place was proven only because computer calculations allowed Dougherty to make the correct conjectures which were necessary to obtain the results. Most of my understanding of the generalized Laver tables $(A^{\leq 2^{n}})^{+}$ has not come from sitting down and proving theorems, but from observing the data computed from the generalizations of Laver tables. There are many patterns within the generalized Laver tables that can be discovered through computer calculations.

While the problems of computing the generalized Laver tables have been solved to my satisfaction, there are many things about generalizations of Laver tables which I would like to compute but for which I have not obtained an efficient algorithm for computing. I am currently working on computing the fundamental operations in endomorphic Laver tables and I will probably make a post about endomorphic Laver table computation soon.

All of the algorithms mentioned here have been implemented using GAP, and they are available online at boolesrings.org/jvanname/lavertables.

I must mention that this project on the generalizations of Laver tables is the first mathematical project I have done that makes use of computer calculations.

1. Critical points in an algebra of elementary embeddings, II. Randall Dougherty. 1995.

## Set theory seminar notes-April 1, 2016-Generalized Laver tables part II.

On April 1, 2016 I gave a talk at 10:00 a.m. at the Set Theory seminar at the Graduate Center titled Generalized Laver Tables Part II. This talk is a continuation of the topic I spoke about here last semester, and it is part of the research project on generalizations of Laver tables that I have been investigating since July of 2015. Here are the notes for the talk.

Classical Laver tables

The $n$-th classical Laver table is the unique algebra $A_{n}=(\{1,…,2^{n}\},*)$ where

1. $x*1=x+1$ whenever $x<2^{n}$
2. $2^{n}*1=1$
3. $x*(y*z)=(x*y)*(x*z)$.
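This definition can be run directly on a computer. By self-distributivity, $x*y=x*((y-1)*1)=(x*(y-1))*(x+1)$, and it is standard that $2^{n}$ is a left identity of $A_{n}$ and that every entry of row $x<2^{n}$ exceeds $x$, so the table can be filled in starting from row $2^{n}$. A minimal Python sketch (the function name is mine):

```python
def laver_table(n):
    """Multiplication table of the n-th classical Laver table A_n.

    Returns T with T[x][y] = x*y for x, y in {1, ..., 2**n}.
    Uses x*1 = x+1 together with the derived recursion
    x*y = (x*(y-1))*(x+1); since every entry of row x lies
    strictly above x, rows can be filled from 2**n downward.
    """
    N = 2 ** n
    T = [[0] * (N + 1) for _ in range(N + 1)]
    T[N] = list(range(N + 1))        # 2**n is a left identity: 2**n * y = y
    for x in range(N - 1, 0, -1):
        T[x][1] = x + 1              # x*1 = x+1 for x < 2**n
        for y in range(2, N + 1):
            T[x][y] = T[T[x][y - 1]][x + 1]
    return T
```

For example, `laver_table(2)` reproduces $A_{2}$: row $1$ is $2,4,2,4$, row $2$ is $3,4,3,4$, row $3$ is $4,4,4,4$, and row $4$ is $1,2,3,4$.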

Let $\mathcal{E}_{\lambda}$ denote the set of all elementary embeddings $j:V_{\lambda}\rightarrow V_{\lambda}$. Suppose that $j\in\mathcal{E}_{\lambda}$ is a rank-into-rank embedding and $\gamma<\lambda$. Then there is some $n$ where $\langle j\rangle/\equiv^{\gamma}$ is isomorphic to $A_{n}$.

Generalizations of Laver tables

The following list describes all the generalizations of the notion of a classical Laver table that I have investigated.

Classical Laver tables: These algebras have one generator and one binary operation. The braid group acts on these algebras.

Generalized Laver tables: These algebras have multiple generators but one binary operation. These algebras are always locally finite.

Endomorphic Laver tables: These algebras can have multiple generators and multiple operations, and the operations can have arbitrary arity. Endomorphic Laver tables are usually infinite, and they seem to be very difficult to compute in, partly due to the immense size of the outputs. There are generalizations of the notion of a braid group (positive braid monoid) that act on the endomorphic Laver tables. For instance, suppose that $G_{n}$ is the group presented by $\{\sigma_{i}\mid 0\leq i<n\}$ with relations $\sigma_{i}\sigma_{i+1}\sigma_{i+2}\sigma_{i}=\sigma_{i+2}\sigma_{i}\sigma_{i+1}\sigma_{i+2}$ and
$\sigma_{i}\sigma_{j}=\sigma_{j}\sigma_{i}$ whenever $|i-j|>2$, and let $G_{n}^{+}$ be the monoid presented by the same generators and relations. Then $G_{n}^{+}$ acts on $X^{n-2}$. Furthermore, the algebra $G_{\omega}$ can be given a ternary self-distributive operation which I conjecture gives rise to free ternary self-distributive algebras.
Partially endomorphic Laver tables: These algebras have multiple generators and multiple operations, the operations can have arbitrary arity, and only some of the operations distribute over each other. These algebras are always infinite.

Twistedly endomorphic Laver tables: These algebras have multiple generators and multiple operations of arbitrary arity, and they satisfy twisted self-distributivity laws such as $t(a,b,t(x,y,z))=t(t(a,b,x),t(b,a,y),t(a,b,z))$. Any semigroup can be used to generate more complicated twisted self-distributivity identities. Twistedly endomorphic Laver tables do not seem to arise in any way from algebras of elementary embeddings.

So far I am the only researcher working on the generalizations of Laver tables, although last July another researcher (a knot theorist) expressed interest and has some ideas for research on generalized Laver tables.

Permutative LD-systems

Suppose that $*$ is a binary function symbol. Then define the Fibonacci terms $t_{n}(x,y)$ by the following rules:

1. $t_{1}(x,y)=y$
2. $t_{2}(x,y)=x$
3. $t_{n+2}(x,y)=t_{n+1}(x,y)*t_{n}(x,y)$.

For example, if $x=y=1$ and $*$ is addition $+$, then $t_{n}(x,y)$ is simply the $n$-th Fibonacci number.

The first few Fibonacci terms are

• $t_{3}(x,y)=x*y$
• $t_{4}(x,y)=(x*y)*x$
• $t_{5}(x,y)=((x*y)*x)*(x*y)$
• $t_{6}(x,y)=(((x*y)*x)*(x*y))*((x*y)*x).$
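The Fibonacci-term recursion is easy to mechanize. A small Python sketch (the function name is mine, not from this post) that builds $t_{n}(x,y)$ as a fully parenthesized string:

```python
def fibonacci_term(n, x="x", y="y"):
    """The n-th Fibonacci term t_n(x, y) as a parenthesized string.

    Follows the rules t_1 = y, t_2 = x, t_{n+2} = t_{n+1}*t_n.
    """
    if n == 1:
        return y
    a, b = y, x                           # (t_1, t_2)
    for _ in range(n - 2):
        a, b = b, "({}*{})".format(b, a)  # t_{k+2} = t_{k+1}*t_k
    return b
```

Up to the outermost parentheses, `fibonacci_term(3)` through `fibonacci_term(6)` reproduce the four terms listed above.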

The algebra of elementary embeddings $(\mathcal{E}_{\lambda},*,\circ)$ satisfies the braid identity $j\circ k=(j*k)\circ j$. Furthermore, by applying the braid identity multiple times, we obtain $j\circ k=(j*k)\circ j=((j*k)*j)\circ(j*k)=…=t_{n+1}(j,k)\circ t_{n}(j,k).$

Now suppose that $(\kappa_{n})_{n\in\omega}$ is a cofinal increasing sequence in $\lambda$. Then define a metric $d$ on $\mathcal{E}_{\lambda}$ by letting $d(j,k)=\frac{1}{n}$ whenever $j\neq k$ where $n$ is the least natural number with $j|_{V_{\kappa_{n}}}\neq k|_{V_{\kappa_{n}}}$ and where $d(j,j)=0$.

$\mathbf{Proposition:}$ $(\mathcal{E}_{\lambda},d)$ is a complete metric space without isolated points.

$\textbf{Corollary:}$ $|\mathcal{E}_{\lambda}|\geq 2^{\aleph_{0}}$.

$\textbf{Proof:}$ This follows from the fact that every complete metric space without any isolated points has cardinality at least continuum.

$\textbf{Proposition:}$ Let $j,k\in\mathcal{E}_{\lambda}$. Then

1. if $\textrm{crit}(j)>\textrm{crit}(k)$, then
   1. $t_{2n+1}(j,k)\rightarrow j\circ k$
   2. $t_{2n}(j,k)\rightarrow Id_{V_{\lambda}}$
2. if $\textrm{crit}(j)\leq\textrm{crit}(k)$, then
   1. $t_{2n}(j,k)\rightarrow j\circ k$
   2. $t_{2n+1}(j,k)\rightarrow Id_{V_{\lambda}}$.

An LD-system is an algebra $(X,*)$ that satisfies the identity $x*(y*z)=(x*y)*(x*z)$.

Let $X$ be an LD-system. An element $x\in X$ is said to be a left-identity if $x*y=y$ for each $y\in X$. Let $\textrm{Li}(X)$ denote the set of all left-identities of $X.$ A subset $L\subseteq X$ is said to be a left-ideal if $y\in L$ implies that $x*y\in L$ as well.

An LD-system $X$ is said to be permutative if

1. $\textrm{Li}(X)$ is a left-ideal in $X$ and
2. for all $x,y\in X$ there exists an $n$ with $t_{n}(x,y)\in \textrm{Li}(X)$.

The motivation behind the notion of a permutative LD-system is that the permutative LD-systems capture the notion that the Fibonacci terms converge to the identity without any reference to any topology.
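Both conditions can be checked mechanically on a finite LD-system. A Python sketch that verifies them for the classical Laver table $A_{2}$ (multiplication table hard-coded below; its unique left identity is $4$):

```python
# Rows of A_2: T[x][y] = x*y for x, y in {1, 2, 3, 4}.
T = [None,
     [None, 2, 4, 2, 4],
     [None, 3, 4, 3, 4],
     [None, 4, 4, 4, 4],
     [None, 1, 2, 3, 4]]
X = [1, 2, 3, 4]
star = lambda x, y: T[x][y]
Li = [x for x in X if all(star(x, y) == y for y in X)]  # left identities

def some_fib_term_in_li(x, y, bound=16):
    """Is there an n <= bound with t_n(x, y) in Li(X)?"""
    a, b = y, x                  # (t_1, t_2)
    for _ in range(bound):
        if a in Li or b in Li:
            return True
        a, b = b, star(b, a)     # advance the Fibonacci terms
    return False

assert Li == [4]
assert all(star(x, y) in Li for x in X for y in Li)     # Li(X) is a left ideal
assert all(some_fib_term_in_li(x, y) for x in X for y in X)
```

So $A_{2}$ is permutative, and the same brute-force check works for any finite LD-system.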

A permutative LD-system $X$ is said to be reduced if $|\textrm{Li}(X)|=1$.

Example: $\mathcal{E}_{\lambda}/\equiv^{\gamma}$ is a reduced permutative LD-system.

Proposition: Let $X$ be a permutative LD-system. Let $\simeq$ be the equivalence relation on $X$ where $x\simeq y$ iff $x=y$ or if $x,y\in \textrm{Li}(X)$. Then $\simeq$ is a congruence on $X$ and $X/\simeq$ is a reduced permutative LD-system.

An LD-monoid is an algebra $(X,*,\circ,1)$ where $(X,\circ,1)$ is a monoid and where

1. $x*(y\circ z)=(x*y)\circ(x*z)$, $(x\circ y)*z=x*(y*z)$, and $x\circ y=(x*y)\circ x$
2. $x*1=1$ and $1*x=x$.

Example: $(\mathcal{E}_{\lambda},*,\circ,1)$ is an LD-monoid.

Theorem: Suppose that $(X,*)$ is a reduced permutative LD-system with left-identity $1$. Define an operation $\circ$ on $X$ by letting $x\circ y=t_{n+1}(x,y)$, where $n$ is the least natural number with $t_{n}(x,y)=1$. Then $(X,*,\circ,1)$ is an LD-monoid.
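For a finite reduced permutative LD-system this composition can be computed directly by iterating the Fibonacci terms until the left identity appears. A Python sketch for $A_{2}$, whose unique left identity $4$ plays the role of $1$ (table hard-coded, names mine):

```python
# Rows of A_2: T[x][y] = x*y for x, y in {1, 2, 3, 4}; 4 is the left identity.
T = [None,
     [None, 2, 4, 2, 4],
     [None, 3, 4, 3, 4],
     [None, 4, 4, 4, 4],
     [None, 1, 2, 3, 4]]
star = lambda x, y: T[x][y]
E = 4                            # the left identity of A_2

def circ(x, y):
    """x o y = t_{n+1}(x, y), where n is least with t_n(x, y) = E.

    Terminates because A_2 is permutative: some t_n(x, y) lies in Li(A_2).
    """
    a, b = y, x                  # (t_n, t_{n+1}) starting at n = 1
    while a != E:
        a, b = b, star(b, a)     # advance to (t_{n+1}, t_{n+2})
    return b
```

One can then confirm by brute force that $(A_{2},*,\circ,4)$ satisfies all of the LD-monoid identities listed above.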

Critical points:

Define $x^{n}*y=x*(x*(…*(x*y)))$ ($n$ copies of $x$). More formally, $x^{0}*y=y$ and $x^{n+1}*y=x*(x^{n}*y)=x^{n}*(x*y)$.

Suppose that $X$ is a permutative LD-system and $x,y\in X$. Then define $\textrm{crit}(x)\leq \textrm{crit}(y)$ iff there exists some $n$ where $x^{n}*y\in \textrm{Li}(X)$.

Theorem: Suppose that $X$ is a permutative LD-system. Then

1. $\textrm{crit}(x)\leq \textrm{crit}(x)$
2. $\textrm{crit}(x)\leq \textrm{crit}(y)$ and $\textrm{crit}(y)\leq \textrm{crit}(z)$ implies $\textrm{crit}(x)\leq \textrm{crit}(z)$.
3. $\textrm{crit}(x)\leq \textrm{crit}(y)$ implies $\textrm{crit}(r*x)\leq \textrm{crit}(r*y)$.
4. $\textrm{crit}(x)$ is maximal in $\{\textrm{crit}(y)\mid y\in X\}$ if and only if $x\in \textrm{Li}(X)$.
5. $\{\textrm{crit}(x)\mid x\in X\}$ is a linear ordering.
6. If $(X,*)$ is reduced, then $\textrm{crit}(x\circ y)=Min(\textrm{crit}(x),\textrm{crit}(y)).$
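The relation $\textrm{crit}(x)\leq\textrm{crit}(y)$ is likewise decidable by brute force in a finite permutative LD-system. A sketch for $A_{2}$ (table hard-coded, names mine):

```python
# Rows of A_2: T[x][y] = x*y for x, y in {1, 2, 3, 4}; Li(A_2) = {4}.
T = [None,
     [None, 2, 4, 2, 4],
     [None, 3, 4, 3, 4],
     [None, 4, 4, 4, 4],
     [None, 1, 2, 3, 4]]
Li = {4}

def crit_le(x, y, bound=8):
    """crit(x) <= crit(y) iff x^n * y lies in Li(X) for some n."""
    for _ in range(bound):
        if y in Li:
            return True
        y = T[x][y]              # replace y by x*y
    return False
```

In $A_{2}$ this recovers the linear order $\textrm{crit}(1)=\textrm{crit}(3)<\textrm{crit}(2)<\textrm{crit}(4)$, with $\textrm{crit}(4)$ maximal as item 4 predicts, since $4\in\textrm{Li}(A_{2})$.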

Let $\textrm{crit}[X]=\{\textrm{crit}(x)\mid x\in X\}$. If $x\in X$, then define the mapping $x^{\sharp}:\textrm{crit}[X]\rightarrow \textrm{crit}[X]$ by $x^{\sharp}(\textrm{crit}(y))=\textrm{crit}(x*y)$.

The motivation behind the function $x^{\sharp}$ is the basic fact about elementary embeddings that if $j,k\in\mathcal{E}_{\lambda}$, then $j(\textrm{crit}(k))=\textrm{crit}(j*k)$.

Theorem: Suppose that $X$ is a permutative LD-system.

1. $x^{\sharp}(\alpha)\geq\alpha$ for $x\in X,\alpha\in \textrm{crit}[X]$.
2. $x^{\sharp}(\alpha)>\alpha$ if and only if $\alpha\geq \textrm{crit}(x)$ and $\alpha\neq Max(\textrm{crit}[X])$.
3. Let $A=\{\alpha\in \textrm{crit}[X]\mid x^{\sharp}(\alpha)\neq Max(\textrm{crit}[X])\}$. Then $x^{\sharp}|_{A}$ is injective.

$\mathbf{Theorem}$ Let $X$ be a permutative LD-system and let $\simeq$ be a congruence on $X$. Then there is a partition $A,B$ of $\textrm{crit}[X]$ such that $A$ is a downwards closed subset, $B$ is an upwards closed subset, and

1. whenever $\textrm{crit}(x)\in B$ there is some $y\in \textrm{Li}(X)$ with $x\simeq y$ and
2. whenever $\textrm{crit}(x)\in A$ and $x\simeq y$ we have $\textrm{crit}(x)=\textrm{crit}(y)$.

Furthermore, if $X$ is reduced, then $\simeq$ is also a congruence with respect to the composition operation $\circ$.

For example, suppose that $X$ is a permutative LD-system and $\alpha\in\textrm{crit}[X]$. Then there exists some $r\in X$ with $\textrm{crit}(r)=\alpha$ and where $r*r\in \textrm{Li}(X)$. Therefore define an equivalence relation $\equiv^{\alpha}$ by letting $x\equiv^{\alpha}y$ if and only if $r*x=r*y$. Then $\equiv^{\alpha}$ is a congruence on $X$ that does not depend on the choice of $r$. In the above theorem, if $\simeq$ is the equivalence relation $\equiv^{\alpha}$, then $A=\{\beta\in\textrm{crit}[X]\mid\beta<\alpha\}$ and $B=\{\beta\in\textrm{crit}[X]\mid\beta\geq\alpha\}$.

Generalized Laver tables

Let $A$ be a set. Let $A^{+}$ denote the set of all nonempty strings over the alphabet $A$. Let $\preceq$ denote the prefix ordering on $A^{+}$. Let $L\subseteq A^{+}$ be a downwards closed subset such that $L\cap B^{+}$ is finite whenever $B\in[A]^{<\omega}$. Let $M=\{\mathbf{x}a\mid\mathbf{x}\in L,a\in A\}\cup A$. Let $F=M\setminus L$. Then there is a unique operation $*$ on $M$ such that

1. $\mathbf{x}*a=\mathbf{x}a$ whenever $\mathbf{x}\in L,a\in A$
2. $\mathbf{x}*\mathbf{y}=\mathbf{y}$ whenever $\mathbf{x}\in F$.
3. $\mathbf{x}*\mathbf{y}a=(\mathbf{x}*\mathbf{y})*\mathbf{x}a$ whenever $\mathbf{x}\in L$, $a\in A$, and $\mathbf{y},\mathbf{y}a\in M$.

The algebra $(M,*)$ is called a pre-generalized Laver table. If $(M,*)$ is an LD-system, then we shall call $(M,*)$ a generalized Laver table. Every generalized Laver table is a permutative LD-system.

Let $|\mathbf{x}|$ denote the length of a string $\mathbf{x}$.
If $A$ is a set, then $(A^{\leq 2^{n}})^{+}=\{\mathbf{x}\in A^{+}:|\mathbf{x}|\leq 2^{n}\}$ is a generalized Laver table. The operation $*$ on $(A^{\leq 2^{n}})^{+}$ can be computed very quickly here even though
$$|(A^{\leq 2^{n}})^{+}|=|A|\cdot\frac{|A|^{2^{n}}-1}{|A|-1}.$$
The only hindrance to computing $*$ seems to be the length of the output.
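For tiny $n$ the defining recursion for $*$ can be run verbatim (this is essentially the slow algorithm A, so it does not scale); a Python sketch of mine, with strings represented as tuples over the alphabet $A$:

```python
import sys
sys.setrecursionlimit(100000)    # the naive recursion can nest deeply

def star(x, y, N):
    """x*y in (A^{<= N})^+ with N = 2**n; x, y are nonempty tuples.

    Implements the defining rules: x*y = y when |x| = N (x is maximal,
    so it acts as a left identity), x*a = xa when |x| < N, and
    x*(ya) = (x*y)*(xa) otherwise.
    """
    if len(x) == N:
        return y
    if len(y) == 1:
        return x + y             # x*a = xa
    return star(star(x, y[:-1], N), x + y[-1:], N)
```

For instance, with $A=\{0,1\}$ and $n=2$ one can verify self-distributivity of $*$ on $(A^{\leq 4})^{+}$ by brute force over short strings.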

$\mathbf{Theorem:}$ Suppose that $j\in\mathcal{E}_{\lambda},\alpha<\lambda$. Then $(j*j)(\alpha)\leq j(\alpha)$.

$\mathbf{Proof:}$ Let $\beta$ be the least ordinal with $j(\beta)>\alpha$. Then
$$V_{\lambda}\models\forall x<\beta,j(x)\leq\alpha,$$
so by applying elementarity, we have
$$V_{\lambda}\models\forall x < j(\beta),(j*j)(x)\leq j(\alpha).$$
Therefore, since $\alpha < j(\beta)$, we have $(j*j)(\alpha)\leq j(\alpha)$.

$\mathbf{Example:}$ There exists a generalized Laver table $M$ over the alphabet $\{0,1\}$ with $|M|=64$ and $x\in M,\alpha\in \textrm{crit}[M]$ with $(x*x)^{\sharp}(\alpha)>x^{\sharp}(\alpha)$. In particular, whenever $\simeq$ is a congruence on $M$ such that $x\simeq y$ implies $\textrm{crit}(x)=\textrm{crit}(y)$, the quotient $M/\simeq$ is not isomorphic to any subalgebra of an algebra of the form $\mathcal{E}_{\lambda}/\equiv^{\gamma}$.

## Hello everyone

Hello everyone,

I am Joseph Van Name, and I have recently joined Booles’ Rings. I currently know several of the people here on Booles’ Rings through either mathoverflow.net or through the New York City logic community. I enjoy reading the mathematical posts here on Booles’ rings, and I am glad to be a part of this community.

I have requested to join Booles’ Rings in part due to my recent research endeavors towards understanding Laver tables. Through Booles’ Rings, I intend to post data, images, computer programs, and of course short mathematical expositions about these generalizations of the notion of a Laver table. Of course, I will also make posts about other areas of mathematics that I have researched in the past, including publications as well as notes and slides from past talks. I therefore plan on having two portions of my site: one portion will contain all the information on Laver tables one could ask for, while the other will cover all my other research projects.

Hopefully, through Booles’ Rings, I will use generalizations of Laver tables to establish a much-needed common ground between set theory (in particular large cardinals) and structures such as self-distributive algebras, knots, braids, and possibly other areas. By relating large cardinals to more conventional areas of mathematics, I intend to help non-set-theorists see large cardinals not as irrelevant objects that lie high above the clouds but as objects of practical importance despite their astonishing size.