The classical and generalized Laver tables can be computed quickly.

The Laver tables have the noteworthy distinction of being the only structures that originally arose in modern set theory from very large cardinals but which have been studied experimentally using computer calculations. Among other things, these computer calculations can be used to generate images of fractals that resemble the Sierpinski triangle.

Classical Laver tables computation

Recall that the classical Laver table is the unique algebra $A_{n}=(\{1,…,2^{n}\},*_{n})$ such that $x*_{n}(y*_{n}z)=(x*_{n}y)*_{n}(x*_{n}z)$, $x*_{n}1=x+1$ for $x<2^{n}$, and $2^{n}*_{n}1=1$. The operation $*_{n}$ is known as the application operation. Even though the definition of the classical Laver tables is quite simple, the classical Laver tables are combinatorially very complicated structures, and there does not appear to be an efficient algorithm for computing the classical Laver tables like there is for ordinary multiplication.
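
For example, the four rows of $A_{2}$ (from $x=1$ to $x=4$) are $[2,4,2,4]$, $[3,4,3,4]$, $[4,4,4,4]$, and $[1,2,3,4]$; for instance, $1*_{2}2=1*_{2}(1*_{2}1)=(1*_{2}1)*_{2}(1*_{2}1)=2*_{2}2=4$ follows directly from the defining identities.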

If $x,r$ are positive integers, then let $(x)_{r}$ denote the unique integer in $\{1,…,r\}$ such that $x=(x)_{r}\,(\text{mod}\,r)$. The mappings $\phi:A_{n+1}\rightarrow A_{n},\iota:A_{n}\rightarrow A_{n+1}$ defined by $\phi(x)=(x)_{\text{Exp}(2,n)}$ and $\iota(x)=x+2^{n}$ are homomorphisms between self-distributive algebras. If one has an efficient algorithm for computing $A_{n+1}$, then these homomorphisms allow one to compute $A_{n}$ efficiently as well. Therefore, the problem of computing $A_{n}$ gets more difficult as $n$ gets larger.
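
In particular, since $\phi$ fixes every element of $\{1,…,2^{n}\}$, we have $x*_{n}y=(x*_{n+1}y)_{\text{Exp}(2,n)}$ for all $x,y\in A_{n}$, so each classical Laver table is a quotient of the next one.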

Around 1995, Randall Dougherty wrote an algorithm that computes the application operation in $A_{48}$. This algorithm is outlined in his paper [1] and will be outlined in this post as well. So far no one has written an algorithm that improves upon Dougherty’s, nor has anyone been able to compute even $A_{49}$. However, with enough effort, it may be possible to compute in $A_{n}$ for $n\leq 96$ with today’s computational resources, but I do not think that anyone is willing to exert the effort to compute $A_{96}$ at this moment, since in order to compute in $A_{96}$ one would need to construct a rather large lookup table.

The basic Laver table algorithms

We shall begin by outlining three algorithms for computing in the classical Laver tables, and after discussing these three algorithms, we shall explain Dougherty’s algorithm.

Algorithm 1:

The easiest algorithm to write for computing the classical Laver tables is the one that directly uses the definition of a classical Laver table. In other words, in this algorithm we evaluate $x*1$ to $(x+1)_{\text{Exp}(2,n)}$, and we evaluate $x*y$ to $(x*(y-1))*(x+1)_{\text{Exp}(2,n)}$ whenever $y>1$.
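
Here is a minimal GAP sketch of this recursion (the function name laver1 is my own, and n is assumed to already hold the exponent):

laver1:=function(x,y)
  # directly unfolds the defining identities; impractically slow beyond A_4
  if y=1 then return (x mod 2^n)+1;
  else return laver1(laver1(x,y-1),(x mod 2^n)+1); fi;
end;

With n:=2, for example, laver1(1,2) returns 4, in agreement with the rows of $A_{2}$ listed above.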

This algorithm is extremely inefficient: it works for $A_{4}$ on my computer, but I have not been able to compute in $A_{5}$ using it. Even though this algorithm is terrible for computing the application operation in the classical Laver tables, a modification of it can be used to calculate the application operation in generalized Laver tables very efficiently.

Algorithm 2:

In this algorithm, we fill up the entire multiplication table for $A_{n}$ by computing $x*y$ by a double induction which is descending on $x$ and, for each $x$, ascending on $y$. Here is the GAP code for constructing the multiplication table with algorithm 2 (the multiplication table for $A_{n}$ is implemented in GAP as a list of lists).

# n is assumed to already hold the desired table size
table:=[]; table[2^n]:=[1..2^n];        # last row: 2^n * y = y
for i in Reversed([1..2^n-1]) do
  table[i]:=[i+1];                      # x * 1 = x + 1
  for j in [2..2^n] do table[i][j]:=table[table[i][j-1]][i+1]; od;
od;

I have been able to calculate $A_{13}$ using this algorithm before running out of memory.

The difference between algorithm 1 and algorithm 2 is analogous to the difference between two algorithms for computing the Fibonacci numbers. The following GAP program fibslow for computing the Fibonacci numbers is analogous to algorithm 1.

fibslow:=function(x) if x=1 or x=2 then return 1; else return fibslow(x-1)+fibslow(x-2); fi; end;

This program takes about fibslow(x) many steps to compute fibslow(x), which is very inefficient. Algorithm 1 is inefficient for similar reasons. However, by computing the Fibonacci numbers in sequence and storing them in memory as a list, one obtains the following much more efficient program fibfast for computing the Fibonacci numbers.

fibfast:=function(x) local list,i;
if x<3 then return 1;
else list:=[1,1]; for i in [3..x] do list[i]:=list[i-1]+list[i-2]; od; return list[x]; fi; end;

One can calculate the classical Laver tables much more quickly using algorithm 2 instead of using algorithm 1 for reasons similar to why the Fibonacci numbers are more easily computable using fibfast than fibslow.

Algorithm 3:

One of the first things one notices when observing the classical Laver tables is that their rows are periodic, and this periodicity allows the classical Laver tables to be computed more quickly. Click here for the full multiplication tables of the Laver tables up to $A_{5}$.

Algorithm 3 is similar to algorithm 2 except that one computes only one period for each row. For example, instead of computing the entire second row $[3,12,15,16,3,12,15,16,3,12,15,16,3,12,15,16]$ of $A_{4}$, in algorithm 3 one simply computes $[3,12,15,16]$ once and observes that $2*_{4}x=2*_{4}(x)_{4}$ whenever $1\leq x\leq 16$.

Using this algorithm, I am able to calculate up to $A_{22}$ on a modern computer before running out of memory. If one compresses the data computed by this algorithm, then one should be able to calculate up to $A_{27}$ or $A_{28}$ before running out of memory.

The lengths of the periods of the rows in the classical Laver tables are all powers of 2, and these periods are usually quite small. Let $o_{n}(x)$ denote the least natural number such that $x*_{n}2^{o_{n}(x)}=2^{n}$. The motivation behind $o_{n}(x)$ lies in the fact that $x*_{n}y=x*_{n}z$ if and only if $y\equiv z\,(\text{mod}\,\text{Exp}(2,o_{n}(x)))$, so $2^{o_{n}(x)}$ is the period of the $x$-th row in $A_{n}$. The maximum value of $o_{n}(x)$ is $n$, but $o_{n}(x)$ is usually quite small. We have $E(o_{10}(x))=2.943$, $E(o_{20}(x))=3.042$, and $E(o_{48}(x))=3.038$ (here $E$ denotes the expected value, and $E(o_{48}(x))$ has been calculated from a random sample from $A_{48}$ of size 1000000). Therefore, since $o_{n}(x)$ is usually small, one can calculate and store the $x$-th row in memory without using too much space or time, even without compressing the computed data.
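
Here is a rough GAP sketch of algorithm 3 along these lines (the names row, per, and z are my own, and n is assumed to be bound; row[x] stores one period of the $x$-th row and per[x] its length):

row:=[]; per:=[];
row[2^n]:=[1..2^n]; per[2^n]:=2^n;
for x in Reversed([1..2^n-1]) do
  row[x]:=[x+1]; per[x]:=1;
  while row[x][per[x]]<>2^n do
    z:=row[x][per[x]];                        # z = x * per[x]
    Add(row[x],row[z][(x mod per[z])+1]);     # x * (per[x]+1) = z * (x+1)
    per[x]:=per[x]+1;
  od;
od;

One can then read off $x*_{n}y$ as row[x][((y-1) mod per[x])+1].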

Dougherty’s algorithm

Dougherty’s algorithm for computing in the classical Laver tables is based on the following results. Here $\circ_{n}$ denotes the composition operation on $A_{n}$, the operation characterized by the identity $(x\circ_{n}y)*_{n}z=x*_{n}(y*_{n}z)$.

$\mathbf{Lemma}$ Suppose that $t\leq n$ and $L:A_{n-t}\rightarrow A_{n}$ is the mapping defined by $L(x)=x\cdot 2^{t}$. If the mapping $L$ is a homomorphism, then

  1. $2^{t}x*_{n}y=(y)_{\text{Exp}(2,t)}-2^{t}+2^{t}\cdot(x*_{n-t}(\frac{y-(y)_{\text{Exp}(2,t)}}{\text{Exp}(2,t)}+1))$ whenever $1\leq x\leq 2^{n-t}$ and $1\leq y\leq 2^{n}$ and
  2. $2^{t}x\circ_{n}y=2^{t}x+y$ whenever $1\leq y<2^{t}$ and $1\leq x<2^{n-t}$.
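
For instance, in $A_{2}$ with $t=1$ (the premise holds here by Dougherty’s theorem below), taking $x=1$ and $y=3$ gives $(y)_{2}=1$ and $\frac{y-(y)_{2}}{2}+1=2$, so $2*_{2}3=1-2+2\cdot(1*_{1}2)=-1+2\cdot 2=3$, in agreement with the second row $[3,4,3,4]$ of $A_{2}$.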

The following result of Dougherty gives examples where the premise of the above lemma holds.

$\mathbf{Theorem}$ (Dougherty) Suppose that $t=2^{r}$ for some $r$ and $n\leq 3t$. Then the mapping $L:A_{n-t}\rightarrow A_{n}$ defined by $L(x)=x\cdot 2^{t}$ is a homomorphism.
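
In particular, taking $t=16=2^{4}$ and $n=48\leq 3\cdot 16$ shows that the mapping $L:A_{32}\rightarrow A_{48}$ defined by $L(x)=x\cdot 2^{16}$ is a homomorphism; this is the case used below to compute $A_{48}$.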

More generally Dougherty has proven the following result.

$\mathbf{Theorem}$ (Dougherty) Assume that $s\leq n$ and $n-s\leq\text{Gcd}(2^{\infty},2s)$. Then the mapping $L:A_{n-s}\rightarrow A_{n}$ defined by
$L(x)=x\cdot 2^{s}$ is a homomorphism.

Now assume that $t=2^{r}$, $n\leq 3t$, and that $x*_{n}y$ has already been computed whenever $x\leq 2^{t},y\leq 2^{n}$. Then we may compute $x*_{n}y$ for any $x,y\in A_{n}$ by using the following algorithm (a rough GAP sketch is given after the list):

  1. If $x<2^{t}$ then $x*_{n}y$ has already been computed so there is no work to be done here.
  2. If $x$ is a multiple of $2^{t}$, then we may use the above lemma (whose premise holds by Dougherty’s theorem) to compute $x*_{n}y$, and we may use this same algorithm recursively to compute the operation $*_{n-t}$.
  3. Suppose that the above cases do not hold. Then by part 2 of the above lemma we have $x=(x-(x)_{\text{Exp}(2,t)})\circ_{n}(x)_{\text{Exp}(2,t)}$. Therefore, we evaluate $x*_{n}y$ using the fact that
    $x*_{n}y=((x-(x)_{\text{Exp}(2,t)})\circ_{n}(x)_{\text{Exp}(2,t)})*_{n}y=(x-(x)_{\text{Exp}(2,t)})*_{n}((x)_{\text{Exp}(2,t)}*_{n}y)$.
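
Here is a rough GAP sketch of these three cases (this is not Dougherty’s actual code; SmallStar(x,y) stands for an assumed lookup of the precomputed values for $x\leq 2^{t}$, StarNT(x,y) stands for an assumed routine for $*_{n-t}$, for instance this same algorithm applied with a smaller value of $t$, and n and t are assumed to be bound):

LaverStar:=function(x,y)
  local s,q;
  if x<=2^t then return SmallStar(x,y); fi;          # case 1: precomputed rows
  if x mod 2^t=0 then                                # case 2: formula from the lemma
    s:=((y-1) mod 2^t)+1; q:=(y-s)/2^t+1;            # y = 2^t*(q-1)+s
    return s-2^t+2^t*StarNT(x/2^t,q);
  fi;
  s:=((x-1) mod 2^t)+1;                              # case 3: x = (x-s) o_n s
  return LaverStar(x-s,LaverStar(s,y));
end;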

Using the above algorithm and the precomputed values $x*_{48}y$ for $x\leq 2^{16}$, I have been able to compute 1,200,000 random instances of $x*_{48}y$ in a minute on my computer. One could also easily compute in $A_{48}$ by hand using this algorithm with only the precomputed values $x*_{48}y$ where $x\leq 2^{16}$ for reference (this reference can fit into a book).

Dougherty’s algorithm for computing the classical Laver tables has been implemented here.

Precomputing the $x$-th row for $x\leq 2^{t}$.

In order for Dougherty’s algorithm to work for $A_{n}$, we must first compute the $x$-th row in $A_{n}$ for $x\leq 2^{t}$. However, one can compute this data by induction on $n$. In particular, if $n<3t$ and one has an algorithm for computing $A_{n}$, then one can use Dougherty's algorithm along with a suitable version of algorithm 3 to compute the $x$-th row in $A_{n+1}$ for $x\leq 2^{t}$. I was able to compute from scratch $x*_{48}y$ for $x\leq 2^{16}$ in 757 seconds.

Generalized Laver tables computation

Let $n$ be a natural number and let $A$ be a set. Let $A^{+}$ be the set of all non-empty strings over the alphabet $A$. Then let $(A^{\leq 2^{n}})^{+}=\{\mathbf{x}\in A^{+}:|\mathbf{x}|\leq 2^{n}\}$. Then there is a unique self-distributive operation $*$ on $(A^{\leq 2^{n}})^{+}$ such that $\mathbf{x}*a=\mathbf{x}a$ whenever $|\mathbf{x}|<2^{n},a\in A$ and $\mathbf{x}*\mathbf{y}=\mathbf{y}$ whenever $|\mathbf{x}|=2^{n}$. The algebra $((A^{\leq 2^{n}})^{+},*)$ is an example of a generalized Laver table. If $|A|=1$, then $(A^{\leq 2^{n}})^{+}$ is isomorphic to $A_{n}$. If $|A|>1$, then the algebra $(A^{\leq 2^{n}})^{+}$ is quite large since $|(A^{\leq 2^{n}})^{+}|=|A|\cdot\frac{|A|^{\text{Exp}(2,n)}-1}{|A|-1}$.
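
For example, when $A=\{a,b\}$ and $n=1$, the algebra $(A^{\leq 2})^{+}$ has the six elements $a,b,aa,ab,ba,bb$, and one computes, say, $a*bb=a*(b*b)=(a*b)*(a*b)=ab*ab=ab$ using self-distributivity together with the two defining rules.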

Let $\mathbf{x}[n]$ denote the $n$-th letter in the string $\mathbf{x}$ (we start off with the first letter instead of the zero-th letter). For example, $\text{martha}[5]=\text{h}$.

When computing the application operation in $(A^{\leq 2^{n}})^{+}$, we may want to compute the entire string $\mathbf{x}*\mathbf{y}$ or we may want to compute a particular position $(\mathbf{x}*\mathbf{y})[\ell]$. These two problems are separate since it is much easier to compute an individual position $(\mathbf{x}*\mathbf{y})[\ell]$ than it is to compute the entire string $\mathbf{x}*\mathbf{y}$, but computing $\mathbf{x}*\mathbf{y}$ by calculating each $(\mathbf{x}*\mathbf{y})[\ell]$ individually is quite inefficient.

We shall present several algorithms for computing generalized Laver tables starting with the most inefficient algorithm and then moving on to the better algorithms. These algorithms will all assume that one already has an efficient algorithm for computing the application operation in the classical Laver tables.

Algorithm A:

If $x,y\in\{1,…,2^{n}\}$, then let $FM_{n}^{+}(x,y)$ denote the integer such that in $(A^{\leq 2^{n}})^{+}$ if $FM_{n}^{+}(x,y)>0$ then $(a_{1}…a_{x}*b_{1}…b_{2^{n}})[y]=b_{FM_{n}^{+}(x,y)}$ and if $FM_{n}^{+}(x,y)<0$ then $(a_{1}...a_{x}*b_{1}...b_{2^{n}})[y]=a_{-FM_{n}^{+}(x,y)}$. If one has an algorithm for computing $FM_{n}^{+}(x,y)$, then one can compute the application operation simply by referring to $FM_{n}^{+}(x,y)$. The function $FM_{n}^{+}$ can be computed using the same idea which we used in algorithm 2 to calculate the classical Laver tables. In particular, in this algorithm, we compute $FM_{n}^{+}(x,y)$ by a double induction on $x,y$ which is descending on $x$ and for each $x$ the induction is ascending on $y$. I was able to calculate up to $FM_{13}^{+}$ using this algorithm. Using the Sierpinski triangle fractal structure of the final matrix, I could probably compute up to $FM_{17}^{+}$ or $FM_{18}^{+}$ before running out of memory.
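
For instance, one can check from the definition that in $(A^{\leq 4})^{+}$ we have $a_{1}a_{2}*b_{1}b_{2}b_{3}b_{4}=a_{1}a_{2}b_{3}b_{4}$, so $FM_{2}^{+}(2,1)=-1$, $FM_{2}^{+}(2,2)=-2$, $FM_{2}^{+}(2,3)=3$, and $FM_{2}^{+}(2,4)=4$.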

Algorithm B:

Algorithm B for computing in the generalized Laver tables is a modified version of algorithm 1. Counterintuitively, even though algorithm 1 is very inefficient for calculating in the classical Laver tables, algorithm B is very efficient for computing the application operation in generalized Laver tables.

If $\mathbf{x}$ is a string, then let $|\mathbf{x}|$ denote the length of the string $\mathbf{x}$ (for example, $|\text{julia}|=5$). If $\mathbf{x}$ is a non-empty string and $n$ a natural number, then let $(\mathbf{x})_{n}$ denote the string where we remove the first $|\mathbf{x}|-(|\mathbf{x}|)_{n}$ elements of $\mathbf{x}$ and keep the last $(|\mathbf{x}|)_{n}$ elements in the string $\mathbf{x}$. For example, $(\text{elizabeth})_{5}=\text{beth}$ and $(\text{solianna})_{4}=\text{anna}$.

One calculates $\mathbf{x}*\mathbf{y}$ in $(A^{\leq 2^{n}})^{+}$ using the following procedure (a GAP sketch is given after the list):

  1. If $|\mathbf{x}|=2^{n}$ then $\mathbf{x}*\mathbf{y}=\mathbf{y}$.
  2. If $|\mathbf{y}|>\text{Exp}(2,o_{n}(|\mathbf{x}|))$, then $\mathbf{x}*\mathbf{y}=\mathbf{x}*(\mathbf{y})_{\text{Exp}(2,o_{n}(|\mathbf{x}|))}$. In this case, one should therefore evaluate $\mathbf{x}*\mathbf{y}$ to $\mathbf{x}*(\mathbf{y})_{\text{Exp}(2,o_{n}(|\mathbf{x}|))}$.
  3. If $|\mathbf{x}|*_{n}|\mathbf{y}|=|\mathbf{x}|+|\mathbf{y}|$ and $|\mathbf{y}|\leq \text{Exp}(2,o_{n}(|\mathbf{x}|))$, then
    $\mathbf{x}*_{n}\mathbf{y}=\mathbf{x}\mathbf{y}$. Therefore, one should evaluate $\mathbf{x}*_{n}\mathbf{y}$ to $\mathbf{x}\mathbf{y}$ in this case.
  4. If 1-3 do not hold and $\mathbf{y}=\mathbf{z}a$, then one should evaluate $\mathbf{x}*_{n}\mathbf{y}$ to $(\mathbf{x}*_{n}\mathbf{z})*\mathbf{x}a$.
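
Here is a rough GAP sketch of this procedure, with strings represented as GAP lists of letters (the names GenStar, ClassicalStar, and OrdOf are my own; ClassicalStar(x,y) and OrdOf(x) are assumed routines returning $x*_{n}y$ and $o_{n}(x)$ for the classical Laver table $A_{n}$, and n is assumed to be bound):

GenStar:=function(x,y)
  local p;
  if Length(x)=2^n then return y; fi;                              # case 1
  p:=2^OrdOf(Length(x));
  if Length(y)>p then                                              # case 2: shorten y
    return GenStar(x,y{[Length(y)-((Length(y)-1) mod p)..Length(y)]});
  fi;
  if ClassicalStar(Length(x),Length(y))=Length(x)+Length(y) then   # case 3: concatenate
    return Concatenation(x,y);
  fi;
  return GenStar(GenStar(x,y{[1..Length(y)-1]}),                   # case 4: y = za
                 Concatenation(x,[y[Length(y)]]));
end;

With $A=\{1,2\}$ and n:=1, for instance, GenStar([1],[2,2]) should return [1,2], matching the computation $a*bb=ab$ given earlier.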

It is not feasible to compute the entire string $\mathbf{x}*\mathbf{y}$ in $(A^{\leq 2^{n}})^{+}$ when $n$ is much larger than 20 due to the size of the outputs. Nevertheless, it is very feasible to compute the symbol $(\mathbf{x}*\mathbf{y})[\ell]$ in $(A^{\leq 2^{n}})^{+}$ whenever $n\leq 48$ by using a suitable modification of algorithm B. Using this modified version of algorithm B, I have been able to compute on my computer about 3000 random values of $(\mathbf{x}*\mathbf{y})[\ell]$ in $(A^{\leq 2^{48}})^{+}$ per minute. To put this in perspective, computing a random instance of $(\mathbf{x}*\mathbf{y})[\ell]$ in $(A^{\leq 2^{48}})^{+}$ took on average about 400 times as long as computing a random instance of $x*y$ in $A_{48}$. I have also been able to compute on average 1500 values of $(\mathbf{x}*\mathbf{y})[\ell]$ in $(A^{\leq 2^{48}})^{+}$ per minute where $\mathbf{x},\mathbf{y},\ell$ are chosen to make the calculation more difficult; in that case, it takes approximately 800 times as long to calculate $(\mathbf{x}*\mathbf{y})[\ell]$ as it takes to calculate an entry in $A_{48}$. This is not bad for calculating $\mathbf{x}*\mathbf{y}$ to an arbitrary precision in an algebra of cardinality about $10^{10^{13.928}}$ when $|A|=2$.

Algorithm C:

Algorithm C is a modification of algorithm B that uses the same ideas as Dougherty’s method of computing in the classical Laver tables in order to compute in the generalized Laver tables $(A^{\leq 2^{n}})^{+}$ more efficiently.

$\mathbf{Theorem}$ Suppose that $L:A_{n-t}\rightarrow A_{n}$ is the mapping defined by $L(x)=x\cdot 2^{t}$ for all $x\in A_{n-t}$ and $L$ is a homomorphism.
Define a mapping $\iota:((A^{2^{t}})^{\leq 2^{n-t}})^{+}\rightarrow(A^{\leq 2^{n}})^{+}$ by letting
$\iota((a_{1,1},…,a_{1,2^{t}})…(a_{r,1},…,a_{r,2^{t}}))=a_{1,1}…a_{1,2^{t}}…a_{r,1}…a_{r,2^{t}}$. Then $\iota$ is an injective homomorphism between generalized Laver tables.

More generally, we have the following result.

$\mathbf{Theorem}$ Suppose that $L:A_{n-t}\rightarrow A_{n}$ is the mapping defined by $L(x)=x\cdot 2^{t}$ and the mapping $L$ is a homomorphism. Then

  1. Suppose that $|\mathbf{x}|$ is a multiple of $2^{t}$,
    $\mathbf{x}=(x_{1,1}…x_{1,2^{t}})…(x_{u,1}…x_{u,2^{t}})$, and
    $\mathbf{y}=(y_{1,1}…y_{1,2^{t}})…(y_{v-1,1}…y_{v-1,2^{t}})(y_{v,1}…y_{v,w})$.

    Furthermore, suppose that
    $$\langle x_{1,1},…,x_{1,\text{Exp}(2,t)}\rangle…\langle x_{u,1},…,x_{u,\text{Exp}(2,t)}\rangle*_{n-t}
    \langle y_{1,1},…,y_{1,\text{Exp}(2,t)}\rangle…\langle y_{v-1,1},…,y_{v-1,\text{Exp}(2,t)}\rangle\langle y_{v,1},…,y_{v,w}\rangle$$
    $$=\langle z_{1,1},…,z_{1,\text{Exp}(2,t)}\rangle…\langle z_{p-1,1},…,z_{p-1,\text{Exp}(2,t)}\rangle\langle z_{p,1},…,z_{p,w}\rangle.$$
    Then $\mathbf{x}*_{n}\mathbf{y}=(z_{1,1}…z_{1,\text{Exp}(2,t)})…(z_{p-1,1}…z_{p-1,\text{Exp}(2,t)})(z_{p,1}…z_{p,w}).$

  2. Suppose that $|\mathbf{x}|$ is a multiple of $2^{t}$ and $|\mathbf{x}|<2^{n}$. Furthermore, suppose that $1<|\mathbf{y}|<2^{t}$. Then $\mathbf{x}\circ_{n}\mathbf{y}=\mathbf{x}\mathbf{y}.$
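
For instance, take $n=2$, $t=1$ (so that $L:A_{1}\rightarrow A_{2}$ is a homomorphism by Dougherty’s theorem), $\mathbf{x}=a_{1}a_{2}$, and $\mathbf{y}=b_{1}b_{2}b_{3}b_{4}$. In $((A^{2})^{\leq 2})^{+}$ one can check that $\langle a_{1},a_{2}\rangle*_{1}\langle b_{1},b_{2}\rangle\langle b_{3},b_{4}\rangle=\langle a_{1},a_{2}\rangle\langle b_{3},b_{4}\rangle$, so part 1 gives $a_{1}a_{2}*_{2}b_{1}b_{2}b_{3}b_{4}=a_{1}a_{2}b_{3}b_{4}$, in agreement with the value of $FM_{2}^{+}$ given earlier.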

One can compute $\mathbf{x}*_{n}\mathbf{y}$ recursively with the following algorithm:

  1. First we select a natural number $t$ with $t\leq n$ and $n-t\leq\text{Gcd}(2^{\infty},2t)$.
  2. If $|\mathbf{y}|>\text{Exp}(2,o_{n}(|\mathbf{x}|))$, then evaluate $\mathbf{x}*_{n}\mathbf{y}$ to
    $\mathbf{x}*_{n}(\mathbf{y})_{\text{Exp}(2,o_{n}(|\mathbf{x}|))}$.
  3. If $|\mathbf{x}|=2^{n}$, then evaluate $\mathbf{x}*_{n}\mathbf{y}$ to $\mathbf{y}$.
  4. If $|\mathbf{x}|<2^{t}$ and $\mathbf{y}=\mathbf{z}a$, then evaluate $\mathbf{x}*_{n}\mathbf{y}$ to $(\mathbf{x}*_{n}\mathbf{z})*_{n}\mathbf{x}a$.
  5. Assume $|\mathbf{x}|$ is a multiple of $2^{t}$. Suppose that
    $\mathbf{x}=x_{1,1}…x_{1,2^{t}}…x_{u,1}…x_{u,2^{t}}$ and $\mathbf{y}=y_{1,1}…y_{1,2^{t}}…y_{v-1,1}…y_{v-1,2^{t}}y_{v,1}…y_{v,w}$. Suppose also that

    $\langle x_{1,1},…,x_{1,2^{t}}\rangle…\langle x_{u,1},…,x_{u,2^{t}}\rangle*_{n-t}
    \langle y_{1,1},…,y_{1,2^{t}}\rangle…\langle y_{v-1,1},…,y_{v-1,2^{t}}\rangle\langle y_{v,1},…,y_{v,w}\rangle$

    $=\langle z_{1,1},…,z_{1,2^{t}}\rangle…\langle z_{p-1,1},…,z_{p-1,2^{t}}\rangle\langle z_{p,1},…,z_{p,w}\rangle$.

    Then evaluate $\mathbf{x}*_{n}\mathbf{y}$ to $z_{1,1}…z_{1,2^{t}}…z_{p-1,1}…z_{p-1,2^{t}}z_{p,1}…z_{p,w}$.

  6. Assume now that 1-5 do not hold and $2^{t}<|\mathbf{x}|<2^{n}$. Then let $\mathbf{x}=\mathbf{x}_{1}\mathbf{x}_{2}$ where $\mathbf{x}_{2}=(\mathbf{x})_{\text{Exp}(2,t)}$. Then $\mathbf{x}*_{n}\mathbf{y}=(\mathbf{x}_{1}\circ_{n}\mathbf{x}_{2})*_{n}\mathbf{y}=\mathbf{x}_{1}*_{n}(\mathbf{x}_{2}*_{n}\mathbf{y})$ (the composition operation $\circ_{n}$ on $(A^{\leq 2^{n}})^{+}$ is a partial function). We therefore evaluate $\mathbf{x}*_{n}\mathbf{y}$ to $\mathbf{x}_{1}*_{n}(\mathbf{x}_{2}*_{n}\mathbf{y})$.
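
As a small illustration of step 6 in the $n=2$, $t=1$ setting above, take $\mathbf{x}=a_{1}a_{2}a_{3}$ and $\mathbf{y}=b_{1}$; then $\mathbf{x}_{1}=a_{1}a_{2}$, $\mathbf{x}_{2}=a_{3}$, and one can check that $\mathbf{x}_{1}*_{2}(\mathbf{x}_{2}*_{2}\mathbf{y})=a_{1}a_{2}*_{2}a_{3}b_{1}=a_{1}a_{2}a_{3}b_{1}$, which agrees with the value $\mathbf{x}*_{2}\mathbf{y}=a_{1}a_{2}a_{3}b_{1}$ obtained directly from the definition.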

As with algorithms A and B, there is a local version of algorithm C that quickly computes a particular letter $(\mathbf{x}*\mathbf{y})[\ell]$. Both the local and the global versions of algorithm C are about 7 times faster than the corresponding versions of algorithm B. Algorithm C for computing generalized Laver tables has been implemented online here.

Conclusion

The simple fact that calculating $(\mathbf{x}*\mathbf{y})[\ell]$ is apparently hundreds of times more difficult than calculating in $A_{48}$ is rather encouraging, since this difficulty suggests that the generalized Laver tables have combinatorial intricacies that go far beyond the classical Laver tables. These combinatorial intricacies can be seen in the data computed from the generalized Laver tables.

Much knowledge and understanding can be gleaned from simply observing computer calculations. Dougherty’s result which allowed one to compute $A_{48}$ in the first place was proven only because computer calculations allowed Dougherty to make the correct conjectures which were necessary to obtain the results. Most of my understanding of the generalized Laver tables $(A^{\leq 2^{n}})^{+}$ has not come from sitting down and proving theorems, but from observing the data computed from the generalizations of Laver tables. There are many patterns within the generalized Laver tables that can be discovered through computer calculations.

While the problems of computing the generalized Laver tables have been solved to my satisfaction, there are many things about the generalizations of Laver tables which I would like to compute but for which I have not yet found an efficient algorithm. I am currently working on computing the fundamental operations in endomorphic Laver tables, and I will probably make a post about endomorphic Laver table computation soon.

All of the algorithms mentioned here have been implemented by me in GAP, and they are also available online at boolesrings.org/jvanname/lavertables.

I must mention that this project on the generalizations of Laver tables is the first mathematical project that I have done which makes use of computer calculations.

[1] Randall Dougherty. Critical points in an algebra of elementary embeddings, II. 1995.
