# Hindman’s Theorem, partial semigroups and some of my most lacking intuitions (part 4)

Well, I hope you didn’t miss me while I was on my first summer vacation in three years. So let’s continue this series. If you remember, part 3 consisted mainly of the observation that FS-sets have a partial semigroup structure induced by $\mathbb{F}$, as well as me telling you that there’s an immediate correspondence between FS-sets and idempotent ultrafilters.

I’m slowly getting where I wanted to head all along with this series. When I write “my most lacking intuitions” in the title, I have my intuitions about central sets in mind. They are most lacking, I assure you. But with this series I wanted to clear my head a little. So let’s head down the rabbit hole, no questions asked.

## Towards the Central Sets Theorem

The Central Sets Theorem was conceived by Hillel Furstenberg. I know relatively little about its history, so I am still amazed by the fact that anyone could come up with it — it’s such a strange creature. Furstenberg proved it for $\mathbb{Z}$ (but I’ll keep considering $\mathbb{N}$; the two situations are equivalent anyway). The notion of centrality has its origins in ergodic theory — unsurprisingly for Furstenberg. As fascinating and fruitful as the connection between ergodic Ramsey theory and Algebra in the Stone-Čech compactification is, I don’t plan to introduce the technology in this series, because it would take us too far from the path I have in mind (mind you, I haven’t even introduced the semigroup structure on $\beta \mathbb{N}$, so really, I shouldn’t introduce the ergodic point of view, of which I know far less).

Furstenberg devised the notion of a central set (for subsets of $\mathbb{N}$), which was determined by recurrence phenomena in dynamical systems. Again, I don’t want to discuss the dynamical point of view, but I’ll give the ultrafilter characterization later. The key for a connection to Ramsey theory was simple.

Theorem (Furstenberg) Central sets (whatever they are) are partition regular.

Whatever central sets are (sorry for being temporarily mysterious), it is shocking how much algebraic structure central sets have to offer — which is what the Central Sets Theorem is all about. It took a while until Neil Hindman and Vitaly Bergelson made the connection between the algebraic/ultrafilter side and the ergodic side apparent. Nevertheless, the precise level of the algebraic closure of central sets is still a mystery. And this mystery is the reason for this series.

## A detour: FP-sets and their condensations

One thing we should do before formulating the theorem is the following. What does an FS-set mean in the context of $\mathbb{F}$? Generally speaking, in a (partial) semigroup we have FP-sets (“finite products” instead of “finite sums”): given a sequence $(x_n: n\in \omega)$ we write

$$FP(x_n) = \{ \prod_{i \in s} x_i : s \in \mathbb{F} \}.$$

Of course, in the partial semigroup scenario, we should also think of this as a statement restricted to defined products. However, usually (e.g., in Hindman’s Theorem) all products will be defined. Also, in the non-commutative case (which I wholeheartedly ignore in this series), this notation is supposed to be read “in order”, i.e., the products are in the natural order of $s \subseteq \omega$.

For a crucial example, consider $\mathbb{F}$ itself.

If we have a sequence $(s_n : n\in \omega)$ in $\mathbb{F}$ such that the $s_n$ are pairwise disjoint, then the FP-set will be just fine — all products are defined in our partial semigroup.

This is a critical example also because we can transfer such partial subsemigroups easily: If $FP(s_n) \subseteq \mathbb{F}$ and we have some $FS(x_n) \subseteq \mathbb{N}$, then we can consider $y_n := \sum_{i\in s_n} x_i$. As long as the $s_n$ are pairwise disjoint, we get

$$FS(y_n) \subseteq FS(x_n).$$

So we have a partial subsemigroup of $FS(x_n)$ induced by a partial subsemigroup of $\mathbb{F}$!
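Since everything at each stage is finite, this transfer is easy to sanity-check in Python. The `fs` helper below is my own (not from the post), computing the FS-set of a finite sequence:

```python
from itertools import chain, combinations

def fs(xs):
    """All sums over nonempty subsets of the finite sequence xs."""
    subsets = chain.from_iterable(
        combinations(range(len(xs)), k) for k in range(1, len(xs) + 1))
    return {sum(xs[i] for i in s) for s in subsets}

# a generating sequence x_n and pairwise disjoint s_0, s_1, s_2 in F:
xs = [1, 10, 100, 1000, 10000, 100000]
s = [{0, 1}, {2, 3}, {4, 5}]
ys = [sum(xs[i] for i in t) for t in s]  # y_n = sum of x_i over i in s_n

# the condensation stays inside the original FS-set:
assert fs(ys) <= fs(xs)
```

Here `ys == [11, 1100, 110000]`, and every sum of distinct `y`’s is indeed a sum of distinct `x`’s — precisely because the index sets are disjoint.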

And this is actually a typical phenomenon thanks to Hindman’s Theorem. Remember,

Hindman’s Theorem If we partition an FS-set into finitely many pieces, one piece will contain an FS-set.

### Condensations and Intuitions

There’s one important question when it comes to developing an intuition: what should we expect when we partition again and again? One typical phenomenon is the following: take some $FS(x_n)$ and partition it into two pieces, where one piece contains exactly the elements of the generating sequence $\{ x_n : n \in \omega \}$. By Hindman’s Theorem, one part of the partition will contain an FS-set — but that’s not going to be $\{ x_n : n \in \omega \}$! Consider the case $x_n = 2^n$: then $\{ 2^n : n \in \omega \}$ certainly does not contain an FS-set; it does not even satisfy the conclusion of Schur’s Theorem!

So what happens in this case? Well, we can easily describe many FS-sets in the other part of the partition; e.g., take every other generator and add: $y_n= x_{2n} +x_{2n+1}$. Then $FS(y_n)$ is good for the second part of the partition.

More generally, we could take any pairwise disjoint $(s_n : n\in \omega)$ in $\mathbb{F}$, just make sure that no $s_n$ is a singleton. Then as above, $FP(s_n)$ induces an FS-subset of $FS(x_n)$ — which will lie completely in the “large” part of the partition.
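For $x_n = 2^n$ we can watch this happen on a finite piece. As before, `fs` is a hand-rolled helper computing finite sums, not anything standard:

```python
from itertools import chain, combinations

def fs(xs):
    """All sums over nonempty subsets of the finite sequence xs."""
    subsets = chain.from_iterable(
        combinations(range(len(xs)), k) for k in range(1, len(xs) + 1))
    return {sum(xs[i] for i in s) for s in subsets}

xs = [2**n for n in range(8)]
generators = set(xs)                            # the "small" part of the partition
ys = [xs[2*n] + xs[2*n + 1] for n in range(4)]  # y_n = x_{2n} + x_{2n+1}

# FS(y_n) is an FS-subset of FS(x_n) avoiding the generators entirely:
assert fs(ys) <= fs(xs)
assert fs(ys).isdisjoint(generators)
```

Each $y_n$ uses at least two binary digits, so no sum of distinct $y_n$’s can be a single power of $2$.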

A word of caution: in a certain sense, partitions as the one above are unusually simple because one part does not contain an FS-set. In general, you should expect all parts to contain FS-sets (for example, when separating different idempotent ultrafilters). Nevertheless, I would say that a huge chunk of arguments regarding strongly summable ultrafilters relies on such “simple” partitions — so they are extremely useful.

The point I’m trying to make is that whenever we repeatedly partition an FS-set, Hindman’s Theorem will give us homogeneous FS-sets — but you should expect the elements in the generating sequence to be sums of many elements of the original generators!

This is why a sequence $(y_n: n \in \omega)$ in some $FS(x_n)$ with $FS(y_n)\subseteq FS(x_n)$ is called a condensation of $(x_n :n \in \omega)$ — because generally speaking many elements from $x_n$ are condensed into one of the $y_n$ (of course, some $x_n$ might just be dropped completely). The term “condensation” is also used in arbitrary (partial) semigroups.

Oh and to be absolutely clear:

Sequences in $\mathbb{F}$ are always assumed to be pairwise disjoint (so that they are pairwise compatible in our partial operation).

Alright, that’s enough for this part, I think. Next time, I’ll finally talk about the Central Sets Theorem.

# Hindman’s Theorem, partial semigroups and some of my most lacking intuitions (part 3)

Ok, time for part 3. We’re not close to an end but I must apologize that I won’t be able to post in the next week. But let’s recap. In the first part I simply explained why semigroups are not partition regular and in the second part I wrote about FS-sets and, finally, introduced partial semigroups. One of the headings promised that partial semigroups will help us talk about FS-sets.

## Are we there yet?

At this point you might also ask why I talked about partial semigroups so much when I wanted to give you an easy description of FS-sets.

Well, it’s because I did just that: FS-sets are partial semigroups; in fact, they are a lot like $\mathbb{F}$!

Why? Because for a sequence $(x_n : n < N)$ (for some $N\leq \omega$)

$$FS(x_n) := \{ \sum_{i \in s} x_i : s \in \mathbb{F} \}.$$

So we have an induced partial operation on $FS(x_n)$.

A word of warning: an FS-set can have more structure than $\mathbb{F}$ induces — just look at $FS(2^n)$ $(= \mathbb{N})$ or, in a different form, at $FS(n)$ — same set, but much messier when it comes to the relationship between the generating sequence and the set of finite sums!

In general, we cannot assume that each element of an FS-set has a unique description via some $s\in \mathbb{F}$ nor can we assume that the $\mathbb{F}$-induced operation captures all allowable sums. So, really, the whole affair is far from optimal!
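To see this concretely, we can count how many $s \in \mathbb{F}$ describe the same element; `representations` is my own helper for this finite experiment:

```python
from itertools import chain, combinations

def representations(xs, target):
    """All index sets s with sum of xs[i] over i in s equal to target."""
    subsets = chain.from_iterable(
        combinations(range(len(xs)), k) for k in range(1, len(xs) + 1))
    return [s for s in subsets if sum(xs[i] for i in s) == target]

# x_n = 2^n: every element of FS(2^n) has a unique description (its binary digits)
assert len(representations([2**n for n in range(6)], 21)) == 1

# x_n = n + 1: the same element typically has many descriptions
assert len(representations([n + 1 for n in range(6)], 6)) == 4  # {6},{2,4},{1,5},{1,2,3}
```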

Nevertheless, if we look at the partial semigroup operation on an FS-set induced by $\mathbb{F}$, we get a very natural partial semigroup structure with plenty of richness, even if we don’t catch all aspects.

From this point of view, we could re-phrase Hindman’s Theorem as follows.

Hindman’s Theorem (bad version) Partial semigroups are partition regular.

This is, of course, a most horrible example of general nonsense: hiding a concrete structure by using an abstract notion. We’re not getting an arbitrary kind of partial semigroup, we’re getting FS-sets, plain and simple.

The reason why I am writing this exposition is that, as much as I believe the preceding paragraph, I think there could be something valuable in this formulation: if we could develop the notion of partial semigroup further, we might just end up solving one of the big unknowns in this field. But before I can take you in that direction, we need to talk about ultrafilters.

## Ultrafilters on semigroups

When it comes to infinite (and sometimes even finite) Ramsey-type theorems, it is often easy (or at least nice) to give a proof via ultrafilters. The reason is very simple: if we partition a set into finitely many pieces, an ultrafilter contains exactly one of the pieces. This allows for a simplistic strategy: construct the right kind of ultrafilter and it will do all the work for you!

In the case of FS-sets and Hindman’s Theorem there is an even closer relation to ultrafilters, idempotent ultrafilters to be exact. I don’t want to prove anything here, so whatever idempotent ultrafilters are, they are a special kind of ultrafilter on semigroups (but one that exists in ZFC). They come up in the most popular proof of Hindman’s Theorem, the so-called Galvin-Glazer Theorem (so-called since neither of them was involved in getting the proof published, and the way I’ll write it, it’s really Galvin’s).

Galvin-Glazer Theorem If $p \in \beta \mathbb{N}$ is an idempotent ultrafilter and $A\in p$, then $A$ contains an FS-set.

Hindman’s Theorem is then an immediate corollary.

I love the odd history of Hindman’s Theorem, so allow me to digress. Around 1970, Galvin actually had the proof but didn’t know if idempotent ultrafilters existed (and he had a different name for them because he was thinking about them the “wrong” way). Then in 1972, Hindman used CH and, assuming “his” theorem, built an ultrafilter as Galvin needed. Then in 1974, Hindman proved “his” theorem combinatorially. Yet in 1975, Galvin met Glazer, who told him that idempotents exist (which, looked at the “right” way, had been known since 1958!).

Coming back to the relation between FS-sets and ultrafilters, there’s also a reverse:

If $A$ contains an FS-set, then $A$ is in an idempotent ultrafilter.

In other words, $A$ contains an FS-set iff $A$ is in an idempotent ultrafilter.

Again, the proof is not important right now (don’t worry I’ll probably get back to it later). What is important is that, in this sense, the structure of FS-sets and idempotent ultrafilters is immediately connected.

And the point, in fact the point of this whole series, is that this relationship is missing for stronger algebraic Ramsey Theorems.

If you read this far, my thanks. I won’t be able to post anything next week, but there’s more to follow (and written but not yet prepped for posting). So stay tuned!

# Hindman’s Theorem, partial semigroups and some of my most lacking intuitions (part 2)

Yesterday I finally started this short series with some small thoughts regarding partial semigroups. If you don’t remember, all I did yesterday was to explain why semigroups are not, in general, partition regular using a simple partition of $\mathbb{N}$. I was trying to keep your hopes up by pointing out Schur’s Theorem.

## So how far can we push?

There are essentially two paths to choose from here. The next step in a decent textbook would probably tell you about van der Waerden’s Theorem (1927), Rado’s Theorem (1933) or the Folkman-Rado-Sanders Theorem (Sanders being the first to publish it, in his 1968 PhD thesis, but usually the last to be mentioned because Folkman had some “private communication” with Graham and Rothschild).

The first two theorems lead down a different path, towards partition regular matrices which, from the point of view of Schur’s Theorem, could be described as “getting more linear combinations”. The last one, on the other hand, could be described as “getting some algebraic closure” and this is where we came from in the last post. But I would like to skip ahead or else we’ll still be sitting here next week.

Let me get back to talking to you about Hindman’s Theorem. Skipping its curious history, let’s see where Schur left us. We have sums of two elements — very often. But what about three? Or any finite number? Well, we know that repetition is a problem — after all, that’s what prevented subsemigroups from being a partition regular notion. So we have to drop repetitions. What does that leave us with? Distinct sums.

FS-set Given a sequence $(x_n: n<N)$ (with $N \leq \omega$, but we’re really interested in $\omega$) let’s denote the set of distinct finite sums of elements from this sequence by $FS(x_n)$, i.e., $$FS(x_n) = \{ x_{i_0} + \ldots + x_{i_k} : i_0, \ldots, i_k < N \mbox{ pairwise different} \}.$$

So we collect all results of summing elements from our sequence without doing any repetitions. Fair enough. This seems to be as far as we can get without getting at an actual subsemigroup.

If we’re really trying to look at this structure, we should do a reality check. How rich are such sets? Well, first of all, $FS(2^n) = \mathbb{N}$, so, yeah, they can be pretty rich. On the other hand, you have the potential of huge gaps — $FS(10^n)$ has $1, 10, 11, 100, 101, 110, 111, 1000$ and so forth — immense gaps will appear, but at each new element from the sequence, we’ll see a large cluster of numbers at relatively close distance. That’s not bad given our counterexample for subsemigroups.
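Both observations are easy to check on an initial segment; the `fs` helper below is a small function of my own computing finite sums:

```python
from itertools import chain, combinations

def fs(xs):
    """All sums over nonempty subsets of the finite sequence xs."""
    subsets = chain.from_iterable(
        combinations(range(len(xs)), k) for k in range(1, len(xs) + 1))
    return {sum(xs[i] for i in s) for s in subsets}

# FS(2^n) fills a whole initial segment of N...
assert fs([2**n for n in range(6)]) == set(range(1, 64))

# ...while FS(10^n) is sparse, with ever-growing gaps:
elems = sorted(fs([10**n for n in range(5)]))
gaps = [b - a for a, b in zip(elems, elems[1:])]
assert elems[:8] == [1, 10, 11, 100, 101, 110, 111, 1000]
assert max(gaps) == 8889  # the jump from 1111 to 10000
```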

The surprising thing is that this property — to contain an FS-set — is actually partition regular. And this is exactly what Hindman showed in 1974.

Hindman’s Theorem FS-sets are partition regular.

What a great theorem! All we had to do was drop repetitions and we immediately get partition regularity. That’s a good deal. And yet, it raises the question: can we formalize the kind of algebraic closure appearing in an FS-set?

## Is there an easy way to describe FS-sets?

The thing with FS-sets is that they look awfully like subsemigroups. You can almost always add two elements together. And not just those from the underlying sequence. Given an element of an FS-set, all but finitely many elements from the underlying sequence can be added to it.

But even when we have two different sums, chances aren’t too bad that we can add them: the only restriction is that the two sums must have been built from disjoint sets of elements of the underlying sequence.

Thankfully, we can describe this messy business efficiently by introducing one of my favorite and one of the most important structures: $\mathbb{F}$.

What is $\mathbb{F}$? Following other researchers (especially Andreas Blass) I will denote the finite non-empty subsets of $\omega$ by $\mathbb{F}$.

$$\mathbb{F} := [\omega]^{<\omega} \setminus \{ \emptyset \}.$$

Why on earth would I introduce a new notation for such a standard set-theoretic object, you ask? Well, mostly because I didn’t — others did. But it makes sense because our interest is in a certain algebraic structure on $\mathbb{F}$. (Personally, I also like the blackboard boldface F — it’s pretty.)

So what structure? There already exists a very well known and important algebraic structure on $[\omega]^{<\omega}$ — the group operation of symmetric difference $\Delta$. This operation gives us the countable Boolean group: every element is its own inverse, i.e., we’re talking about $\bigoplus_{i \in\omega} \mathbb{Z}_2$ here.

On the other hand, $[\omega]^{<\omega}$ has a very (I mean very) natural semigroup operation: taking the union, $\cup$. This operation is, of course, as far away from being a group operation as seems possible; every element is idempotent!

It’s not historically relevant (as far as I know), but the part that is interesting here is the “intersection” of these operations, i.e., when these two different operations agree.
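In Python’s set vocabulary, where `^` is symmetric difference and `|` is union, the agreement is exactly disjointness; here is a brute-force check over a finite slice of $\mathbb{F}$:

```python
from itertools import combinations

# all nonempty subsets of {0, 1, 2, 3}: a finite slice of F
F = [frozenset(c) for k in range(1, 5) for c in combinations(range(4), k)]

for s in F:
    for t in F:
        # symmetric difference and union agree exactly when s and t are disjoint
        assert (s ^ t == s | t) == s.isdisjoint(t)
```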

Yes, I am making all this fuss just to talk about disjoint unions. You’ll say, that’s ridiculous! Disjoint unions are easy! Well, you’re right. And that’s exactly the point, actually. Addition is such a pain! It is subtle and messy (all that carrying over). Disjoint unions are soooo much easier, right?

(Right. Just you wait.)

So what about disjoint unions? Isn’t that a pretty tough restriction? If I were trying to do the usual mathematicians-love-obfuscation thing, I could probably make a big deal out of the fact that restricting our operations to disjoint unions eliminates all multiples (which is, like, super important, because I was, like, talking about that all the time, right??? — that is, of course, getting it backwards, so don’t think that way, please!).

The thing that is interesting is that disjoint unions still have a kind of associativity law (warning: I’m going to use $\cdot$ now — don’t freak out — but we’re not in $(\mathbb{N},+)$ anymore)

$$(a \cdot b) \cdot c = a \cdot (b \cdot c)$$
in the sense that, if one side is defined, so is the other and they are equal.

Of course, “they are equal” is not the point here — after all, we have restricted a (semi)group operation — so for us the key should be: if one side is defined then so is the other. In other words, our operation is relatively rich.

This is what Bergelson, Blass and Hindman dubbed “strong associativity” in their seminal paper. A set with a partial operation satisfying strong associativity they called a partial semigroup.
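Strong associativity of the disjoint-union operation can be checked by brute force. I encode “undefined” as `None` below; that encoding and the name `dot` are my conventions, not standard notation:

```python
from itertools import combinations

def dot(s, t):
    """Partial operation on F: union, defined only when s and t are disjoint."""
    return s | t if s.isdisjoint(t) else None

# a finite slice of F: nonempty subsets of {0,...,4} of size at most 3
F = [frozenset(c) for k in range(1, 4) for c in combinations(range(5), k)]

for a in F:
    for b in F:
        for c in F:
            ab, bc = dot(a, b), dot(b, c)
            left = dot(ab, c) if ab is not None else None
            right = dot(a, bc) if bc is not None else None
            # one side is defined iff the other is, and then they agree
            assert left == right
```

Both sides are defined exactly when $a, b, c$ are pairwise disjoint, and then both equal $a \cup b \cup c$.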

We have a slight problem though: the empty operation would vacuously be strongly associative, so “rich” was really an exaggeration. That’s why you’d always want to assume that there’s really something going on. A pretty natural richness condition (which disjoint unions have) is the following.

Given finitely many elements, there’s another one that can be multiplied to any of them.

We might not be able to multiply two given elements, but at least each element has some elements it can be multiplied with.

What we’re looking at is simply a finite-intersection-property. Namely, the family of sets “can be multiplied with $x$” has the finite intersection property. (What we really want is the infinite finite intersection property, but that’s not too important right now.) For historical reference, if we have this, it’s called an adequate partial semigroup — and we’re not interested in anything else.

Technically, we will need to say from where we multiply. I usually prefer “from the right” and write the set as $\sigma(x) := \{ y : x\cdot y \mbox{ defined } \}$.

## Are you messing with me (again)?

After I have forced this new notion upon you, here’s the thing: every partial semigroup can be extended to a semigroup. All you have to do is to add one element, say $U$, which takes all the “undefined” values and otherwise acts as a multiplicative zero, i.e., $a \cdot U = U \cdot a = U$. It’s easy to check that, thanks to strong associativity, this operation is really associative.
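A minimal sketch of this totalization in Python; the class and all names here are my own:

```python
class Totalized:
    """Extend a partial semigroup by an absorbing element U for undefined products."""
    U = object()  # the new element swallowing all "undefined" values

    def __init__(self, partial_op):
        self.partial_op = partial_op  # should return None when undefined

    def mul(self, a, b):
        if a is Totalized.U or b is Totalized.U:
            return Totalized.U        # U acts as a multiplicative zero
        result = self.partial_op(a, b)
        return Totalized.U if result is None else result

# the disjoint-union partial semigroup on finite sets:
sg = Totalized(lambda s, t: s | t if s.isdisjoint(t) else None)

assert sg.mul(frozenset({1}), frozenset({1, 2})) is Totalized.U  # undefined -> U
assert sg.mul(frozenset({1}), frozenset({3})) == frozenset({1, 3})
assert sg.mul(Totalized.U, frozenset({1})) is Totalized.U        # U absorbs
```

Strong associativity is precisely what makes the extended operation associative: whenever one side of $(a \cdot b)\cdot c$ collapses to $U$, so does the other.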

What’s the point then? Partial semigroups have a huge advantage: they allow us to focus on the part of our algebraic semigroup structure that is important for us. Just like disjoint unions are the important structure for $\mathbb{F}$ because they eliminate, e.g., the irrelevant idempotency of individual elements under $\cup$. In addition, we will see that partial subsemigroups are much more abundant and, in my eyes, one key for understanding Hindman’s Theorem and its generalizations.

In the next part I will try to convince you that partial semigroups are indeed a good answer to the question “What kind of structure is flexible enough to survive partitioning?”

# Hindman’s Theorem, partial semigroups and some of my most lacking intuitions (part 1)

If you remember, I mentioned that I was working on a post on some research and it was getting out of hand. Well, it is still not finished, but long enough to start posting a series of posts.

It’s no secret that I love the mathematical world surrounding Hindman’s Theorem. Recently, I have been revisiting an old draft of mine. To get back into it, I want to jot down some informal notes about one of the research directions I think are worth pursuing — even though I have no proof of this (pardon the pun). At the heart of this line of thought lies the notion of partial semigroups and, especially, my ideas for weaker forms of that notion. To fully make sense of it I have to take you from the elementary algebraic structures to lesser-known structures, to ultrafilters on those, and finally to filters related to all of this.

But let me start at the beginning so that you can still enjoy reading a little bit of what I want to express without being lost after 5 minutes.

Hindman’s Theorem is a typical Ramsey-type theorem, i.e., it tells us about a certain richness in some fixed mathematical structure (such as a graph in the original theorem by Ramsey), a richness which is partition regular:

Partition regular notions of richness If the structure is partitioned into finitely many pieces (equivalently, colored with finitely many colors), one piece (color) will have this kind of richness.

More concretely, Hindman’s Theorem is about algebraic structures, namely, semigroups. Semigroups, if you don’t remember, are simply sets with an associative operation

$$(a \cdot b) \cdot c = a \cdot (b \cdot c).$$

## semigroups? that’s easy! subsemigroups!

If you are looking for partition regular notions and you meet a semigroup, you can, of course, ask yourself:

Are semigroups partition regular?

(Un)fortunately, that’s not going to work.

To give you a counterexample, let’s look at the natural numbers, $\mathbb{N}$, with the usual addition. Unlike other research into semigroups, almost all of the work in this Ramsey-theoretic area is connected to $\mathbb{N}$.

Ah, I just noticed: I have made a mistake. If you’re a set theorist and $\mathbb{N} = \omega$, then actually, this is going to work, because $0$ generates a subsemigroup ($\{0\}$), so whenever we partition $\omega$, one part will include a subsemigroup. (More generally, monoids will do this, and even more generally, semigroups with idempotent elements.)

Oh well. But that’s “clearly” not the point (especially if you’re a set theorist). You’ll most likely be interested in large, hopefully infinite structures — and in any case, you’re not interested in trivial solutions by idempotent elements…

To get back to our example, we’ll ignore $0$ by adopting the convention most people in this area use:

$$\mathbb{N} = \omega \setminus \{0\}$$

## subsemigroup #Fail.

Subsemigroups have a nice property: they contain sets of ‘multiples’. Given any element in the subsemigroup, we find all its ‘multiples’ since we can add it to itself repeatedly. Of course, for $(\mathbb{N},+)$ this really means we’re talking about multiples in the natural sense.

If we have (thanks to the heading of this section) the suspicion that semigroups are not partition regular, we need to find a partition such that no part contains a set of multiples.

When I was lying in bed a couple of days ago, falling asleep but trying to put the ideas together for this post, it took me a while to come up with a partition. Since in the end I was actually asleep, I don’t really remember how I got there. So I’m afraid I can’t lead you to it, cannot communicate what intuitions about partitions of natural numbers came up in the process. If I were to venture a guess, I probably just remembered the solution for a more complicated (i.e., weaker) structure. Not a grand revelation, I admit.

So how do we do this? Well, the thing about multiples is that they are evenly spaced. So if the parts of our partition have a large interval missing, say larger than some given number $a$, then that number’s multiples will ‘hit’ that interval, i.e., there’ll be a multiple of $a$ in that interval, not in our part of the partition.

So if we partition $\mathbb{N}$ into quickly increasing intervals, say for simplicity

$$[1,10), [10,100), [100,1000) \ldots$$

we can do the following: Collect every other interval, i.e.,

$$A_0 := \bigcup_{i\in \omega} [10^{2i},10^{2i+1})$$

and make $A_1$ its complement $A_1 = \bigcup_{i\in \omega} [10^{2i+1},10^{2i+2})$.

Then both $A_0$ and $A_1$ have increasingly large gaps; arbitrarily large gaps, in fact.

Now if we take a number in $A_0$, it will lie in one of the intervals. But its multiples cannot skip the next interval — it’s far too big for that. I mean, if you start in $[100,1000)$, then your best chance really is $999$ — and of course $1998$ is far from being in the interval $[10000,100000)$… Of course the same argument goes through for any other number (and also for $A_1$): just look at the highest multiple still in the interval and then double it.

In other words, for any given number, not all multiples lie in the same part of the partition, i.e., neither $A_0$ nor $A_1$ contain a set of multiples — in particular, neither contains a subsemigroup of $\mathbb{N}$ — #Fail.
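The whole argument fits in a short finite check; `part` (my name, not standard) computes which of $A_0, A_1$ a number belongs to via its number of decimal digits:

```python
def part(n):
    """0 if n lies in A_0 = union of [10^(2i), 10^(2i+1)), else 1."""
    return (len(str(n)) - 1) % 2

# for every a up to 1000, some multiple of a escapes a's part of the partition
for a in range(1, 1001):
    assert any(part(k * a) != part(a) for k in range(2, 10**6 // a + 1))
```

The decade interval following the one containing $a$ has length more than $a$, so it must catch a multiple of $a$ — which is exactly the gap argument above.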

## I’m not just messing with you

Now you can ask yourself if this hope for partition regular structures within semigroups was nothing but a dream and there’s simply no algebraic structure that survives.

Luckily there’s evidence dating back all the way to 1917 guaranteeing that at least a tiny little bit of algebraic structure will survive partitioning. Issai Schur proved that if you partition $\mathbb{N}$ into finitely many pieces, you will always find two numbers $x,y$ such that $x, y$ and $x+y$ are in the same piece. (Of course, you’ll find infinitely many such pairs.)

It’s not important here, but the proof is actually quite simple, using Ramsey’s Theorem (the original thing). Given a coloring of $\mathbb{N}$, use that coloring to define a coloring of all unordered pairs of natural numbers: give $\{x,y\}$ (with $x < y$) the color of $y-x$. By Ramsey’s Theorem, there exists an infinite homogeneous set. Pick any three numbers $x < y < z$ in that homogeneous set. Then $z-y$, $z-x$ and $y-x$ all have the same color — which solves our problem, since $(z-y) + (y-x) = z-x$ — and notice how associativity comes into play.
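We can even run Schur’s Theorem against the interval partition $A_0, A_1$ from above: a brute-force search (helper names mine) finds a monochromatic triple right away:

```python
def color(n):
    """The interval partition from above: 0 on A_0, 1 on A_1."""
    return (len(str(n)) - 1) % 2

def schur_triple(color, bound):
    """Search for x <= y with x, y and x + y all the same color."""
    for x in range(1, bound):
        for y in range(x, bound):
            if color(x) == color(y) == color(x + y):
                return x, y, x + y
    return None

x, y, z = schur_triple(color, 100)
assert color(x) == color(y) == color(z) and x + y == z
```

So even the partition that defeats subsemigroups cannot avoid Schur triples — a tiny bit of algebraic structure survives.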

### outlook

In the next post, I’ll try to point out how far we can push this search for algebraic structures.