Hindman’s Theorem, partial semigroups and some of my most lacking intuitions (part 7)

Hm, my writing process is slowing down a little (and on top of that I forgot to publish this draft) and there are other posts that I really want to write. I’m not really sure how I will proceed, but let’s keep pushing a little further for now. Last time I tried to build a bridge from central sets via idempotent ultrafilters back to partial semigroups. This is one of the key points of this series: connecting central sets and partial semigroups.

Back to partial semigroups

Here we are, lots of great results behind us, yet still missing the perfect correspondence between Ramsey-type theorems and partition regular sets (or, put differently, ultrafilters). I started out with the notion of (adequate) partial semigroups and now, I think, is the time to return to it.

I believe that the notion of partial semigroups could help solve this mysterious question. The goal of this entire series is to investigate a potential relationship between partial semigroups and central sets.

Minimal partial semigroups

I tried to convince you earlier that FS-sets are partial semigroups. They are, in fact, the minimal partial subsemigroups of $(\mathbb{N},+)$. Why is this? A rather simple induction argument shows it:

• Say $(S,\cdot)$ is an adequate partial semigroup.
• Take any $s_0\in S$.
• Inductively, $FP(s_0,\ldots, s_n)$ has all products defined.
• Pick $s_{n+1} \in \bigcap_{t \in FP(s_0,\ldots,s_n)} \sigma(t)$ (remember those $\sigma(t)$?) — since $S$ is an adequate partial semigroup, this intersection is never empty.
• Then all finite products of the $(s_i : i\in \omega)$ are defined.
• In other words, $FP(s_i) \subseteq S$ in the fullest sense.
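
The steps above can be sketched concretely. Here is a minimal Python sketch, using the adequate partial semigroup of finite subsets of $\mathbb{N}$ under ordered union as a stand-in for $(S,\cdot)$ (all function names here are made up for illustration):

```python
from itertools import combinations

def op(s, t):
    """Ordered union on finite sets: defined only when max(s) < min(t)."""
    return s | t if max(s) < min(t) else None

def fp(elements):
    """Finite products in increasing index order; None marks an undefined one."""
    out = []
    for r in range(1, len(elements) + 1):
        for idxs in combinations(range(len(elements)), r):
            prod = elements[idxs[0]]
            for i in idxs[1:]:
                if prod is None:
                    break
                prod = op(prod, elements[i])
            out.append(prod)
    return out

# The induction: pick s_{n+1} compatible with everything in FP(s_0,...,s_n);
# adequacy guarantees such an element exists (here we can even compute one).
seq = [frozenset({0})]
for _ in range(4):
    defined = [p for p in fp(seq) if p is not None]
    bound = max(max(p) for p in defined)
    seq.append(frozenset({bound + 1}))  # lies in sigma(t) for every t so far

assert None not in fp(seq)  # all finite products of the sequence are defined
```

In this concrete semigroup the witness $s_{n+1}$ can simply be taken beyond the largest maximum seen so far; in a general adequate partial semigroup only its existence is guaranteed.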

I don’t think I ever mentioned that “to include an FS-set” (or FP-set) has an established abbreviation; such sets are called IP-sets.

Most people will find it important to point out that “IP” does not abbreviate “idempotent” (for idempotent ultrafilters) but was originally meant to abbreviate “infinite parallelepiped” (which makes sense if you think of FS-sets in vector spaces until you realize that this still means “includes an infinite parallelepiped”).

So it seems that we could add some general nonsense by saying “partial semigroup” instead of “IP-set/includes an FS-set”. Unfortunately, this is not the case. Even though every partial semigroup contains an FS-set, not every set that contains an FS-set has a (compatible) partial semigroup structure.

Oh dear, I just noticed something and I hope I haven’t made this mistake too often. I’m talking about FS-sets and $(\mathbb{N},+)$ here. So when I write “partial semigroup” for $A\subseteq \mathbb{N}$, I mean “partial subsemigroup” in the sense that the usual addition restricted to $A$ forms a partial subsemigroup (as is the case for FS-sets). Oh well, I guess this series is getting too long after all and I’m beginning to lose track of what I’ve already written. I hope you might just enjoy reading it anyhow.

Weak partial semigroups

I think the key to partial semigroups will be to weaken the notion further. I think I will call those “weak partial semigroups” (even though I’d love to call them “very partial semigroups”…).

What I’m about to do is, I think, somewhat bad style. Let me alleviate this by some comments. Of course, the ideas in this series do not come out of nowhere. They stem from my studies in this area over the last 5 years (gee, has it really been that long?). Because of this, they are based on structures that you can frequently encounter in this area. So if you know the research area you will hopefully find all of this very familiar and think “why does he make such a fuss about this standard thing?”. That’s great! And if you don’t, rest assured there’s a method to my madness. I’m not sure I will get there anytime soon, but there’s still hope.

In my silly demonstration above that partial semigroups contain FS-sets you may have noticed that we didn’t really use strong associativity — especially since we’re starting with a full semigroup anyway and have no worries about associativity.

What was crucial, however, is the finite intersection property. And that’s already all there is to this “new” notion.

Weak partial (sub)semigroup If $(S,\cdot)$ is a semigroup, then $A\subseteq S$ is a weak partial subsemigroup if the restriction of the semigroup operation to a partial operation on $A$ is adequate, i.e., the sets of the form
$$\sigma_A(a) := \{ b\in A : a\cdot b \in A\}$$ generate a filter.

Note: I just made up the $\sigma_A(a)$ notation to possibly help your understanding by making the connection to $\sigma(a)$. I will get to a more classical formulation in a second.

In other words, the operation restricted to (a partial operation on) $A$ is, to some extent, a partial semigroup. We might not have the luxury of full associativity: $a, b, c, ab, abc$ might be in such an $A$, but not $bc$ (this actually happens in real life, btw) — so we cannot compare $a(bc)$ with anything, in particular not with $(ab)c$.

But for any $a$ we will find many (a filter set of) $b$’s and for each of those $b$’s we will find a filter set of $c$’s such that $a,b,c,ab,bc, abc \in A$. That’s pretty good, don’t you think?
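
This can be made concrete in $(\mathbb{N},+)$. A toy sketch, taking $A$ to be the FS-set of the generators 1, 10, 100, 1000 (a finite stand-in, so it only illustrates the finite intersection property, not a genuine filter):

```python
from itertools import combinations

# the FS-set of 1, 10, 100, 1000 (a finite stand-in for a real FS-set)
gens = [1, 10, 100, 1000]
A = {sum(c) for r in range(1, len(gens) + 1) for c in combinations(gens, r)}

def sigma_A(a):
    """sigma_A(a) = {b in A : a + b in A}."""
    return {b for b in A if a + b in A}

# sigma_A(1) consists of the sums avoiding the generator 1, and so on;
# finitely many of these sets still intersect (finite intersection property)
common = sigma_A(1) & sigma_A(10) & sigma_A(11)
assert common == {100, 1000, 1100}   # the sums using only 100 and 1000
```

With infinitely many generators the same computation would produce infinite intersections, which is exactly the adequacy required in the definition.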

I’m not sure what I’ll do next. There are many paths to choose at this point. I’m not sure which one is best and I might just settle for the one that seems most easily self-contained. But don’t worry even if the next post is on a different topic.

Eternal preliminaries part 3, the problem of extending partial semigroups

In the previous post I totally ignored my focus from the first post, namely that I want to focus on partial semigroups, not full semigroups. As mentioned in the first part, this is not really a problem, as any partial semigroup operation is in essence a restriction of a full semigroup operation. So the full semigroup is always a neat thing to fall back on when you’re in doubt. Nevertheless, if, as I claimed, it is an advantage to work with partial semigroups, can we not get around this problem?

Extending a partial operation to $\beta S$.

Luckily, it is very much possible to extend a partial operation. (Un)fortunately, nobody’s perfect — that is, I don’t know how to get all the theory to run smoothly, so in what follows I might have to fall back on the full semigroup for some things. We’ll see.

So how do you extend the operation? Easy! Use the brute force method!

Extending the operation to $\beta S$ For $p,q \in \beta S$, consider those $A\subseteq S$ with $A^{-q} \in p$. These sets may or may not yield an ultrafilter. If they do, we call it $p \cdot q$.

Ok, that’s a bit weird. The way I wrote it there is no reason to believe that such $A$ will ever form a filter, let alone an ultrafilter! Now, since the operation is partial, we had to expect a little bit of weirdness. But accept (or ignore) the unreasonable definition at this point and I’ll try to explain why this could work.

First to note is: watch out! Maybe I’m trying to trick you with the definition, hiding behind the weirdness. As I wrote in the last post, the $A^{-q}$-notation is a “general nonsense” notation (even worse, I came up with it myself…), a simplified notation that hides complicated structures (though, hopefully, to a later advantage). So we better check it in detail.

• First, for semigroups we had defined $A^{-q} = \{ s \in S: s^{-1}A \in q\}$ — this still seems to work fine, we’re just checking if some set is in $q$.
• But what kind of set are we checking? Going into detail, we find $s^{-1}A = \{ t \in S: s \cdot t \in A\}$ — and this is not clear at all! Remember, $s\cdot t$ might not be defined. What do we do then? Do we want to include or exclude those incompatible $t$?

But, assuming you’re a forgiving reader, I think it still makes sense: just take “$s \cdot t \in A$” to mean “$s \cdot t$ is defined and lies in $A$”. In other words, the original definition should really read $s^{-1}A := \{ t\in \sigma(s) : s \cdot t \in A\}.$
This is fine from the point of view of a full semigroup (since always $\sigma(s) =S$) and it really captures what we want to capture: those $t$ that $s$ maps to $A$. Of course, the convention that $s\cdot t$ entails $t\in \sigma(s)$ is really standard ever since the paper by Bergelson, Blass and Hindman. So I’m luckily in good company. To finish the introduction, we get the same phenomenon as we did before.
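
As a concrete check of this convention, here is a small sketch in the partial semigroup of finite subsets of $\mathbb{N}$ under ordered union (the helper names are made up for illustration):

```python
from itertools import combinations

def op(s, t):
    """Ordered union: defined only when max(s) < min(t)."""
    return s | t if max(s) < min(t) else None

# all non-empty subsets of {0,1,2,3} as a small finite piece of F
universe = [frozenset(c) for r in range(1, 5) for c in combinations(range(4), r)]

def sigma(s):
    return {t for t in universe if op(s, t) is not None}

def preimage(s, A):
    """s^{-1}A = {t in sigma(s) : s.t in A}, following the convention above."""
    return {t for t in sigma(s) if op(s, t) in A}

A = {t for t in universe if 3 in t}   # an arbitrary subset of F
pre = preimage(frozenset({0}), A)
# exactly the sets avoiding 0 (for compatibility) whose union with {0} meets A
assert all(3 in t and 0 not in t for t in pre)
```

Restricting to $\sigma(s)$ first is what keeps the incompatible $t$ out of the preimage, matching the corrected definition.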

$A \in p\cdot q$ if and only if there exist $V \in p$ and sets $W_v \in q$ (for $v \in V$) such that $\bigcup_{v\in V} v \cdot W_v \subseteq A$.

Again, with the analogous convention that $v\cdot W_v = v \cdot (W_v \cap \sigma(v))$.

The most important observation

So after this short delay, we can say we have defined a partial operation on $\beta S$ where the product is $p\cdot q$ as defined in the previous part but only if the definition yields an ultrafilter. One question is rather immediate.

• What can go wrong to prevent the multiplication from being defined?
• That is, which $p, q$ actually yield an ultrafilter $p \cdot q$?
• In other words, how rich is the multiplication?

Luckily, this question can be answered rather easily. But let me begin with the most crucial observation and the goal of the rest of this post.

The semigroup $\delta S$ Given an (adequate) partial semigroup $(S,\cdot)$ and the above extension of the operation to $\beta S$, it turns out that $\delta S$, the set of ultrafilters containing every $\sigma(s)$, is a full semigroup.

Kaboom! This is extremely nice. Even though our operation is partial, we get a relatively good piece of $\beta S$ that is a full, compact, and (we’ll see later) right-topological semigroup — the three key properties for the next part of these preliminaries. This will be our motivation for the rest of this post. Luckily, the proof is essentially done by answering the above questions.

The partial semigroup $\beta S$.

As the heading says, the nicest thing about this extension is that it yields a partial semigroup. This is positively surprising because strong associativity seems difficult to preserve. The key observation is the following.

Proposition For $p,q \in \beta S$, the product $p \cdot q$ is defined if and only if $\{ s\in S: \sigma(s) \in q \} \in p$.

This is a pretty natural observation. You’d expect the product to work out on the ultrafilters if they contain sets where the multiplication behaves nicely. This is, of course, a common phenomenon with ultrafilters: properties of an ultrafilter are often reflected by its elements and vice versa.

Proof.

• Like any good ultrafilter proof, we start with a partition: $S= \{ s\in S: \sigma(s) \in q\} \cup \{ s\in S: \sigma(s) \notin q\}$.
• Since $p$ is an ultrafilter, one part is in $p$.
• If the second part is in $p$, then $p \cdot q$ is not a filter, i.e., the forward direction of our equivalence holds by contraposition.
• Assume $A= \{ s\in S: \sigma(s) \notin q\} \in p$.
• Since $q$ is an ultrafilter, for any $a\in A$ we get $S \setminus \sigma(a) \in q$.
• But then $\bigcup_{a\in A} a \cdot (S\setminus \sigma(a)) = \emptyset$, i.e., $\emptyset \in p \cdot q$, so $p \cdot q$ is not a filter.
• If the first part, call it $B$, is in $p$, then $p \cdot q$ is an ultrafilter.
• The only problematic case (i.e., the one deviating from the proof for the brute-force definition for full semigroups) is that $\emptyset \notin p \cdot q$.
• To see this, realise that $\bigcup_{v\in V} v \cdot W_v \supseteq \bigcup_{v \in V \cap B} v \cdot (W_v \cap \sigma(v)) \neq \emptyset$ since $V \cap B \in p$ and each $W_v\cap \sigma(v) \in q$, hence non-empty.
• The rest is straightforward and not really fulfilling.
• Closure under taking supersets and finite intersection is just as easy to check.
• $p \cdot q$ is prime.
• Let $C_0 \dot\cup C_1 = C \in p\cdot q$.
• Now easily $s^{-1} C = s^{-1} (C_0 \dot\cup C_1) = s^{-1}C_0 \dot\cup s^{-1}C_1$.
• Since $q$ is prime, exactly one part is in $q$.
• In other words, $C^{-q}= C_0^{-q} \dot\cup C_1^{-q}$, and this set lies in $p$.
• Since $p$ is prime, exactly one part is in $p$, say $C_i^{-q} \in p$.
• But this means by definition that $C_i \in p \cdot q$, as desired.

And once again, I have taken the liberty of skipping a little bit ahead. In fact, I will spend the bigger part of the next post talking about other tricks like $(C_0 \cup C_1)^{-q} = C_0^{-q} \cup C_1^{-q}$ that the notation has to offer.

But with this we can actually claim.

Theorem $\beta S$ is a partial semigroup.

The proof is long and rather boring. Using the previous proposition you can compare what it means for, say, $p \cdot (q \cdot r)$ and $(p \cdot q) \cdot r$ to be defined. Unsurprisingly, this comparison boils down to a comparison of elements in $S$ — where, of course, we have strong associativity. I’ll gladly update this post if there’s interest and you can find it as Proposition B.3 in my thesis.

It’s almost midnight and I’m getting tired so let’s wrap up the proof of the supposedly important observation.

Proof that $\delta S$ is a semigroup.

• By the proposition, we know that the multiplication is defined for all $p,q \in \delta S$ because $q$ contains all $\sigma(s)$. (yeah, only $q$ – you can see some more general observations looming around, right?)
• So the question is, whether the product is again in $\delta S$.
• But for that we just check that $\bigcup_{t \in \sigma(s)} t \cdot (\sigma(t) \cap \sigma(s\cdot t)) \subseteq \sigma(s)$ by strong associativity — with $V = \sigma(s) \in p$ and $W_t = \sigma(t) \cap \sigma(s\cdot t) \in q$ this is exactly the characterization from before.
• So $\sigma(s) \in p\cdot q$ for all $s\in S$, as desired.

Ok, after this small detour, we’re ready for some ancient and brilliant theorems. Ellis’s Lemma and Hindman’s Theorem coming right up.

Eternal preliminaries part 1, semigroups and such

My thesis was a (somewhat) coherent structure and hence had one big chapter of preliminaries. It was rather painful (and in retrospect stupid) to create preliminaries for each paper based on the results from the thesis by taking that chapter and reducing it to only those terms necessary in each respective paper. To (over)compensate, I have been thinking for a while about writing down some ‘eternal preliminaries’ here.

Semigroups and such

In the field that is often called ‘algebra in the Stone-Cech (sorry about the missing hacek) compactification’ (or ‘Hindman stuff’ for short), the structures of interest are infinite semigroups $(S,\cdot)$, i.e., $\cdot: S\times S \rightarrow S$ is associative.

However, ever since the amazingly rich paper by Vitaly Bergelson, Andreas Blass, and Neil Hindman on located words we have the notion of a partial semigroup. I like this more general notion for several reasons, which is why I will formulate everything in terms of partial semigroups here.

Definition Following the above paper, I’ll call $(S,\cdot)$ a partial semigroup if the map $\cdot$ is a partial binary operation for which equations of the type $(s \cdot t) \cdot u = s \cdot (t \cdot u)$ hold in the sense that whenever one side is defined, so is the other and they are equal.
This is often called strong associativity.

The most important example of a partial semigroup consists of the finite, non-empty subsets of $\omega$,
$\mathbb{F} := \{ s \subseteq \omega : 0 < |s| < \omega \}$
with the partial operation
$s \cdot t = s \cup t \mbox{ iff } \max(s) < \min(t),$
in other words restricting the union operation to so-called ordered unions. Sometimes another operation is interesting, the more general restriction to disjoint unions, but most of the time the difference between the two restrictions won’t matter much to us, so we’ll stick to the ordered unions for now. This example is useful right here because it shows two important properties that are typical for partial semigroups, the first resulting in the following observation.
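
Strong associativity for ordered unions can be brute-force checked on a small piece of $\mathbb{F}$. A quick Python sanity check (the setup here is just for illustration):

```python
from itertools import combinations, product

# non-empty subsets of {0,...,4} of size at most 2, a small piece of F
F = [frozenset(c) for r in range(1, 3) for c in combinations(range(5), r)]

def op(s, t):
    """Ordered union: defined only when max(s) < min(t)."""
    return s | t if max(s) < min(t) else None

for s, t, u in product(F, repeat=3):
    st = op(s, t)
    tu = op(t, u)
    left = op(st, u) if st is not None else None
    right = op(s, tu) if tu is not None else None
    # strong associativity: one side defined iff the other, and then equal
    assert left == right
```

The loop confirms that whenever one side of $(s \cdot t) \cdot u = s \cdot (t \cdot u)$ is defined, so is the other, and they agree.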

Proposition Every partial semigroup can be extended to a full semigroup: For every partial semigroup $(S,\cdot)$ there is a full semigroup $(T,\star)$ such that $S\subseteq T, \cdot \subseteq \star$.

Proof.

• Given a partial semigroup $(S,\cdot)$, simply adjoin a new zero element.
• I.e., let $T := S \dot\cup \{ 0 \}$ (where $0$ is a new element) and define $v \star w = v\cdot w$ if this is defined in $S$ and $v \star w = 0$ otherwise.
• It’s easy to check that the operation is associative (thanks to strong associativity).
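
The construction in the proof can be sketched in Python for ordered unions on a finite piece of $\mathbb{F}$; here `ZERO` is a stand-in for the adjoined element:

```python
from itertools import combinations, product

ZERO = "0"   # a fresh element not in S

def partial_op(s, t):
    """Ordered union: defined only when max(s) < min(t)."""
    return s | t if max(s) < min(t) else None

def full_op(v, w):
    """The completed operation: undefined products collapse to ZERO."""
    if v == ZERO or w == ZERO:
        return ZERO
    r = partial_op(v, w)
    return ZERO if r is None else r

# associativity of the completed operation, brute-forced on a small piece
T = [frozenset(c) for r in range(1, 3) for c in combinations(range(4), r)] + [ZERO]
for a, b, c in product(T, repeat=3):
    assert full_op(full_op(a, b), c) == full_op(a, full_op(b, c))
```

Note how strong associativity is exactly what makes the check go through: if one bracketing collapses to `ZERO`, so does the other.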

Even though partial subsemigroups are much more abundant than subsemigroups (which is why I like them so much), the above fact is extremely useful when it comes to the background theory of partial semigroups (especially, ultrafilters on them which will be our main goal for the next section): we can simply pretend that we have a full semigroup and use that well-developed theory to help us along, after which we can restrict our interests again to the partial subsemigroup.

Actually, most of the time we really are in a situation like $(\mathbb{F},\cdot)$ — we mostly start with a full operation which we reduce to a partial semigroup operation so as to simplify the algebra (as in the case of $\mathbb{F}$). Before I get to the first important property of partial semigroups that must be assumed for all that follows, let me give you the most important class of examples of partial subsemigroups — FP-sets.

Examples Given any (partial) semigroup $(S,\cdot)$ and a sequence $\mathbf{x}$ in $S$, the FP-set of $\mathbf{x}$, $FP(\mathbf{x}) := \{ \prod_{i \in s} x_i : s \in \mathbb{F} \}$ (where products are always taken in increasing order of indices) has a partial semigroup structure induced by $(\mathbb{F},\cdot)$:
$\prod_{i \in s} x_i \cdot \prod_{i\in t} x_i = \prod_{i \in s\cup t} x_i \mbox{ iff } \max(s) < \min(t).$
In case our (partial) semigroup is written additively, we write FS-set etc.
Note that for commutative (partial) semigroups we can also consider the partial semigroup structure induced by disjoint unions rather than ordered unions since the products/sums still make sense in the commutative world.
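
A small illustrative sketch, assuming the sequence $x_i = 10^i$ in $(\mathbb{N},+)$ (chosen so that all finite sums are distinct); in additive notation the induced operation looks like this:

```python
from itertools import combinations

x = [10**i for i in range(4)]   # x_i = 10^i, an FS-friendly sequence

def fs_element(s):
    """The sum over the index set s."""
    return sum(x[i] for i in s)

def induced_op(s, t):
    """Induced from F: the sum is only defined when max(s) < min(t)."""
    return fs_element(s) + fs_element(t) if max(s) < min(t) else None

# the whole FS-set: sums over non-empty index sets
FS = {fs_element(set(c)) for r in range(1, 5) for c in combinations(range(4), r)}

assert induced_op({0}, {2, 3}) in FS      # 1 + 1100 = 1101, again a finite sum
assert induced_op({2, 3}, {0}) is None    # indices not in increasing order
```

The defined sums always land back in the FS-set, which is exactly the induced partial subsemigroup structure.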

These examples are essentially my favorite reason for thinking primarily in partial semigroups — FP-sets with this partial semigroup structure are such a critical component of algebra in the Stone-Cech compactification that we might as well make this explicit. It also has a certain slickness to it — Hindman’s Theorem (one of our main goals here) could be formulated very nicely with this terminology — but that slickness is somewhat misleading; let’s discuss that when we get there (when I would love to weaken the notion further…)

The main problem with partial semigroups is, of course, their partiality; they could be so partial that they are meaningless, i.e., the operation could have empty domain or a very small domain. Thankfully, the examples above have a very rich partial structure: for any finite number of elements we find (many) elements that are compatible with all those finitely many. That’s why I usually assume the following property for partial semigroups.

Definition and convention Consider a partial semigroup $(S,\cdot)$.
For each $s \in S$ we denote the set of (right-)compatible elements by $\sigma(s) := \{ t \in S: s\cdot t \mbox{ is defined} \}$.
We say that $S$ is adequate if ${(\sigma(s))}_{s\in S}$ has the strong finite intersection property (all finite intersections are infinite) and we denote the generated filter by $\sigma(S)$. Unless specifically stated otherwise, partial semigroups are assumed to be adequate.

For most theoretical things it’s not really important that the filter contains only infinite sets, but that’s what we’re really interested in in our applications, so we might as well assume it. On a side note, it is useful not to assume that the filter $\sigma(S)$ is free, for example when we have an identity in the partial semigroup. The name adequate (without the infinity assumption) is again taken from the paper by Bergelson, Blass and Hindman.
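
For $\mathbb{F}$ the adequacy check is concrete: $\sigma(s) = \{ t : \max(s) < \min(t) \}$, and any finitely many of these share infinitely many elements. A tiny sketch:

```python
def in_sigma(s, t):
    """t lies in sigma(s) for ordered unions iff max(s) < min(t)."""
    return max(s) < min(t)

elements = [frozenset({0, 2}), frozenset({1}), frozenset({3, 5})]
bound = max(max(s) for s in elements)
# every set beyond the largest maximum is a common witness,
# so the intersection of the sigma(s) is infinite
witness = frozenset({bound + 1})
assert all(in_sigma(s, witness) for s in elements)
```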

And that’s about all you need to know about partial semigroups for now. In the next post of this series-to-be I will continue with ultrafilters on (partial) semigroups.

Understanding the Central Sets Theorem

To write the first post on the new domain I thought I might just write a little about what I’ve been studying recently — the Central Sets Theorem.

This theorem dates back to the 70s and the original formulation and proof are due to Hillel Furstenberg. In its current form as found say in De, Hindman, Strauss it is probably the strongest algebraic partition theorem around. I had encountered the theorem many times before, in books, lectures, papers and talks but I never truly developed an understanding for it. Since I recently felt it might give me an edge in a problem I’m working on I decided to take a better look.

Detour 1 — metamathematics

How do you achieve an understanding of a theorem? In an incomplete list I would include the following

• Understand its most important application or corollary
• Understand its statement
• Understand its proof
• Improve its proof
• Understand how to come up with the proof
• Give a different proof
• Improve the theorem

I would say this list is in increasing order of understanding but that’s open for discussion.

I might write about the history (and applications) of the Central Sets Theorem some other time, but here I want to focus on its formulation; in fact, I don’t even want to write about what it means to be central (sorry) except that it is a partition regular notion.

Formulation

So, what does the usual formulation look like?

Central Sets Theorem
Imagine you are given finitely many sequences in a commutative semigroup $(S,+)$, say $\mathbf{y^0}, \ldots, \mathbf{y^\alpha}$ as well as a central set $C \subseteq S$.
Then you can find a sequence $\mathbf{a}$ in $S$ as well as a sequence $\mathbf{h}$ of non-empty, disjoint and finite subsets of $\mathbb{N}$ such that for $\beta \leq \alpha$ $FS ( {a_n} + {\sum_{i \in h_n} y_i^\beta} ) \subseteq C.$

Complicated, no? I mean, a random bunch of sequences, some strange set and you find some other sequence and some weird subsets of the natural numbers and then the IP-set of some strange sums are in that strange set — ye what?

Let’s cut it down a little and just consider the case $\alpha = 0$.

simple Central Sets Theorem
Imagine you are given a sequence $\mathbf{y}$ in a commutative semigroup $(S,+)$ as well as a central set $C \subseteq S$.
Then you can find a sequence $\mathbf{a}$ in $S$ as well as a sequence $\mathbf{h}$ of non-empty, disjoint and finite subsets of $\mathbb{N}$ such that $FS ( {a_n} + {\sum_{i \in h_n} y_i} ) \subseteq C.$

Detour 2 — oversimplification

Even this special case of the standard formulation somehow focuses on aspects that get me sidetracked. So I attempted to formulate it in a way that gives (me) better focus.

Now, the theorem says all kinds of complicated things about the existence of a sequence of disjoint finite subsets of $\mathbb{N}$. Can I get around this? I thought I should be able to. Let’s start with a much weaker version of the theorem.

A weak simple Central Sets Theorem
Imagine you are given a subsemigroup $T \subseteq \mathbb{N}$ as well as a central set $C \subseteq \mathbb{N}$.
Then you can find a sequence $\mathbf{a}$ in $\mathbb{N}$ as well as a sequence $\mathbf{b}$ in $T$ so that $FS ( {a_n} + {b_n} ) \subseteq C.$

I find this weaker version much easier to understand. It just says that I can always translate infinitely many elements from a given subsemigroup into the central set; additionally the finite sums stay within the set.

This is much weaker than the statement before. Of course, given a sequence $\mathbf{y}$ we could consider the generated subsemigroup and use the weaker version. But this would not guarantee the result of applying the Central Sets Theorem — Furstenberg’s theorem gives much more control over which elements are picked since there are no repetitions in the sums of the generators.

Partial Semigroups

So where does this leave us? Well, when I hear finite subsets of $\mathbb{N}$ I think of my favourite structure — in fact the favourite structure for a lot of algebra in the Stone-Cech compactification on $\mathbb{N}$, the semigroup $\delta \mathbb{F}$. But let’s step back a little. The best way to think about $\delta \mathbb{F}$ is in terms of partial semigroups.

A partial semigroup operation on a set $S$ is a partial map $\cdot: S \times S \rightarrow S$ such that associativity $s \cdot (t \cdot u) = (s \cdot t) \cdot u$ holds in the sense that if one side is defined so is the other and they are equal. A partial semigroup is adequate if the sets
$\sigma(s) := \{ t\in S : {s \cdot t} \mbox{ is defined} \}$
generate a filter, i.e., finitely many elements have a common compatible element.

This notion was introduced by Bergelson, Blass and Hindman in the 90s. It tells us that the operation, although partial, is associative in a strong way. Additionally, it makes sure the operation is not just empty but defined for many elements (well, ok it could be just one for all, but that’s not the point).

For ultrafilters the critical point is the following.

The semigroup $\delta S$
Given an adequate partial semigroup $(S,\cdot)$ and ultrafilters $p,q$ containing all $\sigma(s)$, the operation
$p \cdot q = \{ A \subseteq S : \{ s : \{ t : s \cdot t \in A \} \in q \} \in p \}$
is well-defined and associative and semi-continuous. In other words, $\delta S$ is a closed semi-continuous semigroup.

Now this is somewhat surprising. Even though our operation is partial, these ultrafilters are a full semigroup! With all the bells and whistles it takes to do algebra in the Stone-Cech compactification.

What does this have to do with the Central Sets Theorem?

Denote the non-empty, finite subsets of $\mathbb{N}$ by $\mathbb{F}$. Consider the restriction of $\cup$ on $\mathbb{F}$ defined by
$s + t := s \cup t \mbox{ defined } \Longleftrightarrow \max(s) < \min(t).$
Then in fact this constitutes a partial semigroup, adequate at that.

This partial semigroup structure could be called the free partial semigroup in the following sense: given any sequence $\mathbf{s}$ in any semigroup $S$ we can consider the induced partial semigroup on the set of finite sums ${FS( \mathbf{s} ) }$: we only allow sums where the index sets are disjoint (so that we are closed under our partial operation). Then all $FS$-sets are naturally isomorphic (in the appropriate sense of partial semigroups).
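
The isomorphism can be sketched: mapping an index set $s$ to $\sum_{i\in s} x_i$ turns ordered unions into the induced partial sums. A minimal check, assuming $x_i = 2^i$ in $(\mathbb{N},+)$ so that the map is injective:

```python
x = [2**i for i in range(5)]    # a sequence with pairwise distinct finite sums

def phi(s):
    """Map an index set to the corresponding finite sum."""
    return sum(x[i] for i in s)

s, t = frozenset({0, 1}), frozenset({3})
assert max(s) < min(t)                  # so the ordered union s + t is defined
assert phi(s | t) == phi(s) + phi(t)    # phi respects the partial operation
```

For a general sequence $\phi$ need not be injective, but it is still a homomorphism of partial semigroups, which is the sense of “naturally isomorphic” above.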

The weak version revisited

To come back to the weak version of the Central sets theorem — partial semigroups are exactly what it talks about. So let us reformulate,

simple Central Sets Theorem
Imagine we are given a partial subsemigroup $T$ of a commutative semigroup $(S,+)$ as well as a central set $C \subseteq S$. Then we find sequences $\mathbf{a}$ in $S$ and $\mathbf{t}$ in $T$ such that $FS ( {t_n} ) \subseteq T$ and
${FS( a_{n} + t_{n}) \subseteq C.}$

Now this sounds much closer to the original theorem. Since any sequence generates a partial semigroup on its $FS$-set (isomorphic to $\mathbb{F}$), this is in fact the Central Sets Theorem for just one sequence.

Leaving the simplification

However, the actual theorem is more than just some kind of induction on the above version. It is considerably stronger and here it is time to let go of the simplifications of partial semigroups again. For the theorem really does talk about $FS$-sets, i.e., partial semigroups isomorphic to $\mathbb{F}$. The strength lies in the fact that the infinite sequences can be chosen uniformly in the sense that we pick from the different partial semigroups in the same prescribed way.

Central Sets Theorem
Imagine you are given finitely many $FS$-sets in a commutative semigroup $(S,+)$, say ${FS( {\mathbf{y^0}} )}, {\ldots}, {FS( {\mathbf{y^\alpha}} )}$ as well as a central set $C \subseteq S$.
Then you can find a sequence $\mathbf{a}$ in $S$ as well as one disjoint sequence $\mathbf{h}$ in $\mathbb{F}$ such that for all $\beta \leq \alpha$ $FS ( {a_n} + {\sum_{i \in h_n} y_i^\beta} ) \subseteq C.$

To see this strength at work it is time to look at the classical application.

Central sets in $( \mathbb{N},+)$ contain arbitrarily long arithmetic progressions
Take $\mathbf{y^\beta}$ to be the sequence of multiples of $\beta$ (for $\beta \leq \alpha$). Then the Central Sets Theorem guarantees that we find $a_1, h_1$ such that for all $\beta \leq \alpha$ $(a_1 + \beta \cdot \sum_{i\in h_1} i) \in C$, i.e., an arithmetic progression of length $\alpha+1$ with common difference $\sum_{i\in h_1} i$.
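
To see the shape of the conclusion, here is a sketch with made-up witnesses $a_1, h_1$ (the theorem, not this code, supplies them):

```python
alpha = 4
a1, h1 = 7, {2, 5}        # hypothetical witnesses produced by the theorem
d = sum(h1)               # the common difference: the sum of the i in h1
progression = [a1 + beta * d for beta in range(alpha + 1)]
# an arithmetic progression of length alpha + 1 inside the central set
assert all(progression[k + 1] - progression[k] == d for k in range(alpha))
```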

For this application it is obviously critical that the to-be-translated elements can be chosen uniformly. That’s all for now, but I hope I can write a follow-up some other time.