Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: theory: mathematics

Topic: Kuratowski 14 problem


view this post on Zulip Todd Trimble (May 27 2024 at 13:56):

As an aside under the Basic Topology thread, I had mentioned the Kuratowski 14 problem, which @Eric M Downes began responding to. To avoid diverting the Basic Topology discussion any further, I'm creating this new topic.

(Perhaps his response could be moved over here, but I'm not sure how to do that.)

view this post on Zulip Todd Trimble (May 27 2024 at 13:56):

To get the ball rolling, let me recall the originating problem. Suppose given a topology on a set $X$, and let $C: PX \to PX$ be the monad on the power set that sends a subset to its topological closure relative to the given topology. The monad conditions here simply say that

$$S \subseteq C(S), \qquad CC(S) \subseteq C(S), \qquad S \subseteq T \Longrightarrow C(S) \subseteq C(T)$$

for all subsets $S, T$. It follows that $CC = C$. Let $\neg: PX \to PX$ be the function that sends a subset $S$ to its complement. Then $\neg\neg = 1_{PX}$.
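These closure-operator laws are easy to confirm mechanically on a small example. Here is a minimal Python sketch (an editorial illustration, using a hypothetical three-point topology) that checks the unit, idempotence, and monotonicity conditions for every pair of subsets:

```python
from itertools import combinations

X = {0, 1, 2}
# A small topology on X: contains the empty set and X, and is closed
# under unions and intersections.
opens = [set(), {0}, {0, 1}, X]

def closure(S):
    """x is in the closure of S iff every open set containing x meets S."""
    return {x for x in X if all(O & S for O in opens if x in O)}

subsets = [set(c) for r in range(4) for c in combinations(X, r)]
for S in subsets:
    assert S <= closure(S)                      # S ⊆ C(S)
    assert closure(closure(S)) == closure(S)    # CC = C
    for T in subsets:
        if S <= T:
            assert closure(S) <= closure(T)     # monotonicity
print("closure operator laws verified")
```

The same check works for any finite topology; only the list `opens` is specific to this example.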

view this post on Zulip Todd Trimble (May 27 2024 at 13:56):

Let $K$ (for Kuratowski) be the monoid generated by the operations $C, \neg$, in other words the smallest submonoid of the monoid of endofunctions $[PX, PX]$ that contains $C, \neg$. You can think of an element of $K$ as represented by a word in the letters $C, \neg$, and since $C^2 = C$ and $\neg^2 = 1$, we can always reduce words that have two $C$'s or two $\neg$'s next to each other. Thus we can rewrite elements of $K$ as words that alternate back and forth between $C$ and $\neg$.

view this post on Zulip Todd Trimble (May 27 2024 at 13:57):

The "interior" operation is defined by $I = \neg C \neg$, and this is a comonad. Thus for example we have $I \leq 1$, and this implies for one thing that $I(C \neg C) \leq C \neg C$, in other words that

$$\neg C \neg C \neg C \leq C \neg C$$

and hitting this with $C$, we get $C \neg C \neg C \neg C \leq CC \neg C = C \neg C$:

$$C \neg C \neg C \neg C \leq C \neg C. \qquad (1)$$

view this post on Zulip Todd Trimble (May 27 2024 at 13:57):

On the other hand, the operation $C \neg C$ is order-reversing. Thus, if we apply $C \neg C$ to the inequality $IC \leq C$, we get $(C \neg C) C \leq (C \neg C) IC$, which gives $C \neg C \leq C \neg C I C$, or

$$C \neg C \leq C \neg C \neg C \neg C, \qquad (2)$$

and now combining (1) and (2), we get $C \neg C \neg C \neg C = C \neg C$. This gives another rewrite equation, and the conclusion is that there are at most 14 different operations in $K$ (since any more will be reducible to one already on the list), and here they are:

$$1, \qquad \neg, \qquad C, \qquad \neg C, \qquad C \neg, \qquad \neg C \neg, \qquad C \neg C, \qquad \neg C \neg C,$$

$$C \neg C \neg, \qquad \neg C \neg C \neg, \qquad C \neg C \neg C, \qquad \neg C \neg C \neg C, \qquad C \neg C \neg C \neg, \qquad \neg C \neg C \neg C \neg.$$
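This list can also be produced mechanically. The following minimal Python sketch (an editorial aside, writing C for closure and N for negation) enumerates normal forms using exactly the rewrite rules derived above: $CC \to C$, $NN \to 1$, and $CNCNCNC \to CNC$:

```python
# Rewrite rules from the discussion above: CC -> C, NN -> (empty),
# and the derived equation CNCNCNC -> CNC.
RULES = (("CC", "C"), ("NN", ""), ("CNCNCNC", "CNC"))

def reduce(word: str) -> str:
    """Apply the rewrite rules until no rule applies (each rule shortens)."""
    changed = True
    while changed:
        changed = False
        for old, new in RULES:
            if old in word:
                word = word.replace(old, new, 1)
                changed = True
    return word

# Breadth-first closure of the identity (empty word) under appending C or N.
normal_forms = {""}
frontier = {""}
while frontier:
    frontier = {reduce(w + g) for w in frontier for g in "CN"} - normal_forms
    normal_forms |= frontier

print(len(normal_forms))  # 14
```

The 14 normal forms are exactly the alternating words of length at most 7 avoiding the pattern CNCNCNC, matching the list above.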

view this post on Zulip Todd Trimble (May 27 2024 at 13:57):

But are there any further reductions that we missed, that would whittle down this list of 14 to something even smaller?

No one reading this will be surprised to hear the answer "no" (but why?). By the way, notice that throughout the analysis above, we've only used the fact that $C$ is a closure operator on $PX$ -- not that it is a topological closure operator particularly. But even if we assume further that $C$ is topological, there are no further reductions to be made: Kuratowski showed in the topological case that these 14 are distinct, by explicitly producing a set $S \subseteq \mathbb{R}$ (here taking the standard Euclidean topology on $\mathbb{R}$) so that the sets you get by applying these operators to $S$ give 14 distinct subsets.

Finding such a subset, which I'm calling the original Kuratowski 14 problem, is a moderately challenging problem for an undergraduate topology course, but there's an example in the Wikipedia article I linked to. I don't know if it's the same one as what Kuratowski came up with.

view this post on Zulip Todd Trimble (May 27 2024 at 13:58):

But it's amusing to see whether this 14 result can be achieved for other sorts of closure operators. For example, people talk about the Kleene star operation. This comes up in computer science and formal language theory. Start with an alphabet $A$ (say $A = \{a, b\}$ for the purposes of this discussion). There's a free monoid $A^\ast$, consisting of words generated from the alphabet. Now, given a language $L$ over $A$, which by definition is a subset of $A^\ast$, one can close up $L$ under concatenation and under adjoining (if need be) the empty word as well, to get a new language. This is usually denoted $L^\ast$ (unfortunately!), and it's called the Kleene star of $L$. To disambiguate it from the abstract free monoid on $L$, I think I'll denote it by $\ast(L)$ instead. For mathematicians, it's the smallest submonoid of $A^\ast$ that contains $L$, hence it fits into the general scheme of closure operations we were discussing in the other thread.

view this post on Zulip Todd Trimble (May 27 2024 at 13:58):

So, what about it? Is there a 14-set of languages in $P(A^\ast)$ for the operators $\{\ast, \neg\}$?

Don't knock yourself out thinking about it. I think it's probably a pretty hard problem. (I know the answer because I know a relevant reference in the literature. But it's still hard, I think.)

Here's a far easier problem: prove that $\ast$ isn't topological. (A closure operator on $PX$ is topological if it preserves finite joins.)
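For anyone who wants to experiment before reading the answer, here is a minimal Python sketch (editorial, not part of the thread) that approximates $\ast(L)$ by its words of bounded length and exhibits one convenient witness, $X = \{a\}$, $Y = \{b\}$, to the failure of finite-join preservation:

```python
def star_upto(L, n):
    """Words of length <= n in the Kleene star *(L), for L a set of strings."""
    result = {""}      # the empty word is always adjoined
    frontier = {""}
    while frontier:
        frontier = {w + u for w in frontier for u in L if len(w + u) <= n} - result
        result |= frontier
    return result

X, Y = {"a"}, {"b"}
lhs = star_upto(X | Y, 2)                  # *(X ∪ Y), truncated at length 2
rhs = star_upto(X, 2) | star_upto(Y, 2)    # *X ∪ *Y, truncated at length 2
print("ab" in lhs, "ab" in rhs)            # True False: the join is not preserved
```

Note also that `star_upto(set(), n)` returns `{""}`, so $\ast$ already fails to preserve the empty join.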

view this post on Zulip Eric M Downes (May 27 2024 at 17:11):

my answer to the topological part

view this post on Zulip Todd Trimble (May 27 2024 at 17:47):

Most of this looks good! Yes to the demonstration that $\ast$ is not topological. Hope you don't mind a few nitpicks along with the check marks.

view this post on Zulip Todd Trimble (May 27 2024 at 17:47):

You should be writing $\ast(): P(A^\ast) \to P(A^\ast)$, not $\ast(): A^\ast \to A^\ast$. Similarly, write $X \in P(A^\ast)$, not $X \in A^\ast$.

view this post on Zulip Todd Trimble (May 27 2024 at 17:47):

You are correct that the fixed points for any Moore closure operator form a complete lattice. It's also correct that the meet of any collection of fixed points equals the intersection as computed in the original power set.

view this post on Zulip Todd Trimble (May 27 2024 at 17:48):

This has a nice generalization: notice that a fixed point is the same as an algebra for the monad, considering a closure operator as a monad. Notice further the general fact that the forgetful functor from the category of algebras down to the category that the monad is defined on preserves and reflects limits. In the context of posets, limits are given by meets. So your observation here is a special case of a general fact about categories of algebras.

(As an aside: this is a good way to teach category theory: see how the concepts play out when the categories are posets. It turns out you can learn a lot that way. Paul Taylor frequently uses this pedagogical fact in his book.)

view this post on Zulip Todd Trimble (May 27 2024 at 17:48):

Finally, you are correct that the join of a collection of fixed points (within the lattice of fixed points) equals the closure of the ordinary union: again, this is true for any closure operator. You can also see it as a corollary of something more general: in the case of posets, the Kleisli category of a monad coincides with the Eilenberg-Moore category, and there is a general statement about how coproducts work in Kleisli categories, which specializes to the observation you are making here.

view this post on Zulip Todd Trimble (May 27 2024 at 17:48):

I do not believe $\ast X \cap \ast Y = \ast(X \cap Y)$. For example, suppose $X = \{aa\}$ and $Y = \{aaa\}$. Then $X \cap Y = \emptyset$ and $\ast(X \cap Y) = \{e\}$ where $e$ is the empty word. But $\ast X \cap \ast Y$ contains elements like $aaaaaa$.
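The counterexample is easy to confirm by machine; here is a minimal sketch (editorial) truncating each Kleene star at word length 6:

```python
def star_upto(L, n):
    """Words of length <= n in the Kleene star *(L), for L a set of strings."""
    result, frontier = {""}, {""}
    while frontier:
        frontier = {w + u for w in frontier for u in L if len(w + u) <= n} - result
        result |= frontier
    return result

X, Y = {"aa"}, {"aaa"}
print(star_upto(X & Y, 6))                    # {''}: *(X ∩ Y) is just the empty word
common = star_upto(X, 6) & star_upto(Y, 6)    # truncation of *X ∩ *Y
print("aaaaaa" in common)                     # True: a^6 = (aa)^3 = (aaa)^2
```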

view this post on Zulip Eric M Downes (May 27 2024 at 18:11):

Ahhh right, so $X$ and $Y$ have to be on anti-chains in order to have $\ast()$ distribute. You know I saw this problem arise with $A^*$ being mono-generated and didn't even think to pull that back to the proof. Ugh. Thanks!

view this post on Zulip Evan Washington (May 28 2024 at 18:49):

As a nice application of this, there are 14 different modalities in the propositional modal logic $\mathsf{S4}$, where a modality is a string of modal operators and negations, such as $\neg \Diamond \neg \Diamond \Diamond \neg$, modulo equivalence in $\mathsf{S4}$.

view this post on Zulip Todd Trimble (May 28 2024 at 20:30):

Thanks! For some reason, that application had never occurred to me.

view this post on Zulip Eric M Downes (May 28 2024 at 20:43):

Oh wow! Yeah the rules (1) and (3) laid out in nlab [[S4 modal logic]] are clear analogues for rules on closures! And (2) looks contravariant, which at first seemed clear but now I wonder... implication being usually subset-like, this seems reverse. Anyway I'm surprised the article doesn't mention this connection more clearly. Maybe we can update it once the precise analogy is clearer.

view this post on Zulip Evan Washington (May 29 2024 at 00:03):

The box $\Box$ should actually be interpreted as an interior operator, while the diamond $\Diamond$ is a closure operator. Also note that the axioms are equivalent to their duals, so $\Box p \to p$ is equivalent to $p \to \Diamond p$. The correspondence follows from the fact that $\mathsf{S4}$ is complete with respect to this topological interpretation.

view this post on Zulip Valeria de Paiva (May 29 2024 at 02:17):

As a nice application of this, there are 14 different modalities in the propositional modal logic S4, where a modality is a string of modal operators and negations, such as ¬◊¬◊◊¬, modulo equivalence in S4.

hmm, I think you're going a little too fast there. This is so only if you want to consider "classical modal logics", doubling the 7 modalities in the traditional picture. Many of us would like a more constructive version of modal logic where the Diamond is stronger, just like the constructive existential is stronger than the classical one. Especially if you're moving on to a proof-relevant semantics, you may not want $\Diamond = \neg\Box\neg$.

view this post on Zulip Julius Hamilton (May 29 2024 at 03:15):

I have been busy working so I have not been able to think through the following question, but I want to state it here to return to later.

Someone online asked if there is a modal logic in which $p \vee q \to \Box p \vee \Box q$ is provable, and someone responded it is the system Ver.

In investigating how one might prove $p \vee q \to \Box p \vee \Box q$, as usual I asked myself about the definitions of these symbols.

I know that for a two element set $S$, there are 16 possible functions $S \times S \to S$.

It’s interesting that this value is $2^{2^2}$, instead of $2 \cdot 2 \cdot 2$. By currying $S \times S \to S$, I think one can see why this is correct.

I then thought about how all these Boolean operators could be expressed in terms of binary arithmetic. For example, $x \vee y$ I am pretty sure is equivalent to $(x + y) - (x \cdot y)$ (mod 2), which is XOR minus AND. Maybe this is wrong.
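A quick machine check of that claim (editorial sketch): over $\{0, 1\}$ the quantity $x + y - xy$ already lies in $\{0, 1\}$, so reducing mod 2 changes nothing, and it agrees with $x \vee y$ on all four inputs:

```python
# Check: on {0, 1}, x OR y equals x + y - x*y, with or without reducing mod 2.
for x in (0, 1):
    for y in (0, 1):
        assert (x | y) == (x + y - x * y) == (x + y - x * y) % 2
print("x OR y = x + y - xy on {0,1}: verified")
```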

I am wondering if there is an interesting discovery or implication in defining all Boolean operations in terms of what I think are the operations of a ring, $+$ and $\cdot$ (on integers).

It is common to see Boolean operations defined with truth tables. I think truth tables can be represented with matrices. So I also began to wonder how the “composition” of two Boolean operations could be calculated with matrix operations. But then I wondered how many definable matrix operations there are $M \times M \to M$, and how each one corresponds to a different way of “putting two Boolean operations together”.

(I know there are many possible matrix operations such as pointwise-addition, the matrix product, the other products like the Kronecker product, etc.)

Since a matrix is a set with multiple indices, I wanted to think about simple combinatorial solutions to the above questions, but haven’t had time yet. Curious if the people here know of interesting ideas relating these questions to the above discussions regarding category theory, algebra, etc.

view this post on Zulip Todd Trimble (May 29 2024 at 04:23):

Valeria wrote in part

This is so if you only want to consider "classical modal logics",

The context of this thread is classical logic, since we're stipulating $\neg\neg = 1$ in the lead-up to stating the Kuratowski 14 problem, which is the topic here, not modal logic.

view this post on Zulip Eric M Downes (May 29 2024 at 04:34):

Julius Hamilton said:

$x \vee y$ I am pretty sure is equivalent to $(x + y) - (x \cdot y)$ (mod 2), which is XOR minus AND. Maybe this is wrong.

I am wondering if there is an interesting discovery or implication in defining all Boolean operations in terms of what I think are the operations of a ring, $+$ and $\cdot$ (on integers).

I recommend looking at the ring endomorphism $x \mapsto 1-x$ in a generic ring and comparing the relation between domain-products vs codomain-products to the expressions one obtains from inclusion-exclusion / Venn diagrams in probability, combinatorics, etc. Check that it works for at least a three-fold product. There is more that can be said, concerning the Möbius function on an arbitrary poset, such as when the monoid of the ring is idempotent and commutative, but already the above at least is quite interesting. (One request, so we don't clutter up this thread: if you make progress, continue that discussion in a different thread unless it directly ties back to a Kuratowski-type problem. I want to respect Todd's intentions in making this thread.)

view this post on Zulip Eric M Downes (May 29 2024 at 04:37):

Todd Trimble said:

Valeria wrote in part

This is so if you only want to consider "classical modal logics",

The context of this thread is classical logic, since we're stipulating $\neg\neg = 1$ in the lead-up to stating the Kuratowski 14 problem, which is the topic here, not modal logic.

But I think this raises a worthwhile question:

In non-classical logic, what numbers does one get? This page appears to collect papers referencing the kuratowski 14 problem, making a literature review a bit easier.

view this post on Zulip Eric M Downes (May 29 2024 at 04:39):

This relates to the problem on general lattices I mentioned in my original comment. Such as non-distributive lattices or complemented lattices that are not uniquely-complemented.

view this post on Zulip Eric M Downes (May 29 2024 at 04:44):

In particular: every lattice has a uniquely complemented completion, but even for finite non-distributive lattices, this completed lattice is infinite. So, in a uniquely-complemented non-distributive lattice, do you still only get 14 as an upper bound? I think so, because there's no obvious involvement of the meets and joins at all; we're starting with a specific object and using a closure that doesn't explicitly reference meets and joins, and classical negation. (And it's the negation that needs to satisfy equations concerning meets and joins.)

So maybe the question is: if we have a weakened negation, and we move to the uniquely-complemented completion of our original lattice, how many of the kuratowski-reachable elements are in the original lattice? There are likely better questions to be asked as well, concerning Cauchy completion, but I'm not insightful enough to pose them. :)

Anyway, if @Valeria de Paiva has a favorite modal logic, maybe she can state how she wants the two operations to work, so we can explore the generalized problem.

view this post on Zulip Eric M Downes (May 29 2024 at 04:53):

For a friendly and concrete kind of non-distributivity and lack of unique complements, one might look at the kuratowski problem for the submonoid lattice of a non-freely-generated monoid of some given rank. So a congruence of the free monoid. One might then ask when a smaller upper bound can be placed, based on some fact about the congruence.

view this post on Zulip Eric M Downes (May 29 2024 at 05:17):

If we could establish that for say a Heyting algebra, the number was still 14 (or even some other finite number n), then I think the problems are very straightforward. You take the unique morphism defining your lattice from the free lattice, and you "just" have to check the images of the 14 (n) elements under that morphism. So this is probably not easy, but does anyone know what happens when you try?

view this post on Zulip Todd Trimble (May 29 2024 at 10:53):

ring endomorphism $x \mapsto 1-x$ in a generic ring

That's not a ring endomorphism! :-)

view this post on Zulip Todd Trimble (May 29 2024 at 10:54):

Moving on, however: you (Eric) seemed to want to consider generalizing this Kuratowski 14 business from closure operators on power set lattices to closure operators on uniquely complemented lattices. (I'm pretty sure by the way that most of that cornucopia page by Mathematrucker is about the classical Kuratowski closure-complement problem for topological spaces, although it's been a few years since I've looked at it [and I don't find it easy to navigate that page].)

view this post on Zulip Todd Trimble (May 29 2024 at 10:54):

Unique complementation is an interesting but IIRC a fairly strong condition to put on a lattice. In a distributive lattice $D$, a complement of an element is unique when it exists, and if every element in $D$ is complemented, then $D$ is a Boolean algebra. IIRC, a uniquely complemented modular lattice must be distributive, and therefore a Boolean algebra as well, and there are other results of this type. In fact for a long while (I'm looking at this to refresh my memory), it was thought that uniquely complemented lattices had to be distributive. Dilworth, whom you mentioned, produced the first examples of uniquely complemented nondistributive lattices, and, impressively, showed in 1945 that every lattice can be embedded in a uniquely complemented one; yet to this day apparently not a great deal is known about these critters.

view this post on Zulip Todd Trimble (May 29 2024 at 10:54):

Back here you wrote

Eric M Downes said:

(I don't think submonoid lattices are in general uniquely complemented though, so either I am missing something or the objects in the poset must live in some kind of formal completion of the monoid theory? I wonder if those objects have any meaning.)

I think you were misunderstanding something when you wrote that. The ambient lattice I was discussing when you raised this question (originally at the Basic Topology thread) was still a power set lattice, $P(A^\ast)$. The closure operator in question takes a subset to the smallest submonoid of $A^\ast$ containing it, or the underlying subset of that smallest submonoid, to say it super-pedantically. Now it's true that the lattice of fixed points of that closure operator is the submonoid lattice, but that's not the thing the closure operator was acting on to begin with (the same closure operator restricted there would just be the identity!).

view this post on Zulip Todd Trimble (May 29 2024 at 10:56):

Possibly a similar misunderstanding persists where you write

For a friendly and concrete kind of non-distributivity and lack of unique complements, one might look at the kuratowski problem for the submonoid lattice of a non-freely-generated monoid of some given rank.

because there I wonder "what Kuratowski problem?" because complements of elements tend not to exist at all in submonoid lattices, never mind the fact that even when they do they are not uniquely defined. What is the Kuratowski closure-complement problem supposed to be in that context?

view this post on Zulip Todd Trimble (May 29 2024 at 10:57):

Same question would apply to "If we could establish that for say a Heyting algebra, the number was still 14". Elements in Heyting algebras tend not to have complements (they have negations, of course, but those are usually not complements). But while we're on that topic, I'll mention something I find kind of charming:

view this post on Zulip Todd Trimble (May 29 2024 at 10:57):

The left adjoint (to the inclusion $U$ of Boolean algebras into Heyting algebras) takes a Heyting algebra $H$ to the poset of its regular elements, i.e., $x \in H$ such that $\neg\neg x = x$. It turns out this poset is a Boolean algebra in its own right. I'll say it even better. Double negation $\neg\neg$ is a closure operator on the poset $H$ (it even preserves conjunction!), in particular it's idempotent, and when we consider the splitting of this idempotent,

$$H \overset{r}{\to} \mathrm{Reg}(H) \hookrightarrow H,$$

the map $r: H \to \mathrm{Reg}(H)$, which sends $x$ to $\neg\neg x$, is a Heyting algebra map and gives the unit of the adjunction $\mathrm{Reg} \dashv U$.

I find this collection of facts sort of amazing, and they are closely connected with the double negation translation.

view this post on Zulip Todd Trimble (May 29 2024 at 10:57):

The right adjoint takes a Heyting algebra to the subposet of its complemented elements. This is a Heyting subalgebra as it turns out, and this Heyting subalgebra is a Boolean algebra, and the inclusion

$$\mathrm{Comp}(H) \hookrightarrow H$$

is the counit of the adjunction $U \dashv \mathrm{Comp}$.

(Insert standard joke that the left adjoint is the free functor, but the right adjoint is the fascistic functor: the murderous regime that kills off any non-complemented elements, leaving only the complemented ones behind.)

view this post on Zulip Eric M Downes (May 29 2024 at 12:18):

Todd Trimble said:

ring endomorphism $x \mapsto 1-x$ in a generic ring

That's not a ring endomorphism! :-)

Fair enough. Do you agree $x \mapsto 1-x$ is a ring isomorphism on the same underlying set?

$$\mathcal{R} = (R, *, +, 0_R, 1_R) \mapsto \bar{\mathcal{R}} = (R, \odot, \oplus, 1_R, 0_R)$$
This is what I meant.

view this post on Zulip Nathan Corbyn (May 29 2024 at 12:21):

It determines an automorphism of the set $R$, but I can't see how you could define the operations to make this a homomorphism in general.

view this post on Zulip Eric M Downes (May 29 2024 at 12:24):

Todd Trimble said:

Possibly a similar misunderstanding persists where you write

For a friendly and concrete kind of non-distributivity and lack of unique complements, one might look at the kuratowski problem for the submonoid lattice of a non-freely-generated monoid of some given rank.

because there I wonder "what Kuratowski problem?" because complements of elements tend not to exist at all in submonoid lattices, never mind the fact that even when they do they are not uniquely defined. What is the Kuratowski closure-complement problem supposed to be in that context?

Yes, that's what I'm talking about. A generic submonoid lattice does not have unique complements. But any lattice can be extended to a lattice that does. That's due to Dilworth, and there are more recent results showing that you can even preserve existing unique complements when they do exist.

view this post on Zulip Todd Trimble (May 29 2024 at 12:25):

Let me denote $x \mapsto 1-x$ by $\phi$. If $\odot$ means by definition the operation such that $\phi(x \cdot y) = \phi(x) \odot \phi(y)$, i.e., define $x \odot y = \phi(\phi^{-1}(x) \cdot \phi^{-1}(y))$ (so $x \odot y = x + y - xy$ I believe), and similarly $x \oplus y = \phi(\phi^{-1}(x) + \phi^{-1}(y))$, then of course I agree. (Here $\phi$ is an involution so I could have written that more simply, but I'm making a more general point.)

view this post on Zulip Eric M Downes (May 29 2024 at 12:28):

Nathan Corbyn said:

It determines an automorphism of the set RR, but I can't see how you could define the operations to make this a homomorphism in general

I claim this suffices

$\forall a, b \in R$ let
$a \odot b := a + b - ab$
$a \oplus b := a + b - 1$

The additive identity in $R$ becomes the multiplicative identity in $\bar{R}$.
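This claim can be machine-checked in a small ring. A minimal Python sketch (editorial; $\mathbb{Z}/7$ is an arbitrary choice of test ring) verifies that $\phi(x) = 1 - x$ carries $+$ and $\cdot$ to $\oplus$ and $\odot$ and swaps the two identities:

```python
# Work in Z/7 with the transported operations a⊙b = a+b-ab, a⊕b = a+b-1.
n = 7
R = range(n)
odot = lambda a, b: (a + b - a * b) % n
oplus = lambda a, b: (a + b - 1) % n
phi = lambda x: (1 - x) % n          # the map x |-> 1 - x

for x in R:
    assert odot(x, 0) == x           # 0 is the ⊙-identity
    assert oplus(x, 1) == x          # 1 is the ⊕-identity
    for y in R:
        # phi is a homomorphism for both operations
        assert phi((x + y) % n) == oplus(phi(x), phi(y))
        assert phi((x * y) % n) == odot(phi(x), phi(y))
print("phi transports the ring structure as claimed")
```

Since $\phi$ is a bijection (indeed an involution), passing these checks makes it an isomorphism onto the transported structure.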

view this post on Zulip Todd Trimble (May 29 2024 at 12:34):

Yes, that's what I'm talking about. A generic submonoid lattice does not have unique complements. But any lattice can be extended to a lattice that does. Due to Dilworth and there are more recent results that show you can even preserve existing unique complements when they do exist.

(I'm clipping messages because of this ridiculous character limit imposed by Zulip.) So just to be super clear, you mean to consider the closure-complement problem not on a submonoid lattice say [although as I tried to explain, I don't know what closure operator you're considering on that lattice], but on some uniquely complemented extension?

Immediately this raises other questions. Does Dilworth just show the existence of a uniquely complemented extension? Or does he do this in some sort of controlled functorial way? Is there some sort of universal way that he is discussing? Does the closure operator on the original lattice extend to a closure operator on the extension?

view this post on Zulip Eric M Downes (May 29 2024 at 12:54):

He provides a constructive proof, albeit with painful Fraktur fonts, and there are more recent results that provide much nicer constructions, that preserve existing complements, which I will endeavor to find for you from my disorganized bibliography. Your patience is appreciated.

Regarding closure. I expect there is probably a natural extension, if we know what category the lattice is from, etc. But we don't need to choose a specific operator to get possibly interesting results.

Because extending the closure c:LLc:L\to L is not hard; we don't need a specific closure for this problem, though we could adopt one. I don't know what the galaxy-brain natural extension is, but how about the "dumbfuck unnatural extension"? :)

$$\bar{c}: \overline{L} \to \overline{L}; \quad \bar{c}(x) = \begin{cases} c(x) & x \in L \\ x & x \not\in L \end{cases}$$

There are others, possibly dumber, possibly smarter. It is possible there is one that would result in the largest number of Kuratowski elements, and others that would result in a smaller number. That is the kind of thing one would expect to learn by studying the problem. "Studying an extension which is legal but arbitrary is a waste of time" is certainly an opinion one could have, and one might even be right, but that is quite different from "there are no legal extensions".

view this post on Zulip Eric M Downes (May 29 2024 at 13:03):

(Apologies if swearing is a no-no here... in stat physics where I grew up, "dumbfuck" was a technical term meaning "choose the simplest thing you can that's easy to simulate / calculate that does the job". I accept that is quite inelegant. I certainly don't mean anyone here (other than possibly myself!) is dumb, and I will happily adopt a different term if anyone is bothered.)

view this post on Zulip Todd Trimble (May 29 2024 at 13:09):

Quick response: I don't mind "dumbfuck", and certainly didn't think it meant anything personal. Seems like a handy term for certain contexts.

view this post on Zulip Todd Trimble (May 29 2024 at 13:10):

But just instinctively, I have a hard time believing the dumbfuck extension you propose is going to work. :-)

view this post on Zulip Todd Trimble (May 29 2024 at 13:12):

For example, I can imagine maybe that some element $y \in \overline{L}$ sits between $x$ and $c(x)$, and this would seem to break monotonicity.

view this post on Zulip Todd Trimble (May 29 2024 at 13:14):

Just don't know right now. I think I'd like to understand better what Dilworth does (which sounds interesting, by the way) before heading into wild and woolly discussions.

view this post on Zulip Todd Trimble (May 29 2024 at 13:16):

Also by the way, I think a lot of lattice theory is cool, and wish I knew more. (Rota tells a story about some professor at MIT who stopped him in the hall and demanded, "Admit it! All lattice theory is trivial!" Wonder who that dumbfuck was?!)

view this post on Zulip Morgan Rogers (he/him) (May 29 2024 at 13:21):

Eric M Downes said:

I will happily adopt a different term if anyone is bothered

I had a lecturer who would describe an 'obvious' solution as "a/the simple-minded thing". Since what's obvious/simple/dumb is so subjective, I think the most neutral description you could use is "a first guess"?

view this post on Zulip Eric M Downes (May 29 2024 at 13:41):

Great! See you've already knocked one possible extension down. This is probably a better one

$$\bar{c}(x) = \begin{cases} c(x) & x \in L \\ x \vee \{c(y);~ L \ni y \leq x\} & x \not\in L \end{cases}$$

I agree though, first I should work on the smarter proof that works on the Kleene star, and once I find the newer paper, we can open up a new topic for understanding the unique-complementation completion (what a mouthful) for anyone interested. You aren't the only one who would like to learn more about lattice theory, I'm also in that bucket. :)

(And I don't know who the interlocutor was... But Gian-Carlo Rota was such an amazing human and mathematician, he was with us for too short a time. I love reading his papers.)

view this post on Zulip Eric M Downes (May 29 2024 at 13:54):

Also, for the next hour, shameless plug for our seminar, I'll be listening to JS Pacaud Lemay talk about differential categories.
https://us06web.zoom.us/j/82162005589?pwd=GHW8upy3LyerjxQvoGMYV91nVTqFHb.1

view this post on Zulip Todd Trimble (May 29 2024 at 13:54):

There's too much to learn of course, and the older you get, the more you feel like you'd better budget your time wisely ("you" meaning "me", perhaps), so I can't promise I'd spend a lot of time on unique complementation questions, alas. (The Kuratowski closure-complementation problem is for me a semi-amusing diversion, not at all central to my interests.)

Yeah, Rota was great and wrote beautifully. John Baez is familiar with some of my fuller opinions about the man, but I won't rehash them here.

view this post on Zulip John Baez (May 29 2024 at 13:58):

I took 2 philosophy courses from Rota, but no math courses. He was a great guy. I'll use this opportunity to tell one of my favorite Rota stories. Once he was at a social gathering where the host only offered decaf coffee. Coming from a traditional Italian background, this was sacrilege to Rota. So, he went around the room offering people caffeine pills to dissolve in their coffee.

But this hints at a darker fact: he was reliant on these pills. I don't know if he went as far as Erdos, who used amphetamines. But if mathematicians consider themselves "machines for turning coffee into theorems", it's not surprising that some get carried away and go for stronger stuff.

view this post on Zulip John Baez (May 29 2024 at 14:00):

Anyway, Rota enjoyed nothing better than a good story or joke, so you have to take everything he ever wrote with a grain of salt. He reminds me a bit of Wittgenstein's remark that you could write a good philosophy book that consisted of nothing but jokes.

view this post on Zulip Oscar Cunningham (May 29 2024 at 14:22):

The thing about Erdős and amphetamines is often exaggerated. I think people get the impression that he found a supply of them because he thought they would make him better at mathematics. But in fact they were prescribed to him for depression following his mother's death. This was when he was already 58 years old; after he had done most of the mathematics he's famous for. He did find that he couldn't do mathematics without them, but that seems like a standard effect of depression.

He did drink a lot of coffee though.

view this post on Zulip Eric M Downes (May 29 2024 at 16:26):

Ok! Here is a very readable review of unique complementation by George Grätzer from the AMS Notices, which he uploaded to ResearchGate, mentioning Dilworth's original construction and some newer theorems, among other things.

At some point I would like to summarize/present the results he mentions and combine them with some notes from Vaughan Pratt on residuated lattices, but until then, the curious/impatient can look at the above.

view this post on Zulip Todd Trimble (May 29 2024 at 19:40):

Okay, very good; thank you, Eric!

view this post on Zulip Todd Trimble (May 29 2024 at 19:40):

There are some aspects of Grätzer's account that I find confusing, in the spirit of questions like "wait, isn't that trivial?" or "wait, didn't Whitman already do that before Dilworth?". I also wish Grätzer would just come out and state the relevant universal properties where they hold. But if I'm reading him right, the relevant constructions should indeed be functorial.

view this post on Zulip Todd Trimble (May 29 2024 at 19:41):

It would be nice if all the relevant constructions were appropriately 2-functorial. And I think they should be, although it's possible I'm making a slip somewhere.

view this post on Zulip Todd Trimble (May 29 2024 at 19:41):

The category of posets is a 2-category: 0-cells are posets, 1-cells are monotone maps, 2-cells are pointwise inequalities fgf \leq g between maps. The same can be said for lattices: the join and meet operations are separately and also jointly monotone. The same can be said for uniquely complemented lattices, if these are defined as lattices satisfying the property that every element has a unique complement (notice that lattice maps automatically take complements to complements; you don't have to posit unique complementation as an extra operation that needs to be preserved, which in any case would be an antitone, not monotone, operation on the 2-cells).
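A concrete illustration of that parenthetical remark, in a small Python sketch of my own (the powerset lattices, the function `f`, and all names below are my choices, not from the thread): in a powerset lattice every element has exactly one complement, and a map that preserves meets and joins (and 0 and 1) automatically carries complements to complements.

```python
from itertools import combinations

# Powerset lattices P(X) and P(Y), ordered by inclusion (a toy example).
X, Y = frozenset({0, 1, 2}), frozenset({0, 1})
PX = [frozenset(s) for n in range(len(X) + 1) for s in combinations(sorted(X), n)]
PY = [frozenset(s) for n in range(len(Y) + 1) for s in combinations(sorted(Y), n)]

def complements(a, top, elems):
    """All lattice complements of a: the b with a meet b = bottom, a join b = top."""
    return [b for b in elems if not (a & b) and (a | b) == top]

# Powerset lattices are uniquely complemented: exactly one complement each.
assert all(len(complements(a, X, PX)) == 1 for a in PX)

# A lattice homomorphism P(X) -> P(Y): inverse image along a function f : Y -> X.
f = {0: 0, 1: 2}  # an arbitrary choice of f

def h(s):
    return frozenset(y for y in Y if f[y] in s)

# h preserves meets and joins (and sends the empty set to the empty set, X to Y) ...
assert all(h(a & b) == h(a) & h(b) and h(a | b) == h(a) | h(b)
           for a in PX for b in PX)

# ... so it automatically carries the complement of a to the complement of h(a).
for a in PX:
    [ca] = complements(a, X, PX)
    assert h(ca) in complements(h(a), Y, PY)
```

Nothing here is specific to powersets: the last loop only uses that complements are characterized by lattice equations, which is the point of the remark above.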

view this post on Zulip Todd Trimble (May 29 2024 at 19:41):

Anyway, if we start with a closure operator on a poset, in other words a monad in the 2-category of posets, the hoped-for 2-functor

F:PosUCLF: \mathsf{Pos} \to \mathsf{UCL}

from posets to the 2-category of uniquely complemented lattices would preserve monad structures, since the notion of monad is definable in the language of 2-categories. So we would get in this way a closure operator on a uniquely complemented lattice. Warning, though: in this construction, the 14-problem collapses, because if C:LLC: L \to L is a closure operator internal to the 2-category of uniquely complemented lattices, then automatically C¬=¬CC \neg = \neg C because of the preservation of complementation property just mentioned. So this path sort of peters out. We can extend closure operators, all right, by this method, but not in any way that's all that interesting for the Kuratowski question. :-(
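The collapse can also be checked symbolically (a sketch of mine, not from the thread): writing N for the complement, reduce words over {C, N} using C² = C, N² = 1, and the commutation NC → CN forced by preservation of complements. Only four operators survive, in contrast to the fourteen of the general Kuratowski problem.

```python
from itertools import product

def reduce_word(word):
    """Reduce a word in C (closure) and N (complement) using
    CC -> C, NN -> (empty word), and the commutation NC -> CN
    that holds when C preserves complements."""
    w = word
    while True:
        new = w.replace("NC", "CN").replace("CC", "C").replace("NN", "")
        if new == w:
            return w
        w = new

# Enumerate all words up to length 6 and collect their normal forms.
normal_forms = {reduce_word("".join(p))
                for n in range(7) for p in product("CN", repeat=n)}
print(sorted(normal_forms))  # -> ['', 'C', 'CN', 'N']
```

Each reduction step either shortens the word or moves an N to the right past a C, so the loop terminates, and every normal form has the shape CᵃNᵇ with a, b ≤ 1: the four operators 1, C, ¬, C¬.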

view this post on Zulip Todd Trimble (May 29 2024 at 19:41):

Just to close out this segment of the discussion, I want to construct the free lattice on a poset by abstract nonsense, using a coend formula. I believe Whitman was the first to construct the free lattice, in 1941, and what he did was much more explicit and syntactic (and useful for calculations) than anything I'll do. One thing that bugs me a little about Grätzer's account of Dilworth's "bombshell" 1945 paper, on page 699, is that he doesn't mention Whitman's earlier results on the free lattice.

view this post on Zulip Todd Trimble (May 29 2024 at 19:42):

So here goes. Following Lawvere, we know how to construct abstractly the free lattice on an nn-element set. The monad is given by the formula

UF[n]=Nat(Un,U)UF[n] = \mathrm{Nat}(U^n, U)

where U:LatSetU: \mathsf{Lat} \to \mathsf{Set} is the forgetful functor and UnU^n is its n-th cartesian power. There's a functor

FinSetSet:nUF[n]\mathsf{FinSet} \to \mathsf{Set}: n \mapsto UF[n]

and given a poset PP, there's also a functor

FinSetopPos:nPn\mathsf{FinSet}^{op} \to \mathsf{Pos}: n \mapsto P^n

and now just take the tensor product of the last two named functors:

UFFinSetP=n:FinSetUF[n]Pn.UF \otimes_{\mathsf{FinSet}} P^\bullet = \int^{n: \mathsf{FinSet}} UF[n] \cdot P^n.
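As a sanity check (my own remark): when P is a discrete poset with underlying set S, each power P^n is the discrete poset on S^n, and the coend becomes the usual left Kan extension formula for the finitary functor UF, so the construction returns the free lattice on the set S:

```latex
UF \otimes_{\mathsf{FinSet}} P^\bullet
  \;=\; \int^{n : \mathsf{FinSet}} UF[n] \cdot S^n
  \;\cong\; UF(S),
```

the last isomorphism holding because the theory of lattices is finitary, so UF is the left Kan extension of its restriction to FinSet.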

view this post on Zulip Todd Trimble (May 29 2024 at 19:42):

The crucial fact needed to ensure that this gives the free lattice on PP is that the functor PPnP \mapsto P^n preserves reflexive coequalizers, for each nn, using the fact that (1) P×QP \times Q preserves reflexive coequalizers in the separate arguments P,QP, Q, because Pos\mathsf{Pos} is cartesian closed, and (2) if a functor in two variables preserves reflexive coequalizers separately, then it preserves them jointly. I suppose this type of thing has been known since the 60's or so, but I can point to some relevant Cafe discussion here and in the comments, with a handy literature reference given by Richard Garner here.

view this post on Zulip Morgan Rogers (he/him) (May 30 2024 at 08:52):

Sounds related to Lemma 0.17 of Johnstone's Topos Theory, which shows that a "product" of reflexive coequalizer diagrams has a diagonal which is a (reflexive) coequalizer diagram; if it's the case that any reflexive coequalizer in a product category can be presented as a diagonal in that way, then it proves that result.

view this post on Zulip Eric M Downes (May 30 2024 at 09:03):

Nice! I definitely want to understand that better!

view this post on Zulip Todd Trimble (May 30 2024 at 10:35):

Morgan: yes, indeed, and that's probably where I first learned it. (Eric: yes, that book is sometimes called the "Baby Elephant".) It can be seen directly using the fact that the walking reflexive fork DD is a sifted category, meaning that the diagonal functor Δ:DD×D\Delta: D \to D \times D is final. So that if you have two such reflexive forks F,GF, G in Pos\mathsf{Pos}, i.e., functors F,G:DPosF, G: D \to \mathsf{Pos}, then

[colimdDF(d)]×[colimdDG(d)]colim(d,d)D×DF(d)×G(d)colimdDF(d)×G(d)[\mathrm{colim}_{d \in D} F(d)] \times [\mathrm{colim}_{d' \in D} G(d')] \cong \mathrm{colim}_{(d, d') \in D \times D} F(d) \times G(d') \cong \mathrm{colim}_{d \in D} F(d) \times G(d)

where the first isomorphism uses the fact that the product preserves reflexive coequalizers in separate variables, and the second uses the fact that Δ:DD×D\Delta: D \to D \times D is final. This shows that the coequalizer of the product diagram F×G:DPosF \times G: D \to \mathsf{Pos} is the product of the coequalizers.

view this post on Zulip Todd Trimble (May 30 2024 at 10:39):

Eric, possibly these notes, which I wrote up for John Baez some time back for a paper he and Nina Otter were writing, would be useful in understanding this result better. It fits into a Lawvere theories = cartesian operads story.

view this post on Zulip Todd Trimble (May 30 2024 at 10:50):

Oscar Cunningham said:

The thing about Erdős and amphetamines is often exaggerated. I think people get the impression that he found a supply of them because he thought they would make him better at mathematics. But in fact they were prescribed to him for depression following his mother's death. This was when he was already 58 years old; after he had done most of the mathematics he's famous for. He did find that he couldn't do mathematics without them, but that seems like a standard effect of depression.

He did drink a lot of coffee though.

That impression people get was certainly the one promoted by Paul Hoffman's book. I think this is the first time I've heard that he got started on them through a prescription for depression.

view this post on Zulip Todd Trimble (May 31 2024 at 11:51):

I was a little hasty above where I claimed to produce a left adjoint to the forgetful functor from lattices to posets. What I actually did is produce the free functor PosLat(Pos)\mathsf{Pos} \to \mathrm{Lat}(\mathsf{Pos}) to the category of internal lattice objects in the category of posets. Probably I had a brain glitch where I was conflating the category of such internal lattices with the category of lattices itself. (I may come back in a while to fix this.)

view this post on Zulip Eric M Downes (Jun 03 2024 at 14:41):

Todd Trimble said:

Warning, though: in this construction, the 14-problem collapses, because if C:LLC: L \to L is a closure operator internal to the 2-category of uniquely complemented lattices, then automatically C¬=¬CC \neg = \neg C because of the preservation of complementation property just mentioned. So this path sort of peters out. We can extend closure operators, all right, by this method, but not in any way that's all that interesting for the Kuratowski question. :-(

On the flip side, this isn't all bad news. It means the general proof for Kleene stars applies to these extended lattices without modification. One could then ask how many of the Kuratowski elements lie in the original poset, etc.
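Questions like this can be explored by brute force on a finite space. Everything below (the space, the topology, the helper names) is an arbitrary sketch of mine, not a 14-achieving example; the point is just to compute the orbit of a subset under closure and complement and confirm Kuratowski's bound.

```python
from itertools import combinations

# A small finite topological space (an arbitrary example of mine).
X = frozenset({0, 1, 2, 3})
opens = {frozenset(), frozenset({0}), frozenset({0, 1}), frozenset({2}),
         frozenset({0, 2}), frozenset({0, 1, 2}), X}
closed_sets = {X - u for u in opens}

def closure(s):
    """Topological closure: intersection of all closed sets containing s."""
    acc = X
    for c in closed_sets:
        if s <= c:
            acc &= c
    return acc

def orbit(s):
    """All subsets reachable from s by repeatedly applying closure and complement."""
    seen, todo = {s}, [s]
    while todo:
        t = todo.pop()
        for u in (closure(t), X - t):
            if u not in seen:
                seen.add(u)
                todo.append(u)
    return seen

# Kuratowski's theorem: every orbit contains at most 14 distinct sets.
sizes = {len(orbit(frozenset(s)))
         for n in range(len(X) + 1) for s in combinations(sorted(X), n)}
assert max(sizes) <= 14
```

Swapping in other finite topologies (or restricting `orbit` to words lying in a chosen sub-poset) gives a cheap way to experiment with how many Kuratowski elements land where.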

Green's relations for a monoid define a few interrelated meet-semilattices, which are not uniquely complemented and are distinct from the submonoid lattice; a,bM;aLb    cM;ac=b\forall a,b \in M; a\leq_L b\iff \exists c\in M; ac=b. (The posets say things about the ideals of the monoid.) We can always adjoin a 0 (= top) to MM, obtaining a complete lattice, and we can extend this lattice as Grätzer specifies. There might be some kind of interesting interaction between the monoid variety and Kuratowski closure.
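This preorder is easy to compute for a tiny concrete monoid. A sketch of mine for the full transformation monoid on two points, following the a·c = b convention as written in the message (note that textbook ≤_L is usually defined via left multiplication, a = cb, so strictly this is an R-style preorder):

```python
from itertools import product

# The full transformation monoid on {0, 1}: maps encoded as (f(0), f(1)).
M = [(0, 1), (1, 0), (0, 0), (1, 1)]  # identity, swap, const 0, const 1

def mul(a, c):
    """Monoid multiplication a.c = "apply a, then c" (one of the two conventions)."""
    return tuple(c[a[x]] for x in range(2))

# The preorder from the message: a <= b iff there is c in M with a.c = b.
def leq(a, b):
    return any(mul(a, c) == b for c in M)

# It really is a preorder: reflexive (take c = identity) and transitive.
assert all(leq(a, a) for a in M)
assert all(leq(a, c) for a, b, c in product(M, repeat=3)
           if leq(a, b) and leq(b, c))

# The associated equivalence classes (mutually <= elements).
classes = {frozenset(b for b in M if leq(a, b) and leq(b, a)) for a in M}
print(len(classes))  # -> 2: {identity, swap} and {const 0, const 1}
```

The quotient of the preorder by these classes is the poset whose ideals the message alludes to; adjoining a zero as described then completes it to a lattice.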

One might also play around with re-complementing the MacNeille completion, but my ideas there are much more vague.