You're reading the public-facing archive of the Category Theory Zulip server.
What sort of additional equipment is necessary on top of a monoidal structure in order to be able to talk about "idempotent monoid objects"?
probably something along these lines https://ncatlab.org/nlab/show/monoidal+category+with+diagonals
what you want to say is that the identity is equal to the squaring map, so you need to be able to define a squaring map
hence you need to be able to "duplicate elements of the monoid", in internal phrasing
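In symbols: given a diagonal Δ_M : M → M ⊗ M, the squaring map of a monoid (M, μ, η) and the intended idempotence condition read:

```latex
% squaring map of a monoid (M, \mu, \eta), using a diagonal \Delta_M:
\mathrm{sq} \;=\; \mu \circ \Delta_M : M \to M
% "idempotent monoid object": the squaring map is the identity
\mu \circ \Delta_M \;=\; \mathrm{id}_M
```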
I was looking at that, but I couldn't figure out when we have such a "monoidal category with diagonals" besides when the category is fully cartesian
it does say that the monoidal product will be cartesian if it has both diagonals and deletion
The nLab page gives an example: pointed sets with the smash product.
the monoidal product on Rel w/ the graph of Set's diagonal might qualify for this—lemme think about it
What is "idempotent monoid object" supposed to mean? Are idempotent monads supposed to be an example?
ooh
@Nathanael Arkor isn't that category still cartesian?
i just assumed something which would specialize to the ordinary notion of an idempotent monoid in Set
it isnt cartesian
no, that's what I'm trying to avoid. I don't want to be specific to Set, or even to cartesian monoidal categories, because I suspect I have an example of this, but the category is not cartesian monoidal
I know the point of the question is that you are looking for a definition, but it would be nice to have some more clues about what it is you are trying to define :upside_down:
you want to avoid your definition producing ordinary idempotent monoids when you interpret it in Set? :mischievous:
For example: at least one example of a category in which you know how to define the object, and what the object is supposed to be there.
Since we apparently have two entirely different interpretations already.
Or at least, I think they're entirely different.
mm, yeah i think they are different
idempotence in reference to a Set monoid is talking about the multiplication, whereas idempotence in reference to a monad is talking about application of the monad
Sorry, let me start at the beginning. I have a category of functors which are monoids with respect to the Day convolution monoidal structure on [Set, Set]. Moreover, the monoidal structure on Set which induces the Day convolution monoidal structure on the functor category admits a "diagonal map" just as you mentioned.
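For reference, the general Day convolution formula (standard, with ⊗ the chosen tensor on Set):

```latex
(F \otimes_{\mathrm{Day}} G)(s) \;=\; \int^{a,\,b} F(a) \times G(b) \times \mathbf{Set}(a \otimes b,\; s)
```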
is this some kind of infinitary generalization of species? :thinking:
oh wait ur not using the groupoid core either, thats fairly different
let's call that monoidal structure ⊗, and its diagonal map Δ : A → A ⊗ A
which monoidal structure on Set are you using?
just the cartesian one?
no, it's a different one besides the cartesian one
(was thinking that cocartesian seemed plausible in this context)
it's actually literally the coproduct of the cartesian and cocartesian structures
o.O
coproduct in which category?
in Set
the cartesian and cocartesian structures aren't sets, so how can you take their coproduct in set...?
you mean pointwise, or
Yes, the functor category admits "pointwise" coproducts
ah
wasnt sure if maybe there was some fancy category of monoidal structures that might have different coproducts or sth
so A ⊗ B is A × B + A + B, then...
interesting... is this supposed to be something like "one of A and B, but maybe both"
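In Haskell terms this tensor is the These type from the `these` package ("one of A and B, but maybe both"); a minimal self-contained sketch, with the eliminator showing the three-maps reading:

```haskell
-- A ⊗ B = A × B + A + B: "one of A and B, but maybe both".
-- Same shape as Data.These.These from the `these` package.
data These a b = This a | That b | These a b
  deriving (Eq, Show)

-- A map out of These a b is exactly a triple of maps
-- a -> c, b -> c, (a, b) -> c (this mirrors the `these` eliminator).
these :: (a -> c) -> (b -> c) -> (a -> b -> c) -> These a b -> c
these f _ _ (This a)    = f a
these _ g _ (That b)    = g b
these _ _ h (These a b) = h a b
```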
Right. And now we're talking about that tensor on Set, lifted via Day convolution to the endofunctor category on Set
right, exactly
in other words, we're talking about the category of lax monoidal functors that cohere the monoidal structure of ⊗ on Set with the monoidal structure of the product on Set
wait what
where did that "in other words" come from? i didnt notice anything about lax monoidal functors so far
A monoid with respect to Day convolution is the same as a lax monoidal functor
ooooooh right
ok, hold on a moment
lemme think about what day convolution looks like in this case
so a representative of an element (F ⊗ G)(S) is elements F(A), G(B), and functions A × B → S, A → S, B → S...
yup, exactly
:thinking:
or you could just say These A B → S
yeah but i wanted to expand it
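A sketch of this in Haskell, assuming we encode Day convolution with an existential (DayT is a hypothetical local name; it is the same idea as Data.Functor.Day from kan-extensions, with the pair tensor replaced by These):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

data These a b = This a | That b | These a b

-- Day convolution over ⊗ = These: an element of (F ⊗_Day G)(s) is
-- some F(a), some G(b), and a map These a b -> s, i.e. exactly the
-- expanded triple of maps (a, b) -> s, a -> s, b -> s.
data DayT f g s = forall a b. DayT (f a) (g b) (These a b -> s)

instance Functor (DayT f g) where
  fmap h (DayT fa gb k) = DayT fa gb (h . k)
```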
is this for haskell purposes by any chance :upside_down:
so the monoid here takes a little getting used to, but now what I want to talk about is "idempotent" monoids. because we have an obvious diagonal map
i smell some kind of zipping
busted :)
i'm formalizing "alignable" functors
hmm, well, i was already suspicious when i saw your ⊗
but this clinches it
right, ⊗ is These
I went through the laws for the Semialign class and some of them I was able to dismiss as free theorems; most of the others follow from having a symmetric monoidal functor, but the "idempotent" law is what I'm trying to unravel
/me looks up Semialign
zip :: f a -> f b -> (These a b -> c) -> f c
HA
i typed that before actually opening the haddock
ok im not quite seeing how this corresponds to a monoid object though
you mean you don't get how a lax monoidal functor corresponds to a monoid in some structure?
no wait
sorry i think i had some things backwards in my head
lemme chew on it for another minute
align :: f a -> f b -> f (These a b)
this is the fundamental operation of the Semialign class
we can make the laxity more obvious if we write it as align :: (f a, f b) -> f (These a b)
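Concretely, for lists the laxity is padding zip; a local sketch (alignList is a hypothetical stand-in, not the actual semialign package code):

```haskell
data These a b = This a | That b | These a b
  deriving (Eq, Show)

-- align for lists: zip while both run, then keep the leftover tail.
-- Uncurried, this is the laxity ([a], [b]) -> [These a b].
alignList :: [a] -> [b] -> [These a b]
alignList (x:xs) (y:ys) = These x y : alignList xs ys
alignList xs     []     = map This xs
alignList []     ys     = map That ys
```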
no sorry yeah i got it
(i wasnt thinking of it in lax functor terms)
ok, join align
ha, obnoxious phrasing
hmm, interesting
i think this is align . dup, where dup is the diagonal map for the cartesian monoidal structure and join These is the diagonal map for the other monoidal structure
yeah
but
um, one sec
okay, this is interesting... so align isn't the monoid multiplication, alignWith is
yes
align is the laxity of a monoidal functor, which is equivalent to a monoid wrt Day convolution. alignWith is the append operation of that monoid
hence (align | alignWith) as the minimal complete definition
i bet we have duplication for the day convolution
have you checked that already?
I haven't
I did think about it a bit, but it didn't seem to work
oh really
hm, ok, so we have x in F(A) and we need to give an element of (F ⊗ F)(A)
which we can do by giving some F(X), F(Y), and These X Y → A
I have a duplication for the These monoidal structure, but that's the opposite of what it seems I need to obtain a natural transformation from F to its Day convolution with itself
yea
i see what u mean
hmmm
im tempted to say what if we just went with F(A), F(A), and These A A → A by being left-biased
but that gives me a bad vibe
maybe i'm approaching this the wrong way in trying to somehow involve the Day convolution monoid in my search for an "idempotent monoid"
maybe what we have instead is that the duplication is preserved by the functor
uh, what's that thing with bimonoids
no idea
the comultiplication is a monoid homomorphism and the multiplication is a comonoid homomorphism
or something like that
oh
frobenius algebra?
i'm not sure if frobenius algebra is the same thing, but i remember seeing that name somewhere. "bimonoid" is what I read about
oh looks like they arent the same
i just saw "both monoid & comonoid structures" & jumped to conclusions :-)
does join These :: a -> These a a work as an associative cosemigroup?
yeah
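A quick sanity check of that coassociativity, up to the associator of ⊗ = These (diag, bimapT and assocT are hypothetical local helpers):

```haskell
data These a b = This a | That b | These a b
  deriving (Eq, Show)

diag :: a -> These a a        -- i.e. join These
diag a = These a a

bimapT :: (a -> c) -> (b -> d) -> These a b -> These c d
bimapT f _ (This a)    = This (f a)
bimapT _ g (That b)    = That (g b)
bimapT f g (These a b) = These (f a) (g b)

-- the associator of the These tensor
assocT :: These (These a b) c -> These a (These b c)
assocT (This (This a))       = This a
assocT (This (That b))       = That (This b)
assocT (This (These a b))    = These a (This b)
assocT (That c)              = That (That c)
assocT (These (This a) c)    = These a (That c)
assocT (These (That b) c)    = That (These b c)
assocT (These (These a b) c) = These a (These b c)

-- coassociativity: (id ⊗ diag) . diag  =  assoc . (diag ⊗ id) . diag
coassoc :: Eq a => a -> Bool
coassoc a = bimapT id diag (diag a) == assocT (bimapT diag id (diag a))
```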
A bimonoid is really different from a Frobenius monoid. The most exciting bimonoid axiom is this one:
multiplication is a homomorphism for comultiplication (and vice versa)
The most exciting Frobenius monoid axiom is this one:
comultiplication commutes with left and right multiplication (and vice versa)
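In symbols (σ the symmetry), the two axioms being contrasted:

```latex
% bimonoid: comultiplication is a monoid homomorphism (and vice versa)
\Delta \circ \mu \;=\; (\mu \otimes \mu)\circ(\mathrm{id}\otimes\sigma\otimes\mathrm{id})\circ(\Delta \otimes \Delta)
% Frobenius: comultiplication commutes with left/right multiplication
(\mathrm{id}\otimes\mu)\circ(\Delta\otimes\mathrm{id}) \;=\; \Delta\circ\mu \;=\; (\mu\otimes\mathrm{id})\circ(\mathrm{id}\otimes\Delta)
```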
and dup :: a -> (a, a) is obviously also one. So maybe uncurry align . dup ≡ fmap (join These) is talking about some kind of comonoid homomorphism
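For lists, that candidate law can be checked directly (alignList and lawHolds are hypothetical local definitions standing in for Semialign's align):

```haskell
data These a b = This a | That b | These a b
  deriving (Eq, Show)

alignList :: [a] -> [b] -> [These a b]
alignList (x:xs) (y:ys) = These x y : alignList xs ys
alignList xs     []     = map This xs
alignList []     ys     = map That ys

-- the "idempotency" law under discussion, specialised to lists:
--   uncurry align . dup  ≡  fmap (join These)
-- where dup a = (a, a) and join These = \a -> These a a
lawHolds :: Eq a => [a] -> Bool
lawHolds xs = alignList xs xs == map (\a -> These a a) xs
```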
hmm, you know what
now im getting distracted by the fact that there is a canonical partial monoid structure on every type for ⊗ = These
one that's useful in separation logic, even :3c
actually, well, the one i've seen in separation logic is with ⊗ = × and uses Maybe instead, but it's basically the same idea
"if you only get one input then thats your result; if you get two equal inputs thats your result; if you get conflicting inputs, no result"
hmmm well maybe that's not as canonical as it could be, i guess you could also go for no result period on two inputs
aha
if we go from x ∈ F(A) to the representative (x, x, id_{A ⊗ A}), then i believe we get a transformation F → F ∘ (-)²
Sorry, I'm not following. Are you talking about join (alignWith id) :: f a -> f (These a a)?
where that's the tensor power i mean
umm i might be, lemme see
no im not
this doesnt require any monoid structure or anything
actually wait wtf am i saying
/me stops and thinks
OH i did miswrite sorry :((
setwise, the type of the family is F(A) → (F ⊗ F)(A ⊗ A)
sorry, do those two symbols stand for different things?
yeah
the first one is day convolution, the second one is These
ok, thanks
so what is the definition of this morphism F(A) → (F ⊗ F)(A ⊗ A)?
so that's almost the type of a duplication map in the functor category, except that it's changing the argument
and we can instead see it as F → F² ∘ (-)², where (-)² is the tensor power, not the literal power
the definition is: for an element x of F(A), take the representative (x, x, id_{A ⊗ A}) of (F ⊗ F)(A ⊗ A)
ok, I understand
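A sketch of that family in Haskell, encoding Day convolution over ⊗ = These with an existential (DayT and dupD are hypothetical local names):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

data These a b = This a | That b | These a b
  deriving (Eq, Show)

-- Day convolution over ⊗ = These
data DayT f g s = forall a b. DayT (f a) (g b) (These a b -> s)

-- the family F(A) -> (F ⊗ F)(A ⊗ A):
-- send x to the representative (x, x, id)
dupD :: f a -> DayT f f (These a a)
dupD x = DayT x x id
```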
:)
so i think the law is that, uh...
ok so we have this thing that is presumably a natural transformation. but we don't have a trivial monoid to get rid of the (-)², instead we have a comonoid on every A
well we don't even have a comonoid do we
we do have a monoid to get rid of the F^2
yeah
we have a cosemigroup, I haven't thought about the unit
most sets don't have a function to the empty set
that half of it doesn't appear in the Semialign class anyway
yeah but the deletion map of a comonoid has to go to the tensor unit
anyway i believe the law is uh.... doing the weird "duplication" in the functor category and then the monoid multiplication is equal to the whiskering of Set's actual duplication
"the whiskering of Set's actual duplication" = fmap (join These)
i think this is basically like... taking the hypothetical "doing duplication in the functor category and then monoid multiplication is the identity" law, removing a term from the hypothetical definition of duplication in the functor category, and moving it to the other side of the equation
instead of putting an inverse to the Set duplication into the functor duplication, we put the Set duplication on the right-hand side
interesting
i gotta go tho :T
good luck, this seems neat
the more i think about this the more the law seems to be a statement about bimonoids. a monoidal functor is so called because it carries every monoid on the source category to a monoid on the target category. i think all this law is saying is that it sends a bimonoid to a bimonoid
@sarahzrf Thanks for all the help and insights! :wave:
@John Baez Do you know if there is a concept like a weakened bimonoid where one structure is only a (co)semigroup? Are these well studied?
Asad Saeeduddin said:
John Baez Do you know if there is a concept like a weakened bimonoid where one structure is only a (co)semigroup? Are these well studied?
The concept certainly exists: it sounds like you're talking about a semigroup in the category of comonoids (in some symmetric monoidal category), which will be the same as a comonoid in the category of semigroups (in that same symmetric monoidal category). Using theorems about internalization one can quickly prove a lot of basic facts about these gadgets. But I've never heard anyone talk about them!