You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Apologies in advance because this question will turn out to be fairly technical.
Lately I have been studying the properties of the distribution monad $D$ defined here quite a lot. I am stuck trying to understand a fairly mysterious claim made in this paper by Bart Jacobs, where at the end of page 7 he claims that "Moreover, $D(X) \otimes D(Y) \cong D(X \times Y)$."
I am having trouble understanding this claim: there must be a mistake somewhere in my line of reasoning, but I can't find where.
Where is the mistake? Is there another monoidal closed structure on the category of convex spaces, different from the cartesian one (which, I believe, is not closed, so I'm even more puzzled, if possible), induced from an oplax monoidal structure on $D$ that I cannot find?
Inspecting the failure of the map $D(X \times Y) \to D(X) \times D(Y)$ to be invertible, one sees that it is surjective, but not injective: "splitting" a distribution on $X \times Y$ into its marginalizations forgets all "correlation" (apologies if I am misusing probability theory lingo!), and as a result we can't reconstruct a distribution on $X \times Y$ given only its marginalizations. It then seems that $D(X) \otimes D(Y)$ receives a "universal bilinear map" in a similar fashion to the tensor product of vector spaces (and maybe still arises as a quotient of $D(D(X) \times D(Y))$ under suitable relations?)... and this is where my speculations became too handwavy, I stopped, and I got lost.
I really have no idea how the map $D(X) \times D(Y) \to D(X \times Y)$ in question is found, let alone the isomorphism that Jacobs claims to exist.
(PS: I hereby invoke my categorical probability gurus @Paolo Perrone , and @Tobias Fritz :-) )
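The point about marginalization forgetting correlation can be checked concretely. Here is a small Python sketch, with finitely-supported distributions as dicts; the example and helper names are mine, not from the thread:

```python
# Two different joint distributions on {0,1} x {0,1} with identical
# marginals: marginalization forgets correlation.

def marginals(joint):
    """Return the two marginal distributions of a joint on X x Y."""
    px, py = {}, {}
    for (x, y), w in joint.items():
        px[x] = px.get(x, 0.0) + w
        py[y] = py.get(y, 0.0) + w
    return px, py

# A perfectly correlated pair of coins vs. two independent fair coins.
correlated  = {(0, 0): 0.5, (1, 1): 0.5}
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

print(marginals(correlated) == marginals(independent))  # True: same marginals
print(correlated == independent)                        # False: different joints
```

Two distinct joints with the same pair of marginals, so the map $D(X \times Y) \to D(X) \times D(Y)$ cannot be injective.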
It looks like you've got all the ingredients to resolve this conundrum, so here's a hint: yes, the two monoidal structures are different! And the analogy with vector spaces is a good way to think about it, since it works just the same for convex spaces (and for algebras of commutative monads on cartesian closed categories quite generally, by results of Kock).
The tensor product has the universal property of turning "biaffine" maps into ordinary affine maps, which are the morphisms of convex spaces. To show $D(X) \otimes D(Y) \cong D(X \times Y)$, it should be possible to prove that the laxator $\nabla \colon D(X) \times D(Y) \to D(X \times Y)$ has the universal property of the universal biaffine map $D(X) \times D(Y) \to D(X) \otimes D(Y)$, based on the universal properties of the free algebras $D(X)$, $D(Y)$, and $D(X \times Y)$.
Let us know if it makes some more sense now.
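For concreteness, the laxator mentioned in the hint can be sketched as the product-distribution map. A minimal illustration, assuming finite distributions as dicts (notation mine):

```python
# A sketch of the laxator nabla : D(X) x D(Y) -> D(X x Y), which sends a
# pair of distributions to their product, i.e. the independent joint.

def laxator(p, q):
    """nabla(p, q)(x, y) = p(x) * q(y)."""
    return {(x, y): wx * wy for x, wx in p.items() for y, wy in q.items()}

p = {'a': 0.25, 'b': 0.75}
q = {0: 0.5, 1: 0.5}
joint = laxator(p, q)
print(joint[('a', 0)])       # 0.125
print(sum(joint.values()))   # 1.0: normalization is preserved
```

Its image consists exactly of the independent joint distributions.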
So, let's see if I understand what's going on: in order to account for "affine bilinearity" I thought one has to consider an equivalence relation that identifies
and maybe something else that mimics ...
I don't know what confuses me; probably these relations already hold in $D(X \times Y)$ as a consequence of it being a free algebra? Or maybe the fact that $\nabla$ is a "stupid" map sending the pair $(p, q)$ to the product distribution, and I don't see how this can have the right universal property: a biaffine map $f \colon D(X) \times D(Y) \to A$ induces an affine map $\bar{f} \colon D(X \times Y) \to A$, probably sending $\omega$ to $\sum_{x,y} \omega(x,y)\, f(\delta_x, \delta_y)$, where $\delta$ denotes the Dirac distribution.
Yep, I think you're on the right track with that final sentence! Note that it's enough to establish a bijection between biaffine maps $D(X) \times D(Y) \to A$ and arbitrary maps $X \times Y \to A$ that is natural in $A$. To simplify this problem further, it's enough to establish a natural bijection between either of these sets of maps and the set of maps $X \times D(Y) \to A$ that are affine in the second argument. And that should be pretty clear.
Concerning the things before that, I'm not sure since I don't understand your triples in angle bracket notation. What do those mean?
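The formula guessed in that final sentence can also be tested numerically. A sketch, using a biaffine $f$ built from a product of expectations (my own example), extended via Dirac distributions:

```python
# Extending a biaffine map f on D(X) x D(Y) to an affine map on D(X x Y)
# via Diracs: fbar(omega) = sum_{x,y} omega(x, y) * f(delta_x, delta_y).

def dirac(x):
    return {x: 1.0}

def product_dist(p, q):
    return {(x, y): wx * wy for x, wx in p.items() for y, wy in q.items()}

g = {'a': 1.0, 'b': 2.0}
h = {0: 3.0, 1: 5.0}

def f(p, q):
    # product of expectations: affine in p for fixed q, and vice versa
    return (sum(w * g[x] for x, w in p.items())
            * sum(w * h[y] for y, w in q.items()))

def fbar(omega):
    # the induced map on joint distributions
    return sum(w * f(dirac(x), dirac(y)) for (x, y), w in omega.items())

p = {'a': 0.25, 'b': 0.75}
q = {0: 0.5, 1: 0.5}
print(fbar(product_dist(p, q)) == f(p, q))  # True
```

On product distributions the extension agrees with $f$, consistent with $\bar{f} \circ \nabla = f$.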
Is there another monoidal closed structure on the category of convex spaces, different from the cartesian one?
One well-known monoidal closed structure on convex spaces, where the internal hom is the convex space of all convex linear maps, is not cartesian but only semicartesian.
Right, and that's the monoidal structure that Fosco has denoted $\otimes$. It's quite intriguing that all the standard properties of the tensor product of vector spaces generalize to algebras of commutative monads, and the case of convex spaces is another instance of that. (The fact that the monoidal structure is semicartesian is because the free convex space monad has the additional property of being affine, meaning $D(1) \cong 1$. That's clearly not true for the vector space monad!)
I'm starting to see all these things through a unified lens, indeed. I never noticed that one can intuitively motivate the machinery of strong monads using vector spaces as an analogue and $\nabla$ as a canonical laxator ;-) so cool.
(OT: since we're here discussing this sub-topic, is there a fancy explanation for the nomenclature "strong monad" as well? What exactly is "strong" in a strong monad?)
In my headcanon, the terminology comes from the fact that giving a strength to an endofunctor on $\mathcal{V}$ is the same thing as enriching it over $\mathcal{V}$, and both those words have a close meaning in real life. (I don't know if that is the original intention.)
“Mastering others is a tensorial strength;
being enriched is true power.”
― Lao Tzu, Tao Te Ching
(couldn't resist the pun)
nice headcanon anyway :-)
It seems that in Kock's original paper, a "functor" between $\mathcal{V}$-enriched categories is a functor of the underlying (un-enriched) categories, and a "strong functor" is a $\mathcal{V}$-enriched functor.
There is some relevant material at this nlab page, but feel free to add!
Also here, from the closed point of view.
I find the tensor product of convex spaces an absolute nightmare to compute with. The other day I was chatting with @John Baez and said I think $[0,1] \otimes [0,1]$ is infinite dimensional, and John pointed out what I totally missed: that since $[0,1] \cong D(2)$, $[0,1] \otimes [0,1] \cong D(2) \otimes D(2) \cong D(4)$, so it is a tetrahedron... but I don't know how you'd figure that out without using the universal property. So for example I have absolutely no clue what $\mathbb{R} \otimes \mathbb{R}$ looks like, and I still suspect it might be infinite dimensional...
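One quick numerical way to see the tetrahedron claim (a sketch of mine, not how it was figured out in the thread): the product distributions $p \otimes q$ with $p, q \in D(2)$ form a surface inside $D(4)$ whose affine hull is already 3-dimensional.

```python
import numpy as np

# Each point is the product distribution (s, 1-s) (x) (t, 1-t) in D(4).
ts = np.linspace(0.0, 1.0, 5)
points = np.array([[s * t, s * (1 - t), (1 - s) * t, (1 - s) * (1 - t)]
                   for s in ts for t in ts])
diffs = points[1:] - points[0]            # directions spanning the affine hull
print(np.linalg.matrix_rank(diffs))       # 3: the hull is the whole tetrahedron
```

The rank can't exceed 3 (all coordinates sum to 1), and it reaches 3 because the four Dirac products are among the sampled points.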
I think that if $A$ and $B$ are $m$- and $n$-dimensional, then $A \otimes B$ embeds into $\mathbb{R}^{m+n+1}$.
You can imagine it as embedding the two smaller pieces as skew to one another in the higher-dimensional space, and then drawing all line segments that connect the two.
So $[0,1] \otimes [0,1]$ looks like the interior of a tetrahedron plus two skew lines, one on each face?
I was envisioning it as a tetrahedron with vertices and some edges removed.
The skew lines correspond to the two copies of $[0,1]$?
I think that you've got it right
Spencer Breiner said:
I think that if $A$ and $B$ are $m$- and $n$-dimensional, then $A \otimes B$ embeds into $\mathbb{R}^{m+n+1}$.
You can imagine it as embedding the two smaller pieces as skew to one another in the higher-dimensional space, and then drawing all line segments that connect the two.
It sounds like you're talking about the join of convex spaces, which is best known in the special case of the join of polytopes. That is actually the coproduct of convex spaces, which is a third monoidal structure, not isomorphic to either the cartesian one or the tensor!
You can see this from the fact that your dimension count is (up to the constant $+1$) additive, while for the tensor product it should be multiplicative.
(At least to leading order.)
If $A$ embeds into $D(m)$ and $B$ embeds into $D(n)$, then their tensor product embeds into $D(mn)$.
This should be a consequence of $D(m) \otimes D(n) \cong D(mn)$.
Tobias Fritz said:
If $A$ embeds into $D(m)$ and $B$ embeds into $D(n)$, then their tensor product embeds into $D(mn)$.
Ah! A wild Segre embedding appears! There's also a discussion of how the map $\mathbb{P}^m \times \mathbb{P}^n \to \mathbb{P}^{(m+1)(n+1)-1}$ arises from a lax monoidal structure on the projective space functor!
These ugly dimension counts (and many other aspects of convex spaces as well) become much simpler if one works with convex cones rather than convex spaces. Here, by a convex cone I simply mean an algebra of the nonnegative linear combinations monad, which is the same as the distribution monad but with the normalization condition dropped. Then the free object on $n$ generators really is $n$-dimensional (it's just $\mathbb{R}_{\geq 0}^n$) rather than weirdly $(n-1)$-dimensional, and the dimensions really just multiply under tensor and add under coproduct. Moreover, the coproduct is at the same time the product, so convex cones actually form a category with biproducts! They're much closer to vector spaces, which one can also understand by noting that they're the semimodules over the rig of nonnegative reals.
I'm generally not a fan of abandoning one category in favour of another one just because the latter is better behaved, but in this case it seems to be warranted: most things that one wants to do with convex spaces can be done more elegantly with convex cones instead. This is closely related to homogenization tricks.
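The "dimensions multiply under tensor" count for cones can be illustrated with plain linear algebra, since rank-one matrices $u v^T$ play the role of the elementary tensors $u \otimes v$ (example mine):

```python
import numpy as np

# Outer products of standard basis vectors e_i (x) e_j linearly span the
# whole space of 2 x 3 matrices, matching dim(V (x) W) = dim V * dim W.
outers = [np.outer(np.eye(2)[i], np.eye(3)[j]).ravel()
          for i in range(2) for j in range(3)]
print(np.linalg.matrix_rank(np.array(outers)))  # 6 = 2 * 3
```

The same count done with distributions instead of cones gives the shifted $(m+1)(n+1)-1$ dimensions from the normalization condition.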
Indeed, the "Segre embedding" for the distribution monad is used in algebraic statistics to describe the variety of independent joint distributions. For us, that's the (image of the) lax monoidal structure of the monad.
Interesting connection! I think the dimension count is indeed the same as in the Segre embedding, but the reason for the dimension shift is a bit different: in projective space, we lose a dimension because of quotienting. For the distribution monad, we lose a dimension because $D(n)$ is a subobject of $\mathbb{R}^n$ determined by the normalization condition. So it's "sub" rather than "quotient".
@Tobias Fritz that's exactly how I started seeing this story, making sense of some constructions in affine geometry, and finding that some of them are universal constructions for convex spaces; there are some interesting adjunctions between semimodules and convex sets, as well as convex sets and semilattices. (I was about to rediscover some of them independently from Jacobs, which was a pleasant find)
I would have said that the point of the story is that an $n$-dimensional projective space can be covered by $n+1$ affine spaces "somewhat canonically" (not to be intended literally; I just mean that the choice of affine charts is the same in every dimension), and each of these affine spaces has a choice of convex structure
Tobias Fritz said:
Interesting connection! I think the dimension count is indeed the same as in the Segre embedding, but the reason for the dimension shift is a bit different: in projective space, we lose a dimension because of quotienting. For the distribution monad, we lose a dimension because $D(n)$ is a subobject of $\mathbb{R}^n$ determined by the normalization condition. So it's "sub" rather than "quotient".
This might be a little bit of a sidetrack, but I still have hope to find a nice Markov category that's based on quotients rather than subobjects.
And now, the product of two coverings made of $n+1$ and $m+1$ pieces forms a covering made of...
Thanks for the correction, Tobias! I should have thought through that a bit more :blush:
Jules Hedges said:
I find the tensor product of convex spaces an absolute nightmare to compute with. The other day I was chatting with John Baez and said I think $[0,1] \otimes [0,1]$ is infinite dimensional, and John pointed out what I totally missed: that since $[0,1] \cong D(2)$, $[0,1] \otimes [0,1] \cong D(2) \otimes D(2) \cong D(4)$, so it is a tetrahedron... but I don't know how you'd figure that out without using the universal property. So for example I have absolutely no clue what $\mathbb{R} \otimes \mathbb{R}$ looks like, and I still suspect it might be infinite dimensional...
Let me try to guess what it is.
First, I would guess that $\mathbb{R}$ as a convex space is the colimit of the convex subspaces $[-n, n]$. I don't know if this is true, but suppose it is! Since tensoring with anything is a left adjoint, $\otimes$ distributes over colimits. So, it seems $\mathbb{R} \otimes \mathbb{R}$ should be the colimit of the tetrahedra $[-n, n] \otimes [-n, n]$.
So we get the colimit of bigger and bigger tetrahedra - for specificity, imagine making a regular tetrahedron bigger and bigger by rescaling it, and take the union of all these tetrahedra.
This sounds like $\mathbb{R}^3$ to me. So I think the answer is $\mathbb{R}^3$ with its usual convex structure.
Now I'm reading the whole thread, and I see this agrees with Tobias' answer.
That's a great way to see it -- assuming that you mean $\mathbb{R}^3$ rather than $\mathbb{R}^4$ :wink: Since the tetrahedra are 3-dimensional.
Yes, I meant $\mathbb{R}^3$. The 4 corners of the tetrahedron took over my soul and made me type $\mathbb{R}^4$.
But that's the usual sort of fencepost error that tends to kick in with convex sets!
Or simplexes, for that matter: the $n$-simplex having $n+1$ vertices.