Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: learning: questions

Topic: Help with an equivalence of coends


view this post on Zulip Dylan Braithwaite (Oct 13 2021 at 18:42):

Hi. I’m trying to work out the details of Proposition 4.5 from @Toby Smithe’s Bayesian Updates Compose Optically, but I’m stuck on the third isomorphism (highlighted in the image below): namely the part sending $\check{\mathcal{C}}\left(\mathbf{V}(\hat{M}(I), \check{B}), \check{A}\right)$ to $\mathbf{V}\left(\hat{M}(I), \mathcal{C}(B, A)\right)$. The paper says this is by the Yoneda lemma, but I can’t see how to apply it here. I wonder if anyone can help shed some light on how it works?

If I set $F = \mathbf{V}\left(\hat{M}(I), \mathrm{hom}(B, -)\right) : \mathcal{C} \to \mathbf{V}$, then this amounts to saying $\check{\mathcal{C}}(F, \check{A}) \cong F(A)$, which is almost like the regular Yoneda lemma, but with the natural transformation going backwards. And I can’t work out how that could be true in this case. Is there some magic going on to do with being under a coend?
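For reference (a standard statement, not quoted from the paper), the enriched Yoneda lemma says that for a $\mathbf{V}$-functor $F : \mathcal{C} \to \mathbf{V}$:

```latex
% Enriched Yoneda: maps *out of* the representable into F are elements of F(A)
[\mathcal{C}, \mathbf{V}]\big(\mathcal{C}(A, -),\ F\big) \;\cong\; F(A)
```

So with $F = \mathbf{V}(\hat{M}(I), \mathcal{C}(B,-))$ as above, Yoneda supplies maps $\mathcal{C}(A,-) \Rightarrow F$, not the maps $F \Rightarrow \mathcal{C}(A,-)$ that the displayed isomorphism seems to need: exactly the variance mismatch being asked about.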

view this post on Zulip Dylan Braithwaite (Oct 13 2021 at 18:42):

2BDC1318-4D7B-488E-AC66-6E3FC72B0FB0.jpg

view this post on Zulip Matteo Capucci (he/him) (Oct 14 2021 at 06:49):

Uhm indeed the claimed isomorphism is

$\check{\mathcal{C}}\left(\mathbf{V}(\hat{M}(I), \check{B}), \check{A}\right) = \check{\mathcal{C}}\left(\mathbf{V}(\mathcal{C}(I,M), \mathcal{C}(B,-)), \mathcal{C}(A,-)\right) \cong \mathbf{V}(\mathcal{C}(I,M), \mathcal{C}(B,A))$

but as you say (co)Yoneda goes the other way

view this post on Zulip Reid Barton (Oct 14 2021 at 09:46):

Is this related to https://categorytheory.zulipchat.com/#narrow/stream/229199-learning.3A-questions/topic/Optics.20-.20grates/near/240073184?

view this post on Zulip Reid Barton (Oct 14 2021 at 09:47):

arXiv is not working for me at the moment so I can't see more context, but Gr could stand for "grate". And the second coend looks odd in that $\hat{M}$ appears twice covariantly.

view this post on Zulip Dylan Braithwaite (Oct 14 2021 at 10:05):

The Gr in the Bayesian updates paper stands for ‘Grothendieck’, in that it arises from the Grothendieck construction of the functor Stat. The equivalence between that and the third coend is proved earlier in the paper.

view this post on Zulip Dylan Braithwaite (Oct 14 2021 at 10:05):

But I’ll have a look over that thread anyway, thanks! I thought it strange too that the variance of $B$ changes between the two lines.

view this post on Zulip Toby Smithe (Oct 14 2021 at 10:23):

Hey, weird! Thanks for pointing this out. I'll check it later today; maybe I missed an "op" somewhere.

Generally speaking these days I find the optics machinery excessive for my purposes, and so I just work with the 'Grothendieck' form. A more concise presentation (but without the optics stuff) is in https://arxiv.org/abs/2109.04461 -- when that's available...

view this post on Zulip Toby Smithe (Oct 14 2021 at 10:30):

Reid Barton said:

And the second coend looks odd in that $\hat{M}$ appears twice covariantly.

In the second appearance, it's not really appearing covariantly: $\hat{M}(I)$ is just a 'set' (a $\mathbf{V}$-object).

view this post on Zulip Reid Barton (Oct 14 2021 at 10:44):

Ah, now the arXiv is working for me again.

view this post on Zulip Reid Barton (Oct 14 2021 at 10:45):

The functoriality in Definition 4.1 seems problematic: the body of the definition is contravariant in $\hat{M}$.

view this post on Zulip Reid Barton (Oct 14 2021 at 10:46):

(Also, in 4.5, the coYoneda embedding is contravariant.)

view this post on Zulip Reid Barton (Oct 14 2021 at 10:57):

Maybe you wanted $\check{\mathcal{C}} = \mathbf{VCat}(\mathcal{C}, \mathbf{V})^{\mathrm{op}}$? That seems to make things work out, at least locally.
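A sketch (not from the paper) of why that extra op would make the stuck step go through: with $\check{\mathcal{C}} = \mathbf{VCat}(\mathcal{C}, \mathbf{V})^{\mathrm{op}}$ the hom-objects reverse, so

```latex
\check{\mathcal{C}}\big(\mathbf{V}(\hat{M}(I), \check{B}),\, \check{A}\big)
  \;=\; [\mathcal{C}, \mathbf{V}]\big(\mathcal{C}(A, -),\, \mathbf{V}(\hat{M}(I), \mathcal{C}(B, -))\big)
  \;\cong\; \mathbf{V}\big(\hat{M}(I),\, \mathcal{C}(B, A)\big),
```

the last step being ordinary enriched Yoneda applied to $F = \mathbf{V}(\hat{M}(I), \mathcal{C}(B,-))$, evaluated at $A$.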

view this post on Zulip Toby Smithe (Oct 14 2021 at 11:02):

Hey, yes, I had a quick look and also came to the conclusion that that would be my missing "op". But this just makes me like the optical form even less. I'll go through the rest later and update the preprint accordingly. Eventually I imagine I'll submit it somewhere for publication, so it's good to catch these things (particularly such embarrassing ones!).

view this post on Zulip Dylan Braithwaite (Oct 14 2021 at 11:11):

Toby Smithe said:

Hey, weird! Thanks for pointing this out. I'll check it later today; maybe I missed an "op" somewhere.

Great, thanks!

Toby Smithe said:

Generally speaking these days I find the optics machinery excessive for my purposes, and so I just work with the 'Grothendieck' form. A more concise presentation (but without the optics stuff) is in https://arxiv.org/abs/2109.04461 -- when that's available...

Yeah that’s on my reading list too. Though I’m mostly interested in the embedding into optics for now, because I’m starting to look at how we can incorporate Bayesian lenses into the rest of the cybernetics-as-optics program going on at Strathclyde

view this post on Zulip Toby Smithe (Oct 14 2021 at 11:19):

Right. I suspect that this embedding probably isn't the right answer, though! You can do "Para(BayesLens)" just as well as you might do "Para(Optic)", and that's also something I discuss in the more recent preprint above. So if you're just interested (e.g.) in variational autoencoders, then you don't need to think much about optics, I believe. On the other hand, you might be worried (like I am) about the different "shapes" of the two kinds of cybernetics -- which of course is something I described in my MSP talk yesterday. (And which I'll also be talking about in the UNAM seminar next week...)

view this post on Zulip Jules Hedges (Oct 14 2021 at 12:30):

Reid Barton said:

Ah, now the arXiv is working for me again.

They had some planned maintenance downtime today

view this post on Zulip Jules Hedges (Oct 14 2021 at 12:34):

Toby Smithe said:

Right. I suspect that this embedding probably isn't the right answer, though! You can do "Para(BayesLens)" just as well as you might do "Para(Optic)", and that's also something I discuss in the more recent preprint above. So if you're just interested (e.g.) in variational autoencoders, then you don't need to think much about optics, I believe. On the other hand, you might be worried (like I am) about the different "shapes" of the two kinds of cybernetics -- which of course is something I described in my MSP talk yesterday. (And which I'll also be talking about in the UNAM seminar next week...)

Yup, we need to get to the bottom of this; but this stream isn't the right place for such things

view this post on Zulip Dylan Braithwaite (Oct 14 2021 at 12:34):

Toby Smithe said:

Right. I suspect that this embedding probably isn't the right answer, though! […]

Yeah I agree the Grothendieck lens viewpoint is a lot cleaner. But there are also strong draws to sticking with optics, particularly from the perspective of an implementation: there’s an idea that some day we could have a library or language for programming in Para(Optic(…)) that could be used interchangeably for game theory, statistical inference, and so on, just by swapping out an implementation of the underlying categories.

My hope is to find another optic category that reflects the Bayesian structure more directly than presheaves/copresheaves. It will definitely be interesting to think more about the relationships between the various perspectives, though.
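To make the library idea concrete, here is a hypothetical sketch (not code from any of the papers discussed) of the usual existential encoding of optics over (Hask, (,)), with lenses as a special case; the `compose` function is the compositionality that a Para(Optic)-style library would build on:

```haskell
{-# LANGUAGE GADTs #-}

-- Existential encoding of an optic: the residual type 'm' is hidden.
data Optic a b s t where
  Optic :: (s -> (m, a)) -> ((m, b) -> t) -> Optic a b s t

-- A lens is an optic whose residual is the whole source.
lens :: (s -> a) -> (s -> b -> t) -> Optic a b s t
lens get put = Optic (\s -> (s, get s)) (\(s, b) -> put s b)

-- Optics compose by pairing residuals.
compose :: Optic a b s t -> Optic x y a b -> Optic x y s t
compose (Optic f g) (Optic h k) =
  Optic (\s -> let (m, a) = f s; (n, x) = h a in ((m, n), x))
        (\((m, n), y) -> g (m, k (n, y)))

-- Forwards and backwards passes.
view :: Optic a b s t -> s -> a
view (Optic f _) s = snd (f s)

update :: Optic a b s t -> b -> s -> t
update (Optic f g) b s = let (m, _) = f s in g (m, b)
```

The point of the encoding is that swapping the base category (Markov kernels instead of functions, say) changes the semantics while the compositional interface stays the same.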

view this post on Zulip Toby Smithe (Oct 14 2021 at 13:05):

Yeah, there's plenty to figure out! One thing I plan to spend some time thinking about is Paolo's recent paper https://arxiv.org/abs/2110.06591 -- I think that what he's doing there is something like "a categorification of Bayesian lenses". But that's not in the optical tradition either.