You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Hi. I’m trying to work out the details of Proposition 4.5 from @Toby Smithe’s Bayesian Updates Compose Optically, but I’m stuck on the third isomorphism (highlighted in the image below), namely the part sending to . The paper says this is by the Yoneda lemma, but I can’t see how to apply it here. Can anyone help shed some light on how this works?
If I set , then this amounts to saying , which is almost like the regular Yoneda lemma, but the natural transformation is backwards. And I can’t work out how that could be true in this case. Is there some magic going on to do with being under a coend?
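For reference, the forms of (co)Yoneda I have in mind, for a covariant F : 𝒞 → Set, are:

```latex
% Yoneda lemma (end form): maps out of a representable
\int_{x \in \mathcal{C}} \mathbf{Set}\big(\mathcal{C}(a,x),\, Fx\big) \;\cong\; Fa

% coYoneda lemma (coend form):
\int^{x \in \mathcal{C}} \mathcal{C}(x,a) \times Fx \;\cong\; Fa

% For a contravariant F : \mathcal{C}^{op} \to \mathbf{Set} the homs flip:
\int^{x \in \mathcal{C}} \mathcal{C}(a,x) \times Fx \;\cong\; Fa
```

So whichever variance F has, the hom in the coend points the opposite way to what the paper seems to need, which is exactly my confusion.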
(attached image: 2BDC1318-4D7B-488E-AC66-6E3FC72B0FB0.jpg)
Uhm indeed the claimed isomorphism is
but as you say (co)Yoneda goes the other way
Is this related to https://categorytheory.zulipchat.com/#narrow/stream/229199-learning.3A-questions/topic/Optics.20-.20grates/near/240073184?
arXiv is not working for me at the moment so I can't see more context, but Gr could stand for "grate". And the second coend looks odd in that B appears twice covariantly.
The Gr in the Bayesian updates paper stands for ‘Grothendieck’, in that it arises from the Grothendieck construction of the functor Stat. The equivalence between that and the third coend is proved earlier in the paper.
But I’ll have a look over that thread anyway, thanks! I thought it strange too that the variance of B changes between the two lines
Hey, weird! Thanks for pointing this out. I'll check it later today; maybe I missed an "op" somewhere.
Generally speaking these days I find the optics machinery excessive for my purposes, and so I just work with the 'Grothendieck' form. A more concise presentation (but without the optics stuff) is in https://arxiv.org/abs/2109.04461 -- when that's available...
Reid Barton said:
And the second coend looks odd in that B appears twice covariantly.
In the second appearance, it's not really appearing covariantly: B is just a 'set' (V-object).
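To spell that out: a coend is only defined for a functor of mixed variance, so a repeated covariant occurrence has to be read as a constant:

```latex
% A coend is taken over a functor F : \mathcal{B}^{op} \times \mathcal{B} \to \mathcal{V},
% with the contravariant and covariant slots identified:
\int^{b \in \mathcal{B}} F(b, b)
% If b appeared twice covariantly, this would not typecheck; the second
% occurrence has to be constant in b, i.e. just a fixed object of \mathcal{V}.
```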
Ah, now the arXiv is working for me again.
The functoriality in Definition 4.1 seems problematic: the body of the definition is contravariant in
(Also, in 4.5, the coYoneda embedding is contravariant.)
Maybe you wanted ? That seems to make things work out, at least locally
Hey, yes- I had a quick look and also came to the conclusion that that would be my missing "op". But this just makes me like the optical form even less. I'll go through the rest later and update the preprint accordingly. Eventually I imagine I'll submit it somewhere for publication, so it's good to catch these things (particularly such embarrassing ones!).
Toby Smithe said:
Hey, weird! Thanks for pointing this out. I'll check it later today; maybe I missed an "op" somewhere.
Great, thanks!
Toby Smithe said:
Generally speaking these days I find the optics machinery excessive for my purposes, and so I just work with the 'Grothendieck' form. A more concise presentation (but without the optics stuff) is in https://arxiv.org/abs/2109.04461 -- when that's available...
Yeah that’s on my reading list too. Though I’m mostly interested in the embedding into optics for now, because I’m starting to look at how we can incorporate Bayesian lenses into the rest of the cybernetics-as-optics program going on at Strathclyde
Right. I suspect that this embedding probably isn't the right answer, though! You can do "Para(BayesLens)" just as well as you might do "Para(Optic)", and that's also something I discuss in the more recent preprint above. So if you're just interested (e.g.) in variational autoencoders, then you don't need to think much about optics, I believe. On the other hand, you might be worried (like I am) about the different "shapes" of the two kinds of cybernetics -- which of course is something I described in my MSP talk yesterday. (And which I'll also be talking about in the UNAM seminar next week...)
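To illustrate what I mean by Para concretely (a toy sketch in plain Python over Set, hiding the parameter in a pair; not code from either paper): a Para morphism A → B is a parameter object P together with a map P × A → B, and composition just pairs the parameters up.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Para:
    """A Para(Set)-style morphism a -> b: a parameter value together with a
    map taking (parameter, input) to output. Illustrative sketch only."""
    param: Any
    run: Callable[[Any, Any], Any]  # (param, a) -> b

    def __call__(self, a: Any) -> Any:
        return self.run(self.param, a)

def compose(g: Para, f: Para) -> Para:
    """Composition pairs the parameters: the composite g . f is
    parametrised by (g.param, f.param)."""
    return Para((g.param, f.param),
                lambda qp, a: g.run(qp[0], f.run(qp[1], a)))

# Example: the affine map x -> w*x + b, parametrised by the weights (w, b).
affine = Para((2.0, 1.0), lambda wb, x: wb[0] * x + wb[1])
```

Here `affine(3.0)` gives `7.0` and `compose(affine, affine)(3.0)` gives `15.0`; the point is just that the parameter bookkeeping composes, which is what Para packages categorically, independently of whether the base morphisms are optics or Bayesian lenses.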
Reid Barton said:
Ah, now the arXiv is working for me again.
They had some planned maintenance downtime today
Toby Smithe said:
Right. I suspect that this embedding probably isn't the right answer, though! You can do "Para(BayesLens)" just as well as you might do "Para(Optic)", and that's also something I discuss in the more recent preprint above. So if you're just interested (e.g.) in variational autoencoders, then you don't need to think much about optics, I believe. On the other hand, you might be worried (like I am) about the different "shapes" of the two kinds of cybernetics -- which of course is something I described in my MSP talk yesterday. (And which I'll also be talking about in the UNAM seminar next week...)
Yup, we need to get to the bottom of this; but this stream isn't the right place for such things
Toby Smithe said:
Right. I suspect that this embedding probably isn't the right answer, though! […]
Yeah, I agree the Grothendieck lens viewpoint is a lot cleaner. But there's also a lot drawing us to stick with optics, particularly from an implementation perspective: there's an idea that some day we could have a library or language for programming in Para(Optic(…)) that could interchangeably be used for doing game theory/statistical inference/… just by swapping out the implementation of the underlying categories.
My hope is to find another optic category that reflects the Bayesian structure more directly than presheaves/copresheaves. It will definitely be interesting to think more about the relationships between the various perspectives, though.
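For concreteness, here's the kind of thing I mean by an optic implementation (a toy sketch in Python of the usual existential encoding of the coend formula, not any actual library's API): an optic is a forward map producing a residual alongside its output, and a backward map consuming that residual; composition pairs residuals up.

```python
class Optic:
    """Concrete optic in Set: fwd : s -> (m, a), bwd : (m, b) -> t.
    The residual type m is hidden from the outside (the existential/coend
    part of the definition). Illustrative sketch only."""
    def __init__(self, fwd, bwd):
        self.fwd = fwd
        self.bwd = bwd

def compose_optic(outer, inner):
    """Residuals pair up: the composite's residual is (m_inner, m_outer)."""
    def fwd(s):
        m1, a1 = inner.fwd(s)
        m2, a2 = outer.fwd(a1)
        return ((m1, m2), a2)
    def bwd(ms_b):
        (m1, m2), b = ms_b
        return inner.bwd((m1, outer.bwd((m2, b))))
    return Optic(fwd, bwd)

# A lens onto the first component of a pair, written as an optic:
# the residual remembers the second component.
fst_lens = Optic(lambda s: (s[1], s[0]),      # residual m = s[1], focus a = s[0]
                 lambda mb: (mb[1], mb[0]))   # rebuild the pair (b, m)

def view(o, s):
    """Run only the forward pass and keep the focus."""
    return o.fwd(s)[1]

def update(o, s, b):
    """Forward pass to get the residual, then backward pass with the new focus."""
    m, _ = o.fwd(s)
    return o.bwd((m, b))
```

Here `view(fst_lens, (1, 2))` gives `1` and `update(fst_lens, (1, 2), 9)` gives `(9, 2)`, and composites behave as expected on nested pairs; the game-theory or inference cases would plug in different forward/backward data over different base categories.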
Yeah, there's plenty to figure out! One thing I plan to spend some time thinking about is Paolo's recent paper https://arxiv.org/abs/2110.06591 -- I think that what he's doing there is something like "a categorification of Bayesian lenses". But that's not in the optical tradition either.