Today at 17:00 UTC, as per usual :smile:
Abstract:
A fundamental component of many machine learning algorithms is differentiation. Thus, if one wishes to abstract and generalize aspects of machine learning, it is useful to have an abstract perspective on differentiation. There has been much work on categorical differential structures in the past few years with the advent of differential categories, Cartesian differential categories, and tangent categories. In this talk I'll focus on Cartesian reverse differential categories, a recent variant of Cartesian differential categories, and touch on how they can be used in abstract machine learning.
YouTube: https://www.youtube.com/watch?v=ljE9CWEUzJM
Zoom: https://topos-institute.zoom.us/j/5344862882
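As a concrete illustration of the reverse differential combinator (not part of the abstract; this is the standard motivating example from the Cartesian reverse differential category literature): in the category of Euclidean spaces and smooth maps, the combinator sends a map $f : \mathbb{R}^n \to \mathbb{R}^m$ to

$$ R[f] : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n, \qquad R[f](x, v) = J_f(x)^{\mathsf{T}}\, v, $$

i.e. the transpose Jacobian of $f$ at $x$ applied to $v$ — exactly the vector–Jacobian product that reverse-mode automatic differentiation (backpropagation) computes.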
This was a shockingly clear and enjoyable talk!
I am reminded of this work by Matthijs Vákár: https://arxiv.org/abs/2103.15776, where forward- and reverse-mode differentiation are similarly linked by taking the fiberwise opposite of a fibration (Section 6). He talks about assumptions on the fibration that guarantee that the total category is cartesian closed, and the point of all this is to extend automatic differentiation to higher-order functions in a principled way. Cool stuff!
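A rough picture of the fiberwise-opposite idea (my gloss, not a claim about the details of the paper): the fiberwise opposite of a fibration keeps the base and the objects but reverses the vertical morphisms in each fiber. Over a smooth map $f : \mathbb{R}^n \to \mathbb{R}^m$, forward mode works with the fiberwise linear maps

$$ Df(x) : \mathbb{R}^n \to \mathbb{R}^m $$

pushing tangent vectors forward; passing to the fiberwise opposite reverses these, and in the linear setting the reversed maps correspond to the transposes $Df(x)^{\mathsf{T}} : \mathbb{R}^m \to \mathbb{R}^n$ pulling covectors back, which is reverse mode.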