You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For anything related to this archive, contact the same person.
If you're into machine learning, there are lots of talks here:
I'm giving a talk there this Thursday, June 16th, at 18:00 Lisbon time, which is 10:00 in California:
Shannon entropy is a powerful concept. But what properties single out Shannon entropy as special? Instead of focusing on the entropy of a probability measure on a finite set, it can help to focus on the "information loss", or change in entropy, associated with a measure-preserving function. Shannon entropy then gives the only concept of information loss that is functorial, convex-linear and continuous. This is joint work with Tom Leinster and Tobias Fritz.
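To make the idea concrete, here is a minimal Python sketch of "information loss", under my own illustrative conventions (not code from the talk): a distribution on a finite set is a list of probabilities, a measure-preserving function is a map on indices, and its information loss is the entropy of the source minus the entropy of the pushforward distribution.

```python
import math

def entropy(p):
    """Shannon entropy H(p) = -sum_i p_i log p_i, in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def pushforward(p, f, m):
    """Push a distribution p on {0,...,len(p)-1} forward along f into {0,...,m-1}."""
    q = [0.0] * m
    for i, pi in enumerate(p):
        q[f(i)] += pi
    return q

def information_loss(p, f, m):
    """Information loss H(p) - H(q) of the measure-preserving map f: (X, p) -> (Y, q)."""
    q = pushforward(p, f, m)
    return entropy(p) - entropy(q)

# Collapsing a uniform 4-outcome distribution pairwise onto 2 outcomes
# loses exactly log 2 nats: H(p) = log 4, H(q) = log 2.
p = [0.25, 0.25, 0.25, 0.25]
print(information_loss(p, lambda i: i // 2, 2))  # ~0.6931
print(math.log(2))                               # ~0.6931
```

Functoriality here means losses add along composites, `information_loss(g . f) = information_loss(f) + information_loss(g)`, which the sketch makes easy to check numerically for any pair of composable maps.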
You can watch the talk on Zoom by clicking on the talk title.
This will be a lot like the talk I gave at the tutorial at CUNY with @Tai-Danae Bradley in May, and you can watch that one here.