Hello all! This is the discussion thread for Toby St. Clere Smithe's talk, "Cyber Kittens, or First Steps Towards Categorical Cybernetics".
Date and time: Tuesday July 7, 16:00 UTC.
Zoom meeting: https://mit.zoom.us/j/7055345747
YouTube live stream: https://www.youtube.com/watch?v=Is5mWZcCVf0&list=PLCOXjXDLt3pZDHGYOIqtg1m1lLOURjl1Q
Slides for my talk later: cyberkittens-act-2020.pdf
We start in 30 minutes!
(In case anyone is interested: the idea of "inflating diagrams" mentioned during the talk actually goes back to Bartlett, Douglas, Schommer-Pries, and Vicary, who do it with an embedding of the presentation into R3: https://arxiv.org/abs/1509.06811. It only appears on that one slide, though, so this is probably not important here.)
A question on the slides: are the vertical wires to be understood as cups/caps?
Hey Mario, can you refer to a particular diagram? I have lots of vertical wires!
I think it is slide 38
(Oh, I think that has to be the monoidal product, and that the direction of the wires tells you everything else, right?)
Yes, you're right there (I think), which is why I had to put arrows on all my wires :)
Yeah, the plain "dot" should have had an arrow in it, probably
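For anyone reading later, a minimal sketch of the convention being discussed (my reading, assuming the usual compact closed setting; this is not spelled out on the slides): once every wire carries an arrow, an upward-oriented wire denotes an object A and a downward-oriented one its dual A^*, so a U-turn in a wire can only be read as the cup or cap

\[ \eta_A \colon I \to A^* \otimes A, \qquad \varepsilon_A \colon A \otimes A^* \to I, \]

while parallel vertical wires with the same orientation are just the monoidal product A \otimes B, as noted above.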
Here's the video!
https://www.youtube.com/watch?v=y82hKxDeT6w&list=PLCOXjXDLt3pYot9VNdLlZqGajHyZUywdI
The examples with active inference and variational autoencoders are tantalizing. What's the vision here? Scaling up the complexity of active inference agents?
Hey @Van Bettauer, I missed your question, sorry! There are, in the way of these things, many visions here. One of them is certainly trying to understand how complicated agents can be made up out of simple parts (like "canonical circuits" in the cortex). Another vision is simply to understand what it means to be a self-sustaining system embedded in a world, and how such systems can come together to form meta-self-sustaining systems (like corporations). Maybe both of these are just what you meant by "scaling up the complexity". But I think of it the other way around: rather than "scaling up complexity", I'm interested in "reducing complexity", from big hard-to-understand systems, to smaller bits that I can understand. As we saw in @Jules Hedges's talk (and elsewhere), there can be non-compositional "emergent effects". But this kind of compositional approach at least alerts us to their existence, so we can try to subject them to precise analysis, as well.
Another vision, more distant, perhaps: maybe one day, by understanding what it means to be a self-sustaining system, we'll be able to build a more "self-sustaining" society.