while I'm on here turning Twitter convos into Zulip topics:
@Eigil Rischel I remember you asking about linear logic-esque variants of categories where you can 'only use morphisms once'. Curious, what was that for? (Also, I recall replying to the thread; did you ever look into that?)
Basically I'm thinking about "formal graphical models".
Suppose you have a directed acyclic graph $G$. The vertices are variables in a system under consideration. An edge $w \to v$ tells you that $v$ may depend directly on $w$.
Now, the usual thing to do in statistics is to consider a model of this graph - you ask for a (measurable, metric, topological...) space $X_v$ of outcomes for each variable $v$, as well as a "stochastic map" $f_v \colon \prod_{w \to v} X_w \to X_v$, which computes the value of a variable in terms of its dependencies, or "causes".
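To make this concrete, here's a minimal Python sketch, assuming finite discrete variables (all the names, like `GraphModel` and `sample_joint`, are made up for illustration):

```python
import random

class GraphModel:
    """A toy model of a DAG: each vertex gets a mechanism that
    samples its value given the values of its parents."""

    def __init__(self, parents, mechanisms):
        self.parents = parents        # vertex -> list of parent vertices
        self.mechanisms = mechanisms  # vertex -> function(parent values) -> sampled value

    def topological_order(self):
        # Order the vertices so every parent comes before its children.
        order, seen = [], set()
        def visit(v):
            if v in seen:
                return
            seen.add(v)
            for p in self.parents[v]:
                visit(p)
            order.append(v)
        for v in self.parents:
            visit(v)
        return order

    def sample_joint(self):
        # One sample from the distribution on the product of all variables.
        values = {}
        for v in self.topological_order():
            args = [values[p] for p in self.parents[v]]
            values[v] = self.mechanisms[v](*args)
        return values

# Example: the chain x -> y -> z with coin-flip / noisy-copy mechanisms.
flip = lambda: random.randint(0, 1)
noisy = lambda b: b if random.random() < 0.9 else 1 - b
model = GraphModel(
    parents={"x": [], "y": ["x"], "z": ["y"]},
    mechanisms={"x": flip, "y": noisy, "z": noisy},
)
print(model.sample_joint())  # e.g. {'x': 1, 'y': 1, 'z': 1}
```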
The point of a model like this is that you can calculate both a probability distribution on the product space $\prod_v X_v$ of all the variables, and "interventional distributions" - what happens if I take a variable $v$ and, instead of letting it be given in terms of its causes, just set it directly to some value? This gives you a stochastic map $X_v \to \prod_w X_w$.
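Continuing the sketch above, the intervention is easy to write down (`intervene` is my name for it; $\mathrm{do}(v = \text{value})$ just cuts $v$'s incoming edges and pins its mechanism to a constant):

```python
def intervene(model, v, value):
    # do(v = value): cut v's incoming edges and replace its mechanism by
    # the constant `value`. Sampling the result realises the interventional
    # distribution; varying `value` gives the map X_v -> prod_w X_w.
    parents = dict(model.parents)
    mechanisms = dict(model.mechanisms)
    parents[v] = []
    mechanisms[v] = lambda: value
    return GraphModel(parents, mechanisms)

print(intervene(model, "y", 0).sample_joint())  # x unaffected, z now driven by y = 0
```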
Now a way to treat this categorically, which seems to be due to Brendan Fong (https://arxiv.org/abs/1301.6201), is to cook up a "syntactic category" $\mathrm{Syn}(G)$ out of your graph, which is the free symmetric monoidal category generated by an object $A_v$ for every vertex, maps $f_v \colon \bigotimes_{w \to v} A_w \to A_v$, and a comonoid structure on each object. Then you can think of the various interventional maps as purely syntactic constructions in this free category. A model is then a strong monoidal functor $\mathrm{Syn}(G) \to \mathbf{Stoch}$, where $\mathbf{Stoch}$ is the category of stochastic maps.
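Spelled out (the notation $\mathrm{Syn}(G)$, $A_v$, $f_v$ here is mine, not necessarily Fong's, so take it as a sketch), the generators are:

```latex
% Free symmetric monoidal category Syn(G) on the graph G:
\begin{aligned}
  \text{objects:}   &\quad A_v \ \text{for each vertex } v,\\
  \text{morphisms:} &\quad f_v \colon \textstyle\bigotimes_{w \to v} A_w \to A_v
                          \ \text{for each vertex } v,\\
  \text{comonoids:} &\quad \Delta_v \colon A_v \to A_v \otimes A_v, \qquad
                          {!}_v \colon A_v \to I.
\end{aligned}
```

The comonoids are what let you copy a variable's value (to feed it to several children, or to output it) and discard the values you don't care about.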
Now I wanted to treat transformations between these causal models which involve a change of graph.
The obvious thing to look at is a monoidal functor $\mathrm{Syn}(G) \to \mathrm{Syn}(G')$ which preserves the comonoids.
The problem with this is, these don't necessarily preserve the interventional distribution maps! You can completely fuck up the causal structure like this. So I started looking for what's special about these interventional maps. And one thing I came up with is that you can build them without using any of the generating maps more than once. My intuition is that, like, the generating map $f_v$ represents some process that happens out in the physical world. So if you have $x \to y \to z$, you can calculate $y$ from $x$, then calculate $z$ from that - this is the right thing. But you can also calculate $y$, then $z$, then run the stochastic map $f_y$ again to get a new $y$. That's wrong.
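In the toy Python setting, the failure mode looks like this (`noisy_copy` is just an illustrative mechanism):

```python
import random

def noisy_copy(bit):
    # an illustrative mechanism: copy the bit, flipping it 10% of the time
    return bit if random.random() < 0.9 else 1 - bit

def sample_right():
    x = random.randint(0, 1)
    y = noisy_copy(x)   # f_y used exactly once
    z = noisy_copy(y)   # f_z used exactly once
    return x, y, z      # y and z correlated, as the graph x -> y -> z says

def sample_wrong():
    x = random.randint(0, 1)
    y = noisy_copy(x)
    z = noisy_copy(y)
    y = noisy_copy(x)   # f_y run a second time: this fresh y is only
                        # correlated with z through x, which is wrong
    return x, y, z
```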
That's how I got onto this. I thought about what you tweeted - it really seems to be the right way to do what I asked for, but I couldn't get it to work. I'm still not sure what the right way to do this is - currently I just put the distinguished maps in there as extra data, but it's super inelegant.