You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive, refer to the same person.
New paper by Rémy Tuyéras, "A category theoretical argument for causal inference" https://arxiv.org/abs/2004.09999
This paper is amazing. Do you know of any other papers at the direct intersection of statistics and CT? And if so, how do I find more? (I know that Brendan Fong did his master's thesis on categorified Bayesian networks, and that Peter McCullagh published a paper about categorified statistical models way back in 2002, but that's about it. My list is very short.)
@Brendan Fong 's Master's thesis is related, @Evan Patterson is also working on CT + Stats
Thanks James! I'll look into it. I'm collecting literature because I want to do my own (statistics) master's thesis on something in this area. Any idea who is both knowledgeable and friendly enough to tolerate a green grad student's questions about open problems in the area?
@Evan Patterson is here sometimes, and he's both friendly and knowledgeable. I'm not knowledgeable, but I'm friendly. I kept telling Brendan to publish his master's thesis (he did his PhD with me). I haven't read Remy Tuyeras' new paper but from the abstract it looks important.
Hi Oliver, my list of papers is not much longer than yours. Recently both Tobias Fritz and Bart Jacobs' group have done really nice work on synthetic and diagrammatic approaches to probability/stats, which you might be aware of. However, these papers are closer to probability than to statistics, at least compared to the works by Fong and McCullagh that you mentioned. I have not yet read the new paper in this thread but it looks very interesting.
In about a month I will put out a PhD thesis at the direct intersection of statistics and CT. It will have elements recognizable from many of the above works, as well as some elements of its own that I hope people will find interesting.
I'd be happy to chat about stats + CT and answer any questions that I can. For the next month I will be working manically to finish this dang thesis so that I can graduate on time, but after that my schedule will open up.
That sounds great. I'd love to discuss when you have more time. In the meantime, I'll collect my thoughts and take a look at Fritz and Jacobs' work.
Lol, @John Baez careful who you say these things to. Now you'll be the first person I'll bother when I have a question...
Such as this one (and I think it's still relevant to causal inference): a friend and I conjectured that products of graphs are computed by tensor products of their connection matrices, but we weren't able to prove it. Do you know if this is true? And if so, does it extend to weighted graphs or causal diagrams?
The difficulty with your question is that "products of graphs" means many different things. This Wikipedia article lists 10 different definitions: https://en.wikipedia.org/wiki/Graph_product
By "connection matrix" I'll assume you mean "adjacency matrix": in graph theory people talk about the adjacency matrix of a graph, which has a 1 in entry (u, v) exactly when there's an edge from u to v.
So: I promise you that for one of the ten listed definitions of "product of graphs", the adjacency matrix of a product of graphs is the tensor product of their adjacency matrices!
If none of those is the one you meant, there are a few more to choose from: Associative Products of Graphs (https://link.springer.com/article/10.1007/BF01472575) enumerates all of them.
Looking at the Wikipedia article, I think the product you want is the one that graph theorists call the tensor product of graphs. So, I'm claiming the adjacency matrix of the tensor product of two graphs is the tensor product of their adjacency matrices!
(Pro tip: what graph theorists call the "tensor product" of graphs is what category theorists would call the "cartesian product", or simply "product", of graphs. But what graph theorists call the "cartesian product" of graphs is something else! :dizzy:)
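Here is a minimal sketch of that claim (not from the original thread) that checks it on one small example, assuming directed graphs stored as 0/1 adjacency matrices and using NumPy's `kron` for the Kronecker (tensor) product:

```python
# Minimal check of the claim: the adjacency matrix of the tensor (categorical)
# product of two directed graphs equals the Kronecker product of their
# adjacency matrices. Graphs are stored as 0/1 adjacency matrices here.
import numpy as np
from itertools import product

A = np.array([[0, 1],
              [1, 0]])           # graph G on vertices {0, 1}: a 2-cycle
B = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])        # graph H on vertices {0, 1, 2}: a 3-cycle

# Build the tensor product G x H from the definition: vertices are pairs (g, h),
# and there is an edge (g, h) -> (g', h') iff g -> g' in G and h -> h' in H.
pairs = list(product(range(A.shape[0]), range(B.shape[0])))
C = np.zeros((len(pairs), len(pairs)), dtype=int)
for i, (g, h) in enumerate(pairs):
    for j, (g2, h2) in enumerate(pairs):
        C[i, j] = A[g, g2] * B[h, h2]

# np.kron orders vertex pairs lexicographically, matching `pairs` above.
assert np.array_equal(C, np.kron(A, B))
print("On this example, A(G x H) = A(G) ⊗ A(H).")
```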
@Evan Patterson Yes, please let us know when you are finished! Or even if you're willing to share a draft, that would be phenomenal (but I completely understand if you prefer not to).
I've been following a lot of people in this area and have been using these techniques to study their quantum variants, and I've constantly been amazed at how much the categorical approach says about the structure of quantum probability as well!
@Oliver Shetler, the notion of disintegration was used to study Bayesian inversion by Culbertson and Sturtz (https://arxiv.org/abs/1205.1488). Besides Jacobs' work, there is also work of Clerc, Danos, Dahlqvist, Garnier, Panangaden, and others who studied disintegration and Bayesian inversion in a way similar to what Panangaden discussed in his talk a few weeks ago (though their earlier articles do not discuss this connection). "Pointless learning" is a good place to start; follow that up with "Bayesian inversion by ω-complete cone duality" (this last paper has an excellent overview and continues where Panangaden's talk left off). These papers assume you're comfortable with some measure theory and L_p-space stuff, though!

As you know, Fong's master's thesis is also an excellent place to start (and if I'm not mistaken, that's the first place Bayesian inference was drawn as a string diagram), and where he left off, Cho and Jacobs continued. They introduced a categorical notion of a.e. equivalence, which is crucial for uniqueness results in statistics. Fritz's recent paper (https://arxiv.org/abs/1908.07021) continues from there and is incredibly accessible (and you'll find many open questions there, too!). The general relationship between disintegrations and Bayesian inversion is described in https://arxiv.org/abs/2001.08375 (although I focused more on the quantum case, there are some results in the finite, classical setting, many of which I suspect are valid in the standard Borel setting). You'll also find more references in the linked articles. I hope this list isn't overwhelming :grimacing:
Wow, there was a lot of room for ambiguity here! I should have been clearer about my source. I was referring to the category of graphs as defined in Lawvere's Conceptual Mathematics (where each object consists of two sets, Arrows and Dots, together with two maps, source and target), and I was referring to the standard categorical product (the terminal object in the category of discrete two-legged cones).
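A minimal sketch of the product Oliver describes, assuming graphs stored as two sets (dots and arrows) with source and target maps; the `Graph` class and the example graphs below are purely illustrative:

```python
# Illustrative sketch of the categorical product in Lawvere's category of graphs:
# a graph is a set of arrows and a set of dots with source and target maps,
# and the product is taken componentwise on dots, arrows, source, and target.
from dataclasses import dataclass
from itertools import product


@dataclass
class Graph:
    dots: list    # vertices ("dots")
    arrows: list  # edges ("arrows")
    src: dict     # arrow -> dot
    tgt: dict     # arrow -> dot


def graph_product(g, h):
    """Categorical product: pairs of dots, pairs of arrows, componentwise source/target."""
    dots = list(product(g.dots, h.dots))
    arrows = list(product(g.arrows, h.arrows))
    src = {(a, b): (g.src[a], h.src[b]) for a, b in arrows}
    tgt = {(a, b): (g.tgt[a], h.tgt[b]) for a, b in arrows}
    return Graph(dots, arrows, src, tgt)


# Example: a 2-cycle times a single edge.
G = Graph(dots=[0, 1], arrows=["e", "f"], src={"e": 0, "f": 1}, tgt={"e": 1, "f": 0})
H = Graph(dots=["x", "y"], arrows=["a"], src={"a": "x"}, tgt={"a": "y"})
P = graph_product(G, H)
print(len(P.dots), len(P.arrows))  # 4 dots, 2 arrows
```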
Okay, sounds like there's a whole diverse array of graph products. Thank you for the thorough explanation.
Thanks for the Wikipedia link. An outline of the proof was sitting there the whole time!
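Roughly, the argument goes like this for directed graphs with 0/1 adjacency matrices (a sketch of the idea, not a quote of the article):

```latex
% Sketch: why the adjacency matrix of the tensor product is the Kronecker product.
% For directed graphs G, H with 0/1 adjacency matrices A_G, A_H:
\[
  (A_G \otimes A_H)_{(u,v),(u',v')}
    = (A_G)_{u,u'}\,(A_H)_{v,v'}
    = \begin{cases}
        1 & \text{if } u \to u' \text{ in } G \text{ and } v \to v' \text{ in } H,\\
        0 & \text{otherwise,}
      \end{cases}
\]
% which is exactly the condition for an edge (u,v) -> (u',v') in the tensor
% (categorical) product G x H. Hence A_{G \times H} = A_G \otimes A_H.
```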
@Arthur Parzygnat Thank you! We should discuss more. Do you use zoom?
I hope this list isn't overwhelming :grimacing:
Nope! This is exactly what I'm asking for! Lit review here I come! 🧐
Glad to be of help, @Oliver Shetler! Btw, it looks like you're using the "code" format to quote me instead of the quote format, which you invoke by writing three left quotes (backticks) and the word "quote" at the start of a line:
giving a quote like this.
If you leave out the word "quote" you'll get
something like this.
Thanks!
I'd also like to point out that "graph product" is also used to refer to specifying an operation on algebraic gizmos (e.g. groups) via a simple graph.
Oliver Shetler said:
Arthur Parzygnat Thank you! We should discuss more. Do you use zoom?
I've been using it for these seminars, but otherwise I don't use zoom much. Feel free to message me if you have questions.