Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: event: MIT Categories Seminar

Topic: May 21 - Jade Master's talk


view this post on Zulip Paolo Perrone (May 20 2020 at 22:51):

Hello all,
Here's the official thread of Jade's talk, "The open algebraic path problem".

view this post on Zulip Paolo Perrone (May 20 2020 at 22:51):

Youtube live: https://youtu.be/XMAl15VHMpg

view this post on Zulip Paolo Perrone (May 20 2020 at 22:52):

Zoom meeting: https://mit.zoom.us/j/280120646
Meeting ID: 280 120 646

view this post on Zulip Paolo Perrone (May 21 2020 at 15:55):

Hello all! In 5 minutes we start.

view this post on Zulip Paolo Perrone (May 21 2020 at 16:00):

30 seconds!

view this post on Zulip Sam Tenka (May 21 2020 at 16:32):

question: could we think of the last idea about partitioning as a principled way to interpolate between Dijkstra and FloydWarshall? (Repeated use of) the former is good for solving the same problem as F.W. for very sparse graphs (~ those with favorable partitionings?!)

view this post on Zulip Brian Pinsky (May 21 2020 at 16:38):

So, I know approximately nothing about tropical geometry, but I know the rig you mentioned for weighted graphs comes up. Is there some algebro-geometric way of interpreting what you're doing?

view this post on Zulip Brian Pinsky (May 21 2020 at 16:51):

My computer's audio is having trouble so it looks like I can't join breakout rooms (in case someone is wondering what happened to me). This was a really nice talk. Thanks Jade

view this post on Zulip Ittay Weiss (May 21 2020 at 16:54):

Brian, enriching in [0,\infty ] with \ge is precisely Lawvere's generalized metric spaces. Then Jade's construction is the free (Lawvere) metric space construction. If that counts as geometric intuition....

view this post on Zulip Jade Master (May 21 2020 at 16:57):

Sam Tenka said:

question: could we think of the last idea about partitioning as a principled way to interpolate between Dijkstra and FloydWarshall? (Repeated use of) the former is good for solving the same problem as F.W. for very sparse graphs (~ those with favorable partitionings?!)

Thinking about this a bit more. It seems like the Floyd-Warshall algorithm computes shortest paths by enlarging your graph one vertex at a time. I.e. it produces a sequence of graphs

$G_1 \hookrightarrow G_2 \hookrightarrow G_3 \hookrightarrow \ldots$

where each graph in the sequence has one vertex more. Each time you enlarge the graph, you can use pushouts to compute the updated solution to the algebraic path problem. Dijkstra's algorithm, on the other hand, only computes the shortest distances from a given starting node. I think that this corresponds to a similar sequence of graphs, except now these graphs are matrices whose first dimension is 1. The solution to the algebraic path problem on this sequence of graphs could also be computed using pushouts in a similar way.

So yes, I do think that this strategy could interpolate between Dijkstra and Floyd-Warshall. Really any enlargement of your graph is allowed.
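To make the generic algorithm concrete: here is a minimal Floyd-Warshall sketch parameterized over the two rig operations, instantiated with min-plus. The dict-of-dicts graph encoding and the example graph `g` are illustrative choices, not from the talk.

```python
# Illustrative sketch: Floyd-Warshall written over abstract "plus" (join)
# and "times" (composition), instantiated with the min-plus (tropical) rig.
import math

def floyd_warshall(weights, plus=min, times=lambda a, b: a + b):
    """weights: dict-of-dicts adjacency; returns all-pairs path weights."""
    nodes = list(weights)
    d = {u: {v: weights[u].get(v, math.inf) for v in nodes} for u in nodes}
    for u in nodes:
        d[u][u] = plus(d[u][u], 0)  # 0 is the multiplicative unit of min-plus
    for k in nodes:                 # enlarge the set of allowed intermediates
        for i in nodes:
            for j in nodes:
                d[i][j] = plus(d[i][j], times(d[i][k], d[k][j]))
    return d

g = {"a": {"b": 1, "c": 4}, "b": {"c": 2}, "c": {}}
print(floyd_warshall(g)["a"]["c"])  # shortest a->c goes via b: 3
```

The outer loop over `k` is exactly the "one vertex at a time" enlargement described above: after iteration `k`, the matrix holds the best paths using only the first vertices as intermediates.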

view this post on Zulip Jade Master (May 21 2020 at 16:59):

Brian Pinsky said:

So, I know approximately nothing about tropical geometry, but I know the rig you mentioned for weighted graphs comes up. Is there some algebro-geometric way of interpreting what you're doing?

Maybe! I'm really not sure but I would love to find out :)

view this post on Zulip Jade Master (May 21 2020 at 17:01):

Ittay Weiss said:

Brian, enriching in [0,\infty ] with \ge is precisely Lawvere's generalized metric spaces. Then Jade's construction is the free (Lawvere) metric space construction. If that counts as geometric intuition....

Right that's another connection. By the way, the free forgetful adjunction for enriched categories in general is constructed in this paper: https://www.sciencedirect.com/science/article/pii/0022404974900188

view this post on Zulip Brendan Fong (May 21 2020 at 17:14):

There's some lovely work on the open automata case in a note by Walters, Sabadini, and Rosebrugh: https://arxiv.org/abs/0712.2525. Simon Cho and I have been working on expanding it out more thoroughly from a bicategorical and decorated/structured cospans perspective. It fits very nicely with what you're doing Jade

view this post on Zulip Brendan Fong (May 21 2020 at 17:17):

Also I'm very curious about the problem of finding good decompositions, which seems to me to be at the heart of a number of algorithms (at least in the max flow case that I've looked into most). As Sam suggests, many seem to start with an assumption about what sort of decomposition is natural for the graph at hand, and then take advantage of the compositional structure as much as possible. But they're rarely phrased explicitly in this language

view this post on Zulip Gershom (May 21 2020 at 17:27):

I confess I don't understand the "expanding your graph" interpretation of Floyd-Warshall. I think of it more as "taking a further step along 'all paths' in your graph, one step at a time".

It is also not clear to me if Dijkstra's algorithm is as general -- i.e. can it be used in all settings with a quantale-valuation to get equivalent results to Floyd-Warshall or does it take more advantage of some special features present in just the standard setting? (I think it may actually be equally general, I just haven't worked it out for sure).

view this post on Zulip Gershom (May 21 2020 at 17:32):

In general, I think the problem of "finding a good decomposition" tends to be as hard as the problem you're trying to solve in the first place. A nice, simple example of this is the problem of picking a pivot in quicksort. You can get better results under assumptions like "most decompositions are good", or often you work to pick a decomposition that is "on average not terrible" to defeat malicious data sets.

view this post on Zulip Jade Master (May 21 2020 at 17:34):

Brendan Fong said:

There's some lovely work on the open automata case in a note by Walters, Sabadini, and Rosebrugh: https://arxiv.org/abs/0712.2525. Simon Cho and I have been working on expanding it out more thoroughly from a bicategorical and decorated/structured cospans perspective. It fits very nicely with what you're doing Jade

Yay. I'm glad. This paper also looks really great.

Brendan Fong said:

Also I'm very curious about the problem of finding good decompositions, which seems to me to be at the heart of a number of algorithms (at least in the max flow case that I've looked into most). As Sam suggests, many seem to start with an assumption about what sort of decomposition is natural for the graph at hand, and then take advantage of the compositional structure as much as possible. But they're rarely phrased explicitly in this language

Agreed. This part usually involves a good deal of cleverness it seems. It would be good to get mathematical tools which measure how "good" a given cut is.

view this post on Zulip Jade Master (May 21 2020 at 17:39):

Gershom said:

I confess I don't understand the "expanding your graph" interpretation of Floyd-Warshall. I think of it more as "taking a further step along 'all paths' in your graph, one step at a time".

It is also not clear to me if Dijkstra's algorithm is as general -- i.e. can it be used in all settings with a quantale-valuation to get equivalent results to Floyd-Warshall or does it take more advantage of some special features present in just the standard setting? (I think it may actually be equally general, I just haven't worked it out for sure).

The sequence of graphs I am thinking of adds one vertex and all edges connecting that vertex to the already existing nodes in each step. Whenever you add another node and the edges attached to it, then you need to minimize again. This minimization can be thought of as the pushout of the smaller graph, and the graph with the new node and all of its adjacent nodes.

view this post on Zulip Jade Master (May 21 2020 at 17:41):

Gershom said:

In general, I think the problem of "finding a good decomposition" tends to be as hard as the problem you're trying to solve in the first place. A nice, simple, example of this is the problem of picking a pivot in quicksort. You can get better results under assumptions like "when most decompositions are good", or often you work to pick a decomposition that is "on average not terrible" to defeat malicious data sets.

I tend to agree. Finding good decompositions is about finding subgraphs which "compose well"

view this post on Zulip Jade Master (May 21 2020 at 17:43):

Did my explanation of the expanding graphs for the Floyd-Warshall algorithm make sense? It might be hard to see the picture...

view this post on Zulip John Baez (May 21 2020 at 18:06):

I haven't carefully checked, but I think Dijkstra works for any commutative quantale. Someone should take a bunch of shortest path algorithms, starting with Dijkstra and Floyd-Warshall, try to generalize them to commutative quantales, and put them into a nice categorical framework. The last step might let one create new algorithms. I think the best opportunity for new algorithms is in solving problems with partial data, changing data, etc.
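For comparison with the sketch above, here is ordinary Dijkstra with the min-plus operations written out explicitly; the comments mark where a commutative-quantale generalization would plug in (replace `+` by the quantale's multiplication and `<` by its order). This is a sketch of the standard algorithm, not a worked-out quantale version; the example graph is illustrative.

```python
# Dijkstra's algorithm in min-plus form; the commented operations are the
# ones a quantale generalization would need to replace.
import heapq, math

def dijkstra(weights, source):
    dist = {v: math.inf for v in weights}
    dist[source] = 0                     # multiplicative unit of min-plus
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                     # stale heap entry, skip it
        for v, w in weights[u].items():
            nd = d + w                   # min-plus "times"
            if nd < dist[v]:             # the rig's order: strict improvement
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": {"b": 1, "c": 4}, "b": {"c": 2}, "c": {}}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```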

view this post on Zulip Gershom (May 21 2020 at 18:33):

Jade Master said:

The sequence of graphs I am thinking of adds one vertex and all edges connecting that vertex to the already existing nodes in each step. Whenever you add another node and the edges attached to it, then you need to minimize again. This minimization can be thought of as the pushout of the smaller graph, and the graph with the new node and all of its adjacent nodes.

This description seems to be what happens in the tightest inner loop -- how one updates a single cell with regards to a single extension of a path. The trick to Floyd-Warshall (and so what I would think of as its "heart") is at a bit of a higher level, I think. In particular, it takes advantage of a fair number of operations re-associating and commuting so that one can extend fewer times than one might naively imagine. This is the dynamic programming aspect -- taking advantage of "common subexpressions" in the naive formula in a structured way.

view this post on Zulip Jade Master (May 21 2020 at 18:55):

Gershom said:

Jade Master said:

The sequence of graphs I am thinking of adds one vertex and all edges connecting that vertex to the already existing nodes in each step. Whenever you add another node and the edges attached to it, then you need to minimize again. This minimization can be thought of as the pushout of the smaller graph, and the graph with the new node and all of its adjacent nodes.

This description seems to be what happens in the tightest inner loop -- how one updates a single cell with regards to a single extension of a path. The trick to Floyd-Warshall (and so what I would think of as its "heart") is at a bit of a higher level, I think. In particular, it takes advantage of a fair number of operations re-associating and commuting so that one can extend fewer times than one might naively imagine. This is the dynamic programming aspect -- taking advantage of "common subexpressions" in the naive formula in a structured way.

Right. The nice observation of Floyd-Warshall is that in the case when you're adding just one more node, computation of the corresponding pushout takes the nice form that you are describing.

view this post on Zulip Gershom (May 21 2020 at 18:58):

John Baez said:

I haven't carefully checked, but I think Dijkstra works for any commutative quantale. Someone should take a bunch of shortest path algorithms, starting with Dijkstra and Floyd-Warshall, try to generalize them to commutative quantales, and put them into a nice categorical framework. The last step might let one create new algorithms. I think the best opportunity for new algorithms is in solving problems with partial data, changing data, etc.

One restriction that's important to consider is that Dijkstra's algorithm does not work on graphs with negative weights. Floyd-Warshall does, and furthermore can be used to detect "negative cycles" as well.

view this post on Zulip Jacques Carette (May 21 2020 at 19:07):

Anyone interested in general versions of Floyd-Warshall should look at Dioids and *-semirings in general. Russell O'Connor wrote a nice blog post (in Haskell) explaining why: http://r6.ca/blog/20110808T035622Z.html

view this post on Zulip Hendrik Boom (May 21 2020 at 19:43):

Warshall's algorithm starts with all the one-step paths.
Then for the first vertex it adds all paths that start anywhere, end anywhere, and have only the first vertex as intermediate place.
Then for the second vertex it adds all paths that start anywhere, end anywhere, and have only the first two vertices as intermediate places.
etc., until it has all paths that start anywhere, end anywhere, and have any vertices as intermediate places.

It can be coded like an in-place matrix multiply, with the matrix being multiplied by itself while it is being modified by placing the resulting elements in itself too, but you have to be very careful of the proper nesting of the three loops. The one iterating through the intermediate vertices has to be the outer loop. In the form A_ij times A_jk (summed over j) the j loop is the outer one.

-- hendrik
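Hendrik's loop-nesting point as a sketch: the intermediate-vertex index `k` must be the outer loop for the in-place update to be correct. The matrix encoding with `math.inf` for "no edge" is an illustrative choice.

```python
# In-place Warshall-style update: the intermediate-vertex loop (k) is
# outermost, so each pass extends paths through one more allowed vertex.
import math

def warshall_inplace(A):
    n = len(A)
    for k in range(n):          # outer loop: the intermediate vertex
        for i in range(n):
            for j in range(n):
                A[i][j] = min(A[i][j], A[i][k] + A[k][j])
    return A

INF = math.inf
A = [[0, 1, INF],
     [INF, 0, 2],
     [INF, INF, 0]]
print(warshall_inplace(A)[0][2])  # 3, via the intermediate vertex 1
```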

view this post on Zulip Paolo Perrone (May 21 2020 at 21:38):

Hey all, here's the permanent video:
https://youtu.be/04azeAGbn9U
The HD version should be ready soon, at the same link.

view this post on Zulip Gershom (May 22 2020 at 06:13):

Jacques Carette said:

Anyone interested in general versions of Floyd-Warshall should look at Dioids and *-semirings in general. Russell O'Connor wrote a nice blog post (in Haskell) explaining why: http://r6.ca/blog/20110808T035622Z.html

Related to Floyd-Warshall and petri nets is this paper from '08 "Petri Nets are Dioids" https://www.semanticscholar.org/paper/Petri-Nets-Are-Dioids-Baldan-Gadducci/f22418833902f4fb5c80cabae2934150ccb52451

view this post on Zulip Jade Master (May 22 2020 at 19:23):

Gershom said:

John Baez said:

I haven't carefully checked, but I think Dijkstra works for any commutative quantale. Someone should take a bunch of shortest path algorithms, starting with Dijkstra and Floyd-Warshall, try to generalize them to commutative quantales, and put them into a nice categorical framework. The last step might let one create new algorithms. I think the best opportunity for new algorithms is in solving problems with partial data, changing data, etc.

One restriction that's important to consider is that Dijkstra's algorithm does not work on graphs with negative weights. Floyd-Warshall does, and furthermore can be used to detect "negative cycles" as well.

Right. If I recall correctly, there are rigs you can choose so that graphs valued in them can have negative weights...but the transitive closure doesn't always exist.

view this post on Zulip Jade Master (May 22 2020 at 19:23):

Gershom said:

Jacques Carette said:

Anyone interested in general versions of Floyd-Warshall should look at Dioids and *-semirings in general. Russell O'Connor wrote a nice blog post (in Haskell) explaining why: http://r6.ca/blog/20110808T035622Z.html

Related to Floyd-Warshall and petri nets is this paper from '08 "Petri Nets are Dioids" https://www.semanticscholar.org/paper/Petri-Nets-Are-Dioids-Baldan-Gadducci/f22418833902f4fb5c80cabae2934150ccb52451

Ah thanks so much. This seems like a very relevant and useful connection.

view this post on Zulip Jade Master (May 22 2020 at 19:24):

[Quoting…]
What do you think goes wrong?

view this post on Zulip John Baez (May 22 2020 at 19:24):

I think it's good to ignore negative weights if one is trying to get a good theory along the lines of Jade's. In a world where one can run around the block and go -3 miles, shortest path problems are gonna be weird!

view this post on Zulip Jade Master (May 22 2020 at 19:26):

Hendrik Boom said:

Warshall's algorithm starts with all the one-step paths.
Then for the first vertex it adds all paths that start anywhere, end anywhere, and have only the first vertex as intermediate place.
Then for the second vertex it adds all paths that start anywhere, end anywhere, and have only the first two vertices as intermediate places.
etc., until it has all paths that start anywhere, end anywhere, and have any vertices as intermediate places.

It can be coded like an in-place matrix multiply, with the matrix being multiplied by itself while it is being modified by placing the resulting elements in itself too, but you have to be very careful of the proper nesting of the three loops. The one iterating through the intermediate vertices has to be the outer loop. In the form A_ij times A_jk (summed over j) the j loop is the outer one.

-- hendrik

Thanks Hendrik. You mean that the outermost loop has to be the one which adds new vertices which are allowed to be intermediate steps?

view this post on Zulip Jade Master (May 22 2020 at 20:02):

John Baez said:

I think it's good to ignore negative weights if one is trying to get a good theory along the lines of Jade's. In a world where one can run around the block and go -3 miles, shortest path problems are gonna be weird!

Right. Suppose your graph has a loop consisting entirely of negative weights. Let $M$ be the matrix representing your graph. Then the sum $\sum_{n\geq 0} M^n$ won't converge. Every time you multiply the matrix by itself (using min and +), the values between two points in your loop will get more negative.
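This divergence is easy to see numerically. A small illustrative check with a two-node loop of total weight -2 (the zero-weight self-loops play the role of the rig unit):

```python
# Repeated min-plus squaring of a matrix with a negative cycle: the
# diagonal entries keep decreasing, so the sum of powers never stabilizes.
import math

def minplus_mul(A, B):
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0, -1],
     [-1, 0]]    # two-node negative loop, with zero-weight self-loops
for _ in range(4):
    P = minplus_mul(P, P)
    print(P[0][0])  # prints -2, -4, -8, -16: no fixed point in sight
```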

view this post on Zulip John Baez (May 22 2020 at 20:05):

Right, so I think you should feel completely free to ignore commutative rigs that aren't commutative quantales, when you're developing your vision here.

view this post on Zulip Hendrik Boom (May 24 2020 at 20:24):

Thanks Hendrik. You mean that the outermost loop has to be the one which adds new vertices which are allowed to be intermediate steps?

Yes, exactly.

view this post on Zulip Jade Master (Jan 28 2021 at 02:43):

Btw I have made a major update to this paper: https://arxiv.org/abs/2005.06682
The main changes are that I improved a lot of the exposition, and there is a new section on functional open matrices. Functional open matrices are a subclass of open matrices for which the theory developed here, in my opinion, has some more practical implications.

view this post on Zulip Jade Master (Jan 28 2021 at 02:46):

Namely there are two relevant facts.

  1. If F is the functor which sends a matrix to the solution of its algebraic path problem, then for functional open matrices $M: X \to Y$ and $N : Y \to Z$, $F(M \circ N)$ can be computed as the matrix product $F(M)F(N)$.

view this post on Zulip Jade Master (Jan 28 2021 at 02:49):

  2. The solution to the algebraic path problem for a composite of functional open matrices which occurs in exactly n steps can be computed using a sort of binomial theorem: Screenshot-from-2021-01-27-18-47-48.png

view this post on Zulip Jade Master (Jan 28 2021 at 02:52):

This equation says that on the composite of functional open matrices M and N, the solution of the algebraic path problem which is restricted to the boundary and occurs in exactly n steps is computed by summing, over i+j=n, the product of the contribution from M which occurs in i steps with the contribution from N which occurs in j steps.

view this post on Zulip Jade Master (Jan 28 2021 at 02:53):

If this doesn't make sense maybe it will help to know that for a matrix $M$, the minimum-weight paths between all vertices which occur in exactly n steps are represented by the power $M^n$.
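A tiny illustrative check of this fact in the min-plus rig (the three-node graph is my example, not from the paper):

```python
# In min-plus, entry (i, j) of M^n is the least weight among paths
# from i to j that use exactly n edges.
import math

def minplus_mul(A, B):
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

INF = math.inf
M = [[INF, 1, 4],      # edges: 0 -> 1 (weight 1), 0 -> 2 (weight 4),
     [INF, INF, 2],    #        1 -> 2 (weight 2)
     [INF, INF, INF]]
M2 = minplus_mul(M, M)
print(M2[0][2])  # 3: the only 2-step path 0 -> 1 -> 2 has weight 1 + 2
```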

view this post on Zulip Jade Master (Jan 28 2021 at 02:54):

Also you can read the paper for more details but the black box indicates that the open matrix is restricted to its boundary.

view this post on Zulip Jade Master (Jan 28 2021 at 02:55):

Anyway, I turned the formula above into some code, in the case of Markov processes: https://github.com/Jademaster/compositionalmarkov