John Baez said:
Sophie Libkind has already unified the input-output doctrine and variable sharing doctrine (which she called the resource sharing doctrine) in her paper
- Sophie Libkind, An algebra of resource sharing machines.
By the way, this paper has a good explanation of the distinction between the two doctrines, starting in the abstract:
To some, open dynamical systems are input-output machines which interact by feeding the input of one system with the output of another. To others, open dynamical systems are input-output agnostic and interact through a shared pool of resources. In this paper, we define an algebra of open dynamical systems which unifies these two perspectives.
I'm getting very interested in this issue. Consider Sophie's discussion of the difference. First, the machine/input-output/directed wiring paradigm:
When two machines compose, one machine is the designated sender and the other is the designated receiver. The sender emits information which directs the evolution of the receiver.
In the special case where a system is both the sender and the receiver, this interaction describes feedback. When machines compose:
- communication is uni-directional. Information travels from the sender to the receiver but not vice versa.
- interaction is active. The communication channel from sender to receiver is specifically engineered to enable the passing of information. The receiver does not evolve without input from the sender.
When people communicate by passing notes, they are composing as machines where the note plays the role of the engineered communication channel.
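To make the directed picture concrete, here is a minimal sketch (my own toy code, not from the paper) of two state machines composed machine-style, with information flowing only from the sender to the receiver:

```python
# A toy illustration (not from Sophie's paper): directed composition of two
# state machines, where the sender's output drives the receiver's update and
# information never flows the other way.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Machine:
    state: int
    update: Callable[[int, int], int]   # (state, input) -> new state
    readout: Callable[[int], int]       # state -> output

sender = Machine(0, lambda s, i: s + 1, lambda s: s)      # counts upward
receiver = Machine(0, lambda s, i: s + i, lambda s: s)    # accumulates inputs

def step(sender, receiver):
    """One synchronous step: the sender emits, the receiver reacts."""
    out = sender.readout(sender.state)
    sender.state = sender.update(sender.state, 0)
    receiver.state = receiver.update(receiver.state, out)

for _ in range(3):
    step(sender, receiver)
print(sender.state, receiver.state)  # 3 3
```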
Now the variable-sharing/undirected wiring paradigm:
Two resource sharers compose by simultaneously affecting and reacting to a shared pool of resources. When resource sharers compose:
- communication is undirected. Each system may both affect and be affected by the state of the shared pool of resources. Through this medium, they may both affect and be affected by each other.
- interaction is passive. The communication channel is incidental to the fact that the systems refer to the same resource. The rules for how each system affects and reacts to the state of the resource are independent of the action of other systems on the pool.
When people communicate verbally, they are composing as resource sharers where the shared resource is "vibrations in the air space." All participants in a conversation affect and are affected by the changing state of the air between them.
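And a matching toy sketch (again my own illustration) of the resource-sharing picture: two systems whose only interaction is that they both read and write one shared variable, each according to a rule that makes no reference to the other system:

```python
# Toy resource sharers: each rule depends only on the shared pool, never on
# the other sharer; interaction is incidental to referring to the same resource.
def sharer_a(pool):
    return pool + 1.0       # A feeds the shared pool

def sharer_b(pool):
    return pool * 0.5       # B draws down half of the shared pool

pool = 4.0
for _ in range(3):
    pool = sharer_b(sharer_a(pool))   # both act on, and react to, the same resource
print(pool)  # 4.0 -> 2.5 -> 1.75 -> 1.375
```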
It seems to me a little odd to distinguish sharply between people sending each other written messages and people having an in-person conversation as belonging to different paradigms.
If I play chess, especially with a computer, it might seem right to view it in directed terms: I make a move and receive an update as to the position once the computer has responded. But I might take the board position as a shared resource between the two players.
Perhaps she's chosen such an illustration to hint at the somewhat conventional nature of the distinction.
I see in Sophie and Keri D'Angelo's recent Dependent Directed Wiring Diagrams for Composing Instantaneous Systems that they're bringing Mealy machines and stock-flow diagrams into contact with each other, especially sec. 5.2.
They're normally seen as belonging to different paradigms.
It's not that stock-flow diagrams belong to some paradigm (directed or undirected), it's how you compose them that belongs to some paradigm.
Our first batch of papers on stock-flow diagrams composed them in an undirected way, by identifying stocks. I believe all the software we have so far uses only this method. But Nate Osgood is constantly militating for also composing them in a directed way, where the value in a stock of one determines a flow function in another. (The 'flow function' says how the value of a flow is a function of the values of various stocks and other variables.)
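A rough sketch of the two styles in plain Python (my own toy encoding, not the StockFlow.jl API; the model names and rates are made up):

```python
# A stock-flow model here is just a dict of stocks plus flows (source, target,
# rate function), stepped with Euler's method.
def step(stocks, flows, dt=1.0):
    """One Euler step: each flow moves dt * rate(stocks) from its source to its target."""
    new = dict(stocks)
    for src, tgt, rate in flows:
        amount = dt * rate(stocks)
        if src: new[src] -= amount
        if tgt: new[tgt] += amount
    return new

# Piece 1: infection, S -> I.   Piece 2: recovery, I -> R.
infection = [("S", "I", lambda s: 0.1 * s["S"] * s["I"] / 100)]
recovery  = [("I", "R", lambda s: 0.05 * s["I"])]

# Undirected composition: identify the two "I" stocks; the composite is just
# the union of the flows acting on one shared pool of stocks.
shared = {"S": 99.0, "I": 1.0, "R": 0.0}
shared = step(shared, infection + recovery)

# Directed composition (the style Nate Osgood advocates): the value of a stock
# in one model determines a flow function in another; here hypothetical
# hospital admissions are driven by the city's I stock, with no stocks identified.
city = {"S": 99.0, "I": 1.0, "R": 0.0}
hospital = {"Beds": 50.0, "Occupied": 0.0}
admissions = [("Beds", "Occupied", lambda s: 0.02 * city["I"])]
hospital = step(hospital, admissions)
city = step(city, infection + recovery)
```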
Sophie has created software that lets you compose Petri nets in both directed and undirected ways, and the same ideas should work for stock-flow diagrams.
The only reason we haven't yet introduced directed composition for stock-flow diagrams is simply that software takes time to write and there are not enough people who can write the software. We need more people who know category theory, can program in AlgebraicJulia, and want to work on this project!
John Baez said:
The only reason we haven't yet introduced directed composition for stock-flow diagrams is simply that software takes time to write and there are not enough people who can write the software. We need more people who know category theory, can program in AlgebraicJulia, and want to work on this project!
oh, really? I could be interested, depending on the expected time commitment. Is there a page (or github?) for the project?
Sure, there's a github for StockFlow.jl:
https://github.com/AlgebraicJulia/StockFlow.jl
(read the long description near the bottom). There are also papers describing it. The first paper is short:
The second goes into a lot more detail, and discusses newer features, with a lot of code shown at the end, but is aimed at an audience who knows little category theory:
The time commitment would be up to you, at least if you're doing something that nobody else relies on yet. You'd probably need to talk to us sometimes.
John Baez said:
It's not that stock-flow diagrams belong to some paradigm (directed or undirected), it's how you compose them that belongs to some paradigm.
Ok, so then one issue is whether there's a good combination:
The main theorem of this paper (Theorem 5.3) unites these two flavors of composition in a single framework for open dynamical systems.
But maybe there's something more I'm after, not just a question of being able to compose in either of two ways, but of how we carve up the world into kinds of interacting subsystems.
I guess my intuition is that the directed form of communication is the more fundamental, and that undirected sharing is a limit case. The motion of one end of a rod needs to be transmitted down its length.
But maybe there's space for an undirected composition of pure identification.
In physical systems interaction is always bidirectional and there is no notion of "input" and "output", as I briefly explain in Section 2 of my paper Double categories of open systems: the cospan approach. Directed wiring diagrams don't capture these features. That's why I take the cospan approach (which is closely allied to the operad of undirected wiring diagrams). I consider this fundamental to understanding the physical world - so it's interesting that people are able in some situations to act as if causation is directional. I examine an example.
It's interesting that the "arrows of space" people construct to embed unidirectional communication in the physical world all ultimately devolve to the arrow of time.
However, I think even in the physical paradigm, there's a distinction (perhaps again ultimately only conventional) between interaction by sharing a resource and direct bidirectional interaction (coupling) between systems.
James Deikun said:
It's interesting that the "arrows of space" people construct to embed unidirectional communication in the physical world all ultimately devolve to the arrow of time.
Yes! I didn't come out and say that in my paper, for some reason, but my example should make that clear. Maybe I should say "arrow of time" out loud, or maybe that would just open another Pandora's can of worms.
James Deikun said:
However, I think even in the physical paradigm, there's a distinction (perhaps again ultimately only conventional) between interaction by sharing a resource and direct bidirectional interaction (coupling) between systems.
Agreed! We could take particles in two subsystems and "couple" them by letting them interact gravitationally (or admitting that they do, in fact, interact gravitationally). Or we could take particles in two subsystems and decree that they are the same (or admit that they are the same, since the subsystems overlap) - that's the "resource sharing" approach.
The second seems mathematically simpler since you don't need to specify a way the two particles are interacting. That's why we've focused on the second so far.
When you start specifying ways that two parts of two subsystems can interact, it seems you need a hand-crafted operad that has operations for all these ways. I'm not opposed to that. It would be fun to try it.
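A toy sketch of the contrast (my own illustration, with made-up force laws): composing two one-particle subsystems either by adding an explicit interaction force, or by decreeing that the two particles are one and the same:

```python
# Each subsystem is one particle tied to the origin by a spring.
def spring_to_origin(k):
    return lambda q: -k * q

fA, fB = spring_to_origin(1.0), spring_to_origin(2.0)

# (1) Coupling: keep both particles and add an explicit, hand-specified
#     interaction force (here a hypothetical spring of stiffness 0.5 between them).
k_int = 0.5
def coupled_forces(qA, qB):
    return (fA(qA) - k_int * (qA - qB),
            fB(qB) - k_int * (qB - qA))

# (2) Resource sharing: decree that the two particles are the same variable q;
#     no interaction data is needed, the subsystem forces simply add.
def shared_force(q):
    return fA(q) + fB(q)

print(coupled_forces(1.0, -1.0))  # (-2.0, 3.0)
print(shared_force(1.0))          # -3.0
```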
I tend to think that directed representations are good for processes (temporal networks). Semantics for these tend to involve states of resources as input/output wires and transformations (interactions) of those states as nodes.
Undirected wiring diagrams are good for representing spatial networks, with the nodes representing persistent resources and edges (or often higher simplices) representing interactions. Semantics for these diagrams tend to be dynamical systems and/or their orbits, tracking change over time (sometimes obscured by a focus on fixed points).
Ultimately, these should interact, with processes modifying spatial networks, and process transitions triggered by dynamical changes to persistent variables. The Carnot engine is a good example, with process steps corresponding to different network arrangements of the engine and reservoirs.
Another point in the interrelation of doctrines: Lecture 10 in @David Jaz Myers's DOTS lecture series sees him use "clocks" to turn the compositional theory of Moore machines via lenses into a behavioral systems theory (Jan Willems style), composing by sharing variables.
This is part of the Future Work announced in the DOTS paper:
Examine the Yoneda theory of systems theories (extending the work on representable morphisms of systems theories in Chapter 5 of [Mye21]). We will show that the simplest form of Willems’ behavioral approach to systems theory, as categorified in the sheaf approach of Schultz, Spivak, and Vasilakopoulou [SSV19], is a discrete opfibration classifier in the 2-category of systems theories. In particular, features of systems theories representable by maps (such as trajectories, steady states, etc. but also control-barrier functions and cocycles) give (sometimes lax) morphisms of systems theories into Willems’ style behavioral systems theories (as demonstrated in the manuscript [Mye21]). This gives a robust class of compositionality theorems. We will explore how time variation in system behaviors arises out of a choice of a category of clock-systems which represent time-varying behavior, and connect this with the sheaf theoretic approach of [SSV19].
John Baez said:
That's why I take the cospan approach (which is closely allied to the operad of undirected wiring diagrams).
And allied to properads too:
Hmm, why not a 'Double Properadic Theory of Systems'? Can't we run these wiring diagrams backwards? (Unwiring diagrams?)
There's plenty of interest in variable sharing, do people look at de-identifying variables? A couple of financially independent people come together and pool resources, then later separate. A community of people who can each carry out many tasks starts to specialise.
I guess this is just along the lines of:
Spencer Breiner said:
Ultimately, these should interact, with processes modifying spatial networks, and process transitions triggered by dynamical changes to persistent variable.
I guess that, just as multiplication is easier than factorization and marrying fortunes is easier than divorce settlements, decomposition of systems is generally less tractable.
Yes, one should probably become an expert on composition first.
David Corfield said:
There's plenty of interest in variable sharing, do people look at de-identifying variables? A couple of financially independent people come together and pool resources, then later separate. A community of people who can each carry out many tasks starts to specialise.
In environmental impact analysis, there is an interesting example. As a biophysical process, a cow can be seen as producing meat and milk. Say 1 cow produces 200 kg of meat and 1 ton of milk. But raising the cow, maintaining her space, veterinary checks, etc. all have an environmental impact of, say, 100 kg CO2-equivalent. Now, how do we allocate the responsibility for this impact between the meat and milk products? Since they both "share the cow", and since a cow cannot be physically separated into a "meat production" and a "milk production" process, this question cannot be settled by "biophysical" arguments.
Here, experts spend a lot of time discussing which allocation method is best (they mostly rely on economic principles and models).
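To make the allocation question concrete, here is a small worked sketch (the prices are my own made-up numbers, purely for illustration):

```python
# Numbers from the message above, plus hypothetical prices.
impact_kg_co2e = 100.0
meat_kg, milk_kg = 200.0, 1000.0

# Mass-based allocation: split in proportion to physical output.
meat_share_mass = meat_kg / (meat_kg + milk_kg)    # 1/6
milk_share_mass = milk_kg / (meat_kg + milk_kg)    # 5/6

# Economic allocation: split in proportion to revenue (prices are assumptions).
meat_price, milk_price = 10.0, 0.5                  # currency per kg, made up
meat_rev, milk_rev = meat_kg * meat_price, milk_kg * milk_price
meat_share_econ = meat_rev / (meat_rev + milk_rev)  # 2000 / 2500 = 0.8

print(meat_share_mass * impact_kg_co2e, milk_share_mass * impact_kg_co2e)        # ~16.7, ~83.3
print(meat_share_econ * impact_kg_co2e, (1 - meat_share_econ) * impact_kg_co2e)  # 80.0, 20.0
```

The two conventions split the same 100 kg of impact very differently, which is exactly why the choice of allocation method cannot be settled biophysically.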
Peva Blanchard said:
In environmental impact analysis, there is an interesting example.
Yes, interesting. I wonder if we can think of parallel examples in biology, where Nature in effect has to solve the problem of the benefits of dividing or combining functions in an organism. On another thread, we were talking about the mechanism discussed in Symmetries of Living Systems where there's gene duplication in a gene regulatory network. So long as the input trees to two nodes remain isomorphic, they belong to the same fibre of the symmetry fibration, and their dynamics will be synchronised. But then new connections to one may occur to allow for symmetry breaking, desynchronisation and differentiation of function.
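A toy numerical sketch of that synchronisation claim (my own illustration, not from the book): two nodes driven only by the same upstream node synchronise, and an extra connection to one of them breaks the symmetry:

```python
import math

def simulate(extra_edge, steps=2000, dt=0.01):
    x1, x2, x3 = 0.3, 1.0, -0.5   # arbitrary, different initial conditions
    for _ in range(steps):
        d1 = -x1 + 0.5                    # upstream "driver" node
        d2 = -x2 + math.tanh(x1)          # node 2: input from node 1 only
        d3 = -x3 + math.tanh(x1) + (math.tanh(x2) if extra_edge else 0.0)
        x1, x2, x3 = x1 + dt * d1, x2 + dt * d2, x3 + dt * d3
    return abs(x2 - x3)

print(simulate(extra_edge=False))  # ~0: isomorphic input trees, nodes 2 and 3 synchronise
print(simulate(extra_edge=True))   # noticeably nonzero: the extra edge breaks the symmetry
```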
John Baez said:
one should probably become an expert on composition first.
Sound advice, I'm sure.
Luckily there are experts on composition to hand. I think I'm missing something here.
So undirected wiring diagrams corresponding to cospans of finite sets represent Petri nets with no transitions, and these act as interface maps. The DOTS paper presents a typical such map arising from this cospan:
So what expressions are we allowed to include as the finite sets here? It seems like we're being led to think of some coproduct on the left and a single finite set on the right, and imagining a composition happening. But what dictates that I can't also think of unwiring via ?
Also, presumably we're allowed monoidal juxtapositions, such as .
But what dictates that I can't also think of unwiring via ?
Any cospan of finite sets is allowed. There are 81 different maps and 9 different maps , which come in different kinds, so your expression is ambiguous. However, undirected wiring diagrams can have "loose ends", which is what your phrase "unwiring" seems to indicate.
Fong and Spivak show this in their picture of what you can do in a hypergraph category:
I'm talking about that little edge that ends in a dot.
I don't know if this is what you meant by "unwiring". Wires are allowed to end or split.
Thanks for that! Perhaps 'unwiring' is a bad word, but I can put the issue quite succinctly.
Consider these diagrams from the DOTS paper:
The interface interaction in the first diagram acts on the pair of Petri nets to yield the Petri net in the second diagram. Above I was wondering about running the diagram backwards, to see how the dual of the cospan in the first diagram, a cospan which splits the central wire, acts on the Petri net in the second diagram. Would it produce the two Petri nets from the first diagram?
Then I seemed to be persuaded that there was a problem, something like: if there were some quantitative data associated to the shared place, we wouldn't know how to split it when we separated out the two sub-nets. But if we're merely looking at the Petri nets as given, there's no problem with this decomposition, is there?
It sounds like you're wondering whether composing open Petri nets with the dual of the given cospan acts as the inverse of composing with that cospan. It doesn't, since the dual of a cospan is not its inverse.
You can calculate it out, and I recommend doing that. But I know in my bones that there's no way you can "chop a connected Petri net into two pieces" by composing it with some cospan of finite sets.
(Given how hard it is to communicate in writing, I'm not even sure you were wondering if you could chop a connected Petri net into two parts.)
I guess I'm stuck at the stage of "It must yield something, but I don't know what"
John Baez said:
You can calculate it out, and I recommend doing that.
Right. I'll shut up and calculate.
You'll see that composing cospans by pushout is like gluing things together.
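For concreteness, a minimal sketch (my own illustration) of the pushout of finite sets as a quotient of a disjoint union, which is the gluing in question:

```python
# The pushout of N <-f- Y -g-> M: take the disjoint union of N and M and
# glue f(y) to g(y) for every y in the apex Y (a tiny union-find does the gluing).
def pushout(N, M, f, g):
    """N, M: lists of elements; f, g: dicts from a shared apex Y into N and M."""
    parent = {**{("N", n): ("N", n) for n in N}, **{("M", m): ("M", m) for m in M}}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for y in f:
        parent[find(("N", f[y]))] = find(("M", g[y]))   # glue f(y) to g(y)
    classes = {}
    for x in parent:
        classes.setdefault(find(x), []).append(x)
    return list(classes.values())

# Glue two 2-element sets along one shared point:
print(pushout(["a", "b"], ["c", "d"], {0: "b"}, {0: "c"}))
# [[('N', 'a')], [('N', 'b'), ('M', 'c')], [('M', 'd')]]
```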
Ok, so in this case we just get the same Petri net but with an extra copy of in the interface.
What can be said in general about interface interactions acting on Petri nets? Something like: interface interactions can identify places and can delete and duplicate open variables. I guess they can also glue in some noninteracting places.
In the theory of open Petri nets, interface interactions (= cospans of finite sets) can't delete places, and they can't add in new places. They can identify places. Here I am only talking about what they do to places. An open Petri net has transitions, places, and also a map from a finite set (or two, or three,...) to the set of places.
If we apply the interface map to a closed Petri net, doesn't that add a place with no transitions?
John is thinking of using decorated cospans, where we split the interface into input and output and only glue together the entire input of one system to the entire output of another.
In the operadic approach, we don't separate the interface into input and output and instead compose systems by first disjoint unioning them, and then acting on the right by a(n undecorated) cospan.
You're right that by acting on it with "bad" cospans we can add dummy variables and other such nonsense. This is why I personally like to restrict my cospans to always have their left leg be a surjection and their right leg be an injection; equivalently, this is an equivalence relation on the domain set together with an injection from the codomain set to the blocks of the associated partition. Composing on the right by such a cospan is a two-step process, sharing and hiding: first we take the variables and set them equal according to the equivalence relation (that is, we push out over the left leg) and then we select some of these to re-expose as public (or, dually, hide the others --- this is precomposing by the right leg). These have the nice side effect of never "double labelling" a variable.
As undirected wiring diagrams, these cospans (left leg surjective, right leg injective) have no "passing" or "dead" wires. No two outer ports are connected (no passing wires) and every outer port is connected to some inner port (they are "live").
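A hedged sketch of that two-step "share then hide" reading in plain Python (my own illustration; here "setting variables equal" is simplified to checking that the identified variables carry equal values):

```python
# A cospan of finite sets X --l--> N <--r-- Y acting on an X-indexed family of
# variables: (1) "share" by merging variables that l identifies, (2) "hide" by
# exposing only the elements of N selected by r.
def share_then_hide(variables, l, r, N):
    """variables: dict over X; l: dict X -> N; r: dict Y -> N."""
    assert set(l.values()) == set(N), "left leg should be surjective"
    assert len(set(r.values())) == len(r), "right leg should be injective"
    shared = {}
    for x, n in l.items():
        if n in shared:
            # simplification: identified variables are required to carry equal values
            assert shared[n] == variables[x], "identified variables must agree"
        else:
            shared[n] = variables[x]
    return {y: shared[r[y]] for y in r}   # re-expose only what r selects

# Hypothetical example: x1 and x2 are set equal and re-exposed once; x3 is hidden.
print(share_then_hide({"x1": 5, "x2": 5, "x3": 7},
                      l={"x1": "a", "x2": "a", "x3": "b"},
                      r={"y1": "a"},
                      N={"a", "b"}))
# {'y1': 5}
```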
But they don't correspond as directly to hypergraph categories.
Sophie and I talk a bit about this in our paper:
David Corfield said:
Then I seemed to be persuaded that there was a problem, something like: if there were some quantitative data associated to the shared place, we wouldn't know how to split it when we separated out the two sub-nets. But if we're merely looking at the Petri nets as given, there's no problem with this decomposition, is there?
By "unwiring" I took you to mean decomposition, rather than composition. For concreteness, let's call the two components and , and the outer system , and for the wiring diagram. Then asserting a composition is primarily creating a constraint on what kind of system can be. On the other hand, it is not true that we can take any system with interface and decompose it along ; some systems won't satisfy the relevant constraints.
Spencer Breiner said:
By "unwiring" I took you to mean decomposition, rather than composition. For concreteness, let's call the two components $$A$$ and $$B$$, and the outer system $$X$$, and $$f$$ for the wiring diagram. Then asserting a composition $$X=f(A,B)$$ is primarily creating a constraint on what kind of system $$X$$ can be. On the other hand, it is *not* true that we can take any system with interface $$X$$ and decompose it along $$f$$; some systems won't satisfy the relevant constraints.
An interesting case of this is if $$f$$ represents a parallel product (aka non-interacting product or disjoint union). Then of course $$X$$ will only factor through $$f$$ if it is disconnected.
I've been wondering for a while about trying to assign a quantity like "how much is lost by supposing that $$X$$ were of the form $$f(A,B)$$", or "how much energy would it take to cause $$X$$ to take the form $$f(A,B)$$" (something like a bond-energy). In general, I'm quite interested in the idea of "approximate homomorphisms" between systems...
David Corfield said:
If we apply the interface map to a closed Petri net, doesn't that add a place with no transitions?
You're right. We can add as many of those as we want.
(We never do, which is probably why I got this wrong.)
David Jaz Myers said:
In general, I'm quite interested in the idea of "approximate homomorphisms" between systems...
I think you said elsewhere that this might be so in the context of a "best" approximation. So given an $$X$$ and an $$f$$, we find an $$A$$ and $$B$$ such that $$f(A,B)$$ best approximates $$X$$.
Apropos of a technical question I just asked on the other thread, I thought of promonoidal categories, which seem to me to be a pretty rare example of a categorical structure in which "all reasonable decompositions exist uniquely," especially if you think of them as a special kind of multicategory. Mario Román calls these multicategories malleable in his gorgeous thesis that I've just now looked into for the first time.
Kevin Carlson said:
Apropos of a technical question I just asked on the other thread, I thought of promonoidal categories, which seem to me to be a pretty rare example of a categorical structure in which "all reasonable decompositions exist uniquely," especially if you think of them as a special kind of multicategory. Mario Román calls these multicategories malleable in his gorgeous thesis that I've just now looked into for the first time.
What do you mean by saying that all reasonable decompositions exist uniquely in promonoidal categories?
For multicategories, composites are unique on the nose by definition, just as they are for categories... but promonoidal categories are defined in terms of categories with additional structure encoded in terms of coherence isomorphisms.
When you view a promonoidal category as a malleable multicategory, it is one in which, for instance, every ternary morphism can be decomposed into a composite of two binaries in an essentially unique way, and similarly for all other combinations of arities you can name.
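For reference, one way to express that essential uniqueness (my paraphrase; the notation may differ from Román's thesis) is via the coend formula defining the ternary morphisms of a promonoidal category $$(\mathcal{C}, P, J)$$:

$$ P_3(a,b,c;d) \;:=\; \int^{x \in \mathcal{C}} P(a,b;x) \times P(x,c;d) \;\cong\; \int^{x \in \mathcal{C}} P(b,c;x) \times P(a,x;d). $$

Every ternary morphism is thus built from a pair of binary ones, uniquely up to the identifications imposed by the coend; the associativity constraint being an isomorphism, rather than a mere comparison map, is what makes the decomposition essentially unique.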