Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: theory: applied category theory

Topic: Graphs with polarities


view this post on Zulip John Baez (Apr 02 2025 at 16:01):

@Adittya Chaudhuri and I are writing a paper on "Graphs with polarities", and I thought it would be nice to have our conversations about this paper here. I've always liked doing research in public forums, like blogs. It's a good way to make sure we're not missing big ideas or making dumb mistakes, since people can comment. It's a good way to publicize the research. And for others, it's entertaining to watch - and helpful for students who are just starting research and haven't seen how it's done.

view this post on Zulip John Baez (Apr 02 2025 at 16:03):

Anyone wanting to get the basic idea of what we'll be discussing can check out my blog series

(This links to part 5, but you can easily click back to earlier articles.)

view this post on Zulip John Baez (Apr 02 2025 at 16:18):

But I'll just dive in and start talking to @Adittya Chaudhuri.

Any graph $G$ freely generates a category $\mathrm{Free}(G)$.
In the paper we talk about graphs whose edges are labeled by elements of a monoid $L$. We show that any such $L$-labeled graph $G$ freely generates a category $\mathrm{Free}(G)$ over the one-object category $BL$.

I want to make some expository changes but also I have a more substantial idea.

In terms of exposition, we currently define an $L$-labeled graph to be a graph with a functor $\mathrm{Free}(G) \to BL$. This makes the concept seem too complicated. It's just a graph with edges labeled by elements of $L$. We'd defined graphs with edges labeled by elements of a set in Definition 2.1, so we can just apply that here.

It should be a little proposition, not a definition, that when $L$ is a monoid an $L$-labeled graph gives a functor $\mathrm{Free}(G) \to BL$ (and the $L$-labeled graph can be recovered from this).

I'll make this change - I won't list all the expository changes I want to make; it's quicker just to make them - but I want to emphasize my philosophy: definitions should be simple and easy to understand whenever possible. They should not be impressive, especially in applied category theory where non-category-theorists will be trying to understand them.
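As a toy illustration of the proposition just mentioned (my own sketch, not from the paper): with $L$ the sign monoid $\{+1,-1\}$ under multiplication, the functor $\mathrm{Free}(G) \to BL$ sends a path to the product of its edge labels.

```python
# Sketch (illustrative, not from the paper): an L-labeled graph with L the
# sign monoid {+1, -1} under multiplication. The induced functor
# Free(G) -> BL sends a composable path to the product of its edge labels.

# Edges of a small labeled graph: (source, target, label in L)
edges = [("a", "b", +1), ("b", "c", -1), ("c", "d", -1)]

def path_label(path):
    """Multiply edge labels along a composable path -- the value of the
    functor Free(G) -> BL on that path."""
    label = 1  # identity element of the monoid L
    for (_, _, l) in path:
        label *= l
    return label

print(path_label(edges))  # +1: the two negative influences cancel
```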

view this post on Zulip Adittya Chaudhuri (Apr 02 2025 at 16:22):

Just to clarify, did you mean Definition 2.1 as in the current Overleaf file?

view this post on Zulip Adittya Chaudhuri (Apr 02 2025 at 16:23):

(As Defn 2.1 is the definition of $L$-labeled graph.)

view this post on Zulip John Baez (Apr 02 2025 at 16:24):

Yes: that's where we define graphs with edges labeled by elements of a set, and labeling edges by elements of a monoid works the exact same way.

But here's the more substantial idea. We're calling elements of $L$ 'polarities' and using them to describe different ways in which one thing affects another. For example if $L = \{+,-\}$ is the group $\mathbb{Z}/2$, an edge labeled by $+$ means one vertex positively affects another, while an edge labeled by $-$ means one vertex negatively affects another.

But we've seen that the absence of an edge is also a kind of polarity, meaning 'no effect'.

This has been confusing me for a long time, but I think I've figured it out.

view this post on Zulip Adittya Chaudhuri (Apr 02 2025 at 16:25):

Are you talking about the multiplicative monoid of $\mathbb{Z}/3$?

view this post on Zulip John Baez (Apr 02 2025 at 16:27):

Let me take my time and explain things... it will take a while but it'll become clear.

view this post on Zulip Adittya Chaudhuri (Apr 02 2025 at 16:27):

Sure.

view this post on Zulip John Baez (Apr 02 2025 at 16:34):

So, we want a formalism where the absence of an edge is on the same footing as a labeled edge. And I realized such a formalism already exists! If $V$ is a monoidal category, people talk about categories enriched in $V$, or $V$-categories for short, which have a set of objects and for each pair of objects $x, y$ an object $\hom(x,y) \in V$, and composition and identity-assigning maps obeying the usual properties.

But some people (who? I've seen it somewhere!) also define $V$-graphs, which are like $V$-categories without the composition and identity-assigning maps. A $V$-graph is just a set of vertices and for each pair of vertices $x, y$ an object $\hom(x,y) \in V$. That's all.

People do this because there's something like a monad on the category of $V$-graphs whose algebras are $V$-categories: the 'free enriched category on an enriched graph' monad.

But if we take $V$ to be a mere monoid, thought of as a monoidal category with only identity morphisms, then a $V$-graph is like a graph with polarities!

If we take $V$ to be the boolean monoid $\{T, F\}$, then a $V$-graph is just a graph, where $T$ means the presence of an edge and $F$ means the absence of an edge.

But if we take $V$ to be the multiplicative monoid of $\mathbb{Z}/3$, then a $V$-graph is the same as an $L$-labeled graph where $L = \{+,-\}$ - just as you said.
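A minimal sketch of this in Python (the vertex names and the `hom_or_zero` helper are made up for illustration): a $V$-graph on a vertex set $X$ is just a function $\hom \colon X \times X \to V$, and taking $V$ to be the multiplicative monoid of $\mathbb{Z}/3$, written as $\{0, +1, -1\}$, recovers signed graphs where $0$ means 'no edge'.

```python
# Sketch (illustrative): a V-graph on a vertex set X is a function
# hom: X x X -> V. Here V = {0, +1, -1} is the multiplicative monoid of
# Z/3, and 0 plays the role of "no edge".

X = ["a", "b", "c"]

# hom(x, y): +1 = positive effect, -1 = negative effect
hom = {
    ("a", "b"): +1,
    ("b", "c"): -1,
}

def hom_or_zero(x, y):
    return hom.get((x, y), 0)   # the absence of an edge is the polarity 0

# Composing polarities multiplies them in V, and 0 is absorbing:
# any composite passing through a missing edge is 0.
print(hom_or_zero("a", "b") * hom_or_zero("b", "c"))  # -1
print(hom_or_zero("a", "c") * hom_or_zero("c", "a"))  # 0
```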

view this post on Zulip John Baez (Apr 02 2025 at 16:35):

I'm now thinking that $V$-graphs are fundamentally more important than $L$-labeled graphs, in part because they connect so nicely to enriched category theory.

view this post on Zulip John Baez (Apr 02 2025 at 16:36):

Any monoid $L$ gives a monoid $V$ with an extra element $0$ that's 'absorptive': anything times $0$ is $0$ - and this lets us turn $L$-labeled graphs into $V$-graphs.
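A quick sketch of adjoining such an absorbing element, assuming we represent the new $0$ by `None` (a representation chosen purely for illustration):

```python
# Sketch (illustrative): adjoin an absorbing element 0 to any monoid L,
# giving the monoid V = L ∪ {0} that turns L-labeled graphs into V-graphs.
# Here 0 is represented by None.

def adjoin_zero(mult, unit):
    """Given a monoid (mult, unit) on labels L, return the multiplication
    and unit of V = L ∪ {None}, where None is the absorbing 0."""
    def v_mult(a, b):
        if a is None or b is None:   # anything times 0 is 0
            return None
        return mult(a, b)
    return v_mult, unit

# Example: L = the sign group {+1, -1} under multiplication.
v_mult, v_unit = adjoin_zero(lambda a, b: a * b, +1)
print(v_mult(-1, -1))    # 1
print(v_mult(-1, None))  # None: composites through a non-edge vanish
```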

view this post on Zulip John Baez (Apr 02 2025 at 16:37):

However, there are also monoids $V$ that don't have an absorptive element, so the theory of $V$-graphs is strictly more general than the theory of $L$-labeled graphs. I have not thought of any examples of how this could be useful in the study of polarities.

view this post on Zulip John Baez (Apr 02 2025 at 16:38):

Right now, my reason for getting interested in $V$-graphs is not so much the greater generality, but the fact that it wouldn't be good to be studying a special case of enriched category theory without even noticing it!

view this post on Zulip John Baez (Apr 02 2025 at 16:39):

Okay, now I'm fired up... I will go to the gym and later today I will start working on the exposition of the paper. (The other reason I like doing research in public is that it's more exciting and it makes me work faster!)

view this post on Zulip Adittya Chaudhuri (Apr 02 2025 at 16:40):

Thanks!! I am thinking about the perspective of $V$-graphs.

view this post on Zulip John Baez (Apr 02 2025 at 16:42):

Great! Maybe you can find some aspects of our paper that would be clearer from this perspective. Notice that the $0$ in the monoid $V$ is naturally the additive identity when we give $V$ a rig structure. I.e., adding a non-edge to a labeled edge does nothing to the labeled edge.

view this post on Zulip John Baez (Apr 02 2025 at 16:43):

Do people think about categories enriched in a ring or rig?

view this post on Zulip Adittya Chaudhuri (Apr 02 2025 at 16:47):

My question is: "why do we need the full rig structure on the hom-set?" Isn't the commutative monoid structure sufficient?

view this post on Zulip Adittya Chaudhuri (Apr 02 2025 at 16:49):

I meant categories enriched in commutative monoids.

view this post on Zulip John Baez (Apr 02 2025 at 16:51):

I think we're getting a bit mixed up here, since $L$ doesn't need to be a commutative monoid, and we should think of its operation as multiplication, and an $L$-labeled graph gives a $V$-graph where $V = L \cup \{0\}$ is a not necessarily commutative monoid.

The rig structure plays no role in any of this.

It comes later, and I probably shouldn't have mentioned it yet.

view this post on Zulip Adittya Chaudhuri (Apr 02 2025 at 16:52):

Yes, now I understand your point.

view this post on Zulip Kevin Carlson (Apr 02 2025 at 17:38):

Ignore me if this is distracting, but I thought of you two sometime in the last couple of weeks when I was doing some $V$-enriched calculation and I found I wanted to understand maps $f \colon v \to C(a,b)$ for varying $v \in V$ and an enriched category $C$. I found myself drawing such maps as arrows from $a$ to $b$ labeled by a little $v$. Also, you can compose such a "$v$-arrow" $f$ with a "$w$-arrow" $g \colon w \to C(b,c)$ to get a $v \otimes w$-arrow $a \to c$! You just compose $w \otimes f$ with $g$ in $V$.

view this post on Zulip Kevin Carlson (Apr 02 2025 at 17:41):

So, it feels like there's some nice functor from $V$-categories into categories labeled by the monoid of objects of $V$, up to truth since the objects of $V$ aren't quite a monoid in general. Just an idle thought!

view this post on Zulip Nathanael Arkor (Apr 02 2025 at 17:44):

The concept of a $V$-graded category captures this intuition precisely (every $V$-enriched category induces a $V$-graded category where the $v$-graded morphisms from $a$ to $b$ are given by the morphisms $v \to C(a,b)$). Rory Lucyshyn-Wright's recent paper V-graded categories and V-W-bigraded categories: Functor categories and bifunctors over non-symmetric bases is a nice introduction to these ideas.

view this post on Zulip Kevin Carlson (Apr 02 2025 at 17:49):

Ah, lovely, I'm glad it's a known thing!

view this post on Zulip Adittya Chaudhuri (Apr 02 2025 at 18:02):

(deleted)

view this post on Zulip Adittya Chaudhuri (Apr 02 2025 at 18:08):

One problem is that our $V$ is of the form $[V \rightrightarrows V]$, for a monoid $V$. So the only arrows are identity morphisms.

view this post on Zulip Adittya Chaudhuri (Apr 02 2025 at 18:11):

An $L$-labeled graph is $[L \leftarrow E \rightrightarrows V]$, which means for every $x, y \in V$ there can be multiple labeled edges between $x$ and $y$.

view this post on Zulip Adittya Chaudhuri (Apr 02 2025 at 18:15):

How does the notion of $V$-graph capture "labelled multiple edges" between two vertices $x, y$?

view this post on Zulip Adittya Chaudhuri (Apr 02 2025 at 18:21):

I feel like for a monoid $L$, there should be a functor from the category of $L$-labeled graphs $\mathsf{Gph}/G_{L}$ to the category of $(L \cup \{0\})$-graphs... ? (where $G_{L}$ is the graph with one vertex and an edge for each $l$ in $L$). Of course, to make such an association, we may need additional algebraic structure on $L$ (like a rig, etc.).

view this post on Zulip Jorge Soto-Andrade (Apr 02 2025 at 18:22):

John Baez said:

Adittya Chaudhuri and I are writing a paper on "Graphs with polarities", and I thought it would be nice to have our conversations about this paper here. I've always liked doing research in public forums, like blogs. It's a good way to make sure we're not missing big ideas or making dumb mistakes, since people can comment. It's a good way to publicize the research. And for others, it's entertaining to watch - and helpful for students who are just starting research and haven't seen how it's done.

Nice idea! A very naive question: isn't this (labeling edges of a graph by elements of a monoid) motivated - among other things - by path integrals in discrete integral calculus in $n$ dimensions? The label associated to an edge from $a$ to $b$ could be the potential difference $V(b) - V(a)$. When integrating, you add the weights associated to the edges which constitute your path. But you may have contexts where it is sensible to multiply those weights... You also have cases where you deal with (directed) graphs whose edges are labelled by linear operators, which you can compose.

view this post on Zulip Adittya Chaudhuri (Apr 02 2025 at 18:30):

For me, a motivation for multiplication of weights comes from regulatory networks in biological systems (e.g. the paper https://arxiv.org/pdf/2301.01445, where the case $L = \mathbb{Z}/2$ is considered), and they are called signed graphs (signed categories): "Say $a$ stimulates $b$ and $b$ inhibits $c$; then, composing, $a$ inhibits $c$."

view this post on Zulip Adittya Chaudhuri (Apr 02 2025 at 18:33):

However, I think if we consider the multiplicative monoid of the tropical semiring https://ncatlab.org/nlab/show/tropical+semiring, then the product is actually an addition.

view this post on Zulip Adittya Chaudhuri (Apr 02 2025 at 18:53):

According to the definition of $V$-graph as @John Baez defined above, it seems it is like a monoid-valued matrix $M \colon X \times X \to V$, where $X$ is the set of vertices of the graph - which may be like the "adjacency matrix" of such $V$-graphs... ? I feel like the concept of $R$-matrices in @Jade Master's thesis https://arxiv.org/pdf/2105.12905 is the quantale analog of the $V$-graphs that you defined.

view this post on Zulip Adittya Chaudhuri (Apr 02 2025 at 18:55):

But, is this notion capturing "multiple labeled edges" like our definition of $L$-labeled graphs, i.e. objects of the slice category $\mathsf{Gph}/G_{L}$?

view this post on Zulip Adittya Chaudhuri (Apr 02 2025 at 19:24):

However, I completely agree that the absence of edges is nicely captured by $V$-graphs, unlike $L$-labeled graphs.

view this post on Zulip John Baez (Apr 02 2025 at 19:27):

Kevin Carlson said:

So, it feels like there's some nice functor from $V$-categories into categories labeled by the monoid of objects of $V$, up to truth since the objects of $V$ aren't quite a monoid in general. Just an idle thought!

Nice! And of course every monoidal category is equivalent to a strict one so you can have a monoid of objects if you want... at the cost of making it obscenely large. (This is probably not a good idea, but you can do it.)

So far in our work on 'polarities' @Adittya Chaudhuri and I are only thinking about the case where $V$ actually is a monoid, i.e. a discrete monoidal category. We haven't thought about 'morphisms between polarities'. But it might be a natural thing to do - we could dip our toe in this water by letting $V$ be a monoidal poset.

view this post on Zulip John Baez (Apr 02 2025 at 19:37):

Jorge Soto-Andrade said:

A very naive question: isn't this (labeling edges of a graph by elements of a monoid) motivated - among other things - by path integrals in discrete integral calculus in $n$ dimensions? The label associated to an edge from $a$ to $b$ could be the potential difference $V(b) - V(a)$. When integrating, you add the weights associated to the edges which constitute your path. But you may have contexts where it is sensible to multiply those weights... You also have cases where you deal with (directed) graphs whose edges are labelled by linear operators, which you can compose.

We are motivated by work that people are already doing on 'regulatory networks' in biology (as @Adittya Chaudhuri explained) and also 'causal loop diagrams' in the modeling discipline known as System Dynamics. I explained the latter here:

and here is an example:

Causal loop diagram for factory productivity

In all that work, one only considers a monoid of labels. But if your monoid is a rig, you can try to sum labels over all paths from one vertex to another, where the labels for paths are obtained by multiplying labels of edges making up the path... and the summation is where we use a rig. So then we get a discrete path integral, similar to what you're describing.

There are difficulties with this idea as soon as our graph has directed cycles, because then the sum can become infinite. And the graphs usually do have directed cycles, as in the picture above. That's why they're called 'causal loop diagrams': people are interested in studying the 'feedback loops'.

There are ways around these difficulties... but anyway I should be working on the paper!
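The 'sum over paths' idea can be sketched as follows (my own toy example, with made-up labels), assuming the graph is acyclic so the naive sum is finite:

```python
# Sketch (illustrative): for an acyclic graph with edges labeled in a rig,
# sum over all paths from s to t the product of the edge labels along each
# path. Here the rig is the integers and the labels are signs ±1.

edges = {
    "a": [("b", +1), ("c", -1)],
    "b": [("d", -1)],
    "c": [("d", -1)],
    "d": [],
}

def path_sum(s, t):
    """Rig-valued 'discrete path integral': the sum, over all paths from
    s to t, of the product of edge labels along the path."""
    if s == t:
        return 1  # the empty path, labeled by the multiplicative unit
    return sum(l * path_sum(v, t) for (v, l) in edges[s])

print(path_sum("a", "d"))  # (+1)(-1) + (-1)(-1) = 0: the influences cancel
```

With a directed cycle this recursion would never terminate, which is exactly the divergence problem mentioned above.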

view this post on Zulip James Fairbanks (Apr 02 2025 at 23:51):

This sounds cool and I’m interested to see if there’s a connection to the matrix exponential. When the edge weights are real or complex, you can look at the matrix exponential to summarize all paths between a pair of nodes, discounting by length.

view this post on Zulip John Baez (Apr 03 2025 at 00:07):

There is definitely a connection to the matrix exponential, and yes, the 'discounting by length' implicit in the formula for the exponential is exactly what solves the divergence issue that occurs in a naive 'sum over paths'. I've written a bit about this in a note to myself, and I should figure out a good place to put that writing. I'm not sure this paper is the place! We'll see. Anyway, thanks, you've revived a certain interesting line of thought.
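For what it's worth, here is a small self-contained illustration of that discounting (a toy computation, not the note mentioned above): for the signed adjacency matrix of a two-node feedback loop, the naive sum over walks diverges, while the exponential series $\exp(A) = \sum_k A^k/k!$ converges.

```python
# Sketch (illustrative): the matrix exponential sums the signed adjacency
# matrix over walks of every length, with a 1/k! discount that tames the
# divergence caused by directed cycles.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=30):
    """Truncated series exp(A) = sum_k A^k / k!: entry (i, j) aggregates
    all walks i -> j, each weighted by (product of signs) / (length!)."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    for k in range(1, terms):
        power = [[x / k for x in row] for row in mat_mul(power, A)]
        result = [[result[i][j] + power[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# Signed adjacency matrix of a feedback loop: a -> b positively,
# b -> a negatively. The naive walk sum 1 + 1 + 1 + ... diverges here.
A = [[0.0, 1.0], [-1.0, 0.0]]
E = mat_exp(A)
print(E[0][0], E[0][1])  # ~cos(1), ~sin(1): the series converges
```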

view this post on Zulip John Baez (Apr 04 2025 at 16:28):

@Adittya Chaudhuri - I'm trying to prove and polish up Proposition 2.3. So people can understand what I'm talking about: we're studying the category of graphs with edges labeled by elements of some set $L$, and we call this category $\mathsf{Gph}/G_L$ because it's equivalent to the category of graphs over $G_L$: the graph with one vertex and one edge for each element of $L$.

I've removed the part of this proposition that's useful for structured cospans, and moved it to the section on structured cospans, where people will understand why we care about this.

So what's left is this:

Proposition. For any set $L$, we have the following:

(a) The category $\mathsf{Gph}/G_L$ is a presheaf topos.

(b) The forgetful functor $U \colon \mathsf{Gph}/G_L \to \mathsf{Gph}$ is a discrete fibration.

(c) The presheaf $P \colon \mathsf{Gph}^{\mathrm{op}} \to \mathsf{Set}$ associated to $U$ is representable by $G_L$.

(d) The category $\mathsf{Gph}/G_L$ is equivalent to the category of elements $\int P$.

view this post on Zulip John Baez (Apr 04 2025 at 16:34):

All of this should be a special case of completely general facts that show up whenever you have a presheaf category $\widehat{C}$ (here $\mathsf{Gph}$) and an object $x$ in that category (here $G_L$). So I think our proof should say this, and briefly explain the completely general facts, with references to proofs, in a way that people who don't already know them have a chance of understanding.

Here are the general facts, if I understand correctly:

General facts. For any object $x$ in a presheaf category $\widehat{C}$:

(a) The category $\widehat{C}/x$ is a presheaf topos.

(b) The forgetful functor $U \colon \widehat{C}/x \to \widehat{C}$ is a discrete fibration.

(c) The presheaf $P \colon \widehat{C}^{\mathrm{op}} \to \mathsf{Set}$ associated to $U$ is representable by $x$.

(d) The category $\widehat{C}/x$ is equivalent to the category of elements $\int P$.

So, one question is: where are these general facts proved - preferably in a textbook that will help readers understand them?

view this post on Zulip John Baez (Apr 04 2025 at 16:47):

I also think for (a) it's good to note that

$$\widehat{C}/x \simeq \widehat{\textstyle\int_{C} x}$$

where $x \colon C^{\mathrm{op}} \to \mathsf{Set}$ is our presheaf on $C$. I think this is the usual proof that a slice category of a presheaf category is again a presheaf category. I'm confused about how this is related to fact (d).

view this post on Zulip Adittya Chaudhuri (Apr 04 2025 at 18:29):

I think (d) should follow from the fact that $P$ is the presheaf associated to $U$ (via the Grothendieck correspondence). Hence, $\int P$ should be equivalent to $\widehat{C}/x$ by the fact that the category of presheaves on $\widehat{C}$ is equivalent to the category of discrete fibrations over $\widehat{C}$.

view this post on Zulip Adittya Chaudhuri (Apr 04 2025 at 18:30):

Or am I misunderstanding something here?

view this post on Zulip Adittya Chaudhuri (Apr 04 2025 at 19:00):

(deleted)

view this post on Zulip John Baez (Apr 04 2025 at 21:03):

I don't follow the logic here:

Hence, $\int P$ should be equivalent to $\widehat{C}/x$ by the fact that the category of presheaves on $\widehat{C}$ is equivalent to the category of discrete fibrations over $\widehat{C}$.

You're probably just skipping some steps, but the part "$\int P$ should be equivalent to $\widehat{C}/x$" mentions $\int P$ and $\widehat{C}/x$, but the reason "by the fact that the category of presheaves on $\widehat{C}$ is equivalent to the category of discrete fibrations over $\widehat{C}$" doesn't mention either of those things.

view this post on Zulip John Baez (Apr 04 2025 at 21:05):

Another question: where do you plan to use these facts:

(c) The presheaf $P \colon \mathsf{Gph}^{\mathrm{op}} \to \mathsf{Set}$ associated to $U$ is representable by $G_L$.

(d) The category $\mathsf{Gph}/G_L$ is equivalent to the category of elements $\int P$.

?

view this post on Zulip Adittya Chaudhuri (Apr 04 2025 at 21:49):

John Baez said:

I don't follow the logic here:

Hence, $\int P$ should be equivalent to $\widehat{C}/x$ by the fact that the category of presheaves on $\widehat{C}$ is equivalent to the category of discrete fibrations over $\widehat{C}$.

You're probably just skipping some steps, but the part "$\int P$ should be equivalent to $\widehat{C}/x$" mentions $\int P$ and $\widehat{C}/x$, but the reason "by the fact that the category of presheaves on $\widehat{C}$ is equivalent to the category of discrete fibrations over $\widehat{C}$" doesn't mention either of those things.

Let $\mathrm{Pre}(\widehat{C})$ denote the category of presheaves on $\widehat{C}$ and $\mathrm{discfib}(\widehat{C})$ denote the category of discrete fibrations over $\widehat{C}$. Since $\mathrm{Pre}(\widehat{C})$ is equivalent to $\mathrm{discfib}(\widehat{C})$, there is a pair of functors $F \colon \mathrm{discfib}(\widehat{C}) \to \mathrm{Pre}(\widehat{C})$ and $G \colon \mathrm{Pre}(\widehat{C}) \to \mathrm{discfib}(\widehat{C})$ such that $F \circ G \cong \mathrm{id}$ and $G \circ F \cong \mathrm{id}$. However, these $F$ and $G$ arise from the Grothendieck correspondence, i.e. in our context $F(U) = P$ and $G(P) = \int P \to \widehat{C}$. Hence, $G \circ F(U) \cong U$ translates to $(\int P \to \widehat{C}) \cong U$ as fibrations. Now, from the definition of isomorphism of fibrations, we have $\int P \cong \widehat{C}/x$.

view this post on Zulip Adittya Chaudhuri (Apr 04 2025 at 22:09):

John Baez said:

Another question: where do you plan to use these facts:

(c) The presheaf $P \colon \mathsf{Gph}^{\mathrm{op}} \to \mathsf{Set}$ associated to $U$ is representable by $G_L$.

(d) The category $\mathsf{Gph}/G_L$ is equivalent to the category of elements $\int P$.

?

The free-forgetful adjunction between $\mathsf{Gph}$ and $\mathsf{Cat}$ allows us to define a natural isomorphism between the presheaves $\hom(\mathrm{Free}(-), BL) \colon \mathsf{Gph}^{\mathrm{op}} \to \mathsf{Set}$ and $\hom(-, G_L) \colon \mathsf{Gph}^{\mathrm{op}} \to \mathsf{Set}$. Hence, by the Grothendieck correspondence, $\int \hom(\mathrm{Free}(-), BL) \cong \int \hom(-, G_L)$. However, $\hom(-, G_L)$ is the same as $P$, and thus $\int \hom(-, G_L) \cong \int P \cong \mathsf{Gph}/G_{L}$. On the other hand, $\int \hom(\mathrm{Free}(-), BL)$ is the category of "graphs with $L$-valued polarities" as defined in Definition 3.10. Thus the category of graphs with $L$-valued polarities is the same as the category of $L$-labeled graphs. I used this in the proof of Theorem 3.12.

view this post on Zulip John Baez (Apr 05 2025 at 01:20):

Thanks.

view this post on Zulip Adittya Chaudhuri (Apr 05 2025 at 08:32):

@John Baez and I were discussing the "stratification" in Section 5 of https://arxiv.org/pdf/2211.01290 and possible ways of incorporating it into the framework of graphs with polarities. I was thinking in the following way:

Say we assume "Students influence University in $i_1, i_2, \ldots, i_n$ ways". We see this as a graph with two vertices (Students and University) and $n$ directed labelled edges from Students to University. Of these $n$ influences, some are positive and some are negative. We call this graph $G_{aggregate}$. Now consider another graph with two vertices $a, b$ but only two directed edges from $a$ to $b$, labeled by $+1$ and $-1$. We call this graph $G_{Type}$. Now, let us divide the Students into "PhD and Undergrads" and the University into "Public and Private". Then, if we construct the corresponding graph, we obtain (by incorporating the appropriate edges coming from $G_{aggregate}$) a graph which we call $G_{Strata}$. Now, there are two functions $t_{aggregate} \colon G_{aggregate} \to G_{Type}$ and $t_{strata} \colon G_{strata} \to G_{Type}$, which map Students, PhD, Undergrads to $a$ and University, Public, Private to $b$, and positive edges to $+1$ and negative edges to $-1$. Now, we construct the pullback $G_{Strata} \times_{t_{strata}, G_{Type}, t_{aggregate}} G_{aggregate}$ in the category $\mathsf{Gph}/G_{\mathbb{R}}$. We will call the pullback graph $G_{Stratified}$.

However, @John Baez said that the notion of "multiple stratifications", i.e. Students into PhDs and Undergrads and Universities into Public and Private, may lead to some unwanted errors. He said the standard technique is to do it in a "non-multiple" way, like classifying people based on "age difference" or "sex difference" (separately), as in Figure 14 of https://arxiv.org/pdf/2211.01290.

But I am thinking about this:

Is there a way to incorporate "multiple stratification" in the framework of labeled graphs which can bypass the errors @John Baez mentioned? I think like this because, for example, $a$ influences $b$ positively and, say, $b$ influences $c$ negatively. Now, say based on certain characteristics, $a, b$ and $c$ can be classified into $a_1, a_2$; $b_1, b_2, b_3$; $c_1, c_2$ respectively. Then, shouldn't the concept of multiple stratifications arise naturally? Am I missing something?

From the mathematical point of view, we are trying to understand how the property "products distribute over colimits in a presheaf topos" translates into the language of gluing and stratifying labeled graphs/causal loop diagrams.
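The pullback over $G_{Type}$ in the Students/University example can be computed pointwise, as in this sketch (vertex names and signs are made up for illustration; vertices of the pullback are pairs of vertices with the same image in $G_{Type}$, and edges are pairs of edges with the same label):

```python
# Sketch (illustrative): the pullback of two graphs over G_Type, computed
# pointwise. G_Type has vertices {a, b} and one +1 edge and one -1 edge.

# Each graph over G_Type: a map on vertices into {a, b}, and edges
# (src, tgt, sign) whose image in G_Type is the edge of that sign.
strata_vertices = {"PhD": "a", "Undergrad": "a",
                   "Public": "b", "Private": "b"}
aggregate_vertices = {"Students": "a", "University": "b"}

strata_edges = [("PhD", "Public", +1), ("Undergrad", "Private", -1)]
aggregate_edges = [("Students", "University", +1),
                   ("Students", "University", -1)]

# Vertices of the pullback: pairs mapping to the same vertex of G_Type.
pb_vertices = [(v, w) for v, tv in strata_vertices.items()
               for w, tw in aggregate_vertices.items() if tv == tw]

# Edges of the pullback: pairs of edges with the same sign (their common
# image in G_Type), with sources and targets paired componentwise.
pb_edges = [((s1, s2), (t1, t2), l1)
            for (s1, t1, l1) in strata_edges
            for (s2, t2, l2) in aggregate_edges if l1 == l2]

print(pb_vertices)
print(pb_edges)
```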

view this post on Zulip John Baez (Apr 05 2025 at 21:03):

In the paper A categorical framework for modeling with stock and flow models we explain how to do stratification of models using pullbacks.

If I wanted to do multiple stratifications, like taking a model with students and universities and 1) subdividing students into undergraduates and PhD students and 2) subdividing universities into public and private, I would do one at a time, not both at once. This means doing first one pullback of stock and flow models that involves students, and then another that involves universities.

Of course, taking one pullback and then another is still an example of a limit. So we are still just taking a limit of stock and flow models. But it's less confusing to do it in stages.

view this post on Zulip Adittya Chaudhuri (Apr 05 2025 at 21:28):

Thanks. I got your point.

view this post on Zulip Adittya Chaudhuri (Apr 07 2025 at 13:10):

@John Baez I was thinking about an alternative (a bit relaxed) formalism of stratification for labeled graphs, which I am describing below:

For a set $L$, consider an $L$-labeled graph $p \colon G \to G_{L}$. I am defining a stratification of $p \colon G \to G_{L}$ as a functor $Strat_{G} \colon \mathrm{Free}(G) \to \mathsf{Rel}$ from the free category on $G$ to the category of relations https://ncatlab.org/nlab/show/Rel.

For every vertex $v$ of $G$, let's call the elements of the set $Strat_{G}(v)$ the stratification of $v$. Now, I construct an $L$-labeled graph $G_{Strata}$ defined by the following data:

$V(G_{Strata}) := \bigsqcup_{v \in V} Strat_{G}(v)$

$E(G_{Strata}) := \bigsqcup_{e \in E} Strat_{G}(e)$

$s_{G_{Strata}} \colon E(G_{Strata}) \to V(G_{Strata})$, $\big(((x,v), (y,w)), Strat_{G}(e \colon v \to w)\big) \mapsto (x,v)$, where $x \in Strat_{G}(v)$.

$t_{G_{Strata}} \colon E(G_{Strata}) \to V(G_{Strata})$, $\big(((x,v), (y,w)), Strat_{G}(e \colon v \to w)\big) \mapsto (y,w)$, where $y \in Strat_{G}(w)$.

$l_{G_{Strata}} \colon E(G_{Strata}) \to L$, defined as $\big(((x,v), (y,w)), Strat_{G}(e \colon v \to w)\big) \mapsto l(e)$.

Although to match our intuition (with the kind of stratification discussed in https://arxiv.org/pdf/2211.01290), we may add an extra condition, i.e. we may define a stratification of $p \colon G \to G_{L}$ as a functor $Strat_{G} \colon \mathrm{Free}(G) \to \mathsf{Rel}$ from the free category on $G$ to the category of relations such that for each morphism $\gamma \neq \mathrm{id}$ in $\mathrm{Mor}(\mathrm{Free}(G))$, we have $Strat_{G}(\gamma) = Strat_{G}(v) \times Strat_{G}(w)$. However, I am illustrating the general case by an example below:

Let us consider a graph with two vertices $x$ and $y$, and an edge $e \colon x \to y$ labeled by the phrase 'help in curing', by considering it as a graph whose edges are labeled by the free monoid on the set of English words. Say $x$ represents medicines used in treating headache and $y$ represents people with headache. Now, define $Strat_{G} \colon \mathrm{Free}(G) \to \mathsf{Rel}$ as follows:
$x$ maps to the set of all medicines used in curing headaches, and $y$ maps to the set of collections of people with different kinds of allergies. In this case $Strat_{G}(e) = \lbrace \big((x_i, Strat_{G}(x)), (y_{j}, Strat_{G}(y))\big) \colon x_i$ does not cause any allergic effects among people in $y_j \rbrace$. Finally, each edge of $G_{Strata}$ is labeled by the phrase 'help in curing'.

Maybe the whole discussion can be simplified/or maybe I am misunderstanding some perspectives.

view this post on Zulip John Baez (Apr 07 2025 at 16:06):

Can you use this construction to obtain a new labeled graph? The idea of stratification has traditionally been that you start with a model $X$ of some sort (e.g. a labeled graph), and then build a new, more complicated model $X'$ that has an epimorphism $X' \to X$.

view this post on Zulip John Baez (Apr 07 2025 at 17:07):

I'm in touch with an ecologist who has been using causal loop diagrams (these are simply directed graphs with edges labeled by signs) to study ecosystems. They're called 'causal loop diagrams' because people who use them are especially interested in finding feedback loops. This is a standard technique in the field called 'systems ecology'. She wrote this paper about it:

In this paper she introduced what she called "Type II loop analysis":

Loop Analysis for aquatic ecosystems has been used in two ways: (1) construction of hypothetical–intuitive models buttressed by the general knowledge of the investigator and reports of species coexistence and interactions in the literature (Type I). This is not unlike how Systems Ecologists first conceptualize their quantitative models. To date, most applications of LA have been Type I, with 10–12 or fewer nodes, which ignores a great deal of biological realism. Often, a model or set of models is drawn by hand in a ‘back of the envelope’ manner, and the LA calculation is completed to obtain predictions of changes in each node once a parameter input or driving force is assumed to perturb the network at one or more locations. In Type II LA, time series data of species field densities and nutrient concentrations are used to identify nodes and links, and node number is not limited, making larger, more biologically reasonable models possible.

She writes:

Hi Dr. Baez,

I considered your preference in doing Category Theory with decorated cospans a serious challenge.  I came up with a way to decorate the links in my Type II Loop Analysis models for marine plankton ecosystems.  Usually, I translate the quantitative field-lab data into qualitative signs as directed changes (in abundances from one sampling period to the next) in graph nodes that are sets of functional species in an ecological network.   This leaves the networks completely qualitative, and quantitative values are never used again.   However, calculations in the LA methodology allow one to predict how the nodes will change qualitatively when the network is perturbed from the outside (parameter input=PI).  I construct the models and automatically test for all positive and negative PIs to all nodes, then select the simplest model and input node with the best agreement with the data in the 90-100% range.  In the new work, I took the best agreement PI for the 640 individual models (20 years of about 17.5 networks/year for 2 stations)  and listed the paths and complements.  A complement is a set of disjunct loops of nodes not on the path.   Then, I counted how many times each link appeared in the paths, complements, and total (P + C) calculations, so I have individual 640 quantified sets of links x 3 with associated graphs.  A total of 1.36 links were counted for the Halifax station.  I view these values as frequencies of causality in the networks.  I believe this causal structure is more important than just flows of carbon or Kcal of energy, which are the common ways of quantifying ecosystem links.  No two networks repeat over hundreds of networks, but they are about 75% similar.  All networks are ranged around a central lattice containing several hundred species individually arranged in the nodes.  The link frequency values are quite disparate, with the lattice links generally the most common, which was expected.  
As an example, I have attached a summary table of the counts and percentages of link frequencies for the summary graph for about 374 individual models for the Halifax station.   (On the attached graph, the asterisks mean the links were present in loop calculations but <1% of all involved links).  

I want to determine if there is a deeper underlying, even emergent, causal structure with Category Theory than summarized by the current set of largely collective network properties usually calculated for signed digraphs.  In addition, is there any way to illustrate the networks are CLEF (closed to efficient causality a la’ Aristotle and a main characteristic of self-organizing living systems).  Robert Rosen demonstrated ‘CLEFness’ with his M-R systems for cells using Category Theory models, and Jan-Hendrik Hofmeyr of Stellenbosch University is currently expanding Rosen’s work.  I think Bob was the first person to use Category Theory in biology in the 1950s in his PH.D. thesis at the University of Chicago and subsequent publications.  He knew the two Category Founders (Professors Eilenberg and MacLane).   I have identified these lattice structures at different locations over 1000 km of ocean.  It appears the upper ocean is a teaming 3D lattice of interactions. The loop models are a main part of my work on ecosystem chimeras-nodes that trade functions mutualistically in networks that individual nodes cannot provide for themselves throughout evolution.  I do not think anything like this (using Category Theory for loop links) has been attempted before at the ecosystem level. 

Thus, my questions are:

  1. Are the cospans decorated sufficiently for Category Theory?  Is there something better I could do?
  2. Is this a project you would be interested in?  I would be very interested to work with you on it. 

I have also attached two papers I wrote for Mathematics in December, one on Rosen’s approach to algorithms for living systems (he didn’t think they could capture life itself), and one on how I do Type II loop analysis for plankton communities,  which is the most detailed description of the methodology to date.  There is a YouTube video (https://www.youtube.com/watch?v=QFQNTv8lGFw) for a seminar about ecosystem chimeras I did for Michael Levin’s lab at Tufts last week using Loop Analysis.  They make laboratory chimeras at the organism level.

view this post on Zulip John Baez (Apr 07 2025 at 17:09):

She's a great example of someone who may benefit from wisely done research on graphs with polarities.

view this post on Zulip Adittya Chaudhuri (Apr 07 2025 at 19:43):

John Baez said:

Can you use this construction to obtain a new labeled graph? The idea of stratification has traditionally been that you start with a model XX of some sort (e.g. a labeled graph), and then build a new more complicated model XX' that has an epimorphism XXX' \to X.

I think yes. The correspondence you asked about is
$G \mapsto G_{Strata}$,
and the epimorphism you asked about is $\phi \colon G_{Strata} \to G$, defined as
$(x,v) \mapsto v$ and
$\big(((x,v), (y,w)), Strat_{G}(e \colon v \to w)\big) \mapsto e$.

Now, the fact that $\phi$ is a morphism of $L$-labeled graphs follows from the construction of the labelling map $l_{G_{Strata}} \colon E(G_{Strata}) \to L$, defined as $\big(((x,v), (y,w)), Strat_{G}(e \colon v \to w)\big) \mapsto l(e)$.

Here GStrataG_{Strata} is more complicated than GG.
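
The construction above can be made concrete. Below is a minimal Python sketch; all the names (`stratify`, `allowed`, `project`) are my own hypothetical encoding, not anything from the paper. Vertices of $G_{Strata}$ are pairs $(x,v)$ with $x \in Strat(v)$, each stratified edge inherits the label $l(e)$, and `project` is the epimorphism $\phi$.

```python
# Hypothetical encoding of the stratification construction and the
# projection phi: G_Strata -> G sketched above.

def stratify(vertices, edges, strat, allowed):
    """vertices: list of vertex names of G
    edges: list of (edge_name, src, tgt, label)
    strat: dict sending each vertex v to its list of strata Strat(v)
    allowed: predicate deciding which pairs of strata receive an edge"""
    new_vertices = [(x, v) for v in vertices for x in strat[v]]
    new_edges = [(((x, v), (y, w)), e, label)   # label l(e) is inherited
                 for (e, v, w, label) in edges
                 for x in strat[v] for y in strat[w]
                 if allowed(e, x, y)]
    return new_vertices, new_edges

def project(new_vertices, new_edges):
    """The epimorphism phi: (x, v) |-> v on vertices, stratified edge |-> e."""
    return ([v for (x, v) in new_vertices],
            [(e, label) for (pair, e, label) in new_edges])
```

Since each stratified edge carries the label of the edge it projects to, $\phi$ is label-preserving by construction.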

view this post on Zulip Adittya Chaudhuri (Apr 07 2025 at 19:45):

John Baez said:

She's a great example of someone who may benefit from wisely done research on graphs with polarities.

Definitely!! I agree completely.

view this post on Zulip Adittya Chaudhuri (Apr 07 2025 at 20:03):

I am going through Patricia A. Lane's paper that you shared. I have always been super interested in Rosen's approach (especially anticipatory systems: https://link.springer.com/book/10.1007/978-1-4614-1269-4).

view this post on Zulip James Fairbanks (Apr 08 2025 at 23:12):

Section 3b of this paper also talks about Stratified Models with pullbacks

https://royalsocietypublishing.org/doi/10.1098/rsta.2021.0309

It’s presented there in an elementary way because the audience was modelers, not category theorists. So it might be a helpful presentation for communicating with collaborators.

view this post on Zulip James Fairbanks (Apr 08 2025 at 23:17):

In some code I wrote for the Regulatory Networks project I computed some pullbacks of signed graphs with Catlab and it did what I expected. You could make stratified models that way too. They would be stratified at the presentation level rather than at the free signed category level.

view this post on Zulip John Baez (Apr 08 2025 at 23:51):

Yes, that paper seems to be the original "stratification using pullbacks" paper. I should have mentioned it!

view this post on Zulip John Baez (Apr 09 2025 at 00:11):

@Adittya Chaudhuri - can you try writing something for the proof of Prop. 2.3? The statements are:

If we write C^\widehat{C} for the category of presheaves on a category CC, and choose a presheaf FC^F \in \widehat{C}, then:

(a) The category C^/F\widehat{C}/F is a presheaf topos.
(b) The forgetful functor U ⁣:C^/FC^U \colon \widehat{C}/F \to \widehat{C} is a discrete fibration.
(c) The functor P ⁣:C^opSetP \colon \widehat{C}^{\text{op}} \to Set associated to UU is representable by FF.
(d) The category C^/F\widehat{C}/F is equivalent to the category of elements P\int P.

We don't really want to prove them: we want to crisply explain them and give references to proofs. If you write something I can try to make it nicer.

view this post on Zulip John Baez (Apr 09 2025 at 02:43):

Also: I want to make the math section of the paper more focused on topics that will actually help people who study causal loop diagrams. Here's one example:

When LL is a monoid, we already discuss the free LL-labeled category on an LL-labeled graph GG. One reason is obvious: this lets us assign labels not just to edges of GG but to edge paths (which are the morphisms in the free category on GG), thus describing 'indirect effects'.

But here's another reason: if we can show LL-labeled categories are monadic over LL-labeled graphs, we can define a Kleisli category. This has LL-labeled graphs as objects, but maps that send an edge in one LL-labeled graph to a composite of edges in another.

This is a useful notion of morphism. For example we can use one of these morphisms to map

health+wealth \text{health} \xrightarrow{+} \text{wealth}

to

health+income+wealth \text{health} \xrightarrow{+} \text{income} \xrightarrow{+} \text{wealth}

view this post on Zulip John Baez (Apr 09 2025 at 02:44):

I think quite often we may want to make a model of a system more complicated in this way: by adding new vertices, and 'factoring' LL-labeled edges into edge paths!
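
Concretely, the label of a composite morphism (an edge path) in the free $L$-labeled category is just the product of the edge labels in the monoid $L$. A tiny Python sketch of this (my own illustrative encoding, not code from the paper):

```python
from functools import reduce

def path_label(labels, op, unit):
    """Label of an edge path in the free L-labeled category: fold the
    monoid operation over the edge labels. The empty path (an identity
    morphism) gets the monoid unit."""
    return reduce(op, labels, unit)
```

For example, with the signs monoid $\{+1,-1\}$ under multiplication, a path of two negative edges has an overall positive effect.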

view this post on Zulip John Baez (Apr 09 2025 at 02:46):

So I have a question:

Question. It's widely known that reflexive graphs are monadic over sets, and categories are monadic over reflexive graphs. Are categories monadic over graphs? I.e. is Cat\mathsf{Cat} monadic over Gph\mathsf{Gph} where Gph\mathsf{Gph} is the category of presheaves on the category

\bullet \stackrel{\to}{\to} \bullet?

view this post on Zulip John Baez (Apr 09 2025 at 02:47):

I think it is....

view this post on Zulip John Baez (Apr 09 2025 at 03:55):

If true, I feel it must be true that LL-labeled categories are monadic over LL-labeled graphs when LL is a monoid. Then we can set up the Kleisli category I mentioned, and know it really is a Kleisli category.

I believe you can define a Kleisli-like category starting with any adjunction L:CDL : C \to D, R:DCR: D \to C. The objects are objects cCc \in C, the morphisms from cc to cc' are morphisms cRLcc \to R L c', and we compose f:cRLcf: c \to RL c' and g:cRLcg: c' \to RL c'' by taking RLgf:cRLRLcRL g \circ f: c \to RLRLc' and following it with RϵL:RLRLcRLcR \epsilon L : RLRLc' \to RLc' where ϵ\epsilon is the counit.
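
This composition rule can be sketched directly, assuming only a monad presented by `fmap` and `join` (the latter playing the role of $R\epsilon L$, since $T = RL$). The list monad below is just a stand-in example, not anything specific to labeled graphs:

```python
# Kleisli composition from a monad given by fmap and join:
# compose f: c -> T(c') and g: c' -> T(c'') as join . fmap(g) . f.

def fmap(g, txs):          # T on morphisms, for T = list
    return [g(x) for x in txs]

def join(ttxs):            # the multiplication mu: T T => T, for T = list
    return [x for txs in ttxs for x in txs]

def kleisli(g, f):
    """Compose f: c -> T c' and g: c' -> T c'' into c -> T c''."""
    return lambda c: join(fmap(g, f(c)))
```

The monad laws make this composition associative and unital, which is all the Kleisli category needs.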

Question. What good features does this category have when the adjunction is monadic, which it lacks otherwise?

view this post on Zulip Nathanael Arkor (Apr 09 2025 at 05:45):

John Baez said:

I think it is....

It is (it's easy to check directly).

view this post on Zulip Nathanael Arkor (Apr 09 2025 at 05:47):

John Baez said:

I believe you can define a Kleisli-like category starting with any adjunction L:CDL : C \to D, R:DCR: D \to C. The objects are objects cCc \in C, the morphisms from cc to cc' are morphisms cRLcc \to R L c', and we compose f:cRLcf: c \to RL c' and g:cRLcg: c' \to RL c'' by taking RLgf:cRLRLcRL g \circ f: c \to RLRLc' and following it with RϵL:RLRLcRLcR \epsilon L : RLRLc' \to RLc' where ϵ\epsilon is the counit.

Question. What good features does this category have when the adjunction is monadic, which it lacks otherwise?

RLRL is the monad induced by the adjunction, so this is exactly the Kleisli category of the monad. Monadicity is irrelevant.

view this post on Zulip Adittya Chaudhuri (Apr 09 2025 at 05:57):

John Baez said:

Adittya Chaudhuri - can you try writing something for the proof of Prop. 2.3? The statements are:

If we write C^\widehat{C} for the category of presheaves on a category CC, and choose a presheaf FC^F \in \widehat{C}, then:

(a) The category C^/F\widehat{C}/F is a presheaf topos.
(b) The forgetful functor U ⁣:C^/FC^U \colon \widehat{C}/F \to \widehat{C} is a discrete fibration.
(c) The functor P ⁣:C^opSetP \colon \widehat{C}^{\text{op}} \to Set associated to UU is representable by FF.
(d) The category C^/F\widehat{C}/F is equivalent to the category of elements P\int P.

We don't really want to prove them: we want to crisply explain them and give references to proofs. If you write something I can try to make it nicer.

Thanks. I will add it.

view this post on Zulip Adittya Chaudhuri (Apr 09 2025 at 06:40):

(deleted)

view this post on Zulip Kevin Carlson (Apr 09 2025 at 07:40):

This is an aside, but in case you guys haven’t caught this vibe yet, the kind of Kleisli category you’re working on is a big inspiration for double Lawvere theories in the double categorical setting. The natural morphisms of models of double Lawvere theories are, in effect, Kleisli in this way.

view this post on Zulip Adittya Chaudhuri (Apr 09 2025 at 09:15):

John Baez said:

But here's another reason: if we can show LL-labeled categories are monadic over LL-labeled graphs, we can define a Kleisli category. This has LL-labeled graphs as objects, but maps that send an edge in one LL-labeled graph to a composite of edges in another.

This is a useful notion of morphism. For example we can use one of these morphisms to map

health+wealth \text{health} \xrightarrow{+} \text{wealth}

to

health+income+wealth \text{health} \xrightarrow{+} \text{income} \xrightarrow{+} \text{wealth}

I am thinking of it as a zooming-in (refining) operation on an $L$-labeled graph: we add new vertices and factor its $L$-labeled edges into $L$-labeled edge paths, using Kleisli morphisms of the Kleisli category associated to the monad arising from the free-forgetful adjunction between the category of $L$-labeled graphs and the category of $L$-labeled categories, as you explained. Below I try to demonstrate why I think of it as a zooming-in (refining) operation:

Considering your example G:=health+wealthG:= \text{health} \xrightarrow{+} \text{wealth}

Now, if we ask how, then an answer may be given by a morphism $Z_1 \colon G \to T(H_1)$ ($health \mapsto health$, $wealth \mapsto wealth$, and the unique edge maps to the unique edge path in $H_1$) in the associated Kleisli category (w.r.t. the free-forgetful monad $T$), where $H_1$ is as described by you: $\text{health} \xrightarrow{+} \text{income} \xrightarrow{+} \text{wealth}$.

However, one may ask how again!!

Then, one may define another morphism $Z_2 \colon H_1 \to T(H_2)$ (I think the definition is evident from the construction of $H_2$ below), where $H_2$ is as follows:

$\text{health} \xrightarrow{+} \text{income} \xrightarrow{+} \text{savings} \xrightarrow{+} \text{wealth}$.

Question:

By construction, $Z_1$ and $Z_2$ are composable morphisms in the Kleisli category we are talking about. Now, can we think of $Z_2 \circ Z_1$ as a double zooming-in operation on $G$, which takes $G = \text{health} \xrightarrow{+} \text{wealth}$ to $H_2 = \text{health} \xrightarrow{+} \text{income} \xrightarrow{+} \text{savings} \xrightarrow{+} \text{wealth}$?

If the above statement makes sense, then we can of course extend it to an "$n$-level zooming-in operation on $G$" by composing $n$ morphisms, $Z_n \circ Z_{n-1} \circ \cdots \circ Z_1 \colon G \to T(H_n)$, in the Kleisli category.
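
The iterated zooming-in can be sketched as substitution of paths for edges, with composition given by substitute-then-flatten (this is my own toy encoding, with edges named by strings; it mirrors Kleisli composition for the free-category monad):

```python
def refine(path, step):
    """Apply one zooming-in step to an edge path: each edge is replaced
    by its image path (edges not mentioned in the step are left alone)."""
    return [e2 for e in path for e2 in step.get(e, [e])]
```

Applying $Z_1$ then $Z_2$ from the example above turns the single edge into the three-edge path through income and savings.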

view this post on Zulip Adittya Chaudhuri (Apr 09 2025 at 09:26):

Another side of the story could be the following (Using Kleisli category)

I think the same idea has been used to describe the occurrence of motifs (which I like to think of as "important simple-shaped labeled graphs") like feedback loops and feedforward loops in regulatory networks. I think the same idea was used in Section 2.2 of https://arxiv.org/pdf/2301.01445 to find the occurrence of motifs in the particular case $L=\mathbb{Z}/2$ for regulatory networks.

I wonder about the following: when we work with the monoid $\mathbb{Z}/2$, the corresponding important-shaped graphs are mostly various feedback loops, feedforward loops, etc. (for example, the ones mentioned in Uri Alon's paper https://www.nature.com/articles/nrg2102).

Question:

What are some other monoids (other than $\mathbb{Z}/2$) which accommodate a notion of "interesting simple-shaped graphs", and why would they be considered interesting from the point of view of both applications and mathematics?

@John Baez In the overleaf file (Remark 3.2), I called a morphism $f \colon X \to T(G)$ in the Kleisli category an occurrence of an $X$-shaped graph in $G$. (Although my nomenclature may not be appropriate.)

view this post on Zulip Adittya Chaudhuri (Apr 09 2025 at 15:15):

James Fairbanks said:

In some code I wrote for the Regulatory Networks project I computed some pullbacks of signed graphs with Catlab and it did what I expected. You could make stratified models that way too. They would be stratified at the presentation level rather than at the free signed category level.

Thank you. Does it allow multiple stratifications, i.e. can different vertices be stratified in different ways? For example, if one vertex represents students, it may be stratified into undergraduates and PhD students, and if another vertex denotes universities, it can be stratified into public and private.

view this post on Zulip John Baez (Apr 09 2025 at 15:39):

Nathanael Arkor said:

John Baez said:

Question. What good features does this category have when the adjunction is monadic, which it lacks otherwise?

RLRL is the monad induced by the adjunction, so this is exactly the Kleisli category of the monad. Monadicity is irrelevant.

Oh, duh! Thanks!

view this post on Zulip John Baez (Apr 09 2025 at 15:42):

Adittya Chaudhuri said:

Another side of the story could be the following (Using Kleisli category)

I think the same idea has been used to describe the occurrence of motifs (which I like to think of as "important simple-shaped labeled graphs") like feedback loops and feedforward loops in regulatory networks. I think the same idea was used in Section 2.2 of https://arxiv.org/pdf/2301.01445 to find the occurrence of motifs in the particular case $L=\mathbb{Z}/2$ for regulatory networks.

Oh, very good point! Yes, a 'motif' is an example of a Kleisli morphism. I was talking to @Evan Patterson about this Kleisli idea recently, and he probably pointed that out.

Somehow, looking for motifs in a complicated monoid-labeled graph feels a bit different from taking a simple model of a system, given by a monoid-labeled graph, and 'zooming in', or 'adding detail', using a Kleisli morphism. But they're mathematically very similar.

view this post on Zulip John Baez (Apr 09 2025 at 15:49):

Adittya Chaudhuri said:

Question: What are some other monoids (other than $\mathbb{Z}/2$) which accommodate a notion of "interesting simple-shaped graphs", and why would they be considered interesting from the point of view of both applications and mathematics?

I listed a few in my blog articles and I plan to talk about these examples in our paper - otherwise it's useless to generalize from Z/2\mathbb{Z}/2 to an arbitrary monoid!

The simplest example is the multiplicative monoid of Z/3\mathbb{Z}/3, where 1 and -1 mean 'positive effect' and 'negative effect', and 0 means 'no effect'. Notice that the additive group Z/2\mathbb{Z}/2 is a submonoid. There's a general construction of 'adding an absorptive element' or 0 to a monoid, and that's what we're doing to get the multiplicative monoid of Z/3\mathbb{Z}/3 from the additive abelian group Z/2\mathbb{Z}/2.
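
The 'adding an absorptive element' construction can be sketched generically (my own encoding, using `None` for the adjoined element). Applied to the signs $\{+1,-1\}$, i.e. the additive group $\mathbb{Z}/2$ in multiplicative guise, it yields exactly the multiplicative monoid of $\mathbb{Z}/3$:

```python
def with_absorber(op, zero=None):
    """Adjoin an absorptive element (here None) to a monoid operation:
    the new element swallows everything, just as 'no effect' should."""
    def new_op(a, b):
        return zero if (a is zero or b is zero) else op(a, b)
    return new_op
```

Composing two negative effects gives a positive one, but any path through a 'no effect' edge has no overall effect.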

view this post on Zulip John Baez (Apr 09 2025 at 15:50):

I also mentioned an example where we have a monoid with an element that means 'undetermined effect' - an effect that could be positive or negative.

view this post on Zulip John Baez (Apr 09 2025 at 15:50):

It's also perfectly fine to use (R,+)(\mathbb{R}, +) or (R,)(\mathbb{R}, \cdot) as our monoid - these should be very useful in applications. These monoids have homomorphisms onto some of the small monoids I just mentioned.

view this post on Zulip John Baez (Apr 09 2025 at 16:02):

Adittya Chaudhuri said:

John Baez said:

If we write C^\widehat{C} for the category of presheaves on a category CC, and choose a presheaf FC^F \in \widehat{C}, then:

(a) The category C^/F\widehat{C}/F is a presheaf topos.
(b) The forgetful functor U ⁣:C^/FC^U \colon \widehat{C}/F \to \widehat{C} is a discrete fibration.
(c) The functor P ⁣:C^opSetP \colon \widehat{C}^{\text{op}} \to Set associated to UU is representable by FF.
(d) The category C^/F\widehat{C}/F is equivalent to the category of elements P\int P.

Thanks. I added the material in the overleaf file in the way you suggested. For (a), I added a reference. For (b), (c) and (d), I sketched the proofs.

Great! We should find references for all this stuff - I can't believe any of it is new. But proof sketches are also good, mainly for educational reasons. I already added a reference to Loregian-Riehl that's an introduction to the relation between functors F:CopSetF : \mathsf{C}^{\text{op}} \to \mathsf{Set} and discrete fibrations FC\int F \to \mathsf{C}. I'll add it to this proof too.

view this post on Zulip James Fairbanks (Apr 09 2025 at 16:59):

Adittya Chaudhuri said:

Thank you. Does it allow multiple stratifications, i.e. can different vertices be stratified in different ways? For example, if one vertex represents students, it may be stratified into undergraduates and PhD students, and if another vertex denotes universities, it can be stratified into public and private.

You can repeatedly do pullbacks, but it gets complicated fast. The result of a pullback is actually not an object in the category, but a commutative diagram with an object that has the universal property and the span of projection maps. So in order to take a pullback of a pullback, you have to drop that extra structure. Then when you want to index into the resulting object from your nested pullbacks, you really wish you had those morphisms. In a nested pullback, it is hard to find the stuff you want algorithmically.

I found it easier to figure out what multiple stratification I wanted and set up the right limit in one shot. Then you have all the projections into the factors at one level of abstraction. You can build up that limit diagram one arrow at a time, computing limits and visualizing the results until you get all the factors in. But it is more convenient to build up the limit diagram iteratively and compute one big limit, rather than compute limits of limits.
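
On vertices, the 'one big limit' described above comes down to a wide pullback: tuples that agree in the base. Here is a generic set-based sketch for the binary case (my own hypothetical encoding; Catlab computes this for whole graphs, not just vertex sets):

```python
def pullback(gs, hs, f, h):
    """Vertex set of the pullback of f: G -> B and h: H -> B:
    pairs (g, x) that agree in the base. Edges work the same way."""
    return [(g, x) for g in gs for x in hs if f(g) == h(x)]
```

Over a one-vertex base this is just the product, which is the simplest stratification: every stratum of one vertex is paired with every stratum of the other.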

view this post on Zulip John Baez (Apr 09 2025 at 17:12):

That's interesting. I was telling Adittya it's easier (for me) to think about multiple stratifications one step at a time. But thinking is different than coding.

view this post on Zulip John Baez (Apr 09 2025 at 23:21):

Hey @Adittya Chaudhuri - thanks for putting a proof for Prop. 2.3. I've polished it up a bit - check it out.

view this post on Zulip John Baez (Apr 10 2025 at 02:04):

Could you supply a similar proof for Prop. 2.6? I've gotten up to that point now. Here's a question for the world:

Question. What nice properties does the strict slice category $\mathsf{Cat}/C$ have, for a fixed category $C$?

(Yes, we're being evil in treating $\mathsf{Cat}$ as a 1-category, but that's mainly because in our application we think we want to use the strict slice category, where morphisms are triangles that commute on the nose.)

I know Cat\mathsf{Cat} and Cat/C\mathsf{Cat}/C are complete, cocomplete and cartesian closed. So one answer to my question would be "complete, cocomplete and cartesian closed". But maybe there's more.

view this post on Zulip David Egolf (Apr 10 2025 at 02:25):

This question reminds me of the fundamental theorem of topos theory. I read on the nLab that Cat\mathsf{Cat} is a "2-topos". So I wonder if there is a "fundamental theorem of 2-topos theory", and if that could be relevant here.

view this post on Zulip John Baez (Apr 10 2025 at 02:26):

Right. The sad thing is that we seem to want the strict slice 2-category, where morphisms are triangles that commute on the nose. Anyone proving a "fundamental theorem of 2-topos theory" would not use that.

view this post on Zulip John Baez (Apr 10 2025 at 02:28):

But if someone has proved a "fundamental theorem of 2-topos theory" I'd still like to know about it!

view this post on Zulip John Baez (Apr 10 2025 at 04:50):

In our paper we use the easier "fundamental theorem of presheaf topos theory", which says the slice category of a presheaf topos is a presheaf topos... and there's probably a higher version of that too.

view this post on Zulip Adittya Chaudhuri (Apr 10 2025 at 06:53):

John Baez said:

Adittya Chaudhuri said:

Question: What are some other monoids (other than $\mathbb{Z}/2$) which accommodate a notion of "interesting simple-shaped graphs", and why would they be considered interesting from the point of view of both applications and mathematics?

I listed a few in my blog articles and I plan to talk about these examples in our paper - otherwise it's useless to generalize from Z/2\mathbb{Z}/2 to an arbitrary monoid!

The simplest example is the multiplicative monoid of Z/3\mathbb{Z}/3, where 1 and -1 mean 'positive effect' and 'negative effect', and 0 means 'no effect'. Notice that the additive group Z/2\mathbb{Z}/2 is a submonoid. There's a general construction of 'adding an absorptive element' or 0 to a monoid, and that's what we're doing to get the the multiplicative monoid of Z/3\mathbb{Z}/3 from the additive abelian group Z/2\mathbb{Z}/2.

Thank you, yes. But,

Question:
What could be some interesting patterns of subgraphs (motifs) in graphs labeled by elements of $\mathbb{Z}/3$ or $\mathbb{R}$? By interesting, I mean an appropriate analogue of feedback and feedforward loops in the context of $\mathbb{Z}/3$ and $\mathbb{R}$.

view this post on Zulip Adittya Chaudhuri (Apr 10 2025 at 07:02):

John Baez said:

Hey Adittya Chaudhuri - thanks for putting a proof for Prop. 2.3. I've polished it up a bit - check it out.

Thank you so much for polishing the proof. The proof now looks much, much better than what I did. I am slowly understanding what you meant by "to add the right amount of details" and in "a crispy way".

view this post on Zulip Adittya Chaudhuri (Apr 10 2025 at 07:04):

John Baez said:

Could you supply a similar proof for Prop. 2.6?

Thanks. Yes, I added a basic proof in the overleaf file. By 'basic', I mean that just as you upgraded the graph version (Proposition 2.3) to a presheaf topos, there should be a similar upgrade for $\mathsf{Cat}$ in Proposition 2.6. I think you are already hinting at this in #theory: applied category theory > Graphs with polarities @ 💬. In other words, can we characterise $\mathsf{Cat}$-like categories?

view this post on Zulip Adittya Chaudhuri (Apr 10 2025 at 07:15):

James Fairbanks said:

Adittya Chaudhuri said:

Thank you. Does it allow multiple stratifications, i.e. can different vertices be stratified in different ways? For example, if one vertex represents students, it may be stratified into undergraduates and PhD students, and if another vertex denotes universities, it can be stratified into public and private.

You can repeatedly do pullbacks, but it gets complicated fast. The result of a pullback is actually not an object in the category, but a commutative diagram with an object that has the universal property and the span of projection maps. So in order to take a pullback of a pullback, you have to drop that extra structure. Then when you want to index into the resulting object from your nested pullbacks, you really wish you had those morphisms. In a nested pullback, it is hard to find the stuff you want algorithmically.

I found it easier to figure out what multiple stratification I wanted and set up the right limit in one shot. Then you have all the projections into the factors at one level of abstraction. You can build up that limit diagram one arrow at a time, computing limits and visualizing the results until you get all the factors in. But it is more convenient to build up the limit diagram iteratively and compute one big limit, rather than compute limits of limits.

Thank you. I understand your point.

view this post on Zulip Adittya Chaudhuri (Apr 10 2025 at 07:16):

John Baez said:

That's interesting. I was telling Adittya it's easier (for me) to think about multiple stratifications one step at a time. But thinking is different than coding.

Thank you. Yes, now I understand why you were suggesting about multiple stratifications one step at a time.

view this post on Zulip Graham Manuell (Apr 10 2025 at 07:42):

John Baez said:

I know Cat\mathsf{Cat} and Cat/C\mathsf{Cat}/C are complete, cocomplete and cartesian closed. So one answer to my question would be "complete, cocomplete and cartesian closed". But maybe there's more.

It's not true that Cat/C\mathsf{Cat}/C is cartesian closed in general.

view this post on Zulip Adittya Chaudhuri (Apr 10 2025 at 10:00):

Graham Manuell said:

John Baez said:

I know Cat\mathsf{Cat} and Cat/C\mathsf{Cat}/C are complete, cocomplete and cartesian closed. So one answer to my question would be "complete, cocomplete and cartesian closed". But maybe there's more.

It's not true that Cat/C\mathsf{Cat}/C is cartesian closed in general.

I think Example 1.7 in https://sinhp.github.io/files/CT/notes_on_lcccs.pdf demonstrates it.

view this post on Zulip Adittya Chaudhuri (Apr 10 2025 at 15:35):

Regarding #theory: applied category theory > Graphs with polarities @ 💬: for the monoid $(\mathbb{N}, \times, 1)$, let us consider $\mathbb{N}$-graphs. Let $G := v \xrightarrow{20} v$. Then a Kleisli morphism $Z \colon G \to T(H)$, where $H := v \xrightarrow{2} w \xrightarrow{2} x \xrightarrow{5} v$, gives a factorisation of $20$.

I think that, more generally, for any $n \in \mathbb{N}$, if we consider $G := v \xrightarrow{n} v$, then every prime factorisation $n = p_1^{\alpha_1} p_2^{\alpha_2} \cdots p_k^{\alpha_k}$ of $n$ can be described as a (monic) Kleisli morphism $Z \colon G \to T(H)$ from $G$ to some graph $H$, where $T$ is the monad associated to the free-forgetful adjunction between $\mathbb{N}$-graphs and $\mathbb{N}$-categories?

Of course, it may not be anything interesting.
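
The labels of one such refining path can be computed directly; here is a small sketch (`prime_factor_path` is my own illustrative helper, using trial division):

```python
def prime_factor_path(n):
    """Labels of a refining edge path for v --n--> v in (N, x, 1):
    the prime factors of n in nondecreasing order, so their product is n."""
    labels, p = [], 2
    while n > 1:
        while n % p == 0:
            labels.append(p)
            n //= p
        p += 1
    return labels
```

For $n = 20$ this returns the path labels $2, 2, 5$, matching the example above.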

view this post on Zulip Kevin Carlson (Apr 10 2025 at 17:45):

John Baez said:

What nice properties does the strict slice category $\mathsf{Cat}/C$ have?

Locally finitely presentable (which implies complete and cocomplete and monadic over presheaf category) and extensive is about all I've got.

view this post on Zulip John Baez (Apr 10 2025 at 18:09):

Adittya Chaudhuri said:

Thank you so much for polishing the proof.

Sure!

The proof now looks much, much better than what I did.

It's based on what you did, but I want to explain things to readers who may not know all the concepts well enough. When writing it's always important to imagine who the audience is. I'm imagining an audience of people who want to apply category theory, but may not know very much category theory. So just saying some functor is a discrete fibration may not mean much to them, so I wanted to explain what it means.

Also I think it may someday be useful for people to have an explicit description of the category of LL-labelled graphs as a presheaf category, so I added that. Probably I should move it out of the proof and make it a separate paragraph.

I am slowly understanding what you meant by "to add the right amount of details" and in "a crispy way".

Great! By the way, I said "crisp", not "crispy". English is a weird language, almost impossible to learn: potato chips (which the British call "crisps") are crispy; fresh lettuce is not crispy - but it's crisp. "Crisp" also means something else:

A crisp way of speaking, writing, or behaving is quick, confident, and effective. For example:

view this post on Zulip John Baez (Apr 10 2025 at 18:11):

Kevin Carlson said:

Locally finitely presentable (which implies complete and cocomplete and monadic over presheaf category) and extensive is about all I've got.

Thanks, that's super-helpful! If I'd thought I might have guessed they were locally finitely presentable, but I would never have thought about 'extensive' since I tend to think of that as applying to categories of 'spaces'.

view this post on Zulip Adittya Chaudhuri (Apr 10 2025 at 18:14):

John Baez said:

Adittya Chaudhuri said:

Thank you so much for polishing the proof.

Sure!

The proof now looks much, much better than what I did.

It's based on what you did, but it explains things to readers who may not know all the concepts well enough. Also I think it may someday be useful to explicitly describe the category of LL-labelled graphs as a presheaf category, so I added that.

I am slowly understanding what you meant by "to add the right amount of details" and in "a crispy way".

Great! By the way, I said "crisp", not "crispy". English is a weird language: potato chips are crispy, fresh lettuce is not crispy but it's crisp, but "crisp" also means something else:

A crisp way of speaking, writing, or behaving is quick, confident, and effective. For example:

Thank you. Yes, I meant `crisp', but somehow, I typed crispy.

view this post on Zulip Kevin Carlson (Apr 10 2025 at 18:40):

John Baez said:

Thanks, that's super-helpful! If I'd thought I might have guessed they were locally finitely presentable, but I would never have thought about 'extensive' since I tend to think of that as applying to categories of 'spaces'.

Well, I think Grothendieck at least thought that categories are a kind of space!

view this post on Zulip John Baez (Apr 10 2025 at 20:37):

If you think of categories as simplicial sets with an extra property you can think of them as a kind of space, but it takes a lot of nerve.

view this post on Zulip John Baez (Apr 10 2025 at 20:50):

Yes, I meant 'crisp', but somehow, I typed crispy.

Okay. I started getting interested in the difference between crisp and crispy and got into an argument with my wife about it. I argued that good potato chips are clearly "crispy", while she says they are "crisp".

I don't know if I can continue to live with someone who has such fundamental disagreements with me.

view this post on Zulip John Baez (Apr 11 2025 at 00:46):

On a more serious note:

LL-labeled graphs with L={+,}L = \{+,-\} are often called 'causal loop diagrams' because people use them to identify 'feedback loops'. I think it would be good to say a bit about this. Here's an idea:

We can define the first homology of a graph GG with coefficients in LL, H1(G)H_1(G), in two equivalent ways. We can either geometrically realize the graph as a space and take its first homology with Z\mathbb{Z} coefficients, or - better - define an abelian group of 'cycles' in GG, called Z1(G)Z_1(G), and an abelian group of 'boundaries', called B1(G)B_1(G), and form H1(G)H_1(G) as a quotient:

H1(G)=Z1(G)/B1(G)H_1(G) = Z_1(G)/B_1(G)

At least when we have a finite graph, H1(G)H_1(G) is a free abelian group. But beware: there's no canonical choice for the generators of this free group.

view this post on Zulip John Baez (Apr 11 2025 at 00:52):

However:

1) GG is a directed graph, and for our applications the only possible feedback loops are directed loops: that is, paths of edges of the form

v0e1v1e2envn=v0 v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_n} v_n = v_0

So, we're not really interested in the usual 1st homology, but only a kind of directed first homology. This should be easy to define, but maybe someone in directed algebraic topology has already defined the directed first homology of a directed graph - so we should find out.

2) In some of our applications LL is not an abelian group but merely a commutative monoid. So, the usual textbook theory of homology doesn't apply. However, the directed first homology of a graph with coefficients in a commutative monoid should still make sense.

view this post on Zulip John Baez (Apr 11 2025 at 01:37):

When we understand this stuff, I hope there will be a commutative monoid Z1(G)\vec{Z}_1(G) of directed 1-cycles in GG,

and

a congruence relation on Z1(G)\vec{Z}_1(G) saying when two cycles are homologous,

so we can define the commutative monoid H1(G)\vec{H}_1(G) as Z1(G)\vec{Z}_1(G) modulo the congruence relation of being homologous.

(Note that with commutative monoids, we should mod out by [[congruence relations]], not submonoids! For abelian groups you can turn congruence relations into (normal) subgroups, which lets you talk about modding out by a subgroup. But that uses subtraction!)

I conjecture that, at least for a finite graph, H1(G)\vec{H}_1(G) will be a free commutative monoid on some set of generators. (If you draw me a directed graph, I believe I can find such generators.)
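This conjecture is easy to probe on small examples. Here is a brute-force enumeration (my own sketch, not code from the paper, on a made-up graph) of the non-self-intersecting directed cycles of a graph - the conjectured canonical generators - with each cycle taken up to cyclic rotation of its edge list:

```python
def simple_directed_cycles(edges):
    """Enumerate the non-self-intersecting directed cycles of a graph.

    edges: dict mapping edge name -> (source, target).
    Returns each cycle as the lexicographically least rotation of its
    edge list, so a cycle is counted once no matter where we start it.
    """
    out = {}
    for e, (s, t) in edges.items():
        out.setdefault(s, []).append((e, t))

    cycles = set()

    def dfs(start, v, path, visited):
        for e, w in out.get(v, []):
            if w == start:
                cyc = path + [e]
                rotations = [tuple(cyc[i:] + cyc[:i]) for i in range(len(cyc))]
                cycles.add(min(rotations))
            elif w not in visited:
                dfs(start, w, path + [e], visited | {w})

    vertices = {s for s, _ in edges.values()} | {t for _, t in edges.values()}
    for v in vertices:
        dfs(v, v, [], {v})
    return cycles

# A hypothetical graph with two basic feedback loops, 0 <-> 1 and 1 <-> 2:
edges = {'a': (0, 1), 'b': (1, 0), 'c': (1, 2), 'd': (2, 1)}
basic_loops = simple_directed_cycles(edges)
# -> {('a', 'b'), ('c', 'd')}
```

Note that the directed loop 0 → 1 → 2 → 1 → 0 is not a generator here: it revisits vertex 1, and as a 1-cycle it decomposes as the sum of the two basic loops.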

view this post on Zulip John Baez (Apr 11 2025 at 01:42):

Then I hope we can do this:

Suppose LL is a commutative monoid and GG is an LL-labeled graph, i.e. a graph with a map \ell sending edges to elements of LL.

Clearly for any directed loop in GG

v0e1v1e2envn=v0v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_n} v_n = v_0

we get an element of LL, namely

(e1)++(en) \ell(e_1) + \cdots + \ell(e_n)

This describes the feedback around this directed loop. (It's analogous to a holonomy.)

But I also conjecture that we get a map of commutative monoids

~ ⁣:Z1(G)L \tilde{\ell} \colon \vec{Z}_1(G) \to L

sending cycles to elements of LL. I haven't defined cycles, but a cycle should be an N\mathbb{N}-linear combination of edges of GG with some property saying its boundary is zero. Then, to define ~\tilde{\ell} on a cycle, we sum up the labels over all the edges that appear in this cycle. (The same edge may show up several times in a cycle - so then we sum its label several times.)

This tells us the total feedback around any cycle!
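For the causal loop diagram case L={+,}L = \{+,-\}, here's a tiny sanity check (a sketch with a made-up labeling, writing the sign monoid multiplicatively as {+1, -1}, so + is the identity and two negative influences compose to a positive one):

```python
def feedback(cycle, labels):
    """Total feedback around a directed cycle: combine the edge labels
    using the monoid operation (here: multiplication of signs). Since
    the monoid is commutative, the answer doesn't depend on where along
    the loop we start."""
    total = +1
    for e in cycle:
        total *= labels[e]
    return total

# Hypothetical {+,-}-labeling: edge 'a' promotes, the others inhibit.
labels = {'a': +1, 'b': -1, 'c': -1, 'd': -1}

neg = feedback(['a', 'b'], labels)  # one inhibition: negative feedback, -1
pos = feedback(['c', 'd'], labels)  # two inhibitions: positive feedback, +1
```

Basepoint independence here is just commutativity: `feedback(['b', 'a'], labels)` agrees with `neg`.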

view this post on Zulip John Baez (Apr 11 2025 at 01:43):

I conjecture that if ccc \sim c' are homologous cycles, then we get the same element of LL this way:

~(c)=~(c) \tilde{\ell}(c) = \tilde{\ell}(c')

view this post on Zulip John Baez (Apr 11 2025 at 01:47):

If so, ~\tilde{\ell} descends to a map of commutative monoids

H1(G)L \vec{H}_1(G) \to L

Maybe I'll call this by the same name, ~\tilde{\ell}.

I conjecture that this map contains all the information about how much feedback there is around any directed loop.

view this post on Zulip John Baez (Apr 11 2025 at 01:56):

One nice thing about this little project is that @Adittya Chaudhuri has already studied holonomies of connections, and this is very much like that: made simpler because we are working on a graph, but more complicated because we are working with a (commutative) monoid instead of a group!

view this post on Zulip John Baez (Apr 11 2025 at 01:57):

It's possible we should develop this for a not-necessarily-commutative monoid.

view this post on Zulip James Deikun (Apr 11 2025 at 05:24):

In the conventional first homology of a graph, there are no bounding cycles, so H1(G)Z1(G)H_1(G) \cong Z_1(G). And when you only look at directed cycles, two cycles can't add up to a cycle that doesn't self-intersect, so you actually get canonical generators.

I was also going to suggest matroid theory besides homology, but it actually doesn't look very promising for this.

view this post on Zulip Adittya Chaudhuri (Apr 11 2025 at 12:21):

John Baez said:

One nice thing about this little project is that Adittya Chaudhuri has already studied holonomies of connections, and this is very much like that: made simpler because we are working on a graph, but more complicated because we are working with a (commutative) monoid instead of a group!

Thanks. Are you hinting towards the following?

Given a connection 1-form ω\omega on a smooth principal GG-bundle π ⁣:EM\pi \colon E \to M and a point xMx \in M,

1) we can construct a holonomy map H ⁣:Ω(M,x)G \mathcal{H} \colon \Omega (M,x) \to G, where Ω(M,x)\Omega (M,x) is the set of smooth loops on MM based at xx . On the other hand, one can recover the whole bundle with the connection data from the holonomy map.

2) when two smooth loops γ1\gamma_1 and γ2\gamma_2 are thin homotopic then the map H\mathcal{H} descends to a map on the quotient Hˉ ⁣:Ω(M,x)/G \bar{\mathcal{H}} \colon \Omega (M,x)/ \sim \to G. Here, \sim is the equivalence relation on Ω(M,x)\Omega (M,x) defined by thin homotopy.

I think (1) is analogous to l~ ⁣:Z1(G)L \tilde{l} \colon \vec{Z}_1(G) \to L .

I think (2) is analogous to the map l~ ⁣:H1(G)L\tilde{l} \colon \vec{H}_1(G) \to L .

Now, if we replace the set Ω(M,x)\Omega (M,x) by Ω(M,x)lazy\Omega (M,x)_{lazy} containing lazy paths (as defined in https://arxiv.org/pdf/1003.4485), then we can turn Ω(M,x)lazy/\Omega (M,x)_{lazy}/ \sim into a group under concatenation as a binary operation. We can then extend the set map Hˉ\bar{\mathcal{H}} to a group homomorphism Hˉ ⁣:Ω(M,x)lazy/G\bar{\mathcal{H}} \colon \Omega (M,x)_{lazy}/ \sim \to G.

Then, just as the group homomorphism Hˉ ⁣:Ω(M,x)lazy/G\bar{\mathcal{H}} \colon \Omega (M,x)_{lazy}/ \sim \to G contains the information of the bundle π ⁣:EM\pi \colon E \to M with connection ω\omega, the homomorphism of commutative monoids l~ ⁣:H1(G)L\tilde{l} \colon \vec{H}_1(G) \to L contains the information of how much feedback there is in any directed loop in the LL-labeled graph GG.

Or, am I misunderstanding ?

view this post on Zulip Adittya Chaudhuri (Apr 11 2025 at 14:34):

@John Baez The relation (that you described) between the directed first homology of a graph with coefficients in a (commutative) monoid LL and the feedback loops in LL-labeled graphs seems super interesting!! Now, I am trying to understand all these things (what you wrote) properly.

view this post on Zulip Adittya Chaudhuri (Apr 11 2025 at 14:48):

By directed algebraic topology, are you referring to https://ncatlab.org/nlab/show/Directed+Algebraic+Topology?

view this post on Zulip John Baez (Apr 11 2025 at 16:51):

James Deikun said:

In the conventional first homology of a graph, there are no bounding cycles, so H1(G)Z1(G)H_1(G) \cong Z_1(G).

In Sunada's approach to undirected graphs (summarized in my paper Topological crystals, section 2) one counts an edge path like

v0e1v1e2v2e21v1e11v0 v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} v_2 \xrightarrow{e_2^{-1}} v_1 \xrightarrow{e_1^{-1}} v_0

as a 1-cycle that's homologous to zero. Maybe a more "efficient" approach wouldn't even allow such 1-cycles, but there's something nice about making every edge path that's a loop give a cycle.

But more importantly, you're right that in a directed graph, we're only interested in directed paths, so edges don't have inverses, and this issue goes away!

And when you only look at directed cycles, two cycles can't add up to a cycle that doesn't self-intersect, so you actually get canonical generators.

Another great point! I had not yet mentally adjusted from the case of undirected graphs.

This is very nice for the theory of causal loop diagrams, and it must already be known in the system dynamics community. A canonical set of 'generating' feedback loops is a very pleasant thing.

view this post on Zulip John Baez (Apr 11 2025 at 17:05):

Adittya Chaudhuri said:

Then, just as the group homomorphism Hˉ ⁣:Ω(M,x)lazy/G\bar{\mathcal{H}} \colon \Omega (M,x)_{lazy}/ \sim \to G contains the information of the bundle π ⁣:EM\pi \colon E \to M with connection ω\omega, the homomorphism of commutative monoids l~ ⁣:H1(G)L\tilde{l} \colon \vec{H}_1(G) \to L contains the information of how much feedback there is in any directed loop in the LL-labeled graph GG.

Or, am I misunderstanding ?

No, that's exactly the correct analogy. This analogy has been on my mind for many decades, because starting in the early 1990s, before working on higher gauge theory, I worked on loop quantum gravity. Thus, long before treating connections on a trivial GG-bundle over a manifold MM as smooth functors from the path groupoid of MM to GG, I wrote some papers about connections on graphs, which took a similar viewpoint. See for example

It's a bit primitive since it doesn't introduce the holonomy of a path, but you can figure out what that must be!

I'm reusing those ideas now, but a bunch of things change when we have a directed graph, where paths are only allowed to move in the direction of the edges.

view this post on Zulip Adittya Chaudhuri (Apr 11 2025 at 17:08):

Thanks!! Sounds super interesting!

view this post on Zulip John Baez (Apr 11 2025 at 17:08):

By the way, you read all my messages too quickly, so please reread that.

view this post on Zulip John Baez (Apr 11 2025 at 17:09):

I usually rewrite my posts a few times. Sorry!

view this post on Zulip John Baez (Apr 11 2025 at 17:13):

Anyway, here is how it works for a directed graph GG - the kind of graph we're talking about in our paper. Suppose LL is a monoid. A connection on a trivial LL-bundle over GG should clearly be a functor

Π1(G)BL \vec{\Pi}_1(G) \to \mathsf{B} L

where Π1(G)\vec{\Pi}_1(G) is the 'fundamental category' of GG and BL\mathsf{B} L is the one-object category corresponding to LL. We get a fundamental category instead of a fundamental groupoid because we're not allowed to walk backward along a directed edge. The fundamental category has the vertices of GG as objects and directed edge paths in GG as morphisms.

But look - this 'fundamental category' is exactly the same as the free category on the graph GG, which we call Free(G)\mathsf{Free}(G) in our paper and study in detail!
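Concretely, such a functor Free(G)BL\mathsf{Free}(G) \to \mathsf{B} L just combines labels along edge paths: identity paths go to the unit of LL, and composition of paths goes to the monoid operation. A minimal sketch (made-up labeling, using L=(N,+)L = (\mathbb{N},+) as an example monoid):

```python
def transport(path, labels, op, unit):
    """Image of an edge path under the functor Free(G) -> BL induced by
    an L-labeling: the empty (identity) path maps to the monoid unit,
    and each edge contributes its label via the monoid operation."""
    result = unit
    for e in path:
        result = op(result, labels[e])
    return result

def add(x, y):
    return x + y

# Hypothetical labeling of three composable edges in L = (N, +):
labels = {'a': 2, 'b': 3, 'c': 5}

# Functoriality: the value on a composite path is the monoid product
# (here: sum) of the values on its parts.
whole = transport(['a', 'b', 'c'], labels, add, 0)
parts = add(transport(['a', 'b'], labels, add, 0),
            transport(['c'], labels, add, 0))
```

Here `whole` and `parts` both come out as 10, which is exactly the statement that the assignment is a functor.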

view this post on Zulip Adittya Chaudhuri (Apr 11 2025 at 17:14):

Now, I got your point. And, sorry for reading the last message quickly.

view this post on Zulip Adittya Chaudhuri (Apr 11 2025 at 17:18):

In a way, an LL-labeled category P ⁣:Free(G)BLP \colon Free(G) \to BL is a connection on a trivial LL-bundle over GG?

view this post on Zulip John Baez (Apr 11 2025 at 17:19):

It's okay to read my message quickly... but I never expect anyone to read my messages quickly, so I tend to keep rewriting them and improving them. I find it easier to write messages to you later in the day, when you're asleep and I can rewrite them dozens of times.

So, where was I?

A connection on the trivial LL-bundle over GG is a functor

Free(G)BL \mathsf{Free}(G) \to \mathsf{B} L

But this is exactly a way of making Free(G)\mathsf{Free}(G) into an LL-labeled category! This also corresponds precisely to a way of making GG into an LL-labeled graph.

In a way, LL-labeled category P ⁣:Free(G)BLP \colon \mathsf{Free}(G) \to BL is a connection on a trivial LL-bundle over GG?

Exactly.

view this post on Zulip Adittya Chaudhuri (Apr 11 2025 at 17:20):

John Baez said:

It's okay to read my message quickly... but I never expect anyone to read my messages quickly, so I tend to keep rewriting them and improving them. I find it easier to write messages to you later in the day, when you're asleep and I can rewrite them dozens of times.

I got your point.

view this post on Zulip John Baez (Apr 11 2025 at 17:26):

So, we don't need to talk a lot about connections on graphs; they are just our LL-labeled graphs.

There's more to say when LL is a commutative monoid. When LL is noncommutative, the holonomy around a loop depends on the basepoint. (When LL is a group, the holonomy gets conjugated when we change the basepoint.) But when LL is a commutative monoid, the holonomy doesn't depend on the basepoint!

More generally, when LL is commutative we can talk about the holonomy around a collection of loops in GG: it's just the sum of the holonomies around each loop... and the sum is well-defined because LL is commutative.

Indeed, we can talk about the holonomy around any cycle in GG. Thus, when LL is a commutative monoid, for any LL-labeled graph GG we expect to get a homomorphism of commutative monoids

H1(G)L H_1(G) \to L

view this post on Zulip James Deikun (Apr 11 2025 at 17:28):

Note that also when a noncommutative monoid has a cyclic trace, you can use the cyclic trace to form a kind of basepoint-independent holonomy on loops specifically.

view this post on Zulip John Baez (Apr 11 2025 at 17:31):

Good point. The funny thing is that in applications to "system dynamics", like epidemiology and systems biology, the monoid always seems to be commutative. So we'll probably focus on that case.

You pointed out something which @Adittya Chaudhuri and I should state and prove: since our graph GG is a directed graph, the commutative monoid I'm calling H1(G)H_1(G) is free on a canonical set of generators, the non-self-intersecting cycles.

view this post on Zulip John Baez (Apr 11 2025 at 17:33):

We should make up some nice term for them, like the 'basic feedback loops'. There is probably already a standard term for them in the field of system dynamics.

So, any LL-labeled graph gives a commutative monoid homomorphism

H1(G)L H_1(G) \to L

but this is determined by its value on the 'basic feedback loops', i.e. the canonical generators of the free commutative monoid H1(G)H_1(G).

view this post on Zulip John Baez (Apr 11 2025 at 21:57):

By the way, I'm pretty sure any free commutative monoid has a canonical set of generators. Consider the finitely generated case: a free commutative monoid is isomorphic to (Nn,+)(\mathbb{N}^n, +). This is free on the nn generators

e1=(1,0,0,),e2=(0,1,0,),e_1 = (1, 0, 0, \dots ), \quad e_2 = (0, 1, 0, \dots ), \dots

and it's not free on any other generators: each eie_i cannot be expressed as a sum of elements except itself and the zero vector.
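This irreducibility criterion can be checked mechanically. A quick sketch over a finite window of N2\mathbb{N}^2 (the bound 3 is arbitrary and just truncates N2\mathbb{N}^2 for the search):

```python
from itertools import product

window = list(product(range(3), repeat=2))  # a finite window on N^2

def is_irreducible(v):
    """v is irreducible if every decomposition v = a + b forces a or b
    to be the zero vector. The irreducible elements are exactly the
    canonical generators of the free commutative monoid."""
    for a in window:
        b = tuple(x - y for x, y in zip(v, a))
        if min(b) >= 0 and a != (0, 0) and b != (0, 0):
            return False
    return True

generators = {v for v in window if v != (0, 0) and is_irreducible(v)}
# -> {(0, 1), (1, 0)}
```

On this window `generators` comes out as the standard basis vectors e1,e2e_1, e_2, matching the claim that N2\mathbb{N}^2 is free on them and on no other set of generators.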

view this post on Zulip John Baez (Apr 11 2025 at 22:04):

Adittya Chaudhuri said:

By directed algebraic topology, are you referring to https://ncatlab.org/nlab/show/Directed+Algebraic+Topology?

Yes. You'll see that there are lots of approaches to directed algebraic topology. As far as I know, none of them has triumphed yet.

The idea (at least according to me) is that just as a homotopy nn-type is essentially the same as an nn-groupoid, a directed nn-type should be essentially the same as an nn-category. But it should be some sort of space with extra structure that has a 'fundamental nn-category'. Thus, directed nn-types should give a more topological way of thinking about nn-categories, just as homotopy types give a more topological way of thinking about nn-groupoids.

But we don't need to think in detail about this complicated stuff: it's just a guiding philosophy. Directed graphs should be a nice way to think about directed 1-types, and we already know how to get the 'fundamental category' of a directed graph GG: it's what our paper calls Free(G)\textsf{Free}(G).

We probably shouldn't talk much about directed algebraic topology in our paper; it's just good to keep in mind. The concept of the 'directed first homology monoid' of a (directed) graph seems somewhat useful, though.

view this post on Zulip John Baez (Apr 11 2025 at 23:49):

Minor stuff:

view this post on Zulip Adittya Chaudhuri (Apr 12 2025 at 03:25):

John Baez said:

The idea (at least according to me) is that just as a homotopy nn-type is essentially the same as an nn-groupoid, a directed nn-type should be essentially the same as an nn-category. But it should be some sort of space with extra structure that has a 'fundamental nn-category'. Thus, directed nn-types should give a more topological way of thinking about nn-categories, just as homotopy types give a more topological way of thinking about nn-groupoids.

Interesting. In a way, are you talking about a directed version of the homotopy hypothesis https://ncatlab.org/nlab/show/homotopy+hypothesis, and thus a fundamental (,1)(\infty,1)-category https://ncatlab.org/nlab/show/fundamental+%28infinity%2C1%29-category should capture the information of a directed topological space?

view this post on Zulip Adittya Chaudhuri (Apr 12 2025 at 04:47):

In section 2.3 (overleaf file), you mentioned a relation between `motifs in music' and motifs in LL-labeled graphs. I was not much aware of this relation till today, and I just checked https://en.wikipedia.org/wiki/Motif_(music) to learn more about it. It's very interesting, and after reading the Wikipedia article, I also feel it reflects the same meaning. "Some small signature tunes within a big musical piece somehow repeat many times in the big piece". Surprisingly, I think listeners (especially non-professional musicians) also sometimes try to identify the whole big piece by only remembering/humming the small signature tunes or motifs in the big piece.

view this post on Zulip John Baez (Apr 12 2025 at 04:58):

Right! I believe the word 'motif' comes from music, made especially famous by Wagner's use of leitmotifs. Grothendieck used this word in mathematics - 'motif' is French for 'motive'.

view this post on Zulip Adittya Chaudhuri (Apr 12 2025 at 05:02):

Thanks. Interesting.

view this post on Zulip John Baez (Apr 12 2025 at 05:03):

Adittya Chaudhuri said:

John Baez said:

The idea (at least according to me) is that just as a homotopy nn-type is essentially the same as an nn-groupoid, a directed nn-type should be essentially the same as an nn-category. But it should be some sort of space with extra structure that has a 'fundamental nn-category'. Thus, directed nn-types should give a more topological way of thinking about nn-categories, just as homotopy types give a more topological way of thinking about nn-groupoids.

Interesting. In a way, are you talking about a directed version of the homotopy hypothesis https://ncatlab.org/nlab/show/homotopy+hypothesis, and thus a fundamental (,1)(\infty,1)-category https://ncatlab.org/nlab/show/fundamental+%28infinity%2C1%29-category should capture the information of a directed topological space?

I was indeed talking about a directed version of the homotopy hypothesis. But I was talking about it for nn-categories, or probably better for (,n)(\infty,n)-categories. There should be different kinds of directed space. In some the paths can be directed, i.e. non-invertible, but the homotopies between paths, etc. are all invertible. This kind of directed space should have a fundamental (,1)(\infty,1)-category.

But people also study directed spaces where the homotopies are also directed, so you can have a homotopy from one path γ\gamma to another path δ\delta but not from δ\delta back to γ\gamma. If all homotopies between homotopies, etc., are invertible then these spaces would have a fundamental (,2)(\infty,2)-category. And so on.

view this post on Zulip Adittya Chaudhuri (Apr 12 2025 at 05:05):

Thanks. I understand your point.

view this post on Zulip John Baez (Apr 12 2025 at 05:05):

By the way, if you want to link to the nLab here you don't have to include a URL like https://ncatlab.org/nlab/show/homotopy+hypothesis. There's a more elegant way: you can just type the page title in double square brackets, like this: [[homotopy hypothesis]]. Some smart person added this feature.

view this post on Zulip Adittya Chaudhuri (Apr 12 2025 at 05:07):

John Baez said:

By the way, if you want to link to the nLab here you don't have to include a URL like https://ncatlab.org/nlab/show/homotopy+hypothesis. There's a more elegant way: you can just type the page title in double square brackets, like this: [[homotopy hypothesis]].

Thank you. Yes. I wanted to do it. But I was not aware "how to do it" until now.

view this post on Zulip John Baez (Apr 12 2025 at 05:08):

You can also include arbitrary URLs like this:

[Motivating motives](http://math.ucr.edu/home/baez/motives/)

gives

Motivating motives

But for the nLab you just need to put the page title in double square brackets.

view this post on Zulip Adittya Chaudhuri (Apr 12 2025 at 05:10):

Thank you. I will do this.

view this post on Zulip Adittya Chaudhuri (Apr 12 2025 at 15:06):

Today, I was trying to understand the things from the general perspective discussed before. I am just trying to write down what I am thinking.

1) There is a well-known adjunction between the category of topological spaces Top\mathsf{Top} and the category of simplicial sets sSet\mathsf{sSet} given by the singular simplicial set functor Sing ⁣:TopsSetSing \colon \mathsf{Top} \to \mathsf{sSet} and the geometric realization functor  ⁣:sSetTop|-| \colon \mathsf{sSet} \to \mathsf{Top}, which connects the study of topological spaces via simplicial sets. Now, according to The Homology of Simplicial Sets, given a topological space XX, replacing Sing(X)nSing(X)_{n} by Z[Sing(X)n]\mathbb{Z}[Sing(X)_{n}] for each nn, we obtain a simplicial abelian group Z[Sing(X)]\mathbb{Z}[Sing(X)]. Now, one can convert Z[Sing(X)]\mathbb{Z}[Sing(X)] into a chain complex C={Z[Sing(X)n]}n0C_{*}=\lbrace \mathbb{Z}[Sing(X)_{n}] \rbrace_{n \geq 0} whose boundary maps are given by the alternating sums of the face operators of Z[Sing(X)]\mathbb{Z}[Sing(X)]. Now, if we compute the homology of the chain complex CC_{*} ([[chain homology and cohomology]]), we get the singular homology of XX in each degree. In particular, in degree 1, we get the first singular homology group of XX.

2) On the other hand, there is also a well-known adjunction between Cat\mathsf{Cat} and sSet\mathsf{sSet} given by the nerve functor N ⁣:CatsSetN \colon \mathsf{Cat} \to \mathsf{sSet} and its left adjoint, the homotopy category functor H ⁣:sSetCatH \colon \mathsf{sSet} \to \mathsf{Cat}, which connects the study of categories via simplicial sets. Now, given a topological space XX, the fundamental groupoid functor π1 ⁣:TopGpd\pi_{1} \colon \mathsf{Top} \to \mathsf{Gpd} (as explained by Harry Gindi in one of my 5-year-old MO questions What is the geometric realization of the the nerve of a fundamental groupoid of a space? ) is isomorphic to HSingH \circ Sing, and hence HSing(X)H \circ Sing(X) is the same as the fundamental groupoid of XX, i.e. π1(X)\pi_{1}(X).

Now, as @John Baez was saying about directed topological spaces, and since he was discussing homology monoids and fundamental categories of such spaces (both of which may make sense because paths in directed topological spaces may not be traversable in the reverse direction), in the context of directed graphs, I was wondering whether one can suitably "directify" the correspondence of (1) and (2) to obtain suitable notions of singular homology monoids and fundamental categories of directed topological spaces. Then, we may obtain the results for directed graphs as a special case.

view this post on Zulip Adittya Chaudhuri (Apr 12 2025 at 15:20):

@John Baez Another thing about which I was also thinking today:

Directed graphs can also be seen as simplicial sets Directed Graphs as Simplicial Sets. Although the definition of a directed graph in Directed Graphs as Simplicial Sets coincides with our definition, the definition of morphisms is a little different. However, the fundamental category of these directed graphs is the free category on the graph, as explained in The Path Category of a Directed Graph.

view this post on Zulip John Baez (Apr 12 2025 at 15:29):

The questions you raise are very interesting.

I was wondering whether one can suitably "directify" the correspondence of (1) and (2) to obtain suitable notions of singular homology monoids and fundamental categories of directed topological spaces. Then, we may obtain the results for directed graphs as a special case.

That's a fascinating and ambitious project and I hope someone tries it - or has already done it. Of course one needs to properly define 'directed topological space', and right now there seem to be multiple competing definitions.

I don't have the energy to tackle this project: I just want to finish our paper, which is much easier. But there may be people here who can help you with this project.

view this post on Zulip John Baez (Apr 12 2025 at 15:34):

You could say our project is exploring the homotopy theory and homology theory of one-dimensional directed spaces, and applying it to biology and epidemiology. That's ambitious enough for me. :upside_down:

view this post on Zulip Adittya Chaudhuri (Apr 12 2025 at 16:03):

I understand your point.

view this post on Zulip Adittya Chaudhuri (Apr 12 2025 at 16:07):

John Baez said:

You could say our project is exploring the homotopy theory and homology theory of one-dimensional directed spaces, and applying it to biology and epidemiology.

This is also fascinating. It may give ideas on how to approach the general case. Although I do not know.

view this post on Zulip Adittya Chaudhuri (Apr 12 2025 at 16:08):

John Baez said:

The questions you raise are very interesting.

Thank you!!

view this post on Zulip John Baez (Apr 12 2025 at 16:09):

1-dimensional topology is a lot simpler than higher-dimensional topology; that's why we teach kids about the fundamental group long before we teach them about πn\pi_n, and why people understood gauge theory long before higher gauge theory. But I like doing simple things that are actually useful. Often the simplest ideas are the most useful.

view this post on Zulip Adittya Chaudhuri (Apr 12 2025 at 16:15):

I understand your point completely. I think from the point of view of applications in biology and systems dynamics, the 1-dimensional case is more suitable now. Then, if we feel that "we need to upgrade our framework to explain certain realistic phenomena in biology or systems dynamics", the way "gauge theory was upgraded to higher gauge theory", one may develop the general case with the correct motivation. I am also not sure if what I am saying actually makes sense or not.

view this post on Zulip Adittya Chaudhuri (Apr 12 2025 at 16:18):

John Baez said:

Often the simplest ideas are the most useful.

I really like this line and I fully agree to it.

view this post on Zulip Adittya Chaudhuri (Apr 12 2025 at 16:50):

John Baez said:

That's a fascinating and ambitious project and I hope someone tries it - or has already done it. Of course one needs to properly define 'directed topological space', and right now there seem to be multiple competing definitions.

Thank you. Yes, I understand your point regarding "multiple competing definitions" from the nLab page [[directed topological space]].

view this post on Zulip Adittya Chaudhuri (Apr 12 2025 at 18:26):

John Baez said:

It may not be completely related to our paper, but I just observed this:

The 1-category Cat\mathsf{Cat} can be given a model structure ([[Thomason model structure]]), and hence for any category C\mathsf{C} the slice category Cat/C\mathsf{Cat}/\mathsf{C} can be given a [[slice model structure]] in a nice way. One can also do something similar with the [[canonical model structure on Cat]].

view this post on Zulip John Baez (Apr 12 2025 at 21:11):

Nice!

Here's how I'd like to define the first homology monoid of a graph in our paper, taking directedness into account. If there are flaws or mistakes, someone please let me know.

We start with a graph GG: a set VV of vertices, a set EE of edges, and two maps s,t:EVs,t: E \to V.

  1. Define the commutative monoid of 1-chains C1(G)C_1(G) to be the free commutative monoid on EE, which is N[E]\mathbb{N}[E], the commutative monoid of formal N\mathbb{N}-linear combinations of elements of EE.
  2. Define the commutative monoid of 0-chains C0(G)C_0(G) to be the free commutative monoid on VV, which is N[V]\mathbb{N}[V], the commutative monoid of formal N\mathbb{N}-linear combinations of elements of VV.
  3. Note we have set inclusions EC1(G)E \to C_1(G), VC0(G)V \to C_0(G). Define maps s,t:C1(G)C0(G)s, t: C_1(G) \to C_0(G) to be the monoid homomorphisms extending s,t:EVs,t : E \to V.
  4. Define a 1-chain cc to be a 1-cycle if s(c)=t(c)s(c) = t(c).
  5. Define H1(G)H_1(G) to be the commutative monoid of 1-cycles. (We should use cycles modulo boundaries, but the only boundary is 00.)
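The five steps above are straightforward to prototype. A minimal sketch (my own, on a made-up graph), representing a 1-chain as a multiset of edges, i.e. an element of N[E]\mathbb{N}[E]:

```python
from collections import Counter

# A graph: edge name -> (source, target).
edges = {'a': (0, 1), 'b': (1, 0), 'c': (1, 2), 'd': (2, 1)}

def boundary_maps(chain):
    """Extend s, t : E -> V to monoid homomorphisms C_1(G) -> C_0(G):
    apply them edge by edge and add up the resulting 0-chains in N[V]."""
    s, t = Counter(), Counter()
    for e, n in chain.items():
        src, tgt = edges[e]
        s[src] += n
        t[tgt] += n
    return s, t

def is_cycle(chain):
    """A 1-chain c is a 1-cycle precisely when s(c) = t(c)."""
    s, t = boundary_maps(chain)
    return s == t
```

For instance `Counter({'a': 1, 'b': 1})` (going once around the loop 0 → 1 → 0) is a 1-cycle, the single edge `Counter({'a': 1})` is not, and N\mathbb{N}-linear combinations of cycles such as `Counter({'a': 2, 'b': 2, 'c': 3, 'd': 3})` are again cycles.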

view this post on Zulip John Baez (Apr 12 2025 at 23:08):

Note that we can generalize this to define the first homology of a graph GG with coefficients in any commutative monoid LL:

  1. Define the commutative monoid of 1-chains C1(G,L)C_1(G,L) to be L[E]L[E], the commutative monoid of formal LL-linear combinations of elements of EE.
  2. Define the commutative monoid of 0-chains C0(G,L)C_0(G,L) to be L[V]L[V], the commutative monoid of formal LL-linear combinations of elements of VV.
  3. Note we have inclusions of sets EC1(G,L)E \to C_1(G,L), VC0(G,L)V \to C_0(G,L). Define maps s,t:C1(G,L)C0(G,L)s, t: C_1(G,L) \to C_0(G,L) to be the monoid homomorphisms extending s,t:EVs,t : E \to V.
  4. Define a 1-chain cc to be a 1-cycle if s(c)=t(c)s(c) = t(c).
  5. Define H1(G,L)H_1(G,L) to be the commutative monoid of 1-cycles. (We should use cycles modulo boundaries, but again the only boundary is 00.)
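The same recipe with coefficients in a commutative monoid LL only needs the monoid operation; here is a small sketch of mine, taking LL to be the boolean monoid under `or` purely as an illustrative choice:

```python
def boundary_L(chain, edges, which, add, zero, vertices):
    """s or t extended to L[E] -> L[V], for a commutative monoid (L, add, zero)."""
    out = {v: zero for v in vertices}
    for e, coeff in chain.items():
        v = edges[e][which]
        out[v] = add(out[v], coeff)
    return out

edges = {"e1": ("v1", "v2"), "e2": ("v2", "v1")}
vertices = ["v1", "v2"]
or_ = lambda x, y: x or y

chain = {"e1": True, "e2": True}   # an L-valued 1-chain
s = boundary_L(chain, edges, 0, or_, False, vertices)
t = boundary_L(chain, edges, 1, or_, False, vertices)
print(s == t)   # True: this L-chain is a 1-cycle
```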

view this post on Zulip John Baez (Apr 12 2025 at 23:14):

But I don't think we need H1(G,L)H_1(G,L) now: it seems that for applications what matters is H1(G)H_1(G) (which is the same as H1(G,N)H_1(G,\mathbb{N})).

The reason is that we want to prove this:

Theorem. H1(G)H_1(G) is a free commutative monoid. There is a unique set of generators ciH1(G)c_i \in H_1(G) such that H1(G)H_1(G) is free on the set {ci}\{c_i\}.

If we're doing applied mathematics, we can call these generators the basic feedback loops.

view this post on Zulip John Baez (Apr 12 2025 at 23:24):

Lemma. Let LL be a commutative monoid. Then every map  ⁣:EL\ell \colon E \to L (called an LL-labeling) extends uniquely to a monoid homomorphism C1(G)L C_1(G) \to L, which we also call \ell.

Notice that H1(G)H_1(G) is a sub-monoid of C1(G)C_1(G), unlike what we usually expect in homology. \ell thus restricts to a map H1(G)LH_1(G) \to L, which we again call \ell. By the theorem this map  ⁣:H1(G)L\ell \colon H_1(G) \to L is uniquely determined by its values on the basic feedback loops!
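The extension in the Lemma, and its restriction to cycles, can be sketched as follows (an illustration of mine; `add` and `zero` stand for an arbitrary commutative monoid structure on LL, here the sign monoid {+1,1}\{+1,-1\} under multiplication):

```python
from collections import Counter

def extend_labeling(labels, add, zero):
    """Extend a labeling l: E -> L to the monoid homomorphism C_1(G) -> L,
    sending a chain sum_i(alpha_i * e_i) to sum_i(alpha_i * l(e_i))."""
    def ell(chain):
        total = zero
        for e, alpha in chain.items():
            for _ in range(alpha):   # alpha * l(e) means adding l(e) to itself alpha times
                total = add(total, labels[e])
        return total
    return ell

# Sign-labeled triangle: two inhibitions compose to a net activation.
labels = {"e1": 1, "e2": -1, "e3": -1}
ell = extend_labeling(labels, lambda x, y: x * y, 1)
print(ell(Counter({"e1": 1, "e2": 1, "e3": 1})))   # 1: net positive feedback
```

Restricting `ell` to the chains with s(c)=t(c)s(c) = t(c) gives the map H1(G)LH_1(G) \to L discussed above.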

view this post on Zulip John Baez (Apr 12 2025 at 23:25):

The moral: a labeling of a graph lets us assign a kind of 'feedback' valued in LL to any cycle in the graph, but these feedbacks are determined by the feedbacks on the basic feedback loops.

view this post on Zulip John Baez (Apr 12 2025 at 23:35):

(By the way, while I defined 1st homology with coefficients in the commutative monoid LL, it looks like what we're secretly using here is 1st cohomology with coefficients in LL. This makes me realize that when L=RL = \mathbb{R}, we can think of an LL-labeling of a graph as assigning a voltage to each edge. This sets up a connection between electrical circuits and cohomology of graphs, which I've explained here. That choice of LL is an abelian group, though, and right now I'm mostly excited about the generalization to commutative monoids. But still, our paper should mention electrical circuits as an example.)

view this post on Zulip Adittya Chaudhuri (Apr 13 2025 at 05:14):

Thanks!! I am trying to understand your construction. Just a very minor point: In the beginning, at point 2, I think there is a typo: I think you meant C0(G)=N[V]C_{0}(G)=\mathbb{N}[V]? Also, in point 3, I think by C2(G)C_2(G) you meant C0(G)C_0(G)?

view this post on Zulip John Baez (Apr 13 2025 at 05:25):

Yes, and yes. You're right! I'll fix those mistakes.

view this post on Zulip Adittya Chaudhuri (Apr 13 2025 at 12:32):

I may be completely misunderstanding, but I am trying to write down what I am thinking:

According to your construction, given a graph G=[s,t ⁣:EV]G=[s,t \colon E \to V], consider a cycle c=α1e1+α2e2++αkekc= \alpha_1e_1 + \alpha_2e_2 + \cdots + \alpha_{k}e_{k} in C1(G)C_1(G). Hence, by the way you defined a cycle in point (4), we must have s(c)=t(c)s(c)=t(c), which implies s(ei)=t(ei)s(e_i)=t(e_i) for all i=1,2,,ki = 1,2, \ldots, k. Then, this should imply each edge eie_i is a loop. Now, if I assume the correctness of the theorem you stated, then we can take the edges eie_i to be basic feedback loops. This also says every basic feedback loop is an edge (not a path).

Intuitively (according to me), it means any cycle is composed of basic feedback loops, and the coefficient attached to a basic feedback loop eie_i says the number of times eie_i is counted in cc. However, if I consider a cycle cˉ\bar{c} like this: cˉ:=ae1be2ce3a\bar{c}:= a \xrightarrow{e_1}b \xrightarrow{e_2}c \xrightarrow{e_3}a, where a,b,ca,b,c are distinct, then I am not able to guess how I can write cˉ\bar{c} in terms of basic feedback loops. (Also, I am not completely understanding the intuition behind the coefficients of the basic feedback loops.)

Maybe I am misunderstanding something very fundamental.

Now, if we slightly modify to the following:

Consider a graph G=[s,t ⁣:EV]G=[s,t \colon E \to V] as before, and now consider the category Free(G)\mathsf{Free}(G). Let us denote the object set and morphism set by VV and Path(G)Path(G), respectively. Now, I am repeating the same construction as you did. More precisely,

  1. Define the commutative monoid of 1-chains C1(G)C_1(G) to be the free commutative monoid on Path(G)Path(G), which is N[Path(G)]\mathbb{N}[Path(G)], the commutative monoid of formal N\mathbb{N}-linear combinations of elements of Path(G)Path(G).
  2. Define the commutative monoid of 0-chains C0(G)C_0(G) to be the free commutative monoid on VV, which is N[V]\mathbb{N}[V], the commutative monoid of formal N\mathbb{N}-linear combinations of elements of VV.
  3. Note we have set inclusions Path(G)C1(G)Path(G) \to C_1(G), VC0(G)V \to C_0(G). Define maps s,t:C1(G)C0(G)s, t: C_1(G) \to C_0(G) to be the monoid homomorphisms extending s,t:Path(G)Vs,t : Path(G) \to V.
  4. Define a 1-chain cc to be a 1-cycle if s(c)=t(c)s(c) = t(c).
  5. Define H1(G)H_1(G) to be the commutative monoid of 1-cycles.

Now, in the light of the above modified definition, consider a cycle c=α1p1+α2p2++αkpkc= \alpha_1p_1 + \alpha_2p_2 + \cdots + \alpha_{k}p_{k} in C1(G)C_1(G) (each pip_i is a path, i.e. a sequence of composable edges). The cycle condition then forces each pip_i to be a graph-theoretic cycle (a usual cycle in graph theory). Now, if I consider the cycle cˉ:=ae1be2ce3a\bar{c}:= a \xrightarrow{e_1}b \xrightarrow{e_2}c \xrightarrow{e_3}a, then the whole cˉ\bar{c} itself is a basic feedback loop (not decomposable) and hence a generator of H1(G)H_1(G).

Intuition of the modified definition:

Intuitively (according to me), it means any cycle is composed of basic feedback loops, and the coefficient attached to a basic feedback loop pip_i says the number of times pip_i is counted in cc. I think in the light of the category Free(G)Free(G), the word counted is also making sense.

Now, I am trying to see, how the Lemma part gets modified:

Consider a map  ⁣:EL\ell \colon E \to L (called an LL-labeling). Now, by the free-forgetful adjunction between graphs and categories, the set Hom(G,GL)Hom(Free(G),BL){\rm{Hom}}(G,G_L) \cong {\rm{Hom}}(Free(G), BL). Now, since C1(G)=N[Path(G)]C_1(G)=\mathbb{N}[Path(G)], we should have a modified Lemma as follows:

Lemma (modified). Let LL be a commutative monoid. Then every map  ⁣:EL\ell \colon E \to L (called an LL-labeling) extends uniquely to a monoid homomorphism C1(G)L C_1(G) \to L, which we also call \ell.

But, it may not be a monoid homomorphism!!

Reason: Say, γ=(α1p1+α2p2++αkpk)i=1kl(pi)\gamma=( \alpha_1p_1 + \alpha_2p_2 + \cdots + \alpha_{k}p_{k}) \mapsto \sum^{k}_{i=1}l(p_i), where l(pi=e1e2en):=l(e1)+l(e2)++l(en)l(p_i=e_1e_2 \ldots e_n):=l(e_1)+l(e_2) +\ldots + l(e_n) . Similarly, γ=(α1p1+α2p2++αkpk)i=1kl(pi)\gamma'=( \alpha'_1p_1 + \alpha'_2p_2 + \cdots + \alpha'_{k}p_{k}) \mapsto \sum^{k}_{i=1}l(p_i). But (γ+γ)i=1kl(pi)(\gamma + \gamma') \mapsto \sum^{k}_{i=1}l(p_i), which may not be equal to i=1kl(pi)+i=1kl(pi)\sum^{k}_{i=1}l(p_i)+ \sum^{k}_{i=1}l(p_i).

But the same reason also holds true on the non-modified version.

However, I think if we replace C1(G)C_1(G) by C1(G,L)C_1(G,L) and assume the commutative monoid (L,+,0)(L,+,0) is part of a rig (L,+,0,×,1)(L, +, 0, \times, 1), then from the distributive law of a rig,  ⁣:C1(G)L\ell \colon C_1(G) \to L can be shown to be a homomorphism of monoids, or maybe of rigs.

I apologise in advance if I am making very fundamental mistakes.

view this post on Zulip John Baez (Apr 13 2025 at 16:42):

Adittya Chaudhuri said:

I may be completely misunderstanding, but I am trying to write down what I am thinking:

According to your construction, given a graph G=[s,t ⁣:EV]G=[s,t \colon E \to V], consider a cycle c=α1e1+α2e2++αkekc= \alpha_1e_1 + \alpha_2e_2 + \cdots + \alpha_{k}e_{k} in C1(G)C_1(G). Hence, by the way you defined a cycle in point (4), we must have s(c)=t(c)s(c)=t(c), which implies s(ei)=t(ei)s(e_i)=t(e_i) for all i=1,2,,ki = 1,2, \ldots, k. Then, this should imply each edge eie_i is a loop.

If that's true my definition is bad. But let's see. Consider this triangle-shaped graph:

v1e1v2e2v3e3v1 v_1 \xrightarrow{e_1} v_2 \xrightarrow{e_2} v_3 \xrightarrow{e_3} v_1

I expect that

e1+e2+e3 e_1 + e_2 + e_3

is a cycle, since it goes around the triangle, even though we don't have s(ei)=t(ei)s(e_i) = t(e_i) as you claim. Let's see:

s(e1+e2+e3)=s(e1)+s(e2)+s(e3)=v1+v2+v3 s(e_1 + e_2 + e_3) = s(e_1) + s(e_2) + s(e_3) = v_1 + v_2 + v_3

t(e1+e2+e3)=t(e1)+t(e2)+t(e3)=v2+v3+v1 t(e_1 + e_2 + e_3) = t(e_1) + t(e_2) + t(e_3) = v_2 + v_3 + v_1

so yes, e1+e2+e3e_1 + e_2 + e_3 is a cycle.
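The two displayed sums can be checked mechanically; a throwaway sketch (mine, not from the paper) with multisets:

```python
from collections import Counter

# The triangle-shaped graph v1 -e1-> v2 -e2-> v3 -e3-> v1.
edges = {"e1": ("v1", "v2"), "e2": ("v2", "v3"), "e3": ("v3", "v1")}
c = Counter({"e1": 1, "e2": 1, "e3": 1})

# Counter.elements() repeats each edge according to its coefficient.
s = Counter(edges[e][0] for e in c.elements())   # v1 + v2 + v3
t = Counter(edges[e][1] for e in c.elements())   # v2 + v3 + v1
print(s == t)   # True: e1 + e2 + e3 is a 1-cycle, as computed above
```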

I believe your mistake occurs here:

s(c)=t(c)s(c)=t(c), which implies s(ei)=t(ei)s(e_i)=t(e_i) for all i=1,2,,ki = 1,2, \ldots, k

view this post on Zulip John Baez (Apr 13 2025 at 16:46):

There's no way to deduce s(ei)=t(ei)s(e_i) = t(e_i) from

s(α1e1+α2e2++αkek)= s( \alpha_1e_1 + \alpha_2e_2 + \cdots + \alpha_{k}e_{k}) =
t(α1e1+α2e2++αkek) t( \alpha_1e_1 + \alpha_2e_2 + \cdots + \alpha_{k}e_{k})

view this post on Zulip Adittya Chaudhuri (Apr 13 2025 at 16:53):

Yeah, sorry. It was a bad mistake. Somehow I mixed things up!!

view this post on Zulip John Baez (Apr 13 2025 at 17:01):

I suggest trying all my definitions, and my claimed Theorem (which is really just a conjecture), in some examples.

view this post on Zulip Adittya Chaudhuri (Apr 13 2025 at 17:01):

John Baez said:

I suggest trying all my definitions, and my claimed Theorem (which is really just a conjecture), in some examples.

Thanks. I will.

view this post on Zulip John Baez (Apr 13 2025 at 17:05):

If we were in the same room we could do some examples on a blackboard, with pictures. But they're quite tiring to write in LaTeX.

view this post on Zulip Adittya Chaudhuri (Apr 13 2025 at 17:05):

John Baez said:

Lemma. Let LL be a commutative monoid. Then every map  ⁣:EG\ell \colon E \to G (called an LL-labeling) extends uniquely to a monoid homomorphism C1(G)L C_1(G) \to L, which we also call \ell.

Without assuming a compatibility between N\mathbb{N} and LL, does it become a monoid homomorphism? For example:

Say, γ=(α1e1+α2e2++αkek)i=1kl(ei)\gamma=( \alpha_1e_1 + \alpha_2e_2 + \cdots + \alpha_{k}e_{k}) \mapsto \sum^{k}_{i=1}l(e_i). Similarly, γ=(α1e1+α2e2++αkek)i=1kl(ei)\gamma'=( \alpha'_1e_1 + \alpha'_2e_2 + \cdots + \alpha'_{k}e_{k}) \mapsto \sum^{k}_{i=1}l(e_i). But (γ+γ)i=1kl(ei)(\gamma + \gamma') \mapsto \sum^{k}_{i=1}l(e_i), which may not be equal to i=1kl(ei)+i=1kl(ei)\sum^{k}_{i=1}l(e_i)+ \sum^{k}_{i=1}l(e_i).

However, I think if we replace C1(G)C_1(G) by C1(G,L)C_1(G,L) and assume the commutative monoid (L,+,0)(L,+,0) is part of a rig (L,+,0,×,1)(L, +, 0, \times, 1), then from the distributive law of a rig,  ⁣:C1(G)L\ell \colon C_1(G) \to L can be shown to be a homomorphism of monoids, or maybe of rigs.

view this post on Zulip Adittya Chaudhuri (Apr 13 2025 at 17:05):

John Baez said:

If we were in the same room we could do some examples on a blackboard, with pictures. But they're quite tiring to write in LaTeX.

Yes, I can completely understand.

view this post on Zulip John Baez (Apr 13 2025 at 17:06):

Say, γ=(α1e1+α2e2++αkek)i=1kl(ei)\gamma=( \alpha_1e_1 + \alpha_2e_2 + \cdots + \alpha_{k}e_{k}) \mapsto \sum^{k}_{i=1}l(e_i). Similarly, γ=(α1e1+α2e2++αkek)i=1kl(ei)\gamma'=( \alpha'_1e_1 + \alpha'_2e_2 + \cdots + \alpha'_{k}e_{k}) \mapsto \sum^{k}_{i=1}l(e_i). But (γ+γ)i=1kl(ei)(\gamma + \gamma') \mapsto \sum^{k}_{i=1}l(e_i).

How are you deriving those last two equations?

view this post on Zulip Adittya Chaudhuri (Apr 13 2025 at 17:08):

John Baez said:

Say, γ=(α1e1+α2e2++αkek)i=1kl(ei)\gamma=( \alpha_1e_1 + \alpha_2e_2 + \cdots + \alpha_{k}e_{k}) \mapsto \sum^{k}_{i=1}l(e_i). Similarly, γ=(α1e1+α2e2++αkek)i=1kl(ei)\gamma'=( \alpha'_1e_1 + \alpha'_2e_2 + \cdots + \alpha'_{k}e_{k}) \mapsto \sum^{k}_{i=1}l(e_i). But (γ+γ)i=1kl(ei)(\gamma + \gamma') \mapsto \sum^{k}_{i=1}l(e_i).

How are you deriving that last equation?

By the same map : (α1+α1)e1+(α2+α2)e2++(αk+αk)eki=1kl(ei)( \alpha_1 +\alpha'_1)e_1 + (\alpha_2 + \alpha_2')e_2 + \cdots + (\alpha_{k}+ \alpha'_k)e_{k} \mapsto \sum^{k}_{i=1}l(e_i)

view this post on Zulip Adittya Chaudhuri (Apr 13 2025 at 17:09):

αiN\alpha_i \in \mathbb{N}

view this post on Zulip Adittya Chaudhuri (Apr 13 2025 at 17:10):

if αiL\alpha_i \in L as a rig, we can do this by distributive law

view this post on Zulip John Baez (Apr 13 2025 at 17:13):

Remember, we're extending the labelling :EL\ell : E \to L to a monoid homomorphism :C1(G)L\ell: C_1(G) \to L, using the fact that C1(G)C_1(G) is the free commutative monoid on EE. So this extended map

 ⁣:C1(G)L \ell \colon C_1(G) \to L

is given by

α1e1++αneni=1nαi(ei) \alpha_1 e_1 + \cdots + \alpha_n e_n \mapsto \sum_{i=1}^n \alpha_i \ell(e_i)

where αiN\alpha_i \in \mathbb{N}.

view this post on Zulip Adittya Chaudhuri (Apr 13 2025 at 17:14):

Thanks. But what does the term αi(ei)\alpha_i \ell(e_i) mean?

view this post on Zulip Adittya Chaudhuri (Apr 13 2025 at 17:15):

Is it just summing l(ei)l(e_i) αi\alpha_i times?

view this post on Zulip John Baez (Apr 13 2025 at 17:19):

Yes. Every element of the free commutative monoid on a set {x,y}\{x,y\}, for example, can be written as mx+nym x + n y where m,nNm,n \in \mathbb{N}.

view this post on Zulip John Baez (Apr 13 2025 at 17:19):

We're writing the monoid operation as addition here, which I admit is somewhat confusing because we often write it as multiplication!

view this post on Zulip John Baez (Apr 13 2025 at 17:20):

When doing co/homology, people write the operation in the coefficient group AA as addition, so all the formulas will look more familiar if we do that for LL too.

view this post on Zulip John Baez (Apr 13 2025 at 17:23):

There's a theory of modules for rigs (some people call them 'semimodules'). A commutative monoid is the same as a module of the rig N\mathbb{N}. For any rig RR I like to denote the free RR-module on a set SS by R[S]R[S]: it's the set of finite RR-linear combinations of elements of SS. So the free commutative monoid on SS is N[S]\mathbb{N}[S]. We can add elements of this, and multiply them by natural numbers.

view this post on Zulip Adittya Chaudhuri (Apr 13 2025 at 17:24):

Yes. Thanks I got the point.

view this post on Zulip Adittya Chaudhuri (Apr 14 2025 at 02:05):

@John Baez I am slowly realising (still a lot more to realise)!!! The definition you wrote about "homology monoids" is very beautiful. The fact that we are working with the free commutative monoid N[E]\mathbb{N}[E] instead of the free abelian group Z[E]\mathbb{Z}[E] says that

GG is a directed graph, seen as a directed space: the non-existence of inverses in the commutative monoid (N,+,0)(\mathbb{N}, +, 0) reflects the directedness. By not using Z[E]\mathbb{Z}[E], we make it clear that we are not treating our directed graph as a usual undirected topological space.

view this post on Zulip John Baez (Apr 14 2025 at 05:15):

Thanks! You're exactly right. I noticed that the work we've already done on monoid-labeled directed graphs actually points to a generalization of 1st homology for directed graphs, where we use commutative monoids at every point where people usually use abelian groups.

view this post on Zulip John Baez (Apr 14 2025 at 05:38):

This new 'directed first homology' is interesting: you can easily find two directed graphs homeomorphic to a circle, one whose directed first homology is N\mathbb{N}, and one whose directed first homology is 00. It just depends on whether you can go around the circle following a directed path or not!
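One can check this by brute force on the two orientations of the circle; here is an illustrative sketch of mine that enumerates chains with small coefficients and tests s(c)=t(c)s(c) = t(c):

```python
from collections import Counter
from itertools import product

def cycles_up_to(edges, max_coeff):
    """Brute-force the nonzero 1-cycles whose coefficients are all <= max_coeff."""
    names = list(edges)
    found = []
    for coeffs in product(range(max_coeff + 1), repeat=len(names)):
        chain = Counter({e: a for e, a in zip(names, coeffs) if a})
        s, t = Counter(), Counter()
        for e, a in chain.items():
            s[edges[e][0]] += a
            t[edges[e][1]] += a
        if chain and s == t:
            found.append(chain)
    return found

# Consistently oriented circle v1 -> v2 -> v1: H_1 = N, generated by a + b.
consistent = {"a": ("v1", "v2"), "b": ("v2", "v1")}
# Inconsistently oriented circle v1 -> v2 <- v1: no directed way around, H_1 = 0.
inconsistent = {"a": ("v1", "v2"), "b": ("v1", "v2")}

print(len(cycles_up_to(consistent, 2)))    # 2: the cycles a+b and 2a+2b
print(len(cycles_up_to(inconsistent, 2)))  # 0: no nonzero cycles at all
```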

view this post on Zulip Adittya Chaudhuri (Apr 14 2025 at 12:13):

John Baez said:

Thanks! You're exactly right. I noticed that the work we've already done on monoid-labeled directed graphs actually points to a generalization of 1st homology for directed graphs, where we use commutative monoids at every point where people usually use abelian groups.

Thank you. Yes, I agree.

view this post on Zulip Adittya Chaudhuri (Apr 14 2025 at 12:16):

John Baez said:

This new 'directed first homology' is interesting: you can easily find two directed graphs homeomorphic to a circle, one whose directed first homology is N\mathbb{N}, and one whose directed first homology is 00. It just depends on whether you can go around the circle following a directed path or not!

Nice!! Yes, indeed!! I didn't observe this before. Thanks!!

view this post on Zulip Adittya Chaudhuri (Apr 14 2025 at 15:51):

John Baez said:

Theorem. H1(G)H_1(G) is a free commutative monoid. There is a unique set of generators ciH1(G)c_i \in H_1(G) such that H1(G)H_1(G) is free on the set {ci}\{c_i\}.

I am trying to make an attempt to find the candidate for the generators:

We know c=α1e1+α2e2++αkekc= \alpha_1e_1 + \alpha_2e_2+ \cdots + \alpha_{k}e_{k} is a cycle iff

α1s(e1)+α2s(e2)++αks(ek)=α1t(e1)+α2t(e2)++αkt(ek)\alpha_1s(e_1) + \alpha_2s(e_2) + \cdots + \alpha_{k}s(e_{k})= \alpha_1t(e_1) + \alpha_2t(e_2) + \cdots + \alpha_{k}t(e_{k})   (*)

Now, I will say cc is a minimal cycle if for any nonempty proper subset S{e1,e2,,ek}S \subsetneq \lbrace e_1, e_2, \ldots, e_{k} \rbrace, the corresponding analogue of equation (*) does not hold. [I think this also matches my intuition if I think about your basic feedback loops.]

Now, let the set cyclemin={cH1(G) ⁣:ccycle_{min}= \lbrace c \in H_1(G) \colon c is a minimal cycle }\rbrace.

I claim the following:

My minimal cycles are your basic feedback loops, i.e. H1(G)=N[cyclemin]H_{1}(G)= \mathbb{N}[cycle_{min}].

I am trying to show why I think so:

Let c=α1e1+α2e2++αkekc= \alpha_1e_1 + \alpha_2e_2+ \cdots + \alpha_{k}e_{k} be an arbitrary element of H1(G)H_1(G). Now, if cc is minimal then we are done. Otherwise, let us assume cc is not minimal. Then by definition there is a proper subset S{e1,e2,,ek}S \subseteq \lbrace e_1, e_2, \ldots, e_{k} \rbrace such that the equation (*) holds for SS. Now, since we are working with finite linear combinations, I think this process will terminate in minimal cycles after finitely many steps.

I think from the above, one can deduce your theorem (of course, one needs to fill in a lot of details).

I may be wrong.. I just thought about it!!

view this post on Zulip John Baez (Apr 14 2025 at 16:07):

I believe an approach like this should work, and I hope you can turn it into a proof.

view this post on Zulip Adittya Chaudhuri (Apr 14 2025 at 16:11):

Thank you. Yes, I think I can.

view this post on Zulip John Baez (Apr 14 2025 at 16:16):

For a while I thought there might be a quick nonconstructive proof that goes like this:

C1(G)C_1(G) is the free commutative monoid on the set of edges of the graph GG. H1(G)H_1(G) is a submonoid of C1(G)C_1(G). Every submonoid of a free commutative monoid is a free commutative monoid. Thus H1(G)H_1(G) is a free commutative monoid.

But this proof is wrong! Every subgroup of a free abelian group is a free abelian group - that's why I hoped this would work for monoids too. But then I remembered there's a whole big subject that studies submonoids of the free commutative monoid on one generator, N\mathbb{N}. These are called numerical monoids, and most of them are not free. This one is free:

{0,3,6,9,12,} \{0,3,6,9,12, \dots \}

but we can throw in the number 8 and close under addition to get one that is not free:

{0,3,6,8,9,11,12,14,15,16,} \{0, 3, 6, 8, 9, 11, 12, 14, 15, 16, \dots \}
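The non-freeness is easy to see concretely: in a free commutative monoid every element has a unique expression in the generators, but in the submonoid generated by 3 and 8 the element 24 has two. A quick sketch of mine:

```python
def representations(n, gens):
    """All ways to write n as an N-linear combination of the given generators
    (generators used in nondecreasing position, to avoid permuted duplicates)."""
    if n == 0:
        return [[]]
    reps = []
    for i, g in enumerate(gens):
        if g <= n:
            reps += [[g] + r for r in representations(n - g, gens[i:])]
    return reps

print(representations(24, [3, 8]))   # two representations: eight 3s, or three 8s
print(representations(9, [3]))       # unique: [[3, 3, 3]], so the monoid <3> is free
```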

view this post on Zulip John Baez (Apr 14 2025 at 16:18):

So, this approach doesn't work; there seems to be something special about H1(G)H_1(G) that makes it free. And it's very good to understand your 'minimal cycles' concretely, because this is a practical subject. So even if there were a quick nonconstructive proof that H1(G)H_1(G) is free, we'd also want a more algorithmic proof, like the one you're trying to get.

view this post on Zulip Adittya Chaudhuri (Apr 14 2025 at 19:03):

John Baez said:

So, this approach doesn't work; there seems to be something special about H1(G)H_1(G) that makes it free. And it's very good to understand your 'minimal cycles' concretely, because this is a practical subject. So even if there were a quick nonconstructive proof that H1(G)H_1(G) is free, we'd also want a more algorithmic proof, like the one you're trying to get.

Thank you. I completely agree that understanding minimal cycles would be important for practical purposes, as then we can narrow our focus to these objects to study motifs like feedback loops, which in turn would make things a bit simpler. I really like your line "there seems to be something special about H1(G)H_1(G) that makes it free". I think I need some more time to understand this line in a better way.

view this post on Zulip Adittya Chaudhuri (Apr 14 2025 at 19:04):

John Baez said:

But then I remembered there's a whole big subject that studies submonoids of the free commutative monoid on one generator, N\mathbb{N}. These are called numerical monoids, and most of them are not free. This one is free:

{0,3,6,9,12,} \{0,3,6,9,12, \dots \}

but we can throw in the number 8 and close under addition to get one that is not free:

{0,3,6,8,9,11,12,14,15,16,} \{0, 3, 6, 8, 9, 11, 12, 14, 15, 16, \dots \}

Interesting objects!! I was not aware of these objects until now. Thanks!

view this post on Zulip Adittya Chaudhuri (Apr 14 2025 at 19:22):

@John Baez I am sharing some feelings about two things that we have already developed in our paper:

1) By the Kleisli morphism approach, we can zoom in on (add details to) a motif like a feedback loop, which often helps in finding a very complicated motif in a large network. [Complications arise from adding extra causal relationships to the framework.]

2) By the directed homology approach, we can express a complicated feedback loop in terms of simpler loops (basic feedback loops), which often helps in finding a very complicated motif in a large network. [Complications arise from adding extra feedback loops to the framework.]

So, if I am understanding correctly, both (1) and (2) serve a similar purpose but in different ways (at least at the moment, it seems so!!). Another good thing is that (1) needs a monoid structure on the labeling set but (2) needs a commutative monoid structure on the labeling set (so that the map  ⁣:C1(G)L\ell \colon C_1(G) \to L is a homomorphism of monoids). However, I think most of the monoids that we are interested in are commutative.

In this context, what are some non-commutative monoids that may be interesting for our purpose?

view this post on Zulip John Baez (Apr 14 2025 at 21:02):

Those thoughts are nice, thanks.

Right now I don't know noncommutative monoids that are useful for applications of "graphs with polarities". So, right now the only reason to start by studying graphs labeled by elements of a general monoid and then turn to graphs labeled by elements of a commutative monoid is that we're mathematicians and we like to see how much we can do with the minimum amount of structure. Maybe later someone will invent some good applications of noncommutative monoids to this subject; then our paper will still be useful to them.

view this post on Zulip John Baez (Apr 14 2025 at 21:10):

I think of (1) and (2) as serving related purposes, and we have barely begun to explore what we can do with them.

Here's one simple relation between them:

Suppose LL is a commutative monoid. Then for each L\ell \in L there is an LL-labeled graph with one vertex and one edge labeled by \ell. We can call this the 'walking loop with feedback equal to \ell'.

When we have any LL-labeled graph GG, each cycle in this graph gets a holonomy valued in LL, which we can call its feedback. If the cycle looks like this

v0e1v1e2envn=v0 v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_n} v_n = v_0

its feedback is defined to be

(e1)++(en) \ell(e_1) + \cdots + \ell(e_n)

Now for the relation: a cycle in an LL-labeled graph GG has feedback L\ell \in L if and only if there is a Kleisli map from the walking loop with feedback equal to \ell to (G,)(G,\ell), that maps the loop to this cycle.

view this post on Zulip John Baez (Apr 14 2025 at 21:12):

This is not very deep, but it suggests that we're starting to build a little tool kit of ideas.

view this post on Zulip John Baez (Apr 15 2025 at 01:45):

Here is a little idea that might help you with your proof, @Adittya Chaudhuri. I believe any commutative monoid has a [[preorder]] \le defined as follows:

xy x \le y iff there exists zz such that y=x+z y = x + z

This is not always a partial order: for example, in an abelian group we have xyx \le y for all xx and yy.

However, for the commutative monoid H1(G)H_1(G) I can prove this is a partial order, i.e. xyx \le y and yxy \le x imply x=yx = y.

Then we can define a minimal cycle in GG to be a cycle that is minimal with respect to this partial order: more precisely, a cycle cH1(G)c \in H_1(G) is minimal if it is nonzero and for any cycle cH1(G)c' \in H_1(G) with ccc' \le c we have c=0c' = 0 or c=cc' = c.
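These order-theoretic definitions can be sketched directly; note that in C1(G)=N[E]C_1(G) = \mathbb{N}[E], the relation xyx \le y just says every coefficient of xx is at most the corresponding coefficient of yy. (Sketch and example graph are mine.)

```python
from collections import Counter

def leq(x, y):
    """x <= y in N[E]: there is z with y = x + z, i.e. coefficientwise <=."""
    return all(y.get(e, 0) >= a for e, a in x.items())

def minimal(c, cycles):
    """c is minimal: nonzero, and the only cycles strictly below it are 0."""
    return bool(c) and all(d == c or not d for d in cycles if leq(d, c))

# Two self-loops at one vertex: every chain is a cycle; the loops are minimal.
l1, l2 = Counter({"l1": 1}), Counter({"l2": 1})
cycles = [Counter(), l1, l2, l1 + l2, l1 + l1]
print([minimal(c, cycles) for c in cycles])   # [False, True, True, False, False]
```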

view this post on Zulip John Baez (Apr 15 2025 at 01:45):

Then we want to prove H1(G)H_1(G) is free on the set of minimal cycles.

view this post on Zulip John Baez (Apr 15 2025 at 05:52):

However, the proof of this probably won't be abstract nonsense about commutative monoids! As I mentioned before, H1(G)H_1(G) has some special features.

If you get stuck you might like to look at the proof of Lemma 4 in my paper Topological crystals. This is about undirected finite graphs, so it's a bit different. It uses homology with coefficients in Z\mathbb{Z} rather than N\mathbb{N}. But some of the ideas and tricks may still be relevant!

This lemma says that any integer linear combination of edges of a graph that is a 1-cycle can be written as a finite sum of 'simple loops', meaning loops in which each vertex appears at most once.

This is quite similar to what we're thinking about now. In some ways it's more complicated, because using integer coefficients one can't use the ordering \le defined as above. Instead one needs to use a concept of the "support" of a cycle, which is defined at the bottom of page 10.

view this post on Zulip Adittya Chaudhuri (Apr 15 2025 at 06:55):

John Baez said:

Those thoughts are nice, thanks.

Right now I don't know noncommutative monoids that are useful for applications of "graphs with polarities". So, right now the only reason to start by studying graphs labeled by elements of a general monoid and then turn to graphs labeled by elements of a commutative monoid is that we're mathematicians and we like to see how much we can do with the minimum amount of structure. Maybe later someone will invent some good applications of noncommutative monoids to this subject; then our paper will still be useful to them.

Thank you!!

Yes, I fully agree to your point : we're mathematicians and we like to see how much we can do with the minimum amount of structure. Maybe later someone will invent some good applications of noncommutative monoids to this subject; then our paper will still be useful to them.

view this post on Zulip Adittya Chaudhuri (Apr 15 2025 at 07:00):

John Baez said:

I think of (1) and (2) as serving related purposes, and we have barely begun to explore what we can do with them.

Now for the relation: a cycle in an LL-labeled graph GG has feedback L\ell \in L if and only if there is a Kleisli map from the walking loop with feedback equal to \ell to (G,)(G,\ell), that maps the loop to this cycle.

As you said, it could be interesting to find various relations between (1) and (2) and interpret their meaning for applications!! But your result already demonstrates one in this direction!! Nice!! Thank you!!

view this post on Zulip Adittya Chaudhuri (Apr 15 2025 at 07:03):

@John Baez Thank you so much for the ideas on the proof of the theorem. I am now trying to understand your ideas and implement them to construct a proof of your theorem by combining my previous approach.

view this post on Zulip Adittya Chaudhuri (Apr 15 2025 at 07:04):

John Baez said:

This is not very deep, but it suggests that we're starting to build a little tool kit of ideas.

I fully agree!!

view this post on Zulip Adittya Chaudhuri (Apr 15 2025 at 16:11):

John Baez said:

Theorem. H1(G)H_1(G) is a free commutative monoid. There is a unique set of generators ciH1(G)c_i \in H_1(G) such that H1(G)H_1(G) is free on the set {ci}\{c_i\}.

Below I am trying to write down a proof based on the ideas you provided:

Proof: Consider the preorder relation on the commutative monoid H1(G)C1(G)H_1(G) \subseteq C_1(G) defined as xyx \leq y iff there exists zH1(G)z \in H_1(G) such that y=x+zy=x+z. Now, define a minimal cycle to be a minimal element cminH1(G)c_{min} \in H_1(G) with respect to \leq. Define a set Cyclemin={cH1(G) ⁣:cCycle_{min}= \lbrace c \in H_{1}(G) \colon c is a minimal cycle }\rbrace. The fact that every element of C1(G)C_1(G) is a finite linear combination of the elements of EE with coefficients in N\mathbb{N} guarantees that the set CycleminCycle_{min} is non-empty. It can be observed that if x,yC1(G)x,y \in C_1(G) and x+y=0x+y=0, then x=0x=0 and y=0y=0. Now, say ccc \leq c' and ccc' \leq c. Then, there exist d,eH1(G)d,e \in H_1(G) such that c=c+dc'=c+d and c=c+ec=c' + e. Hence, we have c+c=(c+c)+d+ec+c'=(c+c') +d +e. Since C1(G)=N[E]C_1(G)=\mathbb{N}[E] is cancellative, we have d+e=0d+e=0. Hence, by the said observation, d=0d=0 and e=0e=0. Hence, c=cc=c'. Thus the preorder "\leq" in H1(G)H_1(G) is actually a partial order. Thus, we have the liberty to say cminc_{min} is a minimal cycle if it can not be written as sum of cycles or, more precisely, for any cycle ccminc \leq c_{min}, we have cmin=cc_{min}=c. Thus, the definition of a minimal cycle matches our intuition. Now, we claim H1(G)=N[Cyclemin]H_1(G)=\mathbb{N}[Cycle_{min}]. To see this we take an arbitrary cycle cH1(G)c \in H_{1}(G). If cc is minimal, we are done; otherwise, there exist c1,x1H1(G)c_1 , x_1 \in H_1(G) such that c=c1+x1.c=c_1 + x_1. Now, if c1c_1 and x1x_1 are minimal we are done; otherwise, we repeat the process with respect to c1c_1 or/and x1x_1.
Since cc is a finite linear combination of the elements of EE with coefficients in N\mathbb{N}, this process will end in a finite number of steps with a representation of the form c=α1cmin1+α2cmin2++αkcminkc= \alpha_1c^{1}_{min} + \alpha_2c^{2}_{min} + \cdots + \alpha_{k} c^{k}_{min}, where αi,kN\alpha_{i},k \in \mathbb{N} and each cminic^{i}_{min} is a minimal cycle for i=1,2,,ki =1,2, \ldots, k. Hence, H1(G)=N[Cyclemin]H_1(G)=\mathbb{N}[Cycle_{min}].
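The finitely-terminating peeling process in this proof sketch can be mirrored in code; here is my own greedy sketch, which repeatedly subtracts a minimal cycle lying below the given cycle:

```python
from collections import Counter

def peel(c, minimal_cycles):
    """Greedily write the cycle c as an N-linear combination of minimal cycles,
    mirroring the finitely-terminating process in the proof sketch above."""
    decomposition = Counter()
    c = Counter(c)
    progress = True
    while c and progress:
        progress = False
        for m in minimal_cycles:
            if all(c.get(e, 0) >= a for e, a in m.items()):   # m <= c coefficientwise
                c -= m                                        # subtract one copy of m
                decomposition[tuple(sorted(m.items()))] += 1
                progress = True
                break
    return decomposition, c   # leftover c is empty iff the peeling succeeded

# Triangle v1 -> v2 -> v3 -> v1: the only minimal cycle is e1 + e2 + e3.
m = Counter({"e1": 1, "e2": 1, "e3": 1})
dec, rest = peel(Counter({"e1": 2, "e2": 2, "e3": 2}), [m])
print(dec)    # the minimal cycle, taken twice
print(rest)   # empty Counter: nothing left over
```

As the surrounding discussion notes, this only shows the minimal cycles generate; freeness (uniqueness of the decomposition) needs a separate argument.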

view this post on Zulip John Baez (Apr 15 2025 at 16:16):

I'm reading your proof - thanks very much for writing it up here.

Thus, we have the liberty to say cminc_{min} is a minimal cycle if it can not be written as sum of cycles or, more precisely, for any cycle ccminc \leq c_{min}, we have cmin=cc_{min}=c.

This needs a little correction, since 00 is a cycle:

Thus, we have the liberty to say cminc_{min} is a minimal cycle if it can not be written as sum of two cycles which are both nonzero, or, more precisely, for any nonzero cycle ccminc \leq c_{min}, we have cmin=cc_{min}=c.

view this post on Zulip John Baez (Apr 15 2025 at 16:17):

(It's a lot like how a prime number is not a product of two natural numbers that are both >1\gt 1.)

view this post on Zulip Adittya Chaudhuri (Apr 15 2025 at 16:18):

Thanks, yes, I understand your point.

view this post on Zulip John Baez (Apr 15 2025 at 16:21):

What you've proved is that every cycle is a sum of nn minimal cycles for some n0n \ge 0. (The sum of 00 minimal cycles is 00.)

You have not yet shown that every cycle can be uniquely expressed as a sum of $n$ minimal cycles. That's what we need to show that $H_1(G)$ is the free commutative monoid on the set of minimal cycles, i.e.

$$H_1(G) \cong \mathbb{N}[\text{Cycle}_{\text{min}}]$$

view this post on Zulip John Baez (Apr 15 2025 at 16:21):

In other words, you've shown that $H_1(G)$ is generated by the set of minimal cycles, but not yet that it's freely generated.

view this post on Zulip John Baez (Apr 15 2025 at 16:23):

To show that it's freely generated we'll need to use more special features of $H_1(G)$. I suspect we'll need to use tricks similar to those used in proving Lemma 4 in Topological crystals.

view this post on Zulip John Baez (Apr 15 2025 at 16:24):

It's worth proving that $H_1(G)$ is freely generated by the set of minimal cycles, because this gives a complete description of $H_1(G)$, i.e. an isomorphism theorem

$$H_1(G) \cong \mathbb{N}[\text{Cycle}_{\text{min}}]$$

view this post on Zulip Adittya Chaudhuri (Apr 15 2025 at 16:24):

Thanks. I understand your point. Yes, I will look at the Lemma 3 in your paper.

view this post on Zulip John Baez (Apr 15 2025 at 16:26):

That's good, but separately we should start thinking about how we would prove that every cycle is uniquely a sum of minimal cycles. Suppose a cycle can be written in two different ways as a sum of minimal cycles. What do we do now, to get a contradiction? I guess we first subtract off all minimal cycles that appear in both sides of the equation.

view this post on Zulip John Baez (Apr 15 2025 at 16:28):

When we're done we get

$$m_1 c_1 + \cdots + m_k c_k = n_1 d_1 + \cdots + n_h d_h$$

where the $c_i$ and $d_j$ are minimal cycles and none of the $c_i$ equal any of the $d_j$.

view this post on Zulip Adittya Chaudhuri (Apr 15 2025 at 16:28):

Yes.

view this post on Zulip John Baez (Apr 15 2025 at 17:08):

I think at this point in the proof it may be useful to note that the free commutative monoid on $n$ generators ($\mathbb{N}^n$) has a natural injection into the free abelian group on $n$ generators ($\mathbb{Z}^n$), so our $C_1(G)$ becomes a submonoid of the usual $C_1(G,\mathbb{Z})$. This allows us to subtract, and it becomes sufficient to show your minimal cycles freely generate $Z_1(G, \mathbb{Z})$.

view this post on Zulip John Baez (Apr 15 2025 at 17:08):

We were here:

When we're done we get

$$m_1 c_1 + \cdots + m_k c_k = n_1 d_1 + \cdots + n_h d_h$$

where the $c_i$ and $d_j$ are minimal cycles and none of the $c_i$ equal any of the $d_j$.

view this post on Zulip John Baez (Apr 15 2025 at 17:12):

But now we can subtract, and we can see it's sufficient to show:

If $c_1, \dots, c_k$ are distinct minimal cycles and

$$n_1 c_1 + \cdots + n_k c_k = 0$$

for some integers $n_i$, then all these integers are zero.

view this post on Zulip Adittya Chaudhuri (Apr 15 2025 at 17:14):

John Baez said:

When we're done we get

$$m_1 c_1 + \cdots + m_k c_k = n_1 d_1 + \cdots + n_h d_h$$

where the $c_i$ and $d_j$ are minimal cycles and none of the $c_i$ equal any of the $d_j$.

To show the above, isn't the "minimality" enough? Do we really need to go to $\mathbb{Z}$? I mean, a minimal cycle cannot be written as a sum of cycles. Then how can the coefficients (attached to the same minimal cycles) differ in the LHS and RHS?

view this post on Zulip John Baez (Apr 15 2025 at 17:15):

Adittya Chaudhuri said:

Thanks. I understand your point. Yes, I will look at the Lemma 3 in your paper.

Sorry, I meant Lemma 4.

view this post on Zulip Adittya Chaudhuri (Apr 15 2025 at 17:28):

John Baez said:

But now we can subtract, and we can see it's sufficient to show:

If $c_1, \dots, c_k$ are distinct minimal cycles and

$$n_1 c_1 + \cdots + n_k c_k = 0$$

for some integers $n_i$, then all these integers are zero.

I am assuming you meant the use of $\mathbb{Z}$ in this step but not in the previous one.

view this post on Zulip John Baez (Apr 15 2025 at 17:49):

Since the homology monoid is injectively mapped to the homology group, an equation between elements of the homology monoid holds iff the corresponding equation holds in the homology group.

view this post on Zulip John Baez (Apr 15 2025 at 17:53):

But yes, in this equation the coefficients are natural numbers:

When we're done we get

$$m_1 c_1 + \cdots + m_k c_k = n_1 d_1 + \cdots + n_h d_h$$

where the $c_i$ and $d_j$ are minimal cycles and none of the $c_i$ equal any of the $d_j$.


view this post on Zulip John Baez (Apr 15 2025 at 20:27):

There's a lot more to say, but here's one thing. We've been talking about 'minimal cycles' defined in a purely algebraic way, i.e. cycles that aren't sums of other cycles in a nontrivial way. The paper Topological Crystals instead takes a more geometrical approach: it talks about 'simple loops', which are loops of edges which visit each vertex at most once.

Just by looking at examples, I believe these are the same thing. But we haven't proved this. We could try to prove it.

The advantage of the algebraic definition is that it becomes rather easy to prove every cycle is a sum of minimal cycles.

Lemma 4 in Topological Crystals works for undirected graphs, but in that context it shows that every cycle is a sum of (cycles coming from) simple loops. This takes some work! The argument is mainly due to Greg Egan. It's a good example of how we can take advantage of the graph structure and think geometrically, essentially giving an algorithm to find a simple loop hiding inside any cycle. We can then subtract off this simple loop, and we get a new cycle whose support is contained in that of the original cycle. Then we repeat, and this must eventually terminate.

view this post on Zulip John Baez (Apr 15 2025 at 20:33):

Anyway, I believe we will need to really work and think in terms of graphs to prove this conjecture, which I foolishly called a theorem:

Conjecture. If $G$ is a finite directed graph, its first homology monoid $H_1(G) = Z_1(G)$ is the free commutative monoid on the set of minimal cycles (or alternatively: simple loops).

view this post on Zulip John Baez (Apr 15 2025 at 20:34):

That's really two conjectures, and proving either one would be fine for now, though it would be even nicer to prove that minimal cycles are the same as simple loops, since then we'd know the 'algebraic' and 'geometrical' approaches are the same!

Anyway, I think we really need to get our hands dirty and work here, more like graph theorists than category theorists.

view this post on Zulip Adittya Chaudhuri (Apr 16 2025 at 04:53):

(deleted)

view this post on Zulip Adittya Chaudhuri (Apr 16 2025 at 04:56):

John Baez said:

That's really two conjectures, and proving either one would be fine for now, though it would be even nicer to prove that minimal cycles are the same as simple loops, since then we'd know the 'algebraic' and 'geometrical' approaches are the same!

Anyway, I think we really need to get our hands dirty and work here, more like graph theorists than category theorists.

Thank you!! Yes, I also feel that to align our "graph theoretic intuition" with our "algebraic definitions" it is necessary to see how a cycle $c \in H_1(G)$ is related to a graph theoretic cycle. For that, first I want to define a graph theoretic cycle of a presheaf graph $G = [E \rightrightarrows V]$.

Definition:
A graph theoretic cycle of a graph $G := ((E,V), s,t \colon E \to V)$ is defined as a finite sequence $\lbrace x_{i} \rbrace^{n}_{i=1}$ of edges satisfying the following properties:

$t(x_1) = s(x_2), \ldots, t(x_{n-1}) = s(x_n)$ and $t(x_n) = s(x_1)$.

Definition:
Given a graph theoretic cycle $\lbrace x_{i} \rbrace^{n}_{i=1}$ of a graph $G := ((E,V), s,t \colon E \to V)$, we define a graphical cycle as $x_1 + x_2 + \cdots + x_{n} \in C_1(G)$.

Now, to see the relation between our intuition and your homology theory, I think the first step is to prove the following lemma:

Lemma
Given a graph $G := ((E,V), s,t \colon E \to V)$ and a graph theoretic cycle $\lbrace x_{i} \rbrace^{n}_{i=1}$ we have the following:

a) The graphical cycle $x_1 + x_2 + \cdots + x_{n}$ is an element of $H_1(G)$.
b) For any element $c = \alpha_1 e_1 + \alpha_2 e_2 + \cdots + \alpha_{k} e_{k}$ of $H_1(G)$, there exists a set of graph theoretic cycles $\lbrace x_{i} \rbrace^{n}_{i=1}$ such that each $x_{i}$ is an element of the set $\lbrace e_1, e_2, \ldots, e_{k} \rbrace$.

(a) follows from $t(x_1) = s(x_2), \ldots, t(x_{n-1}) = s(x_n)$ and $t(x_n) = s(x_1)$.

Next we define a set $Cycle_{gr} := \lbrace \lbrace x_{i} \rbrace^{n}_{i=1} \colon \lbrace x_{i} \rbrace^{n}_{i=1} \text{ is a graph theoretic cycle} \rbrace$. Then, (a) allows us to define a function $\mathcal{F} \colon Cycle_{gr} \to H_1(G)$, $\lbrace x_{i} \rbrace^{n}_{i=1} \mapsto x_1 + x_2 + \cdots + x_{n}$. Since $H_{1}(G)$ is a submonoid of the free commutative monoid $C_1(G)$, the function $\mathcal{F}$ is injective.
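These definitions can be sketched concretely. A minimal illustration (my own encoding; the tuple representation and function names are assumptions, not from the paper):

```python
from collections import Counter

# Hypothetical encoding: an edge is a tuple (source, target, name), and a
# graph theoretic cycle is a list of edges with consecutive targets and
# sources matching, wrapping around at the end.
def is_graph_theoretic_cycle(xs):
    return len(xs) > 0 and all(
        xs[i][1] == xs[(i + 1) % len(xs)][0] for i in range(len(xs))
    )

def graphical_cycle(xs):
    """The induced element x_1 + ... + x_n of C_1(G), as a multiset of edge names."""
    return Counter(x[2] for x in xs)

tri = [(0, 1, "a"), (1, 2, "b"), (2, 0, "c")]
assert is_graph_theoretic_cycle(tri)
assert graphical_cycle(tri) == Counter({"a": 1, "b": 1, "c": 1})
assert not is_graph_theoretic_cycle(tri[:2])   # open path, not a cycle
```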

However, it feels like, if your definition of homology is correct, then proving (b) amounts to extracting/characterising the information of the space from the information in the homology groups of the space, and I am not sure how much information we can recover if we do not talk about homotopy in our graphs. I am not sure!!

view this post on Zulip Adittya Chaudhuri (Apr 16 2025 at 05:16):

Next, maybe we can define a simple loop (along the lines of your definition) in terms of our new language as follows:

Definition:

Given a graph $G := ((E,V), s,t \colon E \to V)$, a simple loop is a graphical cycle $x_1 + x_2 + \cdots + x_{n}$ induced from a graph theoretic cycle $\lbrace x_{i} \rbrace^{n}_{i=1}$, such that the vertices $s(x_1), s(x_2), \ldots, s(x_n)$ are pairwise distinct.

view this post on Zulip Adittya Chaudhuri (Apr 16 2025 at 05:24):

@John Baez Now, I think to show that you have proposed only one conjecture, not two, we may need to prove the following Proposition (via the correspondence described above):

Proposition:

Given a graph $G := ((E,V), s,t \colon E \to V)$, there is a bijection between the set of simple loops and the set of minimal cycles.

Or, maybe a weaker version:

Given a graph $G := ((E,V), s,t \colon E \to V)$, there is an injective function from the set of simple loops to the set of minimal cycles.

view this post on Zulip Adittya Chaudhuri (Apr 16 2025 at 08:26):

John Baez said:

Conjecture. If $G$ is a finite directed graph, its first homology monoid $H_1(G) = Z_1(G)$ is the free commutative monoid on the set of minimal cycles (or alternatively: simple loops).

@Morgan Rogers (he/him) produced (possibly) a counter-example here #theory: mathematics > Submonoids of free commutative monoids @ 💬
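Whatever the exact counterexample there, the failure of freeness is easy to exhibit concretely. Below is my own candidate (an assumption; it may or may not match the linked example): two vertices joined by two parallel edges in each direction. All four two-edge round trips are minimal cycles, yet they satisfy a nontrivial relation:

```python
from collections import Counter

# Two vertices, two parallel edges in each direction (my own reconstruction;
# the linked counterexample may differ in detail).
EDGES = {"a": (0, 1), "b": (0, 1), "c": (1, 0), "d": (1, 0)}

def is_cycle(chain):
    """s(c) = t(c) as multisets of vertices."""
    src, tgt = Counter(), Counter()
    for e, n in chain.items():
        src[EDGES[e][0]] += n
        tgt[EDGES[e][1]] += n
    return src == tgt

# The four round trips; each is a minimal cycle (no proper nonzero sub-cycle).
ac, ad, bc, bd = (Counter(x) for x in (["a", "c"], ["a", "d"], ["b", "c"], ["b", "d"]))
assert all(is_cycle(x) for x in (ac, ad, bc, bd))

# A nontrivial relation among distinct minimal cycles, so Z_1(G) is
# generated, but not *freely* generated, by them:
assert ac + bd == ad + bc
```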

view this post on Zulip Adittya Chaudhuri (Apr 16 2025 at 10:44):

An observation:

Using the function $\mathcal{F} \colon Cycle_{gr} \to H_1(G)$, $\lbrace x_{i} \rbrace^{n}_{i=1} \mapsto x_1 + x_2 + \cdots + x_{n}$, I think we may talk about topological ordering in terms of your directed homology monoids as follows: a directed graph admits a topological ordering of its vertices if and only if it has no directed cycles, i.e. if and only if $H_1(G)$ is trivial.

The above statement makes sense because every directed graph can be seen as a graph of our sense (presheaf graph).

view this post on Zulip John Baez (Apr 16 2025 at 17:30):

That sounds right. People like to study DAGs, or directed acyclic graphs, and these have trivial first homology monoid. A DAG is good for when you have a bunch of tasks that need to be done, and $a \to b$ means you have to do task $a$ before you do task $b$.

view this post on Zulip John Baez (Apr 16 2025 at 17:32):

The theorem you mention says that for a DAG there's some way you can order the tasks so that you can do them in that order.
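That ordering can be found with Kahn's algorithm. A sketch (my own code and naming, not from the paper) that also shows how a single feedback loop destroys the ordering:

```python
from collections import deque

def topological_order(vertices, edges):
    """Kahn's algorithm: return a topological order of the vertices,
    or None if the graph has a directed cycle (a feedback loop)."""
    vertices = list(vertices)
    indeg = {v: 0 for v in vertices}
    out = {v: [] for v in vertices}
    for s, t in edges:
        out[s].append(t)
        indeg[t] += 1
    queue = deque(v for v in vertices if indeg[v] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in out[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    # If some vertices were never freed, they sit on a directed cycle.
    return order if len(order) == len(vertices) else None

dag = [(0, 1), (0, 2), (1, 3), (2, 3)]
assert topological_order(range(4), dag) == [0, 1, 2, 3]
# One feedback loop 3 -> 0 makes the first homology nontrivial
# and destroys the ordering:
assert topological_order(range(4), dag + [(3, 0)]) is None
```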

view this post on Zulip John Baez (Apr 16 2025 at 17:33):

I'm not very interested in DAGs right now because I'm interested in 'causal loop diagrams' and their generalizations, where the focus is on feedback loops, i.e. directed cycles.

view this post on Zulip John Baez (Apr 16 2025 at 17:34):

I think I see how to prove this:

Proposition:

Given a graph $G := ((E,V), s,t \colon E \to V)$, there is a bijection between the set of simple loops and the set of minimal cycles.

and I think it's still somewhat interesting even though my conjecture was false.

view this post on Zulip John Baez (Apr 16 2025 at 17:41):

I'm not quite sure how the paper should go. Maybe we should define the first homology monoid, show that it's generated by minimal cycles (your result), and then prove these correspond to simple loops (I hope I can do this). We can include an example of a graph whose first homology monoid is not freely generated by the minimal cycles.

view this post on Zulip Adittya Chaudhuri (Apr 16 2025 at 18:33):

John Baez said:

I'm not very interested in DAGs right now because I'm interested in 'causal loop diagrams' and their generalizations, where the focus is on feedback loops, i.e. directed cycles.

Yes, I understand your point. Technically, as you explained, we are mostly interested in the case when $H_1(G)$ is not trivial. However, topological ordering happens only when $H_1(G)$ is trivial.

view this post on Zulip Adittya Chaudhuri (Apr 16 2025 at 18:37):

John Baez said:

I think I see how to prove this:

Proposition:

Given a graph $G := ((E,V), s,t \colon E \to V)$, there is a bijection between the set of simple loops and the set of minimal cycles.

and I think it's still somewhat interesting even though my conjecture was false.

Yes, I think then, we can relate the ideas of our paper with your paper on Topological crystals. Also, I think, as a result, it would be easier for us to imagine minimal cycles concretely.

view this post on Zulip Adittya Chaudhuri (Apr 16 2025 at 18:51):

John Baez said:

I'm not quite sure how the paper should go. Maybe we should define the first homology monoid, show that it's generated by minimal cycles (your result), and then prove these correspond to simple loops (I hope I can do this). We can include an example of a graph whose first homology monoid is not freely generated by the minimal cycles.

Sounds good.

Since you said earlier that our paper can also be seen as an exploration of directed homology and homotopy of directed multigraphs in degree 1, I was wondering whether we can say something about the homotopy invariance of your directed 1st homology monoid (because this is true in the usual case). Since you said the definition of a [[directed topological space]] is still not completely standard, as there are multiple contenders, I was thinking about the following:

Step 1:
Defining a suitable (useful for practical purposes) notion of directed homotopy equivalence between presheaf graphs.

Step 2:
To show that your notion of directed homology monoid of a presheaf graph is invariant under directed homotopy equivalence between presheaf graphs.

Maybe I am not making much sense. But, today, I was thinking about these.

view this post on Zulip John Baez (Apr 16 2025 at 19:07):

I'd like to focus on developing tools that will be useful for our applications. I think we agreed that the end of the paper will explain how our work is useful in:

In all of these we can find and explain good examples of

I think this will be quite nice. The homotopy theory of directed graphs sounds like it should be in a different paper - a paper published in a journal that's good for homotopy theory, not Compositionality. A good paper tells a clear story, so it shouldn't consist of "everything we happened to have thought about so far".

It would be cool to study the homotopy theory of directed graphs... but you're trying to get a job in math applied to systems biology, right? If so, it pays to be strategic about the papers you write.

view this post on Zulip Adittya Chaudhuri (Apr 16 2025 at 19:26):

John Baez said:

I'd like to focus on developing tools that will be useful for our applications. I think we agreed that the end of the paper will explain how our work is useful in:

In all of these we can find and explain good examples of

I think this will be quite nice.

Yes, I completely agree. It is also telling a nice and clear story.

view this post on Zulip Adittya Chaudhuri (Apr 16 2025 at 19:33):

John Baez said:

The homotopy theory of directed graphs sounds like it should be in a different paper - a paper published in a journal that's good for homotopy theory, not Compositionality.

Yes, I fully understand your point.

view this post on Zulip Adittya Chaudhuri (Apr 16 2025 at 19:35):

John Baez said:

It would be cool to study the homotopy theory of directed graphs.. but you're trying to get a job in math applied to systems biology, right? If so, it pays to be strategic about the papers you write.

Yes true!! I understand the point you made.

view this post on Zulip Adittya Chaudhuri (Apr 16 2025 at 19:38):

John Baez said:

I'd like to focus on developing tools that will be useful for our applications. I think we agreed that the end of the paper will explain how our work is useful in:

In all of these we can find and explain good examples of

I think this will be quite nice. The homotopy theory of directed graphs sounds like it should be in a different paper - a paper published in a journal that's good for homotopy theory, not Compositionality. A good paper tells a clear story, so it shouldn't consist of "everything we happened to have thought about so far".

It would be cool to study the homotopy theory of directed graphs.. but you're trying to get a job in math applied to systems biology, right? If so, it pays to be strategic about the papers you write.

Thank you so much!! I completely agree with all of your points. I am very grateful for your guidance!! I like this line a lot: "A good paper tells a clear story, so it shouldn't consist of 'everything we happened to have thought about so far'." It teaches me many things.

view this post on Zulip John Baez (Apr 16 2025 at 20:21):

Great! One good thing about this philosophy is that it lets you build up a supply of ideas for papers which you can write later. That way, you'll eventually have a big list of papers you can write, and whenever you want to write a paper you can choose the best one - where 'best' might mean most fun, or best for getting your next job, or whatever you want.

view this post on Zulip Adittya Chaudhuri (Apr 17 2025 at 02:20):

John Baez said:

Great! One good thing about this philosophy is that it lets you build up a supply of ideas for papers which you can write later. That way, you'll eventually have a big list of papers you can write, and whenever you want to write a paper you can choose the best one - where 'best' might mean most fun, or best for getting your next job, or whatever you want.

Thank you so much!! Yes, it is a very nice as well as a very helpful point of view. I will follow this principle!!

view this post on Zulip Adittya Chaudhuri (Apr 17 2025 at 02:47):

John Baez said:

I'm not quite sure how the paper should go. Maybe we should define the first homology monoid, show that it's generated by minimal cycles (your result), and then prove these correspond to simple loops (I hope I can do this). We can include an example of a graph whose first homology monoid is not freely generated by the minimal cycles.

Now that we know that your homology monoid of a directed graph may not be free, I was wondering (from the point of applications) whether it is possible to systematically characterise the graphs $G$ whose homology monoids are free on the minimal cycles.

I am trying to explain below why I said "from the point of applications"

Say we encounter a very big causal loop diagram (provided by domain-specific scientists like biologists, policy makers, epidemiologists, etc.), and we want to study its feedback loops. Then, according to the theory we developed, we may first look at the minimal cycles. Now, if from the nature of the causal loop diagram (by using the proposed characterisation) we know from the beginning that the homology monoid of the underlying directed graph of such a diagram is free on the set of minimal cycles, then we may be able to completely characterise our study of feedback loops in that diagram by focusing only on its minimal cycles.

view this post on Zulip James Deikun (Apr 17 2025 at 03:51):

I think the situation in the reduced counterexample is really the only "forbidden minor", insofar as that concept even applies to this kind of graph. A theta graph for example is fine, you have to have at least two points of intersection in two cycles and there has to be separation on both sides. Coming up with a meaningful way to count or label the violations is harder. Every example I can think of with three cycles reduces to one with two cycles in at least one way, just as a double check. To prove it you might want to formulate some kind of exchange principle, or a way to break minimal cycles into "subatomic particles" that freely generate a minimal free monoid containing the homology monoid.

view this post on Zulip Adittya Chaudhuri (Apr 17 2025 at 05:05):

James Deikun said:

I think the situation in the reduced counterexample is really the only "forbidden minor", insofar as that concept even applies to this kind of graph. A theta graph for example is fine, you have to have at least two points of intersection in two cycles and there has to be separation on both sides. Coming up with a meaningful way to count or label the violations is harder. Every example I can think of with three cycles reduces to one with two cycles in at least one way, just as a double check. To prove it you might want to formulate some kind of exchange principle, or a way to break minimal cycles into "subatomic particles" that freely generate a minimal free monoid containing the homology monoid.

Thank you. I am trying to understand your ideas.

view this post on Zulip John Baez (Apr 18 2025 at 00:57):

Let me try to prove there's a bijective correspondence between simple loops and minimal cycles.

Recall the framework. A graph $G$ is a set $V$ of vertices, a set $E$ of edges and source and target maps $s, t: E \to V$. We're working with a homology monoid, so we take $C_0(G)$ to be the free commutative monoid on the set of vertices and $C_1(G)$ to be the free commutative monoid on the set of edges.

The source and target maps extend to monoid homomorphisms which we give the same names:

$$s, t : C_1(G) \to C_0(G)$$

We define a 1-cycle to be a 1-chain $c$ with

$$s(c) = t(c)$$

We denote the commutative monoid of 1-cycles by $Z_1(G)$. We define the first homology $H_1(G)$ to be $Z_1(G)$ (since in general the first homology consists of 1-cycles mod 1-boundaries, but here the only 1-boundary is zero).

Later I will want to define the zeroth homology, but this is getting long so let's skip it for now.
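A tiny worked example (mine, not from the thread) may help fix these definitions: for a directed triangle, the condition $s(c) = t(c)$ pins down $Z_1(G)$ completely.

```latex
% Directed triangle: e_1 : v_0 \to v_1, \; e_2 : v_1 \to v_2, \; e_3 : v_2 \to v_0.
% A 1-chain c = n_1 e_1 + n_2 e_2 + n_3 e_3 has
s(c) = n_1 v_0 + n_2 v_1 + n_3 v_2, \qquad
t(c) = n_1 v_1 + n_2 v_2 + n_3 v_0.
% Since C_0(G) is free on \{v_0, v_1, v_2\}, the equation s(c) = t(c) forces
n_1 = n_2 = n_3, \qquad \text{so} \qquad Z_1(G) \cong \mathbb{N},
% freely generated by the single minimal cycle e_1 + e_2 + e_3.
```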

There's a preorder on any commutative monoid given by

$x \le y$ iff $y = x + z$ for some $z$

Puzzle. Prove that for $C_1(G)$ this preorder is a partial order.

Puzzle. As a subset of $C_1(G)$, $Z_1(G)$ inherits a partial order. Prove this is the same as the preorder it gets by treating it as a commutative monoid in its own right. In other words, given $x, y \in Z_1(G)$ with

$y = x + z$ for some $z \in C_1(G)$

show that in fact we can take $z \in Z_1(G)$.

view this post on Zulip John Baez (Apr 18 2025 at 00:58):

We say that a 1-cycle $c$ is minimal if it is nonzero and the only 1-cycles $\le c$ are $0$ and $c$ itself.

view this post on Zulip John Baez (Apr 18 2025 at 01:04):

We define an edge path in $G$ to be a finite list of edges such that the target of each edge is the source of the next. In pictures:

$$v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_n} v_n$$

where we allow the degenerate case $n = 0$. Any edge path $\gamma$ gives a 1-chain called $[\gamma]$, namely the sum of its edges:

$$[\gamma] = e_1 + \cdots + e_n \in C_1(G)$$

We say an edge path is a loop if $v_n = v_0$.

Puzzle. Show that an edge path $\gamma$ is a loop if and only if $[\gamma]$ is a 1-cycle.

view this post on Zulip John Baez (Apr 18 2025 at 01:08):

We say a loop is simple if it visits each vertex just once, except that it ends where it starts. More precisely,

$$v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_n} v_n = v_0$$

is a simple loop if all the vertices $v_1, \dots, v_n$ are distinct.

view this post on Zulip John Baez (Apr 18 2025 at 01:14):

Claim 1. If $\gamma$ is a simple loop then $[\gamma] \in Z_1(G)$ is a minimal cycle.

Claim 2. For each minimal cycle $c$ there exists a simple loop $\gamma$ such that $[\gamma] = c$.

Claim 3. If $\gamma, \gamma'$ are two simple loops with $[\gamma] = [\gamma']$, they differ only by where they start. That is, if $\gamma$ is of the form

$$v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_n} v_n = v_0$$

then $\gamma'$ must be of the form

$$v_k \xrightarrow{e_{k+1}} v_{k+1} \xrightarrow{e_{k+2}} \cdots \xrightarrow{e_n} v_n = v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_k} v_k$$

view this post on Zulip John Baez (Apr 18 2025 at 01:17):

So, we get a bijection between minimal cycles and equivalence classes of simple loops, where two simple loops are equivalent if they differ only by where they start!

view this post on Zulip John Baez (Apr 18 2025 at 01:29):

Let's start with Claim 1, which I believe is the easiest.

Proof of Claim 1. Suppose $\gamma$ is a simple loop, say

$$v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_n} v_n = v_0$$

The corresponding cycle is

$$[\gamma] = \sum_{i=1}^n e_i$$

Any cycle $\le c$ must be a chain $\le c$, and since $C_1(G)$ is free on the set of edges any chain $c' \le c$ must be of the form

$$c' = \sum_{i \in S} e_i$$

where $S \subseteq \{1, \dots, n\}$. We have

$$s(c') = \sum_{i \in S} v_{i-1}$$

and

$$t(c') = \sum_{i \in S} v_i$$

If $c'$ is a cycle we must have $s(c') = t(c')$.

But since all the vertices $v_1, \dots, v_n$ are distinct (while $v_0 = v_n$), the two sums above can only be equal if $S$ is all of $\{1, \dots, n\}$, in which case $c' = c$, or $S$ is empty, in which case $c' = 0$. (Note that since $C_0(G)$ is free on the set of vertices, the two sums can only be equal if they are 'visibly' equal - there are no extra relations.) Thus $c$ is minimal. :black_large_square:

view this post on Zulip John Baez (Apr 18 2025 at 01:53):

Now let me try Claim 2, which is where things get interesting.

Claim 2. For each minimal cycle $c$ there exists a simple loop $\gamma$ such that $[\gamma] = c$.

Proof. Note that $C_1(G)$, the free commutative monoid on the set $E$ of edges of our graph, can also be thought of as the collection of [[multisets]] of elements of $E$. This will be useful in what follows.

Let $c$ be a minimal cycle. Think of it as a multiset of edges as above. It's nonempty since $c \ne 0$, so choose an edge in this multiset and call it $v_0 \xrightarrow{e_1} v_1$.

Now there are two cases:

  1. If $v_1 = v_0$ this path consisting of a single edge is a simple loop $\gamma$, so it gives a nonzero cycle $[\gamma] = e_1$, so by minimality of $c$ we must have $c = [\gamma]$ and we are done.

  2. If $v_1 \ne v_0$ then $e_1$ is not a cycle, so $c$ must be a sum of $e_1$ and one or more edges in the multiset $c - e_1$, and at least one of these edges must have source $v_1$, since otherwise it would be impossible to have $s(c) = t(c)$. Choose one such edge and call it $v_1 \xrightarrow{e_2} v_2$.

In the second case we get a path

$$v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} v_2$$

Now there are three cases:

  1. If $v_2 = v_0$ then this path is a simple loop $\gamma$, so it gives a nonzero cycle $[\gamma]$, so by minimality of $c$ we must have $c = [\gamma]$ and we are done.

  2. If $v_2 = v_1$ then the path $v_1 \xrightarrow{e_2} v_2$ is a simple loop $\gamma$, so it gives a nonzero cycle $[\gamma]$, so by minimality of $c$ we must have $c = [\gamma]$ and we are done.

  3. If $v_0, v_1, v_2$ are distinct then $e_1 + e_2$ is not a cycle, so $c$ must be a sum of $e_1 + e_2$ and one or more edges in the multiset $c - e_1 - e_2$, and at least one of these edges must have source $v_2$, since otherwise it would be impossible to have $s(c) = t(c)$. Choose one such edge and call it $v_2 \xrightarrow{e_3} v_3$.

And so on. I could write this more formally, but I hope the pattern is clear.

Since $c$ is a finite multiset of edges this process must eventually terminate: i.e., eventually the vertex $v_n$ must equal one of the earlier vertices $v_0, \dots, v_{n-1}$. If it equals $v_k$, then

$$v_k \xrightarrow{e_{k+1}} v_{k+1} \xrightarrow{e_{k+2}} \cdots \xrightarrow{e_n} v_n$$

is a simple loop, so

$$e_{k+1} + \cdots + e_n$$

is a nonzero cycle, and by minimality of $c$ this must equal $c$.

Thus, we have found a simple loop $\gamma$ with $[\gamma] = c$. :black_large_square:

view this post on Zulip Adittya Chaudhuri (Apr 18 2025 at 05:42):

Thank you!! I am now trying to understand your proof.

view this post on Zulip John Baez (Apr 18 2025 at 06:04):

Feel free to ask questions. Claim 2 is the hard one. In our paper I would write up a more formal inductive proof; here I am outlining the first few steps of the induction and hoping the pattern becomes clear.

view this post on Zulip John Baez (Apr 18 2025 at 06:06):

You'll notice that the proof of Claim 2 is closely related to the proof of Lemma 4 in Topological Crystals, but it's simpler.

view this post on Zulip Adittya Chaudhuri (Apr 18 2025 at 06:07):

John Baez said:

Feel free to ask questions. Claim 2 is the hard one. In our paper I would write up a more formal inductive proof; here I am outlining the first few steps of the induction and hoping the pattern becomes clear.

Thank you so much. I am reading your proof.

view this post on Zulip John Baez (Apr 18 2025 at 06:36):

The proof is really an algorithm; it may be helpful to draw a graph and a minimal cycle in this graph, and carry out the algorithm.
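One way to carry out that algorithm in code. This is my own sketch (the function and variable names, and the `Counter`-as-multiset encoding, are assumptions): walk forward through the multiset until a vertex repeats, then keep the stretch between the two visits.

```python
from collections import Counter

def extract_simple_loop(chain, edges):
    """Extract a simple loop from a nonzero cycle.

    `chain` is a Counter of edge ids, with edges[e] = (source, target).
    We walk forward until a vertex repeats; the stretch between the two
    visits of that vertex is a simple loop.  For a *minimal* cycle this
    recovers the whole cycle, as in the proof of Claim 2."""
    remaining = Counter(chain)
    e0 = next(e for e in remaining if remaining[e] > 0)  # any edge of c
    path = [e0]                                # edges walked so far
    visited = [edges[e0][0], edges[e0][1]]     # vertices, in order
    remaining[e0] -= 1
    while visited[-1] not in visited[:-1]:
        v = visited[-1]
        # s(c) = t(c) guarantees some unused edge leaves v
        e = next(e for e in remaining if remaining[e] > 0 and edges[e][0] == v)
        path.append(e)
        visited.append(edges[e][1])
        remaining[e] -= 1
    k = visited.index(visited[-1])             # where the loop closes up
    return path[k:]

# A directed triangle, given as a minimal cycle:
EDGES = {"a": (0, 1), "b": (1, 2), "c": (2, 0)}
triangle = Counter({"a": 1, "b": 1, "c": 1})
assert Counter(extract_simple_loop(triangle, EDGES)) == triangle
```

For a non-minimal cycle the same walk still finds *some* simple loop inside it, which can then be subtracted off and the process repeated, as in the Topological Crystals argument.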

view this post on Zulip Adittya Chaudhuri (Apr 18 2025 at 11:52):

Thank you!! I think I understood the proof. Although the proof of Claim 2 is more complex than the proof of Claim 1, I find both proofs beautiful and interesting.

I feel the current definition of a minimal cycle that you wrote - "We say that a 1-cycle $c$ is minimal if the only 1-cycle $\le c$ is zero" - is a more workable definition if we look at the proof structure of your Claim 2.

Regarding Claim 1:

In the proof of Claim 1, I like how both the facts $C_{1}(G) = \mathbb{N}[E]$ and $c' \leq [\gamma]$ for $\gamma = v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_n} v_n = v_0$ are together needed for writing $c' = \sum_{i \in S} e_i$, where $S \subseteq \{1, \dots, n\}$, which I think is crucial in the proof of Claim 1.

Regarding Claim 2:
I like how at every step it uses the minimality of $c$ to check whether to move to the next step or not, and uses the cyclicity $s(c) = t(c)$ to ensure that we can indeed move to the next step in a legitimate way; finally, seeing an element of $\mathbb{N}[E]$ as a finite multiset ensures that the algorithm works, as it ends in a finite number of steps at our desired simple loop $\gamma$ such that $[\gamma] = c$.

view this post on Zulip Adittya Chaudhuri (Apr 18 2025 at 11:58):

I think the proof of Claim 3 should directly follow from the fact $C_1(G) = \mathbb{N}[E]$ and the definition of $[\gamma]$ when $\gamma = v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_n} v_n = v_0$.

view this post on Zulip Adittya Chaudhuri (Apr 18 2025 at 12:32):

Now, I think using your claim 3, it is possible to define an equivalence relation \sim on Sloops(G)S_{loops}(G), the set of simple loops in GG, and then we can denote the quotient set by Sloops(G)/S_{loops}(G)/ \sim. Now, if we denote the set of all minimal cycles of GG by Cyclemin(G)Cycle_{min}(G), then your claims 1, 2 and 3 ensure a bijection F ⁣:(Sloops(G)/)Cyclemin(G)\mathcal{F} \colon (S_{loops}(G)/ \sim) \to Cycle_{min}(G) defined by γˉ[γ]\bar{\gamma} \mapsto [\gamma], where γˉ\bar{\gamma} is the equivalence class of γ\gamma.

Then, I think F\mathcal{F} ensures an unambiguous way of imagining minimal cycles both from the point of graph theory and from the point of homology theory of directed topological spaces.
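If \sim identifies two simple loops exactly when they have the same associated 1-cycle, then for simple loops it should amount to cyclically rotating the starting point. A small Python sketch of choosing a representative of each class (the name `canonical_rotation` is made up):

```python
def canonical_rotation(loop):
    """Pick a representative of a simple loop up to cyclic rotation.

    `loop` is a list of directed edges [(v0, v1), (v1, v2), ..., (vn-1, v0)].
    Two simple loops determine the same 1-cycle exactly when one is a
    rotation of the other, so the lexicographically least rotation serves
    as a canonical representative of the class in S_loops(G)/~.
    """
    n = len(loop)
    return min(loop[i:] + loop[:i] for i in range(n))
```

Any two rotations of the same simple loop then map to the same representative, mirroring the bijection F\mathcal{F}.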

view this post on Zulip John Baez (Apr 18 2025 at 13:56):

Right! We get a bijection F\mathcal{F} and thus a precise link between the graph-theoretic and the homological way of thinking about minimal cycles.

The ability to work with minimal cycles in both ways should be important. I think it makes precise something that "systems dynamics" researchers already intuit in a rough way (though most are blissfully ignorant of homology theory).

view this post on Zulip John Baez (Apr 18 2025 at 14:16):

I will write up this stuff in our paper.

As we discussed in our private chat, I think our next big step is to figure out how cycles behave when we glue together open graphs, with the help of the Mayer-Vietoris exact sequence.

An open graph can be seen as a cospan of graphs

A1i1Bi2A2 A_1 \xrightarrow{i_1} B \xleftarrow{i_2} A_2

where A1A_1 and A2A_2 are graphs with no edges, just vertices.

(As you know, we can apply the theory of structured cospans whenever we have a left adjoint functor between cocomplete categories. Here this functor is the 'free graph on a set' functor from Set\mathsf{Set} to Gph\mathsf{Gph}, sending each set to the graph with that set of vertices and no edges.)

To apply Mayer-Vietoris it seems easier to assume i1i_1 and i2i_2 are monic. The case where they are not monic can be very important, since sometimes we want to glue together two vertices of the same graph when composing two open graphs. But temporarily I'd like to avoid thinking about it, since then we can think of i1i_1 and i2i_2 as giving inclusions of subspaces: geometrically realizing our graphs to get spaces, we get a cospan of spaces

A1i1Bi2A2 |A_1| \xrightarrow{|i_1|} |B| \xleftarrow{|i_2|} |A_2|

and these are 'nice' inclusions of the sort required for Mayer-Vietoris to apply.

(Recall: the simplest version of Mayer-Vietoris applies when we have an open subspace of a topological space, but a more general version applies whenever we have a closed subspace that is a [[neighborhood retract]]. Whenever we have a graph, viewed as a topological space, any set of vertices in that graph is a closed subspace that's a neighborhood retract.)

Now, you may wonder why I'm starting to talk about topology and a theorem about the singular homology groups of topological spaces, when we are really doing graph theory and studying the directed homology monoids of a graph!

The reason is purely efficiency: we may be able to more quickly guess what's going on if we use existing results from topology, rather than invent our own Mayer-Vietoris theorem for the directed homology monoids of a graph. In the longer run (like tomorrow) maybe we should invent our own Mayer-Vietoris theorem. But first let's see what we can do with the standard one.

view this post on Zulip John Baez (Apr 18 2025 at 14:52):

As you know, when we compose open graphs

A1i1Bi2A2A_1 \xrightarrow{i_1} B \xleftarrow{i_2} A_2

and

A2i3Bi4A3A_2 \xrightarrow{i_3} B' \xleftarrow{i_4} A_3

the key step is to take the pushout of the diagram

Bi2A2i3B B \xleftarrow{i_2} A_2 \xrightarrow{i_3} B'

Here we are gluing together two graphs BB and BB' along A2A_2, which is a graph consisting of just vertices.

Let's assume for now that i2i_2 and i3i_3 are monic. Then Mayer-Vietoris becomes relevant! Taking geometric realizations we get

Bi2A2i3B |B| \xleftarrow{|i_2|} |A_2| \xrightarrow{|i_3|} |B'|

and we can think of B|B| and B|B'| as two spaces whose intersection BB|B| \cap |B'| is the discrete set A2|A_2|. The pushout of this diagram can thus be identified with BB|B| \cup |B'|.
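When the legs are monic and the two graphs overlap only in the shared vertices, this pushout can be computed naively as a union. A Python sketch (my own encoding, not from the paper: a graph is a pair of a vertex set and a set of directed edges):

```python
def glue_graphs(B, Bp, shared):
    """Pushout of graphs B and B' along a common set of vertices.

    `shared` plays the role of A2.  We assume both inclusions are monic
    and that the vertex sets of B and B' overlap exactly in `shared`,
    so gluing just identifies the shared vertices by name and takes unions.
    """
    (VB, EB), (VBp, EBp) = B, Bp
    assert shared <= VB and shared <= VBp, "A2 must embed in both graphs"
    assert VB & VBp == shared, "graphs may only overlap in A2"
    return (VB | VBp, EB | EBp)
```

Composing open graphs then amounts to applying `glue_graphs` to their apex graphs along the shared foot.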

view this post on Zulip John Baez (Apr 18 2025 at 14:59):

My notation is getting unwieldy here so let's write

X=B X = |B|
Y=B Y = |B'|

We then get the Mayer-Vietoris exact sequence:

H1(XY)H1(X)H1(Y)H1(XY) H_1(X \cap Y) \to H_1(X) \oplus H_1(Y) \to H_1(X \cup Y) \xrightarrow{\partial}
H0(XY)H0(X)H0(Y)H0(XY) H_0(X \cap Y) \to H_0(X) \oplus H_0(Y) \to H_0(X \cup Y)

where \partial is the famous 'boundary' map, but since XYX \cap Y is a discrete space this becomes

0H1(X)H1(Y)H1(XY) 0 \to H_1(X) \oplus H_1(Y) \to H_1(X \cup Y) \xrightarrow{\partial}
H0(XY)H0(X)H0(Y)H0(XY) H_0(X \cap Y) \to H_0(X) \oplus H_0(Y) \to H_0(X \cup Y)

view this post on Zulip John Baez (Apr 18 2025 at 15:03):

My main interest is in 'emergent feedback loops': how new 1-cycles can appear when we glue together two graphs, which weren't present in either graph. So I'm interested in how the map

H1(X)H1(Y)H1(XY) H_1(X) \oplus H_1(Y) \to H_1(X \cup Y)

can fail to be an isomorphism! And we see this happens precisely when the map \partial fails to equal zero.
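Even without a directed Mayer-Vietoris sequence, one can already detect emergent loops by comparing ranks: for a graph, the rank of H1(,Z)H_1(-,\mathbb{Z}) is the first Betti number EV+(number of components)E - V + (\text{number of components}), independent of edge directions. A Python sketch (my own, with a graph again encoded as a vertex set plus a set of directed edge pairs):

```python
def betti_1(vertices, edges):
    """First Betti number of a graph: the rank of H_1(G, Z).

    For a graph viewed as a 1-dimensional CW complex this is
    |E| - |V| + (number of connected components), and it does not
    depend on which way the edges point.
    """
    # union-find with path halving, to count components (directions ignored)
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edges:
        parent[find(u)] = find(w)
    components = len({find(v) for v in vertices})
    return len(edges) - len(vertices) + components
```

Gluing two trees into a triangle takes the Betti numbers from 0+00 + 0 to 11, so the map H1(X)H1(Y)H1(XY)H_1(X) \oplus H_1(Y) \to H_1(X \cup Y) fails to be an isomorphism and \partial must be nonzero there.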

view this post on Zulip Adittya Chaudhuri (Apr 18 2025 at 18:33):

Thanks!! These ideas look extremely interesting, though I may need more time to absorb them. I have always been excited about these "emergent feedback loops" that can appear after gluing causal loop diagrams. But I never imagined, until today, that homology theory could play such an interesting and powerful role in identifying them. It is really exciting!! Thanks a lot!!! I am learning many things!!

emergentfeedbackloop.PNG
For example, If we glue a branch graph (red) and an incoherent feedforward loop(blue) along the graph {u,v}\lbrace u,v \rbrace with no edges, then we get a positive feedback loop and a negative feedback loop in the pushout graph. In fact, the pushout graph is itself has a special name called overlapping feedforward loop in the biochemical reaction network literature (see table 2 (number 20) of Functional Motifs in Biochemical Reaction Networks. In the attached file, both H1(B)H_1(B) and H1(B)H_1(B') are trivial but interestingly, H1(BB)H_1(B \cup B') is not!!..So, the boundary map \partial must not be 0 as you explained at the end. However, I think, in this particular case, one may also say H1(B)H1(B)H1(BB)H_1(B) \oplus H_1(B') \to H_1(B \cup B') is not an isomorphism from the fact that H1(B)H_1(B) and H1(B)H_1(B') are trivial but H1(BB)H_1(B \cup B') is not.

view this post on Zulip Adittya Chaudhuri (Apr 18 2025 at 18:45):

John Baez said:

Now, you may wonder why I'm starting to talk about topology and a theorem about the singular homology groups of topological spaces, when we are really doing graph theory and studying the directed homology monoids of a graph!

The reason is purely efficiency: we may be able to more quickly guess what's going on if we use existing results from topology, rather than invent our own Mayer-Vietoris theorem for the directed homology monoids of a graph. In the longer run (like tomorrow) maybe we should invent our own Mayer-Vietoris theorem. But first let's see what we can do with the standard one.

Thanks!! I fully understand and agree with your point. By using the standard Mayer-Vietoris sequence on the "geometric realisation" version, it is now clear, at least, what we need to achieve in an appropriate way!!

view this post on Zulip Adittya Chaudhuri (Apr 18 2025 at 18:53):

Thanks again for these beautiful ideas!! I am now trying to see how we can use these ideas to construct its analogue for our case (homology monoids of directed graphs).

view this post on Zulip Adittya Chaudhuri (Apr 18 2025 at 18:55):

John Baez said:

Right! We get a bijection F\mathcal{F} and thus a precise link between the graph-theoretic and the homological way of thinking about minimal cycles.

The ability to work with minimal cycles in both ways should be important. I think it makes precise something that "systems dynamics" researchers already intuit in a rough way (though most are blissfully ignorant of homology theory).

Thanks.. Yes, I fully agree!!

view this post on Zulip John Baez (Apr 18 2025 at 19:48):

Adittya Chaudhuri said:

emergentfeedbackloop.PNG
For example, if we glue a branch graph (red) and an incoherent feedforward loop (blue) along the graph {u,v}\lbrace u,v \rbrace with no edges, then we get a positive feedback loop and a negative feedback loop in the pushout graph.

We don't get a feedback loop in this picture, because the pushout graph doesn't have a directed loop (which is the kind of loop my theorems are about): you can't walk around any loop while following the direction of the arrows.

But if you turn around the edge from ww to vv, making it into an edge from vv to ww, then the pushout graph does have a directed loop. In fact it has two.

view this post on Zulip Adittya Chaudhuri (Apr 18 2025 at 19:49):

Yes, of course. Sorry!! I meant ww to vv; I have drawn it wrongly!!

view this post on Zulip John Baez (Apr 18 2025 at 19:51):

Okay, no problem. Of course the usual homology group H1(G,Z)H_1(G,\mathbb{Z}) doesn't care which way the edges point, so your example is already good if we are studying that.

view this post on Zulip John Baez (Apr 18 2025 at 19:52):

By the way, I think in the paper I'll call our new homology monoids Hi(G,N)H_i(G,\mathbb{N}), and the old homology groups Hi(G,Z)H_i(G,\mathbb{Z}). We can define Hi(G,L)H_i(G,L) for any commutative monoid LL.

view this post on Zulip Adittya Chaudhuri (Apr 18 2025 at 19:52):

Yes!! Thanks. I understand your point. I was actually experimenting with some of the pictures I had drawn yesterday for Example 2.10.

view this post on Zulip Adittya Chaudhuri (Apr 18 2025 at 19:56):

Another thing: I was thinking, can we use the notation H1(G,N)\vec{H}_1(G, \mathbb{N}) instead of H1(G,N)H_1(G, \mathbb{N})? That way, we may emphasise that we consider our directed graphs as directed topological spaces rather than ordinary topological spaces.

view this post on Zulip Adittya Chaudhuri (Apr 18 2025 at 19:59):

In fact, you were initially using this notation before as here #theory: applied category theory > Graphs with polarities @ 💬

view this post on Zulip Adittya Chaudhuri (Apr 18 2025 at 20:23):

I am trying to sketch an approach (based on the ideas you stated) for extending these to the case of homology monoids of directed graphs:

1) For any morphism ϕ ⁣:GG\phi \colon G \to G' of graphs I think there is an induced morphism ϕ ⁣:H1(G,N)H1(G,N)\phi_{*} \colon H_1(G,\mathbb{N}) \to H_1(G', \mathbb{N}).

2) Now, we have structured cospans L(A)iGoL(B)L(A) \xrightarrow{i} G \xleftarrow{o} L(B) and L(B)iGoL(C)L(B) \xrightarrow{i'} G' \xleftarrow{o'} L(C). We can compose them to get another structured cospan L(A)pr1iG+o,L(B),iGpr2oL(C)L(A) \xrightarrow{pr_1 \circ i} G +_{o,L(B),i'} G' \xleftarrow{pr_2 \circ o'} L(C), where pr1pr_1 and pr2pr_2 are the natural maps to the pushout graph. Now, we apply (1) to these cospans to get induced maps on the homology monoids. Then, I think, we may be able to construct morphisms of commutative monoids H1(L(B),N)H1(G,N)H1(G,N)H1(G+o,L(B),iG,N) H_1(L(B), \mathbb{N}) \to H_1(G, \mathbb{N}) \oplus H_1(G', \mathbb{N}) \to H_1(G +_{o,L(B),i'} G', \mathbb{N}) , which becomes 0H1(G,N)H1(G,N)H1(G+o,L(B),iG,N)0 \to H_1(G, \mathbb{N}) \oplus H_1(G', \mathbb{N}) \to H_1(G +_{o,L(B),i'} G', \mathbb{N}).

3) Construction of boundary map δ ⁣:H1(G+o,L(B),iG,N)H0(L(B),N)\delta \colon H_1(G +_{o,L(B),i'} G', \mathbb{N}) \to H_0(L(B), \mathbb{N}).

Maybe I am misunderstanding something !! I will think about these more!!

view this post on Zulip John Baez (Apr 18 2025 at 20:34):

Adittya Chaudhuri said:

Another thing , I was thinking "can we use the notation H1(G,N)\vec{H}_1(G, \mathbb{N}) instead of H1(G,N)H_1(G, \mathbb{N})"? In that way, we may emphasise that we consider our directed graphs as directed topological spaces but not the usual topological spaces.

I'm thinking a bit differently now: for a graph we can define what you're calling H1(G,L)\vec{H}_1(G,L) for any commutative monoid LL, but when LL is an abelian group this is isomorphic to the usual undirected H1(G,L)H_1(G,L).

So right now I feel we don't need an extra arrow to indicate directedness: the fact that the coefficients are a commutative monoid requires directedness, but when the coefficients are an abelian group the homology becomes independent of which way the edges point.

view this post on Zulip Adittya Chaudhuri (Apr 18 2025 at 20:35):

Thanks. I understand your point.

view this post on Zulip John Baez (Apr 19 2025 at 01:07):

I'm trying to understand Mayer-Vietoris for graphs, and in particular the all-important boundary map

:H1(XY)H0(XY)\partial: H_1(X \cup Y) \to H_0(X \cap Y)

in a really concrete way when XX and YY are subgraphs of a graph XYX \cup Y, and XYX \cap Y is just a set of vertices. I realize I don't understand this map as vividly as I'd like.

I first learned homology theory in a course using William Massey's book Singular Homology Theory, and section III.3 is called "Homology of finite graphs". But it doesn't help me that much since it's mostly about computing the homology of a graph.

view this post on Zulip Adittya Chaudhuri (Apr 19 2025 at 15:53):

John Baez said:

I'm trying to understand Mayer-Vietoris for graphs, and in particular the all-important boundary map

:H1(XY)H0(XY)\partial: H_1(X \cup Y) \to H_0(X \cap Y)

in a really concrete way when XX and YY are subgraphs of a graph XYX \cup Y, and XYX \cap Y is just a set of vertices. I realize I don't understand this map as vividly as I'd like.

I first learned homology theory in a course using William Massey's book Singular Homology Theory, and section III.3 is called "Homology of finite graphs". But it doesn't help me that much since it's mostly about computing the homology of a graph.

I realised I also do not understand it concretely. I am working on it (trying to understand it in the way you mentioned).

view this post on Zulip John Baez (Apr 19 2025 at 17:59):

It may work like this. Suppose XX and YY are subgraphs of a graph XYX \cup Y and XYX \cap Y has no edges, only vertices.

Let cH1(XY,Z)=Z1(XY,Z)c \in H_1(X \cup Y, \mathbb{Z}) = Z_1(X \cup Y , \mathbb{Z}) be a cycle. Write cc as a linear combination of edges of XX, say cXc_X, plus a linear combination of edges of YY. Let dcXd c_X be the boundary of cXc_X. This is a linear combination of vertices in XYX \cap Y, so it's an element of H0(XY,Z)H_0(X \cap Y, \mathbb{Z}). Call this c\partial c. This defines a map

:H1(XY,Z)H0(XY,Z) \partial : H_1(X \cup Y, \mathbb{Z}) \to H_0(X \cap Y, \mathbb{Z})

I haven't proved that this works, this is just based on my intuitions about homology theory! It should be possible to check that this gives an exact sequence, the Mayer-Vietoris sequence.

More formally: given cH1(XY,Z)=Z1(XY,Z)c \in H_1(X \cup Y, \mathbb{Z}) = Z_1(X \cup Y , \mathbb{Z}), write

c=iniei,dc=0 c = \sum_i n_i e_i , \qquad dc = 0

where eie_i are edges of XYX \cup Y and niZn_i \in \mathbb{Z}. We can uniquely write cc as

c=cX+cY c = c_X + c_Y

where

cX=i:ei is an edge of Xniei c_X = \sum_{i : e_i \text{ is an edge of } X} n_i e_i

cY=i:ei is an edge of Yniei c_Y = \sum_{i : e_i \text{ is an edge of } Y} n_i e_i

Define

c=dcX \partial c = dc_X

(I was confused for a while about why we choose to define c\partial c to be dcXdc_X instead of dcY=dcXdc_Y = -dc_X, but I've decided there's an arbitrary choice of sign in the definition of \partial.)
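This recipe for \partial is easy to compute directly. A Python sketch with Z\mathbb{Z} coefficients (my own names, not from the paper; a 1-chain is a map from directed edges to integer coefficients, and edges not in XX are assumed to lie in YY):

```python
from collections import Counter

def mv_boundary(c, edges_X):
    """Mayer-Vietoris boundary map from H_1(X ∪ Y, Z) to H_0(X ∩ Y, Z).

    `c` maps directed edges (u, v) to integer coefficients and is assumed
    to be a 1-cycle on X ∪ Y; `edges_X` is the edge set of the subgraph X.
    Following the definition above, ∂c = d(c_X): restrict c to the edges
    of X and take the ordinary boundary d(e) = t(e) - s(e).
    """
    boundary = Counter()
    for (u, v), n in c.items():
        if (u, v) in edges_X:        # keep only the X-part c_X of c
            boundary[v] += n         # target contributes +n
            boundary[u] -= n         # source contributes -n
    # drop zero coefficients; the support lies in the vertices of X ∩ Y
    return {w: k for w, k in boundary.items() if k != 0}
```

For the triangle split as cX=(uw)+(wv)c_X = (u \to w) + (w \to v) and cY=(vu)c_Y = (v \to u), this returns vuv - u, supported on the shared vertices, as expected.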

view this post on Zulip Adittya Chaudhuri (Apr 20 2025 at 04:03):

John Baez said:

It may work like this. Suppose XX and YY are subgraphs of a graph XYX \cup Y and XYX \cap Y has no edges, only vertices.

Let cH1(XY,Z)=Z1(XY,Z)c \in H_1(X \cup Y, \mathbb{Z}) = Z_1(X \cup Y , \mathbb{Z}) be a cycle. Write cc as a linear combination of edges of XX, say cXc_X, plus a linear combination of edges of YY. Let dcXd c_X be the boundary of cXc_X. This is a linear combination of vertices in XYX \cap Y, so it's an element of H0(XY,Z)H_0(X \cap Y, \mathbb{Z}). Call this c\partial c. This defines a map

:H1(XY,Z)H0(XY,Z) \partial : H_1(X \cup Y, \mathbb{Z}) \to H_0(X \cap Y, \mathbb{Z})

I haven't proved that this works, this is just based on my intuitions about homology theory! It should be possible to check that this gives an exact sequence, the Mayer-Vietoris sequence.

More formally: given cH1(XY,Z)=Z1(XY,Z)c \in H_1(X \cup Y, \mathbb{Z}) = Z_1(X \cup Y , \mathbb{Z}), write

c=iniei,dc=0 c = \sum_i n_i e_i , \qquad dc = 0

where eie_i are edges of XYX \cup Y and niZn_i \in \mathbb{Z}. We can uniquely write cc as

c=cX+cY c = c_X + c_Y

where

cX=i:ei is an edge of Xniei c_X = \sum_{i : e_i \text{ is an edge of } X} n_i e_i

cY=i:ei is an edge of Yniei c_Y = \sum_{i : e_i \text{ is an edge of } Y} n_i e_i

Define

c=dcX \partial c = dc_X

(I was confused for a while about why we choose to define c\partial c to be dcXdc_X instead of dcY=dcXdc_Y = -dc_X, but I've decided there's an arbitrary choice of sign in the definition of \partial.)

Thanks!! Yes, I think your prescription should work, as it uses the same principle as barycentric subdivision to represent a cycle cc in XYX \cup Y as a sum of an XX-1-chain and a YY-1-chain, and then uses the definition of a cycle (as an element of the kernel) together with the fact that we can take negative coefficients to conclude that dcXdc_{X} is indeed an element of H0(XY,Z)H_{0}(X \cap Y, \mathbb{Z}).

Since yesterday, I have been trying to think a bit graph-theoretically about how to create an analogue for directed graphs, i.e. for coefficients coming from N\mathbb{N}.

I will try to write down what I am thinking, though some (or a lot of) work might still be needed to make things concrete. [It is also very possible that I am making fundamental mistakes (ignoring technical obstructions) while thinking graph-theoretically.]

We need to construct/understand :H1(XY,N)H0(XY,N) \partial : H_1(X \cup Y, \mathbb{N}) \to H_0(X \cap Y, \mathbb{N})

Since we have already established

1) minimal loops generate the commutative monoid of 1-cycles in directed graphs and
2) a bijection between the set of minimal cycles and the set of simple loops in directed graphs

I think we may not lose much if we consider only cycles of the form [γ]=e1+e2++en[\gamma]=e_1 + e_2 + \cdots + e_{n}, where γ=v0e1v1e2envn=v0\gamma=v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_n} v_n = v_0, and [γ][\gamma] is an element of H1(XY,N)=Z1(XY,N)H_1(X \cup Y, \mathbb{N})=Z_1(X \cup Y, \mathbb{N}) and is a minimal cycle.

Now, I think there are only two cases: [I think this needs a proof]

a) [γ][\gamma] is itself a minimal cycle/simple loop in XX or a minimal cycle/simple loop in YY
b) [γ][\gamma] is made up of two edge paths (not cycles) γX\gamma_{X} and γY\gamma_{Y} in XX and YY respectively.

I think in case (b)(b), when we cut [γ][\gamma] into γX\gamma_{X} and γY\gamma_{Y} in XX and YY, we obtain distinguished vertices: the starting and ending vertices of γX\gamma_{X}, and the starting and ending vertices of γY\gamma_{Y}.

Now, I want to define :H1(XY,N)H0(XY,N) \partial : H_1(X \cup Y, \mathbb{N}) \to H_0(X \cap Y, \mathbb{N}) as follows:

For case (a): [γ]0H0(XY,N)[\gamma] \mapsto 0 \in H_0(X \cap Y, \mathbb{N})

For case (b): [γ][\gamma] \mapsto starting vertex of γX+\gamma_{X} + ending vertex of γX+\gamma_{X} + starting vertex of γY+\gamma_{Y} + ending vertex of γY\gamma_{Y}. From the construction, I think we can show that this sum is an element of H0(XY,N)H_0(X \cap Y, \mathbb{N}). Of course, we need to define H0(XY,N)H_0(X \cap Y, \mathbb{N}) appropriately.

Also, I think from the fact that [γ][\gamma] is a minimal cycle, we can simplify the definition in case (b).

But I feel that the above definition aligns with what you said before, namely that the non-vanishing of the boundary map can be seen as a determining factor for the existence of emergent feedback loops when we glue graphs along vertices.

I was thinking in terms of the attached example
emergeentloops.PNG

view this post on Zulip Adittya Chaudhuri (Apr 20 2025 at 09:45):

I just found a counter-example to my previous claim:

"Now, I think there are only two cases: [I think this needs a proof]

a) [γ][\gamma] is itself a minimal cycle/simple loop in XX or a minimal cycle/simple loop in YY
b) [γ][\gamma] is made up of two edge paths (not cycles) γX\gamma_{X} and γY\gamma_{Y} in XX and YY respectively. "

In the attachment I constructed the counterexample:
counterexample(b).PNG Here the unique minimal cycle in XYX \cup Y cannot be decomposed into an edge path in XX and an edge path in YY.

Hence, my definition of :H1(XY,N)H0(XY,N) \partial : H_1(X \cup Y, \mathbb{N}) \to H_0(X \cap Y, \mathbb{N}) is not correct.

view this post on Zulip Adittya Chaudhuri (Apr 20 2025 at 10:40):

However, I think the following may work:

Consider [γ]=e1+e2++en[\gamma]=e_1 + e_2 + \cdots + e_{n}, where γ=v0e1v1e2envn=v0\gamma=v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_n} v_n = v_0, and [γ][\gamma] is an element of H1(XY,N)=Z1(XY,N)H_1(X \cup Y, \mathbb{N})=Z_1(X \cup Y, \mathbb{N}) and is a minimal cycle.

Now, I claim the following:

Lemma:
Every edge path λ=u0f1u1f2fnun\lambda=u_0 \xrightarrow{f_1} u_1 \xrightarrow{f_2} \cdots \xrightarrow{f_n} u_n in the graph XYX \cup Y can either be decomposed uniquely into a finite collection of maximal edge paths {γXi}i\lbrace \gamma_{{X}_{i}} \rbrace_{i} in XX and a finite collection of maximal edge paths {γYj}j\lbrace \gamma_{{Y}_{j}} \rbrace_{j} in YY, or else λ\lambda is itself a maximal edge path in XX or YY.

I define a maximal edge path in a graph GG as an edge path λ\lambda in GG such that there does not exist any edge ee in GG such that s(e)=t(λ)s(e)= t(\lambda) and t(e)=s(λ)t(e)=s(\lambda).

Now, assuming the above lemma is correct, I reformulate my previous claim (using the same notation as before) as follows:

Now, I think there are only two distinct cases:

a) [γ][\gamma] is itself a minimal cycle/simple loop in XX or a minimal cycle/simple loop in YY
b) [γ][\gamma] is made up, uniquely, of a finite collection of maximal edge paths {γXi}i\lbrace \gamma_{{X}_{i}} \rbrace_{i} in XX and a finite collection of maximal edge paths {γYj}j\lbrace \gamma_{{Y}_{j}} \rbrace_{j} in YY.

Now, I am redefining :H1(XY,N)H0(XY,N) \partial : H_1(X \cup Y, \mathbb{N}) \to H_0(X \cap Y, \mathbb{N}) as follows:

For case (a): [γ]0H0(XY,N)[\gamma] \mapsto 0 \in H_0(X \cap Y, \mathbb{N})

For case (b): [γ]is(γXi)+[\gamma] \mapsto \sum_{i} s(\gamma_{{X}_{i}}) + it(γXi)+ \sum_{i} t(\gamma_{{X}_{i}}) + js(γYj)+\sum_{j} s(\gamma_{{Y}_{j}}) + jt(γYj)\sum_{j} t(\gamma_{{Y}_{j}})

Since γ\gamma is a cycle, I think it will follow that is(γXi)+it(γXi)+js(γYj)+jt(γYj)H0(XY,N)\sum_{i} s(\gamma_{{X}_{i}}) + \sum_{i} t(\gamma_{{X}_{i}}) + \sum_{j} s(\gamma_{{Y}_{j}}) + \sum_{j} t(\gamma_{{Y}_{j}}) \in H_0(X \cap Y, \mathbb{N}), by the construction in the Lemma. Of course, we need to define H0(XY,N)H_0(X \cap Y, \mathbb{N}) appropriately. Note that \partial is well defined because of the uniqueness condition in the Lemma.

Still, maybe there are some mistakes in this approach that I have yet to find!!

I am restating (what I stated before) for completeness:

I feel the above definition of \partial aligns with what you said before, namely that the non-vanishing of the boundary map can be seen as a determining factor for the existence of emergent feedback loops when we glue graphs along vertices.

Of course, after this we need to make sure \partial makes sense from the point of view of the Mayer-Vietoris exact sequence.
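The Lemma's decomposition can be sketched as a single scan over the edges of the path (Python, my own names, not from the paper; edges not in XX are assumed to lie in YY). For a closed loop one would additionally merge the first and last runs when they carry the same label:

```python
def decompose_path(path, edges_X):
    """Decompose an edge path in X ∪ Y into maximal runs of edges in X
    and maximal runs of edges in Y, in order of traversal.

    `path` is a list of directed edges [(v0, v1), (v1, v2), ...]; `edges_X`
    is the edge set of X.  Returns a list of (label, segment) pairs with
    label 'X' or 'Y', each segment a maximal run with that label.
    """
    segments = []
    for e in path:
        label = 'X' if e in edges_X else 'Y'
        if segments and segments[-1][0] == label:
            segments[-1][1].append(e)       # extend the current maximal run
        else:
            segments.append((label, [e]))   # start a new maximal run
    return segments
```

The endpoints of the runs are exactly the distinguished vertices fed into the candidate definition of \partial, and they land in XYX \cap Y.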

view this post on Zulip John Baez (Apr 20 2025 at 19:13):

Adittya Chaudhuri said:

Lemma:
Every edge path λ=u0f1u1f2fnun\lambda=u_0 \xrightarrow{f_1} u_1 \xrightarrow{f_2} \cdots \xrightarrow{f_n} u_n in the graph XYX \cup Y can be either decomposed uniquely into finite collection of maximal edge paths {γXi}i\lbrace \gamma_{{X}_{i}} \rbrace_{i} in XX and finite collection of maximal edge paths {γYj}j\lbrace \gamma_{{Y}_{j}} \rbrace_{j} in YY or else, λ\lambda itself is a maximal edge path in XX or YY.

This lemma must be correct, and I'd like to state it more simply like this:

Lemma:
Every edge path λ=u0f1u1f2fnun\lambda=u_0 \xrightarrow{f_1} u_1 \xrightarrow{f_2} \cdots \xrightarrow{f_n} u_n in the graph XYX \cup Y can be decomposed uniquely into a finite collection of maximal edge paths {γXi}i\lbrace \gamma_{{X}_{i}} \rbrace_{i} in XX and a finite collection of maximal edge paths {γYj}j\lbrace \gamma_{{Y}_{j}} \rbrace_{j} in YY.

Since an empty collection is still a finite collection, this includes the case where the edge path λ\lambda stays in XX (then the collection of maximal edge paths in YY will be empty) and the case where λ\lambda stays in YY (then the collection of maximal edge paths in XX will be empty).

view this post on Zulip John Baez (Apr 20 2025 at 19:16):

However, I don't like this formula:

it(γXi)+\sum_{i} t(\gamma_{{X}_{i}}) + js(γYj)+\sum_{j} s(\gamma_{{Y}_{j}}) + jt(γYj)\sum_{j} t(\gamma_{{Y}_{j}})

You seem to be using s(γXi)+t(γXi)s(\gamma_{X_i}) + t(\gamma_{X_i}) as a substitute for the formula we'd use in homology with Z\mathbb{Z} coefficients, namely

d(γXi)=t(γXi)s(γXi) d(\gamma_{X_i}) = t(\gamma_{X_i}) - s(\gamma_{X_i})

I don't think replacing subtraction by addition is a good way to deal with the fact that N\mathbb{N} doesn't have subtraction.

view this post on Zulip Adittya Chaudhuri (Apr 21 2025 at 04:20):

John Baez said:


Adittya Chaudhuri said:

However, I think the following may work:

Conisder [γ]=e1+e2+en[\gamma]=e_1 + e_2 + \cdots e_{n}, where γ=v0e1v1e2envn=v0\gamma=v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_n} v_n = v_0, and [γ][\gamma] is an element of H1(XY,N)=Z1(XY,N)H_1(X \cup Y, \mathbb{N})=Z_1(X \cup Y, \mathbb{N}) and is a minimal cycle.

Now, I claim the following:

Lemma:
Every edge path λ=u0f1u1f2fnun\lambda=u_0 \xrightarrow{f_1} u_1 \xrightarrow{f_2} \cdots \xrightarrow{f_n} u_n in the graph XYX \cup Y can either be decomposed uniquely into a finite collection of maximal edge paths {γXi}i\lbrace \gamma_{{X}_{i}} \rbrace_{i} in XX and a finite collection of maximal edge paths {γYj}j\lbrace \gamma_{{Y}_{j}} \rbrace_{j} in YY, or else λ\lambda is itself a maximal edge path in XX or YY.

This lemma must be correct, and I'd like to state it more simply like this:

Lemma:
Every edge path λ=u0f1u1f2fnun\lambda=u_0 \xrightarrow{f_1} u_1 \xrightarrow{f_2} \cdots \xrightarrow{f_n} u_n in the graph XYX \cup Y can be decomposed uniquely into a finite collection of maximal edge paths {γXi}i\lbrace \gamma_{{X}_{i}} \rbrace_{i} in XX and a finite collection of maximal edge paths {γYj}j\lbrace \gamma_{{Y}_{j}} \rbrace_{j} in YY.

Since an empty collection is still a finite collection, this includes the case where the edge path λ\lambda stays in XX (then the collection of maximal edge paths in YY will be empty) and the case where λ\lambda stays in YY (then the collection of maximal edge paths in XX will be empty).

Thank you. Yes, I understand your point.

view this post on Zulip Adittya Chaudhuri (Apr 21 2025 at 04:42):

John Baez said:

However, I don't like this formula:

it(γXi)+\sum_{i} t(\gamma_{{X}_{i}}) + js(γYj)+\sum_{j} s(\gamma_{{Y}_{j}}) + jt(γYj)\sum_{j} t(\gamma_{{Y}_{j}})

You seem to be using s(γXi)+t(γXi)s(\gamma_{X_i}) + t(\gamma_{X_i}) as a substitute for the formula we'd use in homology with Z\mathbb{Z} coefficients, namely

d(γXi)=t(γXi)s(γXi) d(\gamma_{X_i}) = t(\gamma_{X_i}) - s(\gamma_{X_i})

I don't think replacing subtraction by addition is a good way to deal with the fact that N\mathbb{N} doesn't have subtraction.

Yes, I agree it is not the right way to deal with the lack of inverses in N\mathbb{N}. In the usual setup (coefficients in Z\mathbb{Z}), cycles are the elements of the kernel of a boundary map δ\delta. From the "alternating sum" definition of the boundary map (target minus source for 1-chains), one can conclude that the boundary of any cycle is 00. But in our case (coefficients in N\mathbb{N}), we do not have the privilege of defining alternating sums. You defined a cycle as an element cC1(G)c \in C_1(G) such that s(c)=t(c)s(c)=t(c); however, we are yet to define a cycle in terms of the kernel of a boundary map. The definition of a boundary map was not necessary till now since, technically, the right definition of C2(G)C_2(G) should make it trivial, because a graph is not expected to have any non-degenerate 2-chains. Hence, we had no difficulty in defining H1(G)Z1(G)H_1(G) \cong Z_1(G).

However, I think that now, to deal with this lack of inverses in N\mathbb{N} when constructing the right definition of  ⁣:H1(XY)H0(XY)\partial \colon H_1(X \cup Y) \to H_0(X \cap Y), we need to see cycles as elements of the kernel of an appropriate boundary map. I am not sure, but I think this may be a better motivation for defining an appropriate boundary map in the case of coefficients in N\mathbb{N}. I am not fully sure, but could "congruence relations on a monoid" play a good role here (in the way you previously discussed here #theory: applied category theory > Graphs with polarities @ 💬)?

view this post on Zulip John Baez (Apr 21 2025 at 05:22):

When doing homology with coefficients in N\mathbb{N} instead of Z\mathbb{Z}, I believe that instead of defining the space of cycles Z1(X)Z_1(X) as the kernel of some map dd, we should define it as the equalizer of two maps ss and tt.

When working with abelian groups the equalizer of two maps ff and gg is the kernel of fgf - g, so all equalizers can be expressed as kernels. This is not true when working with commutative monoids, since we can't subtract. So, we need to rephrase exact sequences in a different way.
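Concretely, with coefficients in N\mathbb{N} the cycle condition s(c)=t(c)s(c) = t(c) can be checked without any subtraction, which is exactly the equalizer point of view. A minimal Python sketch (my own names, not from the paper):

```python
from collections import Counter

def is_cycle_N(chain):
    """Decide whether an N-linear combination of edges is a 1-cycle.

    With coefficients in N we cannot form d = t - s, so instead of asking
    for c in ker(d) we ask for c in the equalizer of the two monoid maps
    s, t from C_1(G, N) to C_0(G, N): c is a cycle iff s(c) = t(c) as
    0-chains.  `chain` maps directed edges (u, v) to coefficients in N.
    """
    sources, targets = Counter(), Counter()
    for (u, v), n in chain.items():
        if n > 0:                 # ignore zero coefficients
            sources[u] += n       # s sends an edge to its source vertex
            targets[v] += n       # t sends an edge to its target vertex
    return sources == targets
```

This also suggests how to phrase exactness in general: replace "kernel of a difference" everywhere by "equalizer of a pair of maps".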

view this post on Zulip Adittya Chaudhuri (Apr 21 2025 at 05:37):

Thanks!! Yes, I agree. It is interesting, and in the context of defining exact sequences with coefficients in N\mathbb{N} it really makes sense to replace kernels with equalisers. Does this kind of exact sequence already exist in the literature in other contexts?

view this post on Zulip John Baez (Apr 21 2025 at 05:39):

I think it works like this... this will take several posts to explain!

Suppose XX and YY are subgraphs of a graph XYX \cup Y and the intersection XYX \cap Y has no edges, only vertices.

When working with Z\mathbb{Z} coefficients the Mayer-Vietoris sequence implies that a sequence like this is exact:

H1(X)H1(Y)H1(XY)H0(XY) H_1(X) \oplus H_1(Y) \to H_1(X \cup Y) \xrightarrow{\partial} H_0(X \cap Y)

But H1H_1 is the space of 1-cycles mod 1-boundaries and for a graph the only 1-boundary is zero, so we can rewrite this as

Z1(X)Z1(Y)Z1(XY)H0(XY) Z_1(X) \oplus Z_1(Y) \to Z_1(X \cup Y) \xrightarrow{\partial} H_0(X \cap Y)

What the first map here does is sum a 1-cycle on the subgraph XXYX \subseteq X \cup Y and a 1-cycle on the subgraph YXYY \subseteq X \cup Y to get a 1-cycle on XYX \cup Y.

Digressing a bit: note that every 1-chain in C1(XY)C_1(X \cup Y) is uniquely expressed as a linear combination of edges of XX plus a linear combination of edges of YY, so the summation map

C1(X)C1(Y)C1(XY) C_1(X) \oplus C_1(Y) \to C_1(X \cup Y)

is actually an isomorphism, which we can call

C1(X)C1(Y)ΣC1(XY) C_1(X) \oplus C_1(Y) \xrightarrow{\Sigma} C_1(X \cup Y)

This map restricts to a map sending cycles to cycles,

Z1(X)Z1(Y)ΣZ1(XY) Z_1(X) \oplus Z_1(Y) \xrightarrow{\Sigma} Z_1(X \cup Y)

but this restricted map is not an isomorphism: there are cycles on XYX \cup Y that aren't sums of a cycle on XX and a cycle on YY.

view this post on Zulip John Baez (Apr 21 2025 at 05:41):

Given any cycle cZ1(XY)c \in Z_1(X \cup Y) we write it uniquely as a sum

c=cX+cYc = c_X + c_Y

where cXC1(X)c_X \in C_1(X) and cYC1(Y)c_Y \in C_1(Y), but cXc_X and cYc_Y will not always be cycles.

view this post on Zulip John Baez (Apr 21 2025 at 05:44):

We have dc=0d c = 0, and d=std = s - t where

s,t:C1C0s,t : C_1 \to C_0

are uniquely defined by the property that they map any edge (thought of as a 1-chain) to its source and target (thought of as 0-chains).

view this post on Zulip John Baez (Apr 21 2025 at 05:52):

Now let me remember the definition of \partial that we need to make this sequence exact:

Z1(X)Z1(Y)ΣZ1(XY)H0(XY)Z_1(X) \oplus Z_1(Y) \xrightarrow{\Sigma} Z_1(X \cup Y) \xrightarrow{\partial} H_0(X \cap Y)

First, note that

H0(XY)C0(XY)H_0(X \cap Y) \cong C_0(X \cap Y)

since XYX \cap Y is just a discrete set of points: a graph with vertices but no edges. So we can switch to talking about the sequence

Z1(X)Z1(Y)ΣZ1(XY)C0(XY)Z_1(X) \oplus Z_1(Y) \xrightarrow{\Sigma} Z_1(X \cup Y) \xrightarrow{\partial} C_0(X \cap Y)

In these terms, I claim that

c=dcX(1) \partial c = d c_X \qquad (1)

makes the sequence exact, where this formula is defined using the fact that given any cycle cZ1(XY)c \in Z_1(X \cup Y) we write it uniquely as a sum

c=cX+cYc = c_X + c_Y

where cXC1(X)c_X \in C_1(X) and cYC1(Y)c_Y \in C_1(Y).

First, let's note that the asymmetrical looking formula (1) is equivalent to another formula involving cYc_Y, namely

c=dcY(2) \partial c = - d c_Y \qquad (2)

since dcX=dcY d c_X = - d c_Y, because cc is a cycle, so 0=dc=d(cX+cY) 0 = d c = d(c_X + c_Y) .

view this post on Zulip John Baez (Apr 21 2025 at 05:57):

Next, note that this being exact:

Z1(X)Z1(Y)ΣZ1(XY)C0(XY)Z_1(X) \oplus Z_1(Y) \xrightarrow{\Sigma} Z_1(X \cup Y) \xrightarrow{\partial} C_0(X \cap Y)

is equivalent to Σ\Sigma being the equalizer of ff and gg, where

f,g ⁣:Z1(XY)C0(XY) f, g \colon Z_1(X \cup Y) \to C_0(X \cap Y)

are any maps with

=fg \partial = f - g

Claim. Σ:Z1(X)Z1(Y)Z1(XY)\Sigma : Z_1(X) \oplus Z_1(Y) \to Z_1(X \cup Y) is the equalizer of f,g ⁣:Z1(XY)C0(XY) f, g \colon Z_1(X \cup Y) \to C_0(X \cap Y) if we take fc=scX,gc=tcX fc = sc_X, gc = tc_X .

view this post on Zulip John Baez (Apr 21 2025 at 06:03):

I can prove this later, but note what's nice about this! First, as I just said, it means that if we let =fg\partial = f - g with this choice of ff and gg, we get exactness at this point of the Mayer-Vietoris sequence when we're working with Z\mathbb{Z} coefficients:

Z1(X)Z1(Y)ΣZ1(XY)C0(XY)Z_1(X) \oplus Z_1(Y) \xrightarrow{\Sigma} Z_1(X \cup Y) \xrightarrow{\partial} C_0(X \cap Y)

But second, since ff and gg don't involve subtraction, we have a version of the Mayer-Vietoris sequence that works with N\mathbb{N} coefficients, saying that Σ\Sigma is the equalizer of ff and gg!

In fact, I'll prove the claim working with N\mathbb{N} coefficients, where a 1-cycle is defined as a 1-chain cc for which sc=tcs c = t c.
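Before the proof, here is a brute-force sanity check of the Claim on a tiny hypothetical example (X a single edge a: u → v, Y a single edge b: v → u, glued along the two shared vertices); the encoding of chains as coefficient pairs is my own, not the paper's:

```python
# A 1-chain on X ∪ Y is a pair (m, n) standing for m·a + n·b, where
# a: u -> v lives in X and b: v -> u lives in Y.

def s(m, n):          # source 0-chain, as (coefficient of u, of v)
    return (m, n)     # a starts at u, b starts at v

def t(m, n):          # target 0-chain
    return (n, m)     # a ends at v, b ends at u

BOUND = 5
# 1-cycles on X ∪ Y: chains with s(c) = t(c), which forces m = n.
cycles_XY = [(m, n) for m in range(BOUND) for n in range(BOUND)
             if s(m, n) == t(m, n)]
# f(c) = s(c_X) and g(c) = t(c_X), where c_X = m·a is the X-part of c.
equalized = [(m, n) for m, n in cycles_XY if (m, 0) == (0, m)]
# Z_1(X) = Z_1(Y) = {0}, so the image of Σ is just the zero cycle,
# and that is exactly the equalizer of f and g:
assert cycles_XY == [(m, m) for m in range(BOUND)]
assert equalized == [(0, 0)]
```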

view this post on Zulip John Baez (Apr 21 2025 at 06:12):

Proof of Claim. To prove this, first we show that fΣ=gΣf \Sigma = g \Sigma, i.e.

fΣ(cX,cY)=gΣ(cX,cY)f \Sigma (c_X,c_Y) = g \Sigma (c_X, c_Y)

for all (cX,cY)Z1(X)Z1(Y)(c_X,c_Y) \in Z_1(X) \oplus Z_1(Y). By definition of Σ\Sigma we have

Σ(cX,cY)=cX+cY \Sigma (c_X,c_Y) = c_X + c_Y

and by definition of ff and gg we have

fΣ(cX,cY)=scX,gΣ(cX,cY)=tcX f \Sigma (c_X, c_Y) = sc_X , \qquad g \Sigma (c_X, c_Y) = tc_X

and these are equal since cXc_X is a cycle. :check_mark:

view this post on Zulip John Baez (Apr 21 2025 at 06:26):

Next, we show that if

fc=gc f c = g c

for some cZ1(XY)c \in Z_1(X \cup Y), then cimΣc \in \text{im} \Sigma.

By definition fc=gcf c = g c says

scX=tcX(a) s c_X = t c_X \qquad (a)

but since cc is a cycle we have sc=tcs c = t c, and thus s(cX+cY)=t(cX+cY)s(c_X + c_Y) = t(c_X + c_Y), which together with the above equation implies

scY=tcY(b) s c_Y = t c_Y \qquad (b)

(Here I'm using cancellation in C0(XY)C_0(X \cup Y). It's a cancellative monoid! I'm not using subtraction.)

(a) and (b) say that cXc_X and cYc_Y are 1-cycles! Thus, we have

c=cX+cY=Σ(cX,cY) c = c_X + c_Y = \Sigma (c_X,c_Y)

where (cX,cY)Z1(X)Z1(Y)(c_X, c_Y) \in Z_1(X) \oplus Z_1(Y) . So cimΣc \in \text{im} \Sigma as desired! \qquad :black_large_square:

view this post on Zulip John Baez (Apr 21 2025 at 06:42):

I feel this idea is similar to what you were saying, but it avoids the trouble you were running into with wanting to define d(γXi)=t(γXi)s(γXi)d(\gamma_{X_i}) = t(\gamma_{X_i}) - s(\gamma_{X_i}) but not being able to subtract, by switching from a kernel to an equalizer. It also avoids the need to explicitly break paths into a bunch of smaller paths, by using the fact that any 1-chain on XYX \cup Y can be uniquely broken into cX+cYc_X + c_Y where cXc_X is a 1-chain on XX and cYc_Y is a 1-chain on YY. I feel you had a good intuition for the situation and I'm polishing up what you were trying to say.

view this post on Zulip Tobias Fritz (Apr 21 2025 at 12:51):

Adittya Chaudhuri said:

Thanks!! Yes, I agree. It is interesting, and it really makes sense in the context of defining exact sequences (when the coefficients are in N\mathbb{N}) to replace kernels with equalisers. Does this kind of exact sequence already exist in the literature in other contexts?

Hopefully this isn't too far off base since I haven't followed this thread in detail, but doing homological algebra with commutative monoids reminds me of Homological algebra in characteristic one by Connes and Consani. Although this is about homological algebra over the Boolean semifield (where 1+1=11 + 1 = 1), I think that essentially the same approach should apply to commutative monoids, and perhaps Homology of systemic modules is a good reference for this? I just came across this, so not sure.

view this post on Zulip Tobias Fritz (Apr 21 2025 at 12:52):

Here's my guess as to how this all works: let's define a category where objects are commutative monoids and a morphism LML \to M is a pair (f,g)(f,g) of additive maps f,g:LMf, g : L \to M, to be thought of as a version of the formal difference fgf - g which avoids actually forming the difference. Such pairs compose via the twisted composition (f,g)(f,g):=(ff+gg,fg+gf)(f,g) \circ (f',g') := (ff' + gg', fg' + gf'). Furthermore, call such a morphism "null" if it is of the form (f,f)(f,f).
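A quick executable sketch of this twisted composition, with additive maps between finitely generated free commutative monoids modelled as matrices over N (the matrix model and all names here are my own illustrative assumptions):

```python
# An additive map N^k -> N^m is a matrix with entries in N (encoded as
# a tuple of rows); a morphism is a pair (f_plus, f_minus) standing in
# for the formal difference f_plus - f_minus, never actually formed.

def matmul(a, b):
    """Multiply two matrices with natural-number entries."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(len(b)))
                       for j in range(len(b[0])))
                 for i in range(len(a)))

def matadd(a, b):
    return tuple(tuple(x + y for x, y in zip(ra, rb))
                 for ra, rb in zip(a, b))

def twisted(g_plus, g_minus, f_plus, f_minus):
    """Compose (g+, g-) after (f+, f-) without ever subtracting:
    (g+ f+ + g- f-,  g+ f- + g- f+)."""
    return (matadd(matmul(g_plus, f_plus), matmul(g_minus, f_minus)),
            matadd(matmul(g_plus, f_minus), matmul(g_minus, f_plus)))

def is_null(f_plus, f_minus):
    """A morphism is null when its two components agree."""
    return f_plus == f_minus

# Composing a null morphism with anything yields a null morphism, so
# null morphisms play the role that zero maps play in the abelian case.
f_plus, f_minus = ((1, 2), (0, 3)), ((2, 0), (1, 1))
g = ((1, 1), (2, 0))
assert is_null(*twisted(g, g, f_plus, f_minus))
assert is_null(*twisted(f_plus, f_minus, g, g))
```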

view this post on Zulip Tobias Fritz (Apr 21 2025 at 12:53):

This gives notions of kernel and image: the kernel of (f,g):LM(f,g) : L \to M is the universal morphism into LL whose composite with (f,g)(f,g) is null. There's a dual notion of cokernel. Furthermore, we get an induced notion of image, defined as the kernel of the cokernel projection.

view this post on Zulip Tobias Fritz (Apr 21 2025 at 12:53):

Now consider composable morphisms (f,g):LM(f,g) : L \to M and (f,g):MN(f',g') : M \to N that have a null composite, meaning that ff+gg=fg+gff'f + g'g = f'g + g'f. The homology monoid can now be defined as the kernel of the second morphism modulo the image of the first, where "modulo" is again in the cokernel sense.

view this post on Zulip Tobias Fritz (Apr 21 2025 at 12:53):

All of this is really based on Marco Grandis's elegant and very general approach to non-abelian homological algebra.

view this post on Zulip Tobias Fritz (Apr 21 2025 at 12:53):

Does any of this look like it could be relevant to you?

view this post on Zulip Tobias Fritz (Apr 21 2025 at 12:59):

What I'm not sure about is whether the resulting category is really going to satisfy Grandis's axioms for a "homological category". If not, then there are known ways around this at the expense of having to introduce further bookkeeping. I can say more about this if it turns out to be relevant.

view this post on Zulip Adittya Chaudhuri (Apr 21 2025 at 13:47):

John Baez said:

I feel this idea is similar to what you were saying, but it avoids the trouble you were running into with wanting to define d(γXi)=t(γXi)s(γXi)d(\gamma_{X_i}) = t(\gamma_{X_i}) - s(\gamma_{X_i}) but not being able to subtract, by switching from a kernel to an equalizer. It also avoids the need to explicitly break paths into a bunch of smaller paths, by using the fact that any 1-chain on XYX \cup Y can be uniquely broken into cX+cYc_X + c_Y where cXc_X is a 1-chain on XX and cYc_Y is a 1-chain on YY. I feel you had a good intuition for the situation and I'm polishing up what you were trying to say.

Thank you so much for explaining your ideas in such a detailed and vivid way. I find your ideas very interesting.

Yes, I agree: my approach was to break a minimal cycle in XYX \cup Y uniquely into a "collection of paths in XX" and a "collection of paths in YY", and then I was trying to define \partial in such a way that it maps all the minimal cycles in XX or minimal cycles in YY to 00. However, my approach was failing because I was not able to define \partial in the right way on the minimal cycles in XYX \cup Y which are not made up of minimal cycles in XX and minimal cycles in YY, so that we get an exact sequence.

After your explanation, I realised that although I wanted to do the right thing, I was not using the correct language that expresses what I want in a consistent way. Now, I understand "how the Mayer-Vietoris sequence for a directed graph" is not actually an exact sequence, but very close to it (as we can recover the exact sequence if we work with coefficients in Z\mathbb{Z}). More precisely, your version of Mayer-Vietoris is a generalisation of the usual Mayer-Vietoris and can be framed precisely in the form of the following statement:

Σ:Z1(X)Z1(Y)Z1(XY)\Sigma : Z_1(X) \oplus Z_1(Y) \to Z_1(X \cup Y) is the equalizer of f,g ⁣:Z1(XY)C0(XY) f, g \colon Z_1(X \cup Y) \to C_0(X \cap Y) if we take fc=scX,gc=tcX fc = sc_X, gc = tc_X .

I find it very interesting how "a very realistic idea of finding the existence of emergent feedback loops while glueing directed graphs" naturally leads to a "generalisation" of the Mayer-Vietoris sequence in undirected graphs. More interestingly, this generalisation was necessary to find a right language to phrase the problem of finding emergent feedback loops.

view this post on Zulip Adittya Chaudhuri (Apr 21 2025 at 14:08):

Tobias Fritz said:

Does any of this look like it could be relevant to you?

Thank you so much!! These ideas are very interesting, and I think they are very much related to what we are doing and what John Baez was saying about equalisers. I would definitely be interested in knowing more about these objects. Can you please explain a bit more about the significance of "twisted composition" ?

view this post on Zulip Adittya Chaudhuri (Apr 21 2025 at 14:13):

I understood that if (f,g):LM(f,g) : L \to M and (f,g):MN(f',g') : M \to N are composable and their composite is null, then we have ff+gg=fg+gff'f + g'g = f'g + g'f.

view this post on Zulip Tobias Fritz (Apr 21 2025 at 14:14):

Sure! I realize that I had gotten the order of composition mixed up above and fixed that now.

view this post on Zulip Tobias Fritz (Apr 21 2025 at 14:14):

It's probably more intuitive to write the morphisms as pairs (f+,f)(f_+, f_-) to underline the analogy with the formal difference f+ff_+ - f_-. In this notation, the twisted composition is

(g+,g)(f+,f):=(g+f++gf,g+f+gf+),(g_+, g_-) \circ (f_+, f_-) := (g_+ f_+ + g_- f_-, g_+ f_- + g_- f_+),

corresponding to the two terms in (g+g)(f+f)=(g+f++gf)(g+f+gf+)(g_+ - g_-) \circ (f_+ - f_-) = (g_+ f_+ + g_- f_-) - (g_+ f_- + g_- f_+).

view this post on Zulip Adittya Chaudhuri (Apr 21 2025 at 14:14):

Tobias Fritz said:

Sure! I realize that I had gotten the order of composition mixed up above and fixed that now.

Thanks!!

view this post on Zulip Adittya Chaudhuri (Apr 21 2025 at 14:22):

Tobias Fritz said:

It's probably more intuitive to write the morphisms as pairs (f+,f)(f_+, f_-) to underline the analogy with the formal difference f+ff_+ - f_-. In this notation, the twisted composition is

(g+,g)(f+,f):=(g+f++gf,g+f+gf+),(g_+, g_-) \circ (f_+, f_-) := (g_+ f_+ + g_- f_-, g_+ f_- + g_- f_+),

corresponding to the two terms in (g+g)(f+f)=(g+f++gf)(g+f+gf+)(g_+ - g_-) \circ (f_+ - f_-) = (g_+ f_+ + g_- f_-) - (g_+ f_- + g_- f_+).

Thanks!! I think you are constructing a category where "the desired (fg)(f-g)" can be written as a morphism (f,g)(f,g) in your category, so that we can avoid the situation of "existence of inverse elements". Am I understanding correctly?

view this post on Zulip Tobias Fritz (Apr 21 2025 at 14:26):

Exactly, that's the idea! And furthermore, if you consistently work with pairs like that, then there is an established machinery for homological algebra that applies. You can just turn the crank, and you'll get well-behaved concepts of chain complex, homology objects, connecting maps and even long exact sequences, provided that Grandis's axioms are satisfied. (I'm not sure if they are; in case that they're not, then there's still a workaround that I can explain.)

view this post on Zulip Adittya Chaudhuri (Apr 21 2025 at 14:27):

Thanks!! Sounds very interesting!!

view this post on Zulip Tobias Fritz (Apr 21 2025 at 14:28):

(And I wouldn't want to call it "my" category because those pairs and twisted composition are already considered in the papers I've linked to.)

view this post on Zulip Adittya Chaudhuri (Apr 21 2025 at 16:08):

@John Baez According to what I understand: The failure of the map Σ:Z1(X)Z1(Y)Z1(XY)\Sigma : Z_1(X) \oplus Z_1(Y) \to Z_1(X \cup Y) to be an isomorphism is the criterion for the existence of an emergent feedback loop in the "composite/glued" directed graphs. However, technically, this is a mathematical formulation of the "idea of emergence in the framework of causal loop diagrams". I think the Mayer-Vietoris of directed graphs (in terms of equalisers) elegantly describes this very same thing: the equaliser of ff and gg is an isomorphism if and only if f=gf=g.

My questions are the following:

1) What are some nice criteria (other than f=gf=g, because to know f=gf=g, I think we precisely need to know whether there exists any cycle in XYX \cup Y which is not made up of a cycle in XX and a cycle in YY) which would imply Σ\Sigma is not an isomorphism?

2) Can we give a measure of how much Σ\Sigma is far from being an isomorphism?

If I am not misunderstanding, then I feel each of these questions may be relevant from the point of view of realistic applications in systems dynamics or systems biology.

view this post on Zulip John Baez (Apr 21 2025 at 17:36):

Thanks, Tobias, for all these potentially helpful ideas! It will take me a while to absorb them but they may give a more systematic way of developing an analogue of the Mayer-Vietoris exact sequence for the homology of a directed graph with coefficients in N\mathbb{N} (or any other commutative monoid).

view this post on Zulip Tobias Fritz (Apr 21 2025 at 17:41):

Sounds good! I think Grandis's Homological Algebra In Strongly Non-Abelian Settings, which this is based on, is really pretty, useful and quite underappreciated.

view this post on Zulip John Baez (Apr 21 2025 at 17:46):

Adittya Chaudhuri said:

John Baez According to what I understand: The failure of the map Σ:Z1(X)Z1(Y)Z1(XY)\Sigma : Z_1(X) \oplus Z_1(Y) \to Z_1(X \cup Y) to be an isomorphism is the criterion for the existence of an emergent feedback loop in the "composite/glued" directed graphs. However, technically, this is a mathematical formulation of the "idea of emergence in the framework of causal loop diagrams". I think the Mayer-Vietoris of directed graphs (in terms of equalisers) elegantly describes this very same thing: the equaliser of ff and gg is an isomorphism if and only if f=gf=g.

My questions are the following:

1) What are some nice criteria (other than f=gf=g, because to know f=gf=g, I think we precisely need to know whether there exists any cycle in XYX \cup Y which is not made up of a cycle in XX and a cycle in YY) which would imply Σ\Sigma is not an isomorphism?

2) Can we give a measure of how much Σ\Sigma is far from being an isomorphism?

These are good questions. I don't know good answers.

The answers may involve relative homology. Suppose you have a graph where XX and YY are the walking edge, XYX \cap Y consists of two vertices, and XYX \cup Y has a nonzero cycle. Where does the cycle in XYX \cup Y come from? It seems to come from relative cycles aZ1(X,XY)a \in Z_1(X, X \cap Y) and bZ1(Y,XY)b \in Z_1(Y, X \cap Y) that "fit together to form a cycle":

da=dbd a = - d b

so a+ba + b is a cycle.

Here I am using Z\mathbb{Z} coefficients, which allows me to mention dd and -. To use N\mathbb{N} coefficients we should instead say

sa=tb s a = t b
ta=sb t a = s b

i.e. "bb starts where aa ends, and aa starts where bb ends".

That's all I have to say right now - maybe you can go further.

I still feel I haven't gotten the analogue of the Mayer-Vietoris sequence perfectly worked out, and I want to do that, perhaps using ideas from the papers Tobias pointed us to. I also want to write up the stuff we know is working well.

view this post on Zulip Tobias Fritz (Apr 21 2025 at 18:21):

Here's one more thought on what I've been suggesting. The category that I've proposed should have a "forgetful" functor to Ab\mathsf{Ab} which maps every commutative monoid to its enveloping abelian group, and maps every morphism (f+,f)(f_+, f_-) to the actual difference f+ff_+ - f_-. It seems to me that this functor should preserve the kernels and cokernels. If this is indeed the case, then this functor is "exact". In particular, it should commute with homology: the homology objects associated to a chain complex of commutative monoids will be such that their enveloping groups are precisely the homology groups of the chain complex of the enveloping groups.

view this post on Zulip Tobias Fritz (Apr 21 2025 at 18:22):

So the enveloping groups of the homology monoids that I'd construct should be the usual homology groups over Z\mathbb{Z} of the underlying undirected graph. Is this what you'd expect to happen?

view this post on Zulip John Baez (Apr 21 2025 at 20:11):

No. What we want (and have) is that the usual 1st homology group over Z\mathbb{Z} of a directed graph cannot be functorially constructed from its 1st homology monoid, nor vice versa. They convey different information because the homology group detects cycles that are not linear combinations of directed edge loops, while the homology monoid only detects cycles that are linear combinations of directed edge loops. We saw examples earlier here. You can find an example with two vertices and two edges.

view this post on Zulip John Baez (Apr 21 2025 at 20:16):

So, we're not doing homology of ordinary spaces with novel coefficients; we're doing homology of a very simple class of directed spaces with coefficients in a commutative monoid (mainly N\mathbb{N}).

view this post on Zulip Tobias Fritz (Apr 21 2025 at 21:14):

John Baez said:

No. What we want (and have) is that the usual 1st homology group over Z\mathbb{Z} of a directed graph cannot be functorially constructed from its 1st homology monoid, because there are cycles in the usual sense that are not linear combinations of directed loops. We saw examples earlier here. You can find an example with two vertices and two edges.

But I'm trying to look at the homology over Z\mathbb{Z} of the underlying undirected graph, which I think would be the construction that corresponds to forming enveloping groups of the chain complex and then looking at homology. Then it seems to me that the two parallel edges example is consistent with my expectation, no?

(I'm also not 100% sure that my approach really conforms with my expectation; I guess it would be best to work out some examples of what those kernels in the category that I've described actually are.)

view this post on Zulip John Baez (Apr 21 2025 at 21:19):

Okay. I don't want to recover the homology of the underlying undirected graph, since I already know everything I need to know about that. But we can do this:

Start with a directed graph s,t:EVs, t: E \to V and first apply the free commutative monoid functor to this and get something I'll call s,t:N[E]N[V]s, t: \mathbb{N}[E] \to \mathbb{N}[V]. So far this is very interesting to me.

Then apply the enveloping group functor and get something I'll call s,t:Z[E]Z[V]s, t : \mathbb{Z}[E] \to \mathbb{Z}[V]. (I'm lazy so I keep calling the maps ss and tt.) Define d=std = s - t and get a 2-step chain complex of abelian groups. The homology of this is the usual homology of the underlying undirected graph.

Then it seems to me that the two parallel edges example is consistent with my expectation, no?

Yes. We'd get a chain complex with H1=ZH_1 = \mathbb{Z}.
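A small executable check of this for the two-parallel-edges graph (the encoding is an illustrative assumption): after passing to enveloping groups, the kernel of d = s - t is generated by e1 - e2, so H_1 of the underlying undirected graph is Z even though the directed graph has no directed cycles at all.

```python
# Vertices [u, v] and edges e1, e2, both going u -> v. Over Z we can
# form d = s - t; each edge has boundary u - v.

def d(m, n):
    """Boundary of the 1-chain m*e1 + n*e2, returned as
    (coefficient of u, coefficient of v)."""
    return (m + n, -(m + n))

# Brute-force the kernel of d in a small box of Z^2: every kernel
# element is a multiple of e1 - e2.
kernel = [(m, n) for m in range(-3, 4) for n in range(-3, 4)
          if d(m, n) == (0, 0)]
assert kernel == [(m, -m) for m in range(-3, 4)]
```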

view this post on Zulip John Baez (Apr 21 2025 at 21:21):

I'm mainly interested in the information that gets lost when we apply the enveloping group functor!

view this post on Zulip Tobias Fritz (Apr 21 2025 at 21:22):

Yep, that's what I'm getting at! Of course these enveloping groups are not really the thing that's of interest to you, but what I'm trying to see is whether my proposal would satisfy your desiderata, and that looks promising. (Although my head is arguably too cloudy now to think clearly and I should call it a day.)

view this post on Zulip John Baez (Apr 21 2025 at 21:26):

I must be misunderstanding because I was trying to explain why your proposal doesn't satisfy my desiderata.

view this post on Zulip John Baez (Apr 21 2025 at 21:28):

By the way, when you said

forming enveloping groups of the chain complex and then looking at homology.

I didn't understand that, because in the proposal I just sketched we don't have a chain complex until we take enveloping groups: we instead have a pair of parallel maps s,t:N[E]N[V]s, t: \mathbb{N}[E] \to \mathbb{N}[V]. Only after taking the enveloping groups can we subtract these and get the desired chain complex, with differential d=std = s - t.

view this post on Zulip John Baez (Apr 21 2025 at 21:30):

(Admittedly any morphism between anything can be seen as a 2-step chain complex, because the condition d2=0d^2 = 0 doesn't enter! But I don't want to think of s:N[E]N[V]s: \mathbb{N}[E] \to \mathbb{N}[V] or t:N[E]N[V]t: \mathbb{N}[E] \to \mathbb{N}[V] as a 2-step chain complex... unless you're telling me I should.)

view this post on Zulip Tobias Fritz (Apr 21 2025 at 21:32):

John Baez said:

I didn't understand that, because in the proposal I just sketched we don't have a chain complex until we take enveloping groups: we instead have a pair of parallel maps s,t:N[E]N[V]s, t: \mathbb{N}[E] \to \mathbb{N}[V]. Only after taking the enveloping groups can we subtract these and get the desired chain complex, with differential d=std = s - t.

Ah, what I was referring to is a chain complex in the category that I had described, where morphisms are precisely pairs of parallel additive maps (s,t)(s,t) between commutative monoids. A chain complex in this category is a composable sequence of such pairs (sn,tn)(s_n, t_n), where the twisted composition of any two consecutive pairs is null. This means that sn1sn+tn1tn=sn1tn+tn1sns_{n-1} s_n + t_{n-1} t_n = s_{n-1} t_n + t_{n-1} s_n.

view this post on Zulip Tobias Fritz (Apr 21 2025 at 21:33):

So it's a chain complex in a certain generalized sense, namely the one of Grandis's non-abelian homological algebra. I believe that this is exactly what you have.

view this post on Zulip James Deikun (Apr 21 2025 at 21:38):

The enveloping groups of the directed homology monoids should not be the same as the undirected homology groups because when you move from directed to undirected entirely new cycles appear that are not formal differences of any directed cycles. That's what the example of two parallel arrows is supposed to illustrate.

view this post on Zulip Tobias Fritz (Apr 21 2025 at 21:40):

John Baez said:

I must be misunderstanding because I was trying to explain why your proposal doesn't satisfy my desiderata.

Sorry, you're right, and I can blame it on my cloudy head :sweat_smile: So either what I'm suggesting isn't what you want, or my expectation about homology commuting with group envelopes is wrong, or "my" homology monoid acts differently in a way that you perhaps would want if interpreted suitably. One difference is that there then would be two maps from cycles to chains, and this probably means that elements of my monoid of cycles should not be thought of as sums of directed loops. But they could still have a different meaningful interpretation in which directed loops are encoded.

view this post on Zulip John Baez (Apr 22 2025 at 01:04):

I'll think about it, thanks!

view this post on Zulip Adittya Chaudhuri (Apr 22 2025 at 04:01):

John Baez said:

Adittya Chaudhuri said:

John Baez According to what I understand: The failure of the map Σ:Z1(X)Z1(Y)Z1(XY)\Sigma : Z_1(X) \oplus Z_1(Y) \to Z_1(X \cup Y) to be an isomorphism is the criterion for the existence of an emergent feedback loop in the "composite/glued" directed graphs. However, technically, this is a mathematical formulation of the "idea of emergence in the framework of causal loop diagrams". I think the Mayer-Vietoris of directed graphs (in terms of equalisers) elegantly describes this very same thing: the equaliser of ff and gg is an isomorphism if and only if f=gf=g.

My questions are the following:

1) What are some nice criteria (other than f=gf=g, because to know f=gf=g, I think we precisely need to know whether there exists any cycle in XYX \cup Y which is not made up of a cycle in XX and a cycle in YY) which would imply Σ\Sigma is not an isomorphism?

2) Can we give a measure of how much Σ\Sigma is far from being an isomorphism?

These are good questions. I don't know good answers.

The answers may involve relative homology. Suppose you have a graph where XX and YY are the walking edge, XYX \cap Y consists of two vertices, and XYX \cup Y has a nonzero cycle. Where does the cycle in XYX \cup Y come from? It seems to come from relative cycles aZ1(X,XY)a \in Z_1(X, X \cap Y) and bZ1(Y,XY)b \in Z_1(Y, X \cap Y) that "fit together to form a cycle":

da=dbd a = - d b

so a+ba + b is a cycle.

Here I am using Z\mathbb{Z} coefficients, which allows me to mention dd and -. To use N\mathbb{N} coefficients we should instead say

sa=tb s a = t b
ta=sb t a = s b

i.e. "bb starts where aa ends, and aa starts where bb ends".

That's all I have to say right now - maybe you can go further.

I still feel I haven't gotten the analogue of the Mayer-Vietoris sequence perfectly worked out, and I want to do that, perhaps using ideas from the papers Tobias pointed us to. I also want to write up the stuff we know is working well.

Thank you!! Interesting. Yes, I agree that relative homology is the correct language to characterise "which portion of a cycle cc in XYX \cup Y is coming from XX" and "which portion of cc is coming from YY" by looking at how XX and YY are glued together. I am trying to understand it more clearly (concretely) to say something precise about my previous questions (1) and (2).

view this post on Zulip John Baez (Apr 22 2025 at 06:02):

I remembered now that my former student @Jade Master did work that's quite relevant to this issue. Probably the best thing to read is Sections 5 and 6 of her thesis Composing behaviors of networks - these sections are called "Operational semantics of enriched graphs" and "Compositionality for the algebraic path problem".

I could try to summarize this work, but we actually want something different. Here's one idea we can extract from Jade's thesis. We are studying a graph XYX \cup Y for which the intersection XYX \cap Y is just a set of vertices, and trying to understand H1(XY,N)H_1(X \cup Y, \mathbb{N}), which is isomorphic to the set of cycles in XYX \cup Y. As we know, every cycle in XYX \cup Y is a sum of minimal cycles, and every minimal cycle is an equivalence class of simple loops, where two simple loops are equivalent iff they differ only by their starting point - so let's focus on those.

Every simple loop either

stays entirely within XX or entirely within YY,

or

crosses from XX to YY and back exactly once,

or

crosses from XX to YY and back exactly twice,

or... etc. So the set of simple loops in XYX \cup Y is N\mathbb{N}-graded!

So is the set of equivalence classes of simple loops (check that the grade doesn't depend on the representative) - or in other words, minimal cycles.

Since cycles are sums of minimal cycles, but not necessarily in a unique way, I don't know if this grading on the set of minimal cycles gives a grading on H1(XY)H_1(X \cup Y)... but at least it gives a [[filtration]]:

H1(XY)=n=0An H_1(X \cup Y) = \bigcup_{n = 0}^{\infty} A_n

where the submonoid AnH1(XY)A_n \subseteq H_1(X \cup Y) consists of all cycles that are sums of minimal cycles of grade n\le n.

view this post on Zulip John Baez (Apr 22 2025 at 06:05):

Note that

A0=im(Σ:H1(X)H1(Y)H1(XY)) A_0 = \text{im} (\Sigma : H_1(X) \oplus H_1(Y) \to H_1(X \cup Y) )

and all the higher AiA_i are 'corrections' to the simple but wrong guess that this image is all of H1(XY)H_1(X \cup Y).

view this post on Zulip John Baez (Apr 22 2025 at 06:07):

Hmm! I think we can analyze the higher AiA_i in terms of the map

:H1(XY)H0(XY) \partial : H_1(X \cup Y) \to H_0(X \cap Y)

since this map takes a minimal cycle and gives an element of H0(XY)H_0(X \cap Y) that keeps track of where this cycle crosses between XX and YY.

view this post on Zulip Adittya Chaudhuri (Apr 22 2025 at 08:16):

Thank you. I am now trying to understand your ideas.

view this post on Zulip Adittya Chaudhuri (Apr 22 2025 at 09:20):

Although it may not be anything serious, I am sharing some realisations that I had today:

I think given a directed graph GG, the bijection between the set of minimal cycles in GG and the set of equivalence classes of simple loops in GG precisely tells us that our minimal cycles correspond to directed circles (according to the definition of a [[directed topological space]]). I am thinking like this because when we take the equivalence class of a simple loop, we just remember the direction of the simple loop but forget everything else, and thus, in a way, I think we end up with a topological space homeomorphic to a circle with a sense of direction. Now, since the minimal cycles of a graph generate all the cycles, we can focus only on the directed circles in a graph. Hence, in the context of emergent directed cycles, when we glue graphs XX and YY along XYX \cap Y (containing only vertices), we may think of a measure of emergence as the set EX,Y=Sl(XY)(Sl(X)Sl(Y))E_{X,Y}=Sl(X \cup Y) \setminus (Sl(X) \cup Sl(Y)), where Sl(XY)Sl(X \cup Y), Sl(X)Sl(X) and Sl(Y)Sl(Y) are the sets of directed circles in XYX \cup Y, XX and YY, respectively. Now, if we are working with finite graphs, then the natural number EX,YN|E_{X,Y}| \in \mathbb{N} giving the cardinality of EX,YE_{X,Y} is a measure of the emergence here.
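As a sketch of how this count could be computed in practice for small finite graphs (the function name, the edge-label encoding, and the brute-force DFS are my own illustrative choices, not from the discussion): enumerate directed simple loops up to choice of starting point, then take the set difference.

```python
def simple_cycles(edges):
    """All directed simple loops, up to choice of starting point.
    `edges` maps an edge label to a (source, target) pair; a loop is
    returned as its edge-label tuple rotated to start at the least
    label.  Brute-force DFS -- fine for small glued graphs."""
    adj = {}
    for lab, (a, b) in edges.items():
        adj.setdefault(a, []).append((lab, b))
    found = set()

    def dfs(start, node, path, visited):
        for lab, w in adj.get(node, []):
            if w == start:
                c = tuple(path + [lab])
                k = c.index(min(c))        # canonical rotation
                found.add(c[k:] + c[:k])
            elif w not in visited:
                dfs(start, w, path + [lab], visited | {w})

    for vtx in ({a for a, _ in edges.values()}
                | {b for _, b in edges.values()}):
        dfs(vtx, vtx, [], {vtx})
    return found

# Gluing two single-edge graphs along two shared vertices creates one
# emergent loop that neither piece had on its own.
X, Y = {'a': ('u', 'v')}, {'b': ('v', 'u')}
emergent = simple_cycles({**X, **Y}) - (simple_cycles(X) | simple_cycles(Y))
assert emergent == {('a', 'b')}
```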

view this post on Zulip Tobias Fritz (Apr 22 2025 at 09:27):

One final comment on the Grandis-Connes-Consani approach. I've just done some calculations, and it looks like the precise category that I've described probably isn't the "right" one, because it seems hard to determine what kernels and cokernels might be.

But following the related ideas from Section 5 of Homological algebra in characteristic one instead, I've arrived at the following notion of "kernel" for the pair of additive maps $s, t : \mathbb{N}[E] \to \mathbb{N}[V]$ associated to a directed graph. This kernel is the monoid of all pairs $(p,q) \in \mathbb{N}[E]^2$ for which $s(p) + t(q) = s(q) + t(p)$. You should think of such a pair as representing the formal difference $p - q$ without actually taking the difference, and thereby the defining equation becomes the positive analogue of the usual cycle equation

$$d(p - q) = (s - t)(p - q) = 0.$$

This monoid of cycles has a canonical involution given by $(p,q) \mapsto (q,p)$. The elements that are invariant under the involution should be thought of as "trivial", because they trivially satisfy the defining equation. The equalizer that you're considering is the submonoid of elements of the form $(p,0)$. On the other hand, the usual group of cycles of the underlying undirected graph is what you get upon identifying all trivial elements with zero.

For example, for the graph with two parallel edges $e_1$ and $e_2$, writing $p = p_1 e_1 + p_2 e_2$ and likewise for $q$ identifies the space of cycles with the monoid $\{(p_1,p_2,q_1,q_2) \in \mathbb{N}^4 \mid p_1 + p_2 = q_1 + q_2\}$. Taking $q = 0$ necessitates $p = 0$, recovering the fact that there are no directed cycles. Quotienting by trivial cycles produces a group isomorphic to the standard group of cycles $\mathbb{Z}$. So it seems to me that this object contains all the information that one would hope for, even though it's not so easy to interpret the meaning of a general element.
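The parallel-edges example is small enough to check by brute force. Here's a sketch, assuming my own encoding of 1-chains as coefficient pairs on $(e_1, e_2)$ and of 0-chains as coefficient pairs on the two vertices $(a, b)$; the helper names are hypothetical.

```python
from itertools import product

# Two parallel edges e1, e2 : a -> b.  Chains are N-linear combinations
# of edges; s and t send a chain to its 0-chain of sources / targets.
def s(p):  # source 0-chain of p = (p1, p2), as coefficients on (a, b)
    p1, p2 = p
    return (p1 + p2, 0)   # both edges start at a

def t(p):  # target 0-chain
    p1, p2 = p
    return (0, p1 + p2)   # both edges end at b

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

# Brute-force the "kernel" pairs (p, q) with s(p)+t(q) = s(q)+t(p)
# over small coefficients.
kernel = [
    (p, q)
    for p in product(range(4), repeat=2)
    for q in product(range(4), repeat=2)
    if add(s(p), t(q)) == add(s(q), t(p))
]

# Every kernel element satisfies p1 + p2 = q1 + q2 ...
assert all(sum(p) == sum(q) for p, q in kernel)
# ... and taking q = 0 forces p = 0: there are no directed cycles.
assert [p for p, q in kernel if q == (0, 0)] == [(0, 0)]
```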

view this post on Zulip Adittya Chaudhuri (Apr 22 2025 at 09:31):

Thank you @Tobias Fritz. I will take some time to understand your ideas.

view this post on Zulip Adittya Chaudhuri (Apr 22 2025 at 12:24):

John Baez said:

Hmm! I think we can analyze the higher $A_i$ in terms of the map

$$\partial : H_1(X \cup Y) \to H_0(X \cap Y)$$

since this map takes a minimal cycle and gives an element of $H_0(X \cap Y)$ that keeps track of where this cycle crosses between $X$ and $Y$.

But, did we actually define the map $\partial$ when the coefficients are from $\mathbb{N}$? I thought you defined the Mayer-Vietoris for the case of directed graphs in the form of the following statement:

$\Sigma : Z_1(X) \oplus Z_1(Y) \to Z_1(X \cup Y)$ is the equalizer of $f, g \colon Z_1(X \cup Y) \to C_0(X \cap Y)$ if we take $fc = sc_X,\ gc = tc_X$.

Am I misunderstanding?

view this post on Zulip Tobias Fritz (Apr 22 2025 at 12:31):

Tobias Fritz said:

This kernel is the monoid of all pairs $(p,q) \in \mathbb{N}[E]^2$ for which $s(p) + t(q) = s(q) + t(p)$. You should think of such a pair as representing the formal difference $p - q$ without actually taking the difference, and thereby the defining equation becomes the positive analogue of the usual cycle equation

$$d(p - q) = (s - t)(p - q) = 0.$$

I've figured out a good way to think about it: the first component $p$ is a directed chain, while the second component $q$ plays the role of a directed chain with the opposite direction, and the equation then says that these two together must form a cycle. With this idea in mind, it's clear that this notion of kernel keeps track of both directed cycles and undirected cycles at the same time. On a related note, the involution $(p,q) \mapsto (q,p)$ keeps track of orientation reversal as a structure, while in the usual undirected setting with $\mathbb{Z}$ coefficients orientation reversal is more like a mere property (namely the existence of additive inverses).

view this post on Zulip Matt Cuffaro (he/him) (Apr 22 2025 at 15:44):

this construction seems evocative of the Grothendieck group of the integers. is there a connection?

Tobias Fritz said:

Tobias Fritz said:

This kernel is the monoid of all pairs $(p,q) \in \mathbb{N}[E]^2$ for which $s(p) + t(q) = s(q) + t(p)$. You should think of such a pair as representing the formal difference $p - q$ without actually taking the difference, and thereby the defining equation becomes the positive analogue of the usual cycle equation

$$d(p - q) = (s - t)(p - q) = 0.$$

view this post on Zulip Tobias Fritz (Apr 22 2025 at 15:49):

Yes! For a commutative monoid $M$, you can factor the construction of its enveloping group (Grothendieck group) into two steps: first, form $M \oplus M$, which is an involutive monoid with respect to the swap $(x,y) \mapsto (y,x)$ as involution, to be thought of as some sort of "pre-negation". Second, identify all elements of the form $(x,x)$, which are precisely those that are invariant under the involution, with $0$. The resulting monoid happens to be a group, and it's the Grothendieck group of $M$.
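This two-step construction can be sketched concretely for $M = (\mathbb{N}, +)$, whose Grothendieck group is $\mathbb{Z}$. The helper `normalize`, which picks a canonical representative of each class in the quotient, is my own device, not part of the discussion above.

```python
# Two-step Grothendieck group construction for M = (N, +), on pairs:
# (x, y) stands for the formal difference x - y, the swap is the
# "pre-negation" involution, and identifying the involution-invariant
# elements (x, x) with 0 yields the congruence (x, y) ~ (x + k, y + k).
def normalize(pair):
    """Canonical representative in the quotient of N x N: subtract off
    the diagonal part of the pair."""
    x, y = pair
    k = min(x, y)
    return (x - k, y - k)

def add(a, b):
    return normalize((a[0] + b[0], a[1] + b[1]))

def neg(a):                 # the involution, which becomes negation
    return (a[1], a[0])

three, five = (3, 0), (5, 0)
assert add(three, neg(five)) == (0, 2)        # 3 - 5 = -2
assert add((0, 2), (2, 0)) == (0, 0)          # -2 + 2 = 0: it's a group
```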

view this post on Zulip Adittya Chaudhuri (Apr 22 2025 at 20:08):

John Baez said:

I remembered now that my former student Jade Master did work that's quite relevant to this issue. Probably the best thing to read is Sections 5 and 6 of her thesis Composing behaviors of networks - these sections are called "Operational semantics of enriched graphs" and "Compositionality for the algebraic path problem".

Thank you! Yes, I will read these portions. I think I am slowly understanding the analogy here. I will try to write it down (from the perspective of our framework).

view this post on Zulip John Baez (Apr 22 2025 at 21:09):

Adittya Chaudhuri said:

John Baez said:

Hmm! I think we can analyze the higher $A_i$ in terms of the map

$$\partial : H_1(X \cup Y) \to H_0(X \cap Y)$$

since this map takes a minimal cycle and gives an element of $H_0(X \cap Y)$ that keeps track of where this cycle crosses between $X$ and $Y$.

But, did we actually define the map $\partial$ when the coefficients are from $\mathbb{N}$?

No, not really - I made a mistake here. To fix that mistake I could be more clear about coefficients and define a map

$$\partial \circ i : H_1(X \cup Y, \mathbb{N}) \to H_0(X \cap Y, \mathbb{Z})$$

as follows:

1) First, there's a natural map

$$i : H_1(X \cup Y, \mathbb{N}) \to H_1(X \cup Y, \mathbb{Z})$$

since $H_1(X \cup Y, \mathbb{N}) \cong Z_1(X \cup Y, \mathbb{N})$ is (isomorphic to) the commutative monoid of $\mathbb{N}$-linear combinations of equivalence classes of simple loops, and any such linear combination gives a cycle in $Z_1(X \cup Y, \mathbb{Z})$. Wow, that sounds complicated when I say it, but the picture in my mind is simple! Another way to say it is:

We can define the directed first homology of a directed graph $G$ with coefficients in any commutative monoid, and any homomorphism of commutative monoids $A \to B$ induces a map

$$H_1(G, A) \to H_1(G, B)$$

in a functorial way. To get $i$ we apply this to the inclusion $\mathbb{N} \to \mathbb{Z}$.

2) Conjecture: The map $i$ is injective.

This doesn't really matter for what I'm about to say, but my mental image is that cycles with coefficients in $\mathbb{N}$ are just cycles with coefficients in $\mathbb{Z}$ with a special property.

3) So, we can form the composite

$$H_1(X \cup Y, \mathbb{N}) \xrightarrow{i} H_1(X \cup Y, \mathbb{Z}) \xrightarrow{\partial} H_0(X \cap Y, \mathbb{Z})$$

and that's what I was unconsciously doing when I wrote "$H_1(X \cup Y, \mathbb{N}) \xrightarrow{\partial} H_0(X \cap Y, \mathbb{Z})$".

There is still more to say about how we might use this map $\partial \circ i$, or some similar map, to understand emergent cycles that form when we glue together two graphs. But I should think about it some more first.

view this post on Zulip John Baez (Apr 22 2025 at 21:29):

Adittya Chaudhuri said:

John Baez said:

I remembered now that my former student Jade Master did work that's quite relevant to this issue. Probably the best thing to read is Sections 5 and 6 of her thesis Composing behaviors of networks - these sections are called "Operational semantics of enriched graphs" and "Compositionality for the algebraic path problem".

Thank you! Yes, I will read these portions. I think I am slowly understanding the analogy here. I will try to write it down (from the perspective of our framework).

Here's one thing that may help. Jade considered, not labeled graphs, but what she called "$R$-matrices" for a rig $R$. An $R$-matrix is simply a set $X$ together with a function $M \colon X \times X \to R$. So, it's a square matrix with entries in $R$, where the rows and columns are indexed by the set $X$.

If we take $R$ to be the boolean rig $B = \{T,F\}$, an $R$-matrix is a directed graph with at most one edge from any vertex $x \in X$ to any vertex $y \in X$: we say there's an edge from $x$ to $y$ iff $M(x,y) = T$.

Note this is not the same as our kind of graph (namely a [[quiver]]): you can think of it as a special case. Both $R$-matrices and quivers are special cases of graphs enriched in a category $\mathsf{C}$. A graph enriched in $\mathsf{C}$ is a set $X$ and a function

$$M : X \times X \to \mathrm{Ob}(\mathsf{C}).$$

(A quiver is a graph enriched in $\mathsf{Set}$. An $R$-matrix is a graph enriched in the discrete category corresponding to the set $R$.)

Also, Jade focused on the case where the rig $R$ is actually a [[quantale]]. A quantale is a monoid in the monoidal category of [[sup-semilattices]]. A sup-semilattice is a poset that has all colimits, called 'suprema' or 'sups'. Any quantale becomes a rig where the addition is given by the binary case of the sup and the multiplication is given by the monoidal structure.

But why did Jade restrict to quantales?

Here's why: so she could do infinite sums without worrying about them! You'll see a lot of 'matrix multiplication' in her work, where she multiplies $R$-matrices using the usual formula

$$(MN)(x,z) = \sum_y M(x,y) N(y,z)$$

When you use a quantale, these sums make sense even when $y$ ranges over an infinite set!

However, if we restrict her ideas to make sure we're only doing finite sums, we can then generalize them to graphs enriched in a rig category like $\mathsf{Set}$ or $\mathsf{FinSet}$.
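For finite $X$ the matrix multiplication over the boolean rig is easy to sketch: sum is `or`, product is `and`, so powers of the matrix detect paths. The encoding below is my own toy illustration, not Jade's.

```python
# A B-matrix (B the boolean rig) is a directed graph with at most one
# edge between any ordered pair of vertices; multiplying matrices over B
# composes edges, so (M*M)[i][j] says "there is a length-2 path i -> j".
def bool_matmul(M, N):
    n = len(M)
    return [
        [any(M[i][k] and N[k][j] for k in range(n)) for j in range(n)]
        for i in range(n)
    ]

# Edges 0 -> 1 -> 2 on three vertices.
M = [
    [False, True,  False],
    [False, False, True ],
    [False, False, False],
]

M2 = bool_matmul(M, M)
assert M2[0][2]            # a length-2 path from 0 to 2 exists
assert not M2[0][1]        # but no length-2 path from 0 to 1
```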

view this post on Zulip John Baez (Apr 23 2025 at 00:08):

Tobias Fritz said:

Tobias Fritz said:

This kernel is the monoid of all pairs $(p,q) \in \mathbb{N}[E]^2$ for which $s(p) + t(q) = s(q) + t(p)$. You should think of such a pair as representing the formal difference $p - q$ without actually taking the difference, and thereby the defining equation becomes the positive analogue of the usual cycle equation

$$d(p - q) = (s - t)(p - q) = 0.$$

I've figured out a good way to think about it: the first component $p$ is a directed chain, while the second component $q$ plays the role of a directed chain with the opposite direction, and the equation then says that these two together must form a cycle. With this idea in mind, it's clear that this notion of kernel keeps track of both directed cycles and undirected cycles at the same time. On a related note, the involution $(p,q) \mapsto (q,p)$ keeps track of orientation reversal as a structure, while in the usual undirected setting with $\mathbb{Z}$ coefficients orientation reversal is more like a mere property (namely the existence of additive inverses).

Great, now I understand what you're doing! This description is quite vivid.

view this post on Zulip John Baez (Apr 23 2025 at 03:02):

@Adittya Chaudhuri - check out Section 2.5 of our paper; I've written up all our work on the homology of directed graphs except for our still-developing thoughts about composition of open graphs and emergent cycles.

Among other things I wrote a proof of what we were calling Claim 3. I've strengthened it a bit:

Claim 3. Every loop that gives the same cycle as a simple loop

$$v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_{n-1}} v_{n-1} \xrightarrow{e_n} v_0$$

is of the form

$$v_k \xrightarrow{e_{k+1}} v_{k+1} \xrightarrow{e_{k+2}} \cdots \xrightarrow{e_{n+k-1}} v_{n+k-1} \xrightarrow{e_{n+k}} v_k$$

where we treat the subscripts as elements of $\mathbb{Z}/n$ and do addition mod $n$.

Proof. Suppose

$$\gamma = \Big( v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_{n-1}} v_{n-1} \xrightarrow{e_n} v_0 \Big)$$

is a simple loop and

$$\delta = \Big( w_0 \xrightarrow{f_1} w_1 \xrightarrow{f_2} \cdots \xrightarrow{f_{m-1}} w_{m-1} \xrightarrow{f_m} w_0 \Big)$$

gives the same cycle as $\gamma$, so that

$$e_1 + \cdots + e_n = f_1 + \cdots + f_m.$$

Since $\gamma$ is simple, all the vertices $v_0, \dots, v_{n-1}$ are distinct, so the edges $e_1, \dots, e_n$ are distinct. We must thus have $m = n$, with the list of edges $f_1, \dots, f_n$ being some permutation of the list of edges $e_1, \dots, e_n$. Since all the vertices $v_0, \dots, v_{n-1}$ are distinct, the only permutations that make $\delta$ into a loop are cyclic permutations. $\qquad$ :black_large_square:
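The proof's conclusion can be checked by brute force on a triangle, using my own toy encoding of edges as (source, target) pairs: among all orderings of the edge set, only the cyclic rotations compose into loops.

```python
from itertools import permutations

# Edges of a simple triangle v0 -> v1 -> v2 -> v0, edge: (source, target).
edges = {"e1": ("v0", "v1"), "e2": ("v1", "v2"), "e3": ("v2", "v0")}

def is_loop(seq):
    """Do the edges in seq compose head-to-tail and return to the start?"""
    return all(
        edges[seq[i]][1] == edges[seq[(i + 1) % len(seq)]][0]
        for i in range(len(seq))
    )

def is_rotation(a, b):
    return any(b == a[k:] + a[:k] for k in range(len(a)))

gamma = ("e1", "e2", "e3")
# Every ordering of gamma's edges that forms a loop is a cyclic rotation,
# as the proof of Claim 3 asserts for simple loops.
loops = [p for p in permutations(gamma) if is_loop(p)]
assert len(loops) == 3   # exactly the three rotations
assert all(is_rotation(gamma, d) for d in loops)
```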

view this post on Zulip Adittya Chaudhuri (Apr 23 2025 at 04:45):

John Baez said:

Adittya Chaudhuri said:

John Baez said:

Hmm! I think we can analyze the higher $A_i$ in terms of the map

$$\partial : H_1(X \cup Y) \to H_0(X \cap Y)$$

since this map takes a minimal cycle and gives an element of $H_0(X \cap Y)$ that keeps track of where this cycle crosses between $X$ and $Y$.

But, did we actually define the map $\partial$ when the coefficients are from $\mathbb{N}$?

No, not really - I made a mistake here. To fix that mistake I could be more clear about coefficients and define a map

$$\partial \circ i : H_1(X \cup Y, \mathbb{N}) \to H_0(X \cap Y, \mathbb{Z})$$

as follows:

1) First, there's a natural map

$$i : H_1(X \cup Y, \mathbb{N}) \to H_1(X \cup Y, \mathbb{Z})$$

since $H_1(X \cup Y, \mathbb{N}) \cong Z_1(X \cup Y, \mathbb{N})$ is (isomorphic to) the commutative monoid of $\mathbb{N}$-linear combinations of equivalence classes of simple loops, and any such linear combination gives a cycle in $Z_1(X \cup Y, \mathbb{Z})$. Wow, that sounds complicated when I say it, but the picture in my mind is simple! Another way to say it is:

We can define the directed first homology of a directed graph $G$ with coefficients in any commutative monoid, and any homomorphism of commutative monoids $A \to B$ induces a map

$$H_1(G, A) \to H_1(G, B)$$

in a functorial way. To get $i$ we apply this to the inclusion $\mathbb{N} \to \mathbb{Z}$.

2) Conjecture: The map $i$ is injective.

This doesn't really matter for what I'm about to say, but my mental image is that cycles with coefficients in $\mathbb{N}$ are just cycles with coefficients in $\mathbb{Z}$ with a special property.

3) So, we can form the composite

$$H_1(X \cup Y, \mathbb{N}) \xrightarrow{i} H_1(X \cup Y, \mathbb{Z}) \xrightarrow{\partial} H_0(X \cap Y, \mathbb{Z})$$

and that's what I was unconsciously doing when I wrote "$H_1(X \cup Y, \mathbb{N}) \xrightarrow{\partial} H_0(X \cap Y, \mathbb{Z})$".

There is still more to say about how we might use this map $\partial \circ i$, or some similar map, to understand emergent cycles that form when we glue together two graphs. But I should think about it some more first.

Thanks for explaining. I find it interesting that the homomorphism of commutative monoids induces a functor $H_1(G,-) \colon \mathsf{ComMon} \to \mathsf{ComMon}$, as you explained. Somehow, it makes me recall that we already have a functor $\mathsf{ComMon} \to \mathsf{Cat}$ defined as $R \mapsto R\text{-}\mathsf{Graphs}$ or $L \mapsto \mathsf{Cat}/BL$. Now, I think the latter case means we can functorially change the labels on the edges of our graphs (as when we change the labels, we keep the underlying directed graph structure as it is), while I think in the former case, it means we can functorially forget/change the sense of direction on the edges of our graphs (as when we change the coefficients of our homology monoids, we keep the underlying "undirected graph structure" as it is). It makes me wonder: if we consider coefficients from more general commutative monoids, the "intuition of direction" is not making much sense to me at the moment. Then, probably, I am thinking in the wrong direction.

view this post on Zulip Adittya Chaudhuri (Apr 23 2025 at 04:57):

John Baez said:

Adittya Chaudhuri said:

John Baez said:

I remembered now that my former student Jade Master did work that's quite relevant to this issue. Probably the best thing to read is Sections 5 and 6 of her thesis Composing behaviors of networks - these sections are called "Operational semantics of enriched graphs" and "Compositionality for the algebraic path problem".

Thank you! Yes, I will read these portions. I think I am slowly understanding the analogy here. I will try to write it down (from the perspective of our framework).

Here's one thing that may help. Jade considered, not labeled graphs, but what she called "$R$-matrices" for a rig $R$. An $R$-matrix is simply a set $X$ together with a function $M \colon X \times X \to R$. So, it's a square matrix with entries in $R$, where the rows and columns are indexed by the set $X$.

If we take $R$ to be the boolean rig $B = \{T,F\}$, an $R$-matrix is a directed graph with at most one edge from any vertex $x \in X$ to any vertex $y \in X$: we say there's an edge from $x$ to $y$ iff $M(x,y) = T$.

Note this is not the same as our kind of graph (namely a [[quiver]]): you can think of it as a special case. Both $R$-matrices and quivers are special cases of graphs enriched in a category $\mathsf{C}$. A graph enriched in $\mathsf{C}$ is a set $X$ and a function

$$M : X \times X \to \mathrm{Ob}(\mathsf{C}).$$

(A quiver is a graph enriched in $\mathsf{Set}$. An $R$-matrix is a graph enriched in the discrete category corresponding to the set $R$.)

Also, Jade focused on the case where the rig $R$ is actually a [[quantale]]. A quantale is a monoid in the monoidal category of [[sup-semilattices]]. A sup-semilattice is a poset that has all colimits, called 'suprema' or 'sups'. Any quantale becomes a rig where the addition is given by the binary case of the sup and the multiplication is given by the monoidal structure.

But why did Jade restrict to quantales?

Here's why: so she could do infinite sums without worrying about them! You'll see a lot of 'matrix multiplication' in her work, where she multiplies $R$-matrices using the usual formula

$$(MN)(x,z) = \sum_y M(x,y) N(y,z)$$

When you use a quantale, these sums make sense even when $y$ ranges over an infinite set!

However, if we restrict her ideas to make sure we're only doing finite sums, we can then generalize them to graphs enriched in a rig category like $\mathsf{Set}$ or $\mathsf{FinSet}$.

Thank you very much. Graphs enriched in $\mathsf{C}$ seem to be a very general notion of graphs. Interesting!! Yes, I agree that if we make sure to do only finite (well-defined) sums, we may be able to extend Jade's ideas to the case of graphs enriched in $\mathsf{Set}$ or $\mathsf{FinSet}$.

view this post on Zulip Adittya Chaudhuri (Apr 23 2025 at 05:20):

John Baez said:

Adittya Chaudhuri - check out Section 2.5 of our paper; I've written up all our work on the homology of directed graphs except for our still-developing thoughts about composition of open graphs and emergent cycles.

Among other things I wrote a proof of what we were calling Claim 3. I've strengthened it a bit:

Claim 3. Every loop that gives the same cycle as a simple loop

$$v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_{n-1}} v_{n-1} \xrightarrow{e_n} v_0$$

is of the form

$$v_k \xrightarrow{e_{k+1}} v_{k+1} \xrightarrow{e_{k+2}} \cdots \xrightarrow{e_{n+k-1}} v_{n+k-1} \xrightarrow{e_{n+k}} v_k$$

where we treat the subscripts as elements of $\mathbb{Z}/n$ and do addition mod $n$.

Proof. Suppose

$$\gamma = \Big( v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_{n-1}} v_{n-1} \xrightarrow{e_n} v_0 \Big)$$

is a simple loop and

$$\delta = \Big( w_0 \xrightarrow{f_1} w_1 \xrightarrow{f_2} \cdots \xrightarrow{f_{m-1}} w_{m-1} \xrightarrow{f_m} w_0 \Big)$$

gives the same cycle as $\gamma$, so that

$$e_1 + \cdots + e_n = f_1 + \cdots + f_m.$$

Since $\gamma$ is simple, all the vertices $v_0, \dots, v_{n-1}$ are distinct, so the edges $e_1, \dots, e_n$ are distinct. We must thus have $m = n$, with the list of edges $f_1, \dots, f_n$ being some permutation of the list of edges $e_1, \dots, e_n$. Since all the vertices $v_0, \dots, v_{n-1}$ are distinct, the only permutations that make $\delta$ into a loop are cyclic permutations. $\qquad$ :black_large_square:

Thank you. Yes, I will read Section 2.5 of our paper. Also, thanks for the proof of Claim 3 with the strengthening.

view this post on Zulip John Baez (Apr 23 2025 at 06:09):

Sure, thanks!

By the way, here's a fun puzzle for everyone:

Puzzle. Show this strengthened version of Claim 3 is false: every loop that gives the same cycle as a loop

$$v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_{n-1}} v_{n-1} \xrightarrow{e_n} v_0$$

is of the form

$$v_k \xrightarrow{e_{k+1}} v_{k+1} \xrightarrow{e_{k+2}} \cdots \xrightarrow{e_{n+k-1}} v_{n+k-1} \xrightarrow{e_{n+k}} v_k$$

where we treat the subscripts as elements of $\mathbb{Z}/n$ and do addition mod $n$.

view this post on Zulip John Baez (Apr 23 2025 at 06:13):

@Adittya Chaudhuri wrote:

Graphs enriched in $\mathsf{C}$ seem to be a very general notion of graphs.

Yes, I like them. They're important because any category enriched in a monoidal category $\mathsf{C}$ has an underlying graph enriched in the underlying category $\mathsf{C}$: we just forget the composition and identities.

view this post on Zulip John Baez (Apr 23 2025 at 06:19):

I find it interesting that the homomorphism of commutative monoids induces a functor $H_1(G,-) \colon \mathsf{ComMon} \to \mathsf{ComMon}$, as you explained. Somehow, it makes me recall that we already have a functor $\mathsf{ComMon} \to \mathsf{Cat}$ defined as $R \mapsto R\text{-}\mathsf{Graphs}$ or $L \mapsto \mathsf{Cat}/BL$.

Yes, I'm a bit confused about the relation between these two concepts: graphs labeled by elements of a commutative monoid, and cycles on a graph with coefficients taken from some commutative monoid!

view this post on Zulip John Baez (Apr 23 2025 at 06:23):

In general, if $L$ is a commutative monoid, an $L$-labeling of a graph $G$ is exactly the same as a 1-cochain on $G$ with coefficients in $L$. We should probably say this and exploit it a bit.

view this post on Zulip John Baez (Apr 23 2025 at 06:24):

(I haven't defined 1-cochains with coefficients in a commutative monoid, so my claim above is more like a definition than a theorem, but if you extend the usual definition of 1-cochain from abelian groups to commutative monoids, that's what you get.)

view this post on Zulip John Baez (Apr 23 2025 at 06:25):

I hadn't expected this paper to become so homological, but I think this direction is good: we get some interesting theorems, and we begin to set up a connection between fancy-sounding pure math and very elementary-sounding applied math.

view this post on Zulip Adittya Chaudhuri (Apr 23 2025 at 06:26):

Yes, I agree, and I find this connection very interesting!!

view this post on Zulip Adittya Chaudhuri (Apr 23 2025 at 06:27):

Should we do functorial semantics (like the construction of a symmetric monoidal double functor / lax symmetric monoidal double functor) in this paper, in the section on open labeled graphs, and use Mayer-Vietoris in the semantics part?

view this post on Zulip John Baez (Apr 23 2025 at 06:56):

Yes, I think we should.

view this post on Zulip Adittya Chaudhuri (Apr 23 2025 at 07:01):

John Baez said:

Adittya Chaudhuri wrote:

Graphs enriched in $\mathsf{C}$ seem to be a very general notion of graphs.

Yes, I like them. They're important because any category enriched in a monoidal category $\mathsf{C}$ has an underlying graph enriched in the underlying category $\mathsf{C}$: we just forget the composition and identities.

Nice!! Yes, it is a natural generalisation (enrichment) of the forgetful functor $U \colon \mathsf{Cat} \to \mathsf{Gph}$, because every small category is a locally small category and, hence, a category enriched in $\mathsf{Set}$.

view this post on Zulip Adittya Chaudhuri (Apr 23 2025 at 07:05):

John Baez said:

Yes, I think we should.

Nice!!

view this post on Zulip Adittya Chaudhuri (Apr 23 2025 at 12:53):

John Baez said:

Sure, thanks!

By the way, here's a fun puzzle for everyone:

Puzzle. Show this strengthened version of Claim 3 is false: every loop that gives the same cycle as a loop

$$v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_{n-1}} v_{n-1} \xrightarrow{e_n} v_0$$

is of the form

$$v_k \xrightarrow{e_{k+1}} v_{k+1} \xrightarrow{e_{k+2}} \cdots \xrightarrow{e_{n+k-1}} v_{n+k-1} \xrightarrow{e_{n+k}} v_k$$

where we treat the subscripts as elements of $\mathbb{Z}/n$ and do addition mod $n$.

I think I solved your puzzle. In the attached file, I think I managed to construct a counterexample.
counterexampleclaim3stregthened.PNG

view this post on Zulip Adittya Chaudhuri (Apr 23 2025 at 13:30):

John Baez said:

Adittya Chaudhuri - check out Section 2.5 of our paper; I've written up all our work on the homology of directed graphs except for our still-developing thoughts about composition of open graphs and emergent cycles.

Thanks. I enjoyed reading through section 2.5, and I want to discuss the following points in section 2.5:

view this post on Zulip James Deikun (Apr 23 2025 at 14:24):

Adittya Chaudhuri said:

Why is "$n_2 + n_3$" included in that first line? It doesn't seem like it should be.

view this post on Zulip John Baez (Apr 23 2025 at 15:38):

Adittya Chaudhuri said:

I guess the word "determined" is ambiguous, or else I'm using it incorrectly.

When I said $\tilde{l}$ is determined by its value on minimal cycles, I didn't mean you could arbitrarily choose its values on the minimal cycles. That would indeed require that $H_1(G)$ be freely generated by the minimal cycles. I merely meant that if you know the value of $\tilde{l}$ on all minimal cycles, you know $\tilde{l}$. I.e. if $\tilde{l}(c) = \tilde{l'}(c)$ for all minimal cycles $c$, then $\tilde{l} = \tilde{l'}$.

Suppose $A$ and $B$ are commutative monoids, $S \subseteq A$ is a subset, and $f : S \to B$ is a function. Can we find a homomorphism $F : A \to B$ extending $f$?

  1. If $S$ freely generates $A$, the extension exists and is unique.
  2. If $S$ generates $A$, the extension is unique, but it may not exist.

We are in situation 2, and that's what I was trying to say. But since you didn't understand me I must not have said it well.

I've rewritten this passage in the paper to make it clearer.
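A tiny numeric illustration of situation 2, using the submonoid of $(\mathbb{N},+)$ generated by $S = \{2,3\}$ (my own example, not one from the paper): $S$ generates but not freely, since $2+2+2 = 3+3$, and that relation obstructs some choices of $f$.

```python
# S = {2, 3} generates the numerical monoid A = {0, 2, 3, 4, 5, ...}
# under +, but not freely, since 2+2+2 = 3+3.  A homomorphism
# F : A -> (N, +) extending f must satisfy F(6) = 3*f(2) = 2*f(3),
# so the following choice admits no extension:
f = {2: 1, 3: 0}
assert 3 * f[2] != 2 * f[3]      # the obstruction at 6 = 2+2+2 = 3+3

# But when an extension exists it is unique: every element of A is a sum
# of copies of 2 and 3, so F is forced by its values on S.  For example
# g(2) = 2, g(3) = 3 extends, and additivity determines F everywhere.
g = {2: 2, 3: 3}
F = {0: 0}
for a in range(2, 20):           # build F on A by forced additivity
    F[a] = min(F[a - s] + g[s] for s in (2, 3) if a - s in F)
assert all(F[a + b] == F[a] + F[b] for a in F for b in F if a + b in F)
```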

view this post on Zulip John Baez (Apr 23 2025 at 15:53):

Adittya Chaudhuri said:

Thanks, you're right! I had $e_2$ and $e_3$ switched in the diagram. I wanted the edges to be labeled $e_1, e_2, e_3, e_4$ as we go from top to bottom. I think it's okay now - please take a look.

view this post on Zulip Adittya Chaudhuri (Apr 23 2025 at 17:49):

James Deikun said:

Why is "$n_2 + n_3$" included in that first line? It doesn't seem like it should be.

Thanks!! Yes, it was a mistake. My thoughts and drawings were not running in parallel somehow... Sorry!! You are right, the first line should be $(x_1 + x_3) + (x_2 + x_4)$, and hence, in that case, there is no contradiction. Hence, my example is not at all a counterexample.

view this post on Zulip Adittya Chaudhuri (Apr 23 2025 at 18:05):

John Baez said:

We are in situation 2, and that's what I was trying to say. But since you didn't understand me I must not have said it well.

I've rewritten this passage in the paper to make it clearer.

Thanks!! I understand your point. I mixed things up somehow!! I feel the portion is very well written, but I misunderstood!! And, yes, as I mentioned in the previous message, my counterexample was incorrect. Maybe this afternoon my mind was not working properly!! Sorry!! I just checked that portion of the Overleaf file. Thanks!! But I also emphasise that it was very well written before as well!!

view this post on Zulip Adittya Chaudhuri (Apr 23 2025 at 18:11):

John Baez said:

Thanks, you're right! I had $e_2$ and $e_3$ switched in the diagram. I wanted the edges to be labeled $e_1, e_2, e_3, e_4$ as we go from top to bottom. I think it's okay now - please take a look.

Yes, it is fine. Thanks. Also, the condition that I wrote to show $H_1(Q, \mathbb{N})$ is not free is incorrect, in the same way as @James Deikun pointed out the mistake in my drawing.

view this post on Zulip Adittya Chaudhuri (Apr 23 2025 at 18:29):

Adittya Chaudhuri said:

I think the above statement also does not make much sense, as by the definition of $[\gamma]$, the function is injective.

view this post on Zulip John Baez (Apr 24 2025 at 01:16):

Okay, good, I hadn't gotten around to commenting on that last statement, because it confused me.

Now I'm thinking about the section on rig-valued polarities. I notice you do something interesting there: given a path

$$\gamma = \Big(v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_{n-1}} v_{n-1} \xrightarrow{e_n} v_n\Big)$$

in a graph $G$, you define $\text{gen}(\gamma)$ to be the set of edges in $\gamma$:

$$\text{gen}(\gamma) = \{e_1, \dots, e_n\}$$

I would prefer to think about the multiset of edges involved in $\gamma$. This keeps track not only of which edges appear in $\gamma$ but also how many times they appear.

Here's why this interests me.

The collection of multisets of edges of $G$ is the free commutative monoid on the set $E$ of edges of $G$. In our paper we've been calling this free commutative monoid $\mathbb{N}[E]$. We also call it $C_1(G,\mathbb{N})$, the set of 1-chains with coefficients in $\mathbb{N}$.

If we use this to modify your idea, we get the following nice things.

Any path $\gamma$ has a multiset of edges; let me call this $[\gamma] \in C_1(G,\mathbb{N})$. When $\gamma$ is a loop, $[\gamma]$ is a 1-cycle, and we've already been using the notation $[\gamma]$ in this case. But there are some advantages to generalizing this idea to arbitrary paths!

As we know, paths are morphisms in the free category on the graph $G$, which we call $\text{Free}(G)$. Composition of paths obeys this nice rule:

$$[\gamma \circ \delta] = [\gamma] + [\delta]$$

So, we get a functor

$$[-] : \mathrm{Free}(G) \to \mathsf{B}(C_1(G,\mathbb{N}))$$

where $\mathsf{B}(C_1(G,\mathbb{N}))$ is the 1-object category corresponding to the commutative monoid $C_1(G,\mathbb{N})$.
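The rule $[\gamma \circ \delta] = [\gamma] + [\delta]$ is exactly multiset addition, which can be sketched with `collections.Counter` standing in for $\mathbb{N}[E]$; the edge names here are hypothetical.

```python
# [γ] as the multiset of edges of a path, with Counter playing the role
# of the free commutative monoid N[E]; composing paths (concatenating
# edge lists) maps to adding multisets.
from collections import Counter

def chain(path):
    """The 1-chain [γ] ∈ C_1(G, N) of a path given as a list of edges."""
    return Counter(path)

gamma = ["e1", "e2"]
delta = ["e2", "e3"]
assert chain(gamma + delta) == chain(gamma) + chain(delta)  # [γ∘δ] = [γ]+[δ]
assert chain(gamma + delta)["e2"] == 2   # multiplicities are remembered
```

The second assertion is the point of preferring multisets over sets: an edge traversed twice counts twice.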

view this post on Zulip John Baez (Apr 24 2025 at 01:26):

We have already noted in the paper that if $\ell : E \to R$ is a way of labeling edges by elements of a commutative monoid $R$, it extends uniquely to a monoid homomorphism

$$\tilde{\ell} : C_1(G,\mathbb{N}) \to R$$

since $C_1(G,\mathbb{N})$ is the free commutative monoid on $E$. This doesn't work if $R$ is noncommutative. But if $R$ is a monoid that's not necessarily commutative, we still get a functor

$$\text{Free}(G) \to BR$$

that sends each edge $e \in E$ to $\ell(e)$.

view this post on Zulip John Baez (Apr 24 2025 at 01:27):

So, there's a sense in which $C_1(G,\mathbb{N})$ is an 'abelianized' version of $\mathsf{Free}(G)$. But there's more to say about this.

view this post on Zulip Adittya Chaudhuri (Apr 24 2025 at 05:08):

Thanks!! The above ideas look very interesting!! I am trying to write down what I thought about it.

We have already established a bijection between simple loops and minimal cycles, which connects the graph-theoretic and the topological ways of studying cycles in a directed graph.

1st point

Now, I think this correspondence does not extend to a bijection between the set of paths in GG and the set of elements of the free commutative monoid C1(G,N)C_1(G, \mathbb{N}) (abelianised paths). However, as you explained, we still get a functor Free(G)B(C1(G,N))Free(G) \to B(C_1(G, \mathbb{N})), which I think restricts to the bijective correspondence between simple loops in GG and minimal cycles in GG. In a way, this functor extends our bijection between simple loops and minimal cycles to the level of paths.

2nd point:

Some days back, you also mentioned that "it might be useful to have a correspondence" between the graph-theoretic picture and the topological picture. Now, as you explained, if RR is non-commutative, then we cannot extend  ⁣:ER\ell \colon E \to R to ~ ⁣:C1(G,N)R\tilde{\ell} \colon C_1(G, \mathbb{N}) \to R; however, we still get a functor Free(G)BRFree(G) \to B R. Thus, we may say that when we label edges with a non-commutative monoid, the graph-theoretic view of paths behaves better than the topological view.

Now, when RR is commutative, I think we get a commutative triangle in Cat\mathsf{Cat} formed by the functor Free(G)B(C1(G,N))Free(G) \to B(C_1(G, \mathbb{N})), the functor B~ ⁣:B(C1(G,N))BRB\tilde{\ell} \colon B(C_1(G, \mathbb{N})) \to BR, and the functor Free(G)BRFree(G) \to BR. I think this might be seen as what you explained: "C1(G,N)C_1(G, \mathbb{N}) is an abelianised version of Free(G)Free(G)".

view this post on Zulip John Baez (Apr 24 2025 at 05:36):

Yes, I agree with all this!

When I said "there's more to say about this", here's what I was hinting at. Free(G)\mathsf{Free}(G) is a category with one object for each vertex of GG, while C1(G,N)C_1(G,\mathbb{N}) has just a single object. There's also a category that's halfway between these two! This is the category where morphisms are "homology classes of paths".

This category has one object for each vertex of GG, and morphisms f:vwf: v \to w are equivalence classes of paths γ:vw\gamma: v \to w where two paths γ,γ\gamma, \gamma' are equivalent if

[γ]=[γ] [\gamma] = [\gamma']

using my notation above, i.e. γ\gamma and γ\gamma' give the same 1-chain.

I claim this category is a kind of "abelianization" of Free(G)\mathsf{Free}(G), though it's a category rather than a commutative monoid like C1(G,N)C_1(G,\mathbb{N}). For example, suppose ff and gg are two morphisms in this category such that fgf \circ g and gfg \circ f are both well-defined. Then fg=gff \circ g = g \circ f.

view this post on Zulip John Baez (Apr 24 2025 at 05:44):

This idea is also interesting in ordinary topology: for any space XX we can define a category with one object for each point of XX and morphisms f:xyf : x \to y being equivalence classes of paths γ:xy\gamma : x \to y, where two paths γ,γ\gamma, \gamma' are equivalent iff the 1-chains in singular homology C1(X,Z)C_1(X,\mathbb{Z}) that they define are homologous, i.e. their difference, which is clearly a 1-cycle, is actually a 1-boundary.

view this post on Zulip John Baez (Apr 24 2025 at 05:46):

Homotopic paths are homologous, but the converse is not necessarily true, even when our space XX is a graph (thought of as a topological space).

view this post on Zulip John Baez (Apr 24 2025 at 05:47):

So, if we call this category H1(X)\mathsf{H}_1(X), we get a functor from the fundamental groupoid of XX to this category:

Π1(X)H1(X) \Pi_1(X) \to \mathsf{H}_1(X)

and it's full but not faithful. Like Π1(X)\Pi_1(X), H1(X)\mathsf{H}_1(X) is a groupoid.

view this post on Zulip John Baez (Apr 24 2025 at 05:54):

As you know, if XX is path connected and we pick any point xXx \in X, the automorphism group of xx in Π1(X)\Pi_1(X) is the fundamental group π1(X,x)\pi_1(X,x). Similarly I believe the automorphism group of xx in H1(X)\mathsf{H}_1(X) is the first homology group H1(X,Z)H_1(X,\mathbb{Z}).

I used this idea when XX is a graph (seen as a topological space) in my paper Topological Crystals.

view this post on Zulip Adittya Chaudhuri (Apr 24 2025 at 13:14):

Wow!! I find these ideas very beautiful and interesting:

In the context of directed graphs:

If we denote the homology category of a graph GG by H1(G)H_1(G), then we get a functor Ab ⁣:Free(G)H1(G)Ab \colon \mathsf{Free}(G) \to H_1(G) which is the identity on vertices and takes a path γ\gamma to its homology class [γ][\gamma]. I am attaching an example where the loops γγ\gamma \neq \gamma' but are homologous ([γ]=[γ][\gamma]=[\gamma']).
homologouspath.PNG

In other words, I think the following may be true

Lemma
If γ,γ ⁣:vw\gamma, \gamma' \colon v \to w are two paths in GG, then γ\gamma is homologous to γ\gamma' if and only if γ\gamma and γ\gamma' differ by a permutation of edges.

However, I think more generally the "homologous path relation" defines an equivalence relation on the set of all paths in GG. This equivalence relation induces a congruence on Free(G)\mathsf{Free}(G), and the quotient category Free(G)\frac{\mathsf{Free}(G)}{\sim} is the homology category H1(G)H_1(G), the one you defined. Now, I think there is a faithful functor τ ⁣:H1(G)B(C1(G,N))\tau \colon H_1(G) \to B(C_1(G, \mathbb{N})) which sends every object of H1(G)H_1(G) to the unique object of B(C1(G,N))B(C_1(G, \mathbb{N})), and sends [γ=(v0e1v1e2en1vn1envn)][\gamma = \Big(v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_{n-1}} v_{n-1} \xrightarrow{e_n} v_n\Big)] to i=1nei\sum^{n}_{i=1}e_i. Hence, τAb\tau \circ Ab is the functor [] ⁣:Free(G)B(C1(G,N))[-] \colon \mathsf{Free}(G) \to B(C_1(G, \mathbb{N})) you defined here #theory: applied category theory > Graphs with polarities @ 💬

Now, I think the category H1(G)H_1(G) should satisfy some universal property that would justify your claim that H1(G)H_1(G) is an abelianisation of Free(G)\mathsf{Free}(G). (I think this should follow from the fact that Free(G)H1(G)\frac{\mathsf{Free}(G)}{\sim} \cong H_1(G).)

Now, given a labeling l ⁣:ERl \colon E \to R for a commutative monoid RR, we can extend it to a functor Bl~ ⁣:B(C1(G,N))BRB\tilde{l} \colon B(C_1(G, \mathbb{N})) \to BR. Then, I think the functor l^ ⁣:Free(G)BR\widehat{l} \colon \mathsf{Free}(G) \to BR induced from ll is the same as (Bl~τ)Ab(B\tilde{l} \circ \tau) \circ Ab, which is a factorisation of l^\widehat{l} through H1(G)H_1(G). However, such a factorisation of l^\widehat{l} does not exist when RR is non-commutative. In other words, I think it proves your conjecture (#theory: applied category theory > Graphs with polarities @ 💬): "when RR is commutative, homologous paths have the same RR-valued holonomy".

In the context of Topological spaces/manifolds:

The results on RR-labeled directed graphs are now making me wonder whether similar results are true for RR-labeled paths in a (directed) topological space. I think for the case of smooth manifolds MM, the usual gauge theoretic holonomy is a particular case when RR is considered as a Lie group (structure group of a bundle (with connection data) over MM ) (Although I am not aware of the existence of the notion of an appropriate smooth homology category of a smooth manifold).

view this post on Zulip John Baez (Apr 24 2025 at 20:32):

All these ideas are fascinating to me - thanks for explaining all this. Though I haven't proved it, I believe your lemma is true:

Lemma
If γ,γ ⁣:vw\gamma, \gamma' \colon v \to w are two paths in GG, then γ\gamma is homologous to γ\gamma' if and only if γ\gamma and γ\gamma' differs by a permutation of edges.

because I proved the same sort of lemma for undirected graphs in my paper Topological Crystals. (Sorry for repeatedly giving this link, but I just noticed that the version on my website was an old version, and this is the new version.)

In Section 2, I define two paths in an undirected graph to be homotopic if they differ by a sequence of the following moves:

I say two paths are homologous if they differ by a sequence of the following moves:

and

("Makes sense" is a reminder that we can't compose paths unless the first path ends where the next one starts! So we need this restriction in Move A and Move B.)

Then in Lemma 3, I prove that two paths are homologous in the above sense iff they define the same 1-chain! The proof uses the fact that H1(X)H_1(X) is the abelianization of π1(X)\pi_1(X). We can't use that proof in the directed case.

Still, I believe your lemma is true in the directed case. In the directed case Move A doesn't make sense, because edges (and paths) don't have inverses. So the concept of "homologous" only involves Move B, as in your lemma.

(The "makes sense" restriction doesn't need to be mentioned in your lemma, but it's implicit there. Also, you consider arbitrary permutations rather than the limited sort allowed in Move B, which is more efficient.)

view this post on Zulip John Baez (Apr 24 2025 at 21:20):

I don't know if we need this stuff in our paper, but maybe we could put it in an appendix. It's nice pure math, and it's relevant because when LL is a monoid any LL-labeling of a graph GG gives a functor

Free(G)B(L) \mathsf{Free}(G) \to \mathsf{B}(L)

but when LL is a commutative monoid this factors through the 'homology 1-groupoid' H1(G)\mathsf{H}_1(G)

Free(G)H1(G)B(L) \mathsf{Free}(G) \to \mathsf{H}_1(G) \to \mathsf{B}(L)

view this post on Zulip John Baez (Apr 25 2025 at 02:11):

On a different note: I just added a list of example monoids, good for "polarities", to the start of Section 2.

view this post on Zulip Adittya Chaudhuri (Apr 25 2025 at 04:23):

John Baez said:

All these ideas are fascinating to me - thanks for explaining all this.

Thank you so much!!

view this post on Zulip Adittya Chaudhuri (Apr 25 2025 at 04:40):

Thanks for the ideas in the undirected case. I will read the proof of Lemma 3 in your paper, Topological Crystals. Then, I will try to prove the Lemma in the directed case.

view this post on Zulip Adittya Chaudhuri (Apr 25 2025 at 04:44):

Thanks for telling about the newer version of your Topological Crystals paper. Somehow, the paper link on my server is not working. It says, "404 not found". I do not know how to fix this error.

view this post on Zulip Adittya Chaudhuri (Apr 25 2025 at 05:13):

John Baez said:

I don't know if we need this stuff in our paper, but maybe we could put it in an appendix. It's nice pure math, and it's relevant because when LL is a monoid any LL-labeling of a graph GG gives a functor

Free(G)B(L) \mathsf{Free}(G) \to \mathsf{B}(L)

but when LL is a commutative monoid this factors through the 'homology 1-groupoid' H1(G)\mathsf{H}_1(G)

Free(G)H1(G)B(L) \mathsf{Free}(G) \to \mathsf{H}_1(G) \to \mathsf{B}(L)

Thanks. I was thinking the following:

Our paper allows the labelling monoid to be non-commutative: in fact, many nice results, like "finding motifs via the Kleisli category" and "defining a symmetric monoidal double category of LL-labeled graphs via structured cospans", hold for both commutative and non-commutative monoids. However, to come up with "useful semantics of our labeled graphs (like analysing feedback loops) via directed homology theory", I think we had to restrict ourselves to commutative monoids. If I am thinking correctly, then the factorisation you mentioned, Free(G)H1(G)B(L) \mathsf{Free}(G) \to \mathsf{H}_1(G) \to \mathsf{B}(L), is a nice way to highlight the strength of using commutative monoids for labeling. Basically, I think it says precisely: the holonomy of paths in directed graphs is invariant under directed homology. I am assuming that by the statement "I don't know if we need this stuff in our paper" you meant the proof of my Lemma. In that case, yes, I fully agree that it is better to keep the statement in the paper's main body and put the proof in the appendix. However, I feel it is essential (though I may be misunderstanding) to keep the factorisation Free(G)H1(G)B(L) \mathsf{Free}(G) \to \mathsf{H}_1(G) \to \mathsf{B}(L) in the paper's body itself, to highlight the strength of commutative labelling monoids.

view this post on Zulip Adittya Chaudhuri (Apr 25 2025 at 05:14):

John Baez said:

On a different note: I just added a list of example monoids, good for "polarities", to the start of Section 2.

Thank you so much. I will read your examples in Section 2.

view this post on Zulip John Baez (Apr 25 2025 at 06:32):

Adittya Chaudhuri said:

Thanks for telling about the newer version of your Topological Crystals paper. Somehow, the paper link on my server is not working. It says, "404 not found". I do not know how to fix this error.

Whoops - I gave the wrong link. I fixed it. Here is the right link: Topological Crystals.

view this post on Zulip Adittya Chaudhuri (Apr 25 2025 at 06:58):

Thank you.

view this post on Zulip Adittya Chaudhuri (Apr 25 2025 at 08:42):

John Baez said:

On a different note: I just added a list of example monoids, good for "polarities", to the start of Section 2.

Examples are very nice!!

The reality of Example 2.7: "a causal loop diagram serving as a simple model of students doing homework" is very clear, and I think it is relatable to any student at any level in any place.

In Example 2.8 I like how "absence of an edge" represents no influence and a label 00 represents unknown influence. Actually, this is very essential for SBGN-AF representations of biochemical reaction networks (there is a specific symbol in their notation for denoting an unknown influence). From a general perspective it is also very realistic, as many times we are only aware of the existence of an influence, and come to know its type (positive or negative) only after analysing the situation for a sufficient period of time. In a way, given a directed graph GG, I think (G, ⁣:EZ/3)(G, \ell \colon E \to \mathbb{Z}/3) may represent an initial (not fully understood) causal loop diagram, while (G, ⁣:EZ/2)(G, \ell' \colon E \to \mathbb{Z}/2) represents an evolved (understood) version of the causal loop diagram (G, ⁣:EZ/3)(G, \ell \colon E \to \mathbb{Z}/3).

Here, I preferred the word understood over fully understood because, over time the graph structure of GG itself may change. For example, one may add additional edges or remove some existing edges to discover a new interaction or remove an unnecessary interaction, respectively.

view this post on Zulip Adittya Chaudhuri (Apr 25 2025 at 11:08):

John Baez said:

Right now I don't know noncommutative monoids that are useful for applications of "graphs with polarities". So, right now the only reason to start by studying graphs labeled by elements of a general monoid and then turn to graphs labeled by elements of a commutative monoid is that we're mathematicians and we like to see how much we can do with the minimum amount of structure. Maybe later someone will invent some good applications of noncommutative monoids to this subject; then our paper will still be useful to them.

I was thinking about constructing an example of a useful non-commutative monoid. Below I am trying to write down what I am thinking:

Let LL be a non-commutative monoid and VV be a set. Then we can define an action of LL on the set VV as a function ρ ⁣:L×VV\rho \colon L \times V \to V that satisfies compatibility conditions with the associativity and identity laws of the monoid LL. Now, if we consider the underlying graph of the action category [L×VV][L \times V \rightrightarrows V], then I think we get an LL-labeled graph defined by  ⁣:EL,(l,v)l\ell \colon E \to L, (l, v) \mapsto l.

Now, from the point of applications (usefulness), I was thinking about the transition monoid of a semiautomaton. I do not know these objects in detail. Below I am trying to describe my basic understanding:

Let (Q,Σ,T)(Q, \Sigma, T) be a semiautomaton, where QQ is a set of states, Σ\Sigma is a set of inputs, and T ⁣:Σ×QQT \colon \Sigma \times Q \to Q is the transition function. Let Σ\Sigma^{*} be the [[free monoid]] generated by the set Σ\Sigma. Then there is an action ρ ⁣:Σ×QQ\rho^{*} \colon \Sigma^{*} \times Q \to Q of the monoid Σ\Sigma^{*} on the set QQ induced by the transition function TT. Now, for every wΣw \in \Sigma^{*}, we get an endomorphism Tw ⁣:QQ,qwqT_{w} \colon Q \to Q, q \mapsto wq. The set M(Q,Σ,T):={Tw ⁣:wΣ}M(Q, \Sigma, T):= \lbrace T_{w} \colon w \in \Sigma^{*} \rbrace is a monoid (in general non-commutative) under composition of functions, called the transition monoid of the semiautomaton (Q,Σ,T)(Q, \Sigma, T).
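To make this concrete, here is a small Python sketch (the function name `transition_monoid` and the example semiautomaton are my own, not from any source) that computes the transition monoid by closing the generator maps TaT_a under composition:

```python
# Sketch: compute the transition monoid M(Q, Σ, T) of a semiautomaton.
# A map Q → Q is stored as a tuple f with f[i] = image of the i-th state.

def transition_monoid(states, inputs, T):
    """Close the generator maps q ↦ T[(a, q)] under composition."""
    idx = {q: i for i, q in enumerate(states)}
    gens = {tuple(T[(a, q)] for q in states) for a in inputs}
    monoid = {tuple(states)}          # T_ε, the identity map
    frontier = set(gens)
    while frontier:
        monoid |= frontier
        # left-compose every known map with every generator
        frontier = {tuple(g[idx[m[i]]] for i in range(len(states)))
                    for m in monoid for g in gens} - monoid
    return monoid

# Two "reset" inputs give a non-commutative transition monoid:
# a sends every state to 0, b sends every state to 1.
T = {('a', 0): 0, ('a', 1): 0, ('b', 0): 1, ('b', 1): 1}
M = transition_monoid((0, 1), {'a', 'b'}, T)
# M contains the identity and the two constant maps, and the two
# constant maps do not commute under composition.
```

The two constant maps here fail to commute, which is exactly the sort of non-commutativity a labelling monoid could capture.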

Now my guess is the following:

The Σ\Sigma^{*}-labeled underlying graph of the action category [Σ×QQ][\Sigma^{*} \times Q \rightrightarrows Q] induced by ρ\rho^{*} contains all the information that the transition monoid M(Q,Σ,T)M(Q, \Sigma, T) of the semiautomaton (Q,Σ,T)(Q, \Sigma, T) possesses.

Advantages of representing semiautomata via labeled graphs may include the following:

1) finding motifs in semiautomata networks via Kleisli morphisms. I am imagining a directed cycle may represent a kind of regulatory mechanism in the associated labeled graph.
2) Building bigger semiautomata networks by gluing small automata networks using structured cospans.

Lastly, my knowledge of automata/semiautomata is very limited, so I may be fundamentally misunderstanding certain things while thinking about semiautomata networks.

view this post on Zulip Adittya Chaudhuri (Apr 25 2025 at 19:48):

Adittya Chaudhuri said:

I will try to prove the Lemma in the directed case.

I am trying to write a proof of the above Lemma with a little strengthening:

Lemma
If γ,γ\gamma, \gamma' are two paths in GG, then γ\gamma is homologous to γ\gamma' if and only if γ\gamma and γ\gamma' differ by a permutation of edges.

Proof: One direction is obvious: if γ\gamma and γ\gamma' differ by a permutation of edges, then [γ]=[γ][\gamma]=[\gamma']. For the other direction, let γ:=v0e1v1e2en1vn1envn\gamma:= v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_{n-1}} v_{n-1} \xrightarrow{e_n} v_n and γ:=v0e1v1e2em1vm1emvm\gamma' := v'_0 \xrightarrow{e'_1} v'_1 \xrightarrow{e'_2} \cdots \xrightarrow{e'_{m-1}} v'_{m-1} \xrightarrow{e'_m} v'_m be two paths in GG such that

e1+e2++en=e1+e2++eme_1 + e_2 + \cdots + e_{n} = e'_1 + e'_2 + \cdots + e'_{m}.

Now, let AA and BB be the multisets representing the L.H.S. [γ][\gamma] and R.H.S. [γ][\gamma'] of the above equation. If eAe \in A, then eBe \in B, because [γ]=[γ]N[E][\gamma]=[\gamma'] \in \mathbb{N}[E]; similarly, if eBe \in B, then eAe \in A. Again, since [γ]=[γ]N[E][\gamma] = [\gamma'] \in \mathbb{N}[E], the number of times an element ee occurs in AA is the same as the number of times ee occurs in BB. Hence, [γ]=[γ]=α1f1+α2f2++αkfk[\gamma]=[\gamma']= \alpha_1 f_1 + \alpha_2 f_2 + \cdots + \alpha_{k} f_{k}, where f1,f2,,fkf_1, f_2, \cdots, f_{k} are distinct edges of GG. Hence, by definition, both γ\gamma and γ\gamma' are permutations of the string

S:=f1f1f1α1timesf2f2f2α2timesfkfkfkαktimesS:=\underbrace{f_1f_1 \cdots f_1}_{\alpha_1 \text{times}} \underbrace{f_2 f_2 \cdots f_2}_{\alpha_2 \, \, \text{times}} \ldots \underbrace{f_k f_k \cdots f_k}_{\alpha_k \, \, \text{times}}.

In the above equation, by the string f1f1f1α1times\underbrace{f_1f_1 \cdots f_1}_{\alpha_1 \text{times}} I mean f11,f12,,f1α1f^{1}_1, f^{2}_{1}, \cdots, f^{\alpha_1}_{1}, i.e., although the elements are the same, they are considered distinct when they appear multiple times in the string. Similarly for the other fif_{i}.

Hence, γ\gamma is a permutation of γ\gamma'.
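The content of the lemma is easy to sanity-check computationally. A minimal Python sketch (edge names and the helper `homologous` are illustrative, not from the paper): two paths define the same 1-chain exactly when their edge multisets agree, i.e. exactly when one is a permutation of the other.

```python
from collections import Counter

# A path is a list of edge names; its 1-chain [γ] is the multiset of edges.
def homologous(gamma, gamma_prime):
    """[γ] = [γ'] iff the edge multisets (elements of N[E]) agree."""
    return Counter(gamma) == Counter(gamma_prime)

gamma       = ['e', 'f', 'e', 'g']   # a path traversing edge e twice
gamma_prime = ['g', 'e', 'e', 'f']   # the same edges in a different order

assert homologous(gamma, gamma_prime)            # same 1-chain ...
assert sorted(gamma) == sorted(gamma_prime)      # ... iff a permutation
assert not homologous(gamma, ['e', 'f', 'g'])    # multiplicities matter
```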

view this post on Zulip John Baez (Apr 25 2025 at 20:11):

Great, that's nice. This statement doesn't immediately imply the following stronger statement:

Stronger Lemma

If [γ]=[γ][\gamma] = [\gamma'] then γ\gamma' can be obtained from γ\gamma by a finite sequence of moves of type B:

Move B: replacing a path of the form α1α2α3α4\alpha_1 \alpha_2 \alpha_3 \alpha_4 by a path of the form α1α3α2α4\alpha_1 \alpha_3 \alpha_2 \alpha_4.

(Note that in this move we require that α1α3α2α4\alpha_1 \alpha_3 \alpha_2 \alpha_4 is still a well-defined path, e.g. α3\alpha_3 ends where α2\alpha_2 starts, and so on.)

We may not need this stronger lemma. But I think it's true.

view this post on Zulip John Baez (Apr 25 2025 at 21:12):

Let me say why I care about this stronger lemma.

Let's say a category is commutative when for any pair of morphisms f:xy,g:xyf: x \to y, g: x' \to y' we have fg=gffg = gf whenever this could possibly make sense. That is, fg=gff g = g f whenever fgfg and gfgf are both well-defined, have the same source, and have the same target.

You can check that for both fgfg and gfgf to be well-defined we need y=x,y=xy = x', y' = x,
and for them to both have the same source and the same target we need x=x,y=yx = x', y = y' . So x=x=y=yx = x' = y = y'.

Thus, a category is commutative iff for all objects xx in that category the endomorphism monoid hom(x,x)\text{hom}(x,x) is commutative.

Any preorder is a commutative category, but there are many other commutative categories.

We can take any category and force it to be commutative by imposing a bunch of new equations between morphisms. To do this we say that

α1α2α3α4α1α3α2α4\alpha_1 \alpha_2 \alpha_3 \alpha_4 \sim \alpha_1 \alpha_3 \alpha_2 \alpha_4

whenever αi\alpha_i are morphisms in the original category where both composites above are well-defined. Then we mod out all homsets by the equivalence relation \sim.

view this post on Zulip John Baez (Apr 25 2025 at 21:16):

Let's call this abelianizing the category - though the result is a commutative category, not an [[abelian category]], which is something completely different. I can't get myself to say 'commutativizing'.

If we take a topological space XX and let Π1(X)\Pi_1(X) be its fundamental groupoid, then abelianizing Π1(X)\Pi_1(X) gives the homology 1-groupoid H1(X)\mathsf{H}_1(X), where:

view this post on Zulip John Baez (Apr 25 2025 at 21:20):

Similarly I believe that if GG is a graph and Free(G)\mathsf{Free}(G) is the free category on that graph, then abelianizing Free(G)\mathsf{Free}(G) gives H1(G)\mathsf{H}_1(G), the category where

This will follow from the Strong Lemma but not from the Lemma.

view this post on Zulip John Baez (Apr 25 2025 at 22:06):

To prove the Strong Lemma, we need to show this:

Suppose we have an edge path

v0e1v1e2en1vn1envn v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_{n-1}} v_{n-1} \xrightarrow{e_n} v_n

and an edge path

v0=w0f1w1f2fn1wn1fnwn=vn v_0 = w_0 \xrightarrow{f_1} w_1 \xrightarrow{f_2} \cdots \xrightarrow{f_{n-1}} w_{n-1} \xrightarrow{f_n} w_n = v_n 

such that the list of edges f1,,fnf_1, \dots , f_n is a permutation of the list of edges e1,,ene_1, \dots, e_n. Then this permutation can be accomplished by a finite sequence of moves of the form

α1α2α3α4α1α3α2α4\alpha_1 \alpha_2 \alpha_3 \alpha_4 \mapsto \alpha_1 \alpha_3 \alpha_2 \alpha_4

where αi\alpha_i are (possibly empty) edge paths and the composites α1α2α3α4\alpha_1 \alpha_2 \alpha_3 \alpha_4 and α1α3α2α4\alpha_1 \alpha_3 \alpha_2 \alpha_4 are both well-defined edge paths from v0v_0 to vnv_n.

view this post on Zulip John Baez (Apr 26 2025 at 02:32):

Adittya Chaudhuri said:

I was thinking about constructing an example of a useful non-commutative monoid. Below I am trying to write down what I am thinking:

Let LL be a non-commutative monoid and VV be a set. Then we can define an action of LL on the set VV as a function ρ ⁣:L×VV\rho \colon L \times V \to V of MM that satisfies compatibility condition with associativity and identity laws of the monoid LL. Now, if we consider the underlying graph of the action category [L×VV][L \times V \rightrightarrows V], then I think we can get a LL-labeled graph defined by  ⁣:EL,(l,v)l\ell \colon E \to L, (l, v) \mapsto l.

Now, from the point of applications (usefulness), I was thinking about transition monoid of semiautomation.

Thanks, this was very helpful. I read a bit of a book on automaton theory and added Example 2.14 in Section 2 as an example of graphs labeled by elements of noncommutative monoids. I think this should be enough about noncommutative monoids, since we should focus on our topic of polarities.

Here's what I wrote:

All the monoids above are commutative, and indeed commutative monoids are by far the most commonly used for polarities. Thus, we discuss special features of the commutative case in Sections 2.4 and 2.5. However, graphs with edges labeled by not-necessarily-commutative monoids do show up naturally in some contexts. For example, in computer science [Sec. 2.1, Ginzburg 1968], a semiautomaton consists of a set V V of states, a set A A of inputs, and a map α ⁣:AVV \alpha \colon A \to V^V that describes how each input acts on each state to give a new state. Let L L be the monoid of maps from V V to itself generated by all the maps α(a) \alpha(a) for aA a \in A . Let G G be the graph where:

Since the monoid of maps LVV L \subseteq V^V is generated by elements α(a) \alpha(a) for aA a \in A , there is an L L -labeling of G G given by
(a,v)=α(a). \ell(a,v) = \alpha(a) .
In short, we obtain an L L -labeled graph where the vertices represent states, and for each input a a mapping a state v v to a state w w there is an edge labeled by the monoid element α(a) \alpha(a) .
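This construction can be sketched quickly in Python (the function name `labeled_graph` and the representation of α(a) \alpha(a) as a tuple of images are my own choices, not from the paper):

```python
# Build the L-labeled graph of a semiautomaton: vertices are states,
# and each input a acting on a state v gives an edge v → α(a)(v)
# labeled by the monoid element α(a) (represented as a tuple of images).

def labeled_graph(V, A, alpha):
    """Return the edges as (source, target, label) triples."""
    edges = []
    for a in A:
        label = tuple(alpha[a][v] for v in V)   # α(a) as a map V → V
        for v in V:
            edges.append((v, alpha[a][v], label))
    return edges

# Example: V = (0, 1), one input 'a' that swaps the two states.
alpha = {'a': {0: 1, 1: 0}}
edges = labeled_graph((0, 1), ('a',), alpha)
# edges == [(0, 1, (1, 0)), (1, 0, (1, 0))]
```

Each state appears as a vertex, and both edges carry the same label, namely the swap map α(a) \alpha(a) .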

view this post on Zulip John Baez (Apr 26 2025 at 02:55):

By the way, one of the founders of category theory, Samuel Eilenberg, wrote at least two books on categories for automaton theory! I've heard that at the time most people were shocked, and wondered why he didn't stick to working on algebraic topology. So he may be one of the early examples of someone turning to applied category theory.

view this post on Zulip Adittya Chaudhuri (Apr 26 2025 at 04:39):

John Baez said:

Thanks, this was very helpful. I read a bit of a book on automaton theory and added Example 2.14 in Section 2 as an example of graphs labeled by elements of noncommutative monoids. I think this should be enough about noncommutative monoids, since we should focus on our topic of polarities.

Here's what I wrote:

Thank you so much!! Yes, this example looks great. I think I was just lucky!! I might not have thought in the direction of automata while searching for an example of a non-commutative monoid. My inspiration was knowledge graphs (which I had seen before), where edges are labeled by strings of English letters. Then I realised the free monoid on the English alphabet is non-commutative. During my undergraduate studies I took a basic course on automata theory. So I searched and found the Wikipedia article about semiautomata. Looking at the definition, I realised it is a bit similar to the "action Lie groupoid construction in higher Lie theory" (which I studied in my PhD). Then I realised that the underlying graph of the action category has a natural labelling induced from the action of the monoid on the set of vertices.

I agree entirely that since our focus is on polarities, this example should be sufficient for our paper when we talk about labelling edges with elements of non-commutative monoids.

view this post on Zulip Adittya Chaudhuri (Apr 26 2025 at 05:00):

John Baez said:

By the way, one of the founders of category theory, Samuel Eilenberg, wrote at least two books on categories for automaton theory! I've heard that at the time most people were shocked, and wondered why he didn't stick to working on algebraic topology. So he may be one of the early examples of someone turning to applied category theory.

Wow!! That's really interesting and very inspiring. I just downloaded the books (by Samuel Eilenberg ) that you suggested. I will try to read some parts of it in the coming days.

view this post on Zulip Adittya Chaudhuri (Apr 26 2025 at 05:03):

John Baez said:

Great, that's nice.

Thank you so much!!

view this post on Zulip Adittya Chaudhuri (Apr 26 2025 at 05:27):

John Baez said:

Thus, a category is commutative iff for all objects xx in that category the endomorphism monoid hom(x,x)\text{hom}(x,x) is commutative.

I find this definition of "commutative category" very interesting because I previously encountered this kind of definition in the context of higher gauge theory, which I briefly explain below:

For example, in the definition of transport functors (Definition 3.5 in Parallel transport on principal bundles over stacks by Collier, Lerman, and Wolbert), a functor F ⁣:Πthin(M)GTorsF \colon \Pi^{thin}(M) \to G - \mathsf{Tors} from the thin fundamental groupoid of a manifold MM to the category of GG-torsors is defined to be smooth if it is smooth locally, that is, if the restriction Fπ1thin(M,x) ⁣:π1thin(M,x)Aut(F(x))F|_{\pi^{thin}_1(M,x)} \colon \pi^{thin}_1(M,x) \to Aut(F(x)) is smooth (in the sense of diffeology) for all xMx \in M.

I am now guessing that if we want to define a property (like commutativity or smoothness here) on a whole category, then we may define it locally on all the automorphism monoids of the category.

However, I think in many cases this may not be sufficient. For instance, given a Lie groupoid [X1X0][X_1 \rightrightarrows X_0], every automorphism group Aut(x){\rm{Aut}}(x) is a Lie group for each xX0x \in X_0. However, the converse does not hold: if [C1C0][C_1 \rightrightarrows C_0] is a category such that Aut(c){\rm{Aut}}(c) is a Lie group for each cC0c \in C_0, then [C1C0][C_1 \rightrightarrows C_0] may still fail to be a Lie groupoid (though at the moment I am not able to find an example).

view this post on Zulip Adittya Chaudhuri (Apr 26 2025 at 05:37):

John Baez said:

Let's call this abelianizing the category - though the result is a commutative category, not an [[abelian category]], which is something completely different. I can't get myself to say 'commutativizing'.

If we take a topological space XX and let Π1(X)\Pi_1(X) be its fundamental groupoid, then abelianizing Π1(X)\Pi_1(X) gives the homology 1-groupoid H1(X)\mathsf{H}_1(X), where:

Now, I understand what you meant by abelianizing!!

view this post on Zulip Adittya Chaudhuri (Apr 26 2025 at 05:41):

John Baez said:

This will follow from the Strong Lemma but not from the Lemma.

Interesting!! I agree.

view this post on Zulip Adittya Chaudhuri (Apr 26 2025 at 05:43):

John Baez said:

To prove the Strong Lemma, we need to show this:

Suppose we have an edge path

v0e1v1e2en1vn1envn v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_{n-1}} v_{n-1} \xrightarrow{e_n} v_n

and an edge path

v0=w0f1w1f2fn1wnfnwn=vn v_0 = w_0 \xrightarrow{f_1} w_1 \xrightarrow{f_2} \cdots \xrightarrow{f_{n-1}} w_n \xrightarrow{f_n} w_n = v_n

such that the list of edges f1,,fnf_1, \dots , f_n is a permutation of the list of edges e1,,ene_1, \dots, e_n. Then this permutation can be accomplished by a finite sequences of moves of the form

α1α2α3α4α1α3α2α4\alpha_1 \alpha_2 \alpha_3 \alpha_4 \mapsto \alpha_1 \alpha_3 \alpha_2 \alpha_4

where αi\alpha_i are (possibly empty) edge paths and the composites α1α2α3α4\alpha_1 \alpha_2 \alpha_3 \alpha_4 and α1α3α2α4\alpha_1 \alpha_3 \alpha_2 \alpha_4 are both well-defined edge paths from v0v_0 to vnv_n.

Thanks. Yes, I agree. I will try to prove this.

view this post on Zulip John Baez (Apr 26 2025 at 05:54):

Good comments! Returning for a moment to automaton theory, I think it's cool that you took a course on that subject. I've read a bit about it, but never quite enough. I've seen bits of information about the various classes of automata shown here, and how they correspond to various classes of languages recognized by these automata (the Chomsky hierarchy), but I've never studied the details.

Someone gave me a fascinating book called The Wild Book, also called Applications of Automata Theory and Algebra via the Mathematical Theory of Complexity to Biology, Physics, Psychology, Philosophy, and Games, by John Rhodes. Wikipedia describes it as an "underground classic". It almost sounds like the work of a crackpot, but it includes a serious theorem, the Krohn-Rhodes theorem, which is a kind of classification of finite semigroups, and also of finite-state machines! I wish I understood it!

view this post on Zulip Adittya Chaudhuri (Apr 26 2025 at 07:13):

Thank you!! These directions look very interesting!! Especially, I find your statement

"the Krohn-Rhodes theorem, which is a kind of classification of finite semigroups, and also of finite-state machines! "

very exciting!!

I do not know if there are any, but I really wish that Applied Category Theory would produce theorems like this for already existing domain-specific systems theories.

view this post on Zulip Adittya Chaudhuri (Apr 26 2025 at 10:12):

John Baez said:

Someone gave me a fascinating book called The Wild Book, also called Applications of Automata Theory and Algebra via the Mathematical Theory of Complexity to Biology, Physics, Psychology, Philosophy, and Games, by John Rhodes. Wikipedia describes it as an "underground classic". It almost sounds like the work of a crackpot, but it includes a serious theorem, the Krohn-Rhodes theorem, which is a kind of classification of finite semigroups, and also of finite-state machines! I wish I understood it!

Thank you so much for sharing The Wild Book. I just read the section "Foreword to Rhodes’ Applications of Automata Theory and Algebra" by Morris W. Hirsch. It's super fascinating!!! It feels like "the reason for my interest in biological systems" aligns very much with what is written there in that section. Although I will have to read it many times, I think my understanding will improve over time.

I encountered a portion in that section which states:

In Evolution the application of semigroup theory is necessarily more speculative, but also more comprehensible. Here the objective is not precise computations of complexity or SNAGs, but rather general principles influencing Evolution. Highly evolved organisms, he suggests, are in “perfect harmony” with their environments— otherwise they would either die out or evolve further:

It reminds me of the Gaia hypothesis. I was not aware of it until you mentioned it in a discussion in the comments section of one of your Mastodon posts https://mathstodon.xyz/@johncarlosbaez/113946882428468110 about the mutation in Palmer amaranth which made it glyphosate-resistant, where you related this to the current crisis of our global socio-political-environmental ecosystem. At the end, you described it beautifully as "a crisis of nature not wanting to die".

view this post on Zulip Adittya Chaudhuri (Apr 26 2025 at 21:08):

I was thinking about rig-labelled graphs. Below, I am sharing my thoughts.

Let (R,×,1,+,0)(R, \times,1, +,0) be a rig. An RR-labeled graph is a set-labeled finite graph G=[EV]G=[E \rightrightarrows V] with a labeling  ⁣:ER\ell \colon E \to R. Now, let G=[EV]G'=[E' \rightrightarrows V'] be another RR-labeled graph with a labeling  ⁣:ER\ell' \colon E' \to R. Then, we define a morphism from (G,)(G, \ell) to (G,)(G', \ell') as a morphism of graphs f ⁣:GGf \colon G \to G' such that (γ)={γE ⁣:f(γ)=γ}(γ)\ell'(\gamma')= \sum_{\lbrace \gamma \in E \colon f(\gamma)=\gamma'\rbrace} \ell(\gamma), where the addition is taken w.r.t. the commutative monoid structure (R,+,0)(R, +,0). To define polarities on (R,+,0)(R,+,0)-labeled graphs we will use the multiplicative monoid structure (R,×,1)(R, \times, 1) of the rig (R,×,1,+,0)(R, \times,1, +,0).
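As a quick concrete sanity check (not from the paper — the dict-based edge representation is my own assumption, with RR taken to be the natural numbers under addition), the additive morphism condition can be tested like this:

```python
# Hypothetical sketch: testing the additive morphism condition for
# R-labeled graphs, with R = N (natural numbers) as the additive monoid.
# Edges are just names; 'labels' maps each edge name to an element of R.

def is_additive_morphism(edges, labels, edges2, labels2, edge_map):
    """Check that labels2[e'] is the sum of labels[e] over the preimage
    of e' under edge_map, for every edge e' of the codomain graph."""
    for e2 in edges2:
        preimage_sum = sum(labels[e] for e in edges if edge_map[e] == e2)
        if labels2[e2] != preimage_sum:
            return False
    return True

# Two parallel edges a, b mapped onto a single edge c:
E, lab = ["a", "b"], {"a": 2, "b": 3}
E2, lab2 = ["c"], {"c": 5}
f = {"a": "c", "b": "c"}
print(is_additive_morphism(E, lab, E2, lab2, f))  # True, since 2 + 3 = 5
```

Changing the label of either parallel edge makes the check fail, since the sum over the preimage no longer matches.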

We label a morphism e1e2ene_1e_2 \ldots e_n of Free(G)\mathsf{Free}(G) by (e1)×(e2)××(en)\ell(e_1) \times \ell(e_2) \times \cdots \times \ell(e_{n}), with respect to the monoid structure (R,×,1)(R, \times, 1). Now, for every additive morphism f ⁣:(G,)(G,)f \colon (G, \ell) \to (G', \ell'), we get a functor Free(f) ⁣:Free(G)Free(G),e1e2enf(e1)f(e2)f(en)\mathsf{Free}(f) \colon \mathsf{Free}(G) \to \mathsf{Free}(G'), e_1e_2 \ldots e_n \mapsto f(e_1)f(e_2) \ldots f(e_n). We label f(e1)f(e2)f(en)f(e_1)f(e_2) \ldots f(e_n) by (f(e1))×(f(e2))××(f(en))\ell'(f(e_1)) \times \ell'(f(e_2)) \times \cdots \times \ell'(f(e_n)), where for each ii, (f(ei))\ell'(f(e_i)) is defined additively w.r.t. the commutative monoid structure (R,+,0)(R, +,0). Hence, although Free(G)\mathsf{Free}(G) and Free(G)\mathsf{Free}(G') are (R,×,1)(R,\times,1)-labeled categories, Free(f)\mathsf{Free}(f) is not a morphism of (R,×,1)(R, \times, 1)-labeled categories.
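The multiplicative labeling of composite paths above can be sketched in a few lines; this is a minimal illustration (my own encoding, not the paper's), taking RR to be the sign monoid {+1, -1} under multiplication:

```python
# Sketch (assumed representation): the label of a composite path in
# Free(G) is the product, in (R, ×, 1), of the labels of its edges.
from functools import reduce

def path_label(edge_labels, path, one=1, mul=lambda x, y: x * y):
    """Multiply edge labels along a path, starting from the unit of R."""
    return reduce(mul, (edge_labels[e] for e in path), one)

# Polarities as the multiplicative monoid {+1, -1}:
labels = {"e1": 1, "e2": -1, "e3": -1}
print(path_label(labels, ["e1", "e2", "e3"]))  # 1 · (-1) · (-1) = 1
print(path_label(labels, []))                  # empty path gets the unit 1
```

Note that an identity morphism (empty path) gets the unit of the monoid, as the functoriality of the labeling requires.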

My question is the following:

What if we define rig-labeled graphs as above and do not define "rig-labeled categories" the way we have defined monoid-labeled categories at all? Then we will not face any "infinite/undefined sum issues", and we can still describe polarities. However, one bad thing is that while defining polarities we cannot use the free-forgetful adjunction between graphs and categories. Are we losing anything qualitatively with these limitations?

More precisely:
Let (R,×,1,+,0)(R, \times,1, +,0) be a rig. Define a rig-labeled graph as functor P ⁣:Free(G)B(R,×,1)P \colon \mathsf{Free}(G) \to B(R, \times, 1). Now, we define a morphism from P ⁣:Free(G)B(R,×,1)P \colon \mathsf{Free}(G) \to B(R, \times, 1) to P ⁣:Free(G)B(R,×,1)P' \colon \mathsf{Free}(G') \to B(R, \times, 1) as a functor Free(f) ⁣:Free(G)Free(G)\mathsf{Free}(f) \colon \mathsf{Free}(G) \to \mathsf{Free}(G') induced from an additive morphism of (R,+,0)(R, +, 0)-labeled graphs f ⁣:GGf \colon G \to G'. Now, I think with these definitions, the collection of rig-labeled graphs will form a category. However, I think it is essential to see how nice this category is!!

view this post on Zulip John Baez (Apr 26 2025 at 23:30):

Yes, that's important to study!

On a different note, my friend Marco Grandis put out a paper on directed topology; in the first page he explains two concepts of directed space, which he calls a d-space and (more generally) a c-space. Both these kinds of space have an obvious concept of "fundamental category". So, we could abelianize that and get a "directed first homology monoid". But I don't actually want to work on that now.

He wrote:

The fourth and last paper in my series on ’The topology of critical processes’ is published, in Cahiers (as the previous parts):

The topology of critical processes, IV (The homotopy structure)
M. Grandis

Abstract. Directed Algebraic Topology studies spaces equipped with a form of direction, to include models of non-reversible processes. In the present extension we also want to cover 'critical processes', indecomposable and un-stoppable - from the change of state in a memory cell to the action of a thermostat.

The previous parts of this series introduced controlled spaces, examining how they can model critical processes in various domains, and studied their fundamental category. Here we deal with their formal homotopy theory.

Cah. Topol. Géom. Différ. Catég. 66 (2025), no. 2, 46-93.
https://cahierstgdc.com/index.php/volume-lxvi-2025/

view this post on Zulip Adittya Chaudhuri (Apr 27 2025 at 04:39):

John Baez said:

Yes, that's important to study!

Thanks!!

view this post on Zulip Adittya Chaudhuri (Apr 27 2025 at 04:47):

John Baez said:

On a different note, my friend Marco Grandis put out a paper on directed topology; in the first page he explains two concepts of directed space, which he calls a d-space and (more generally) a c-space. Both these kinds of space have an obvious concept of "fundamental category". So, we could abelianize that and get a "directed first homology monoid". But I don't actually want to work on that now.

Thanks!! I will check the construction of the fundamental categories in Marco Grandis' paper.

view this post on Zulip Adittya Chaudhuri (Apr 27 2025 at 05:39):

Adittya Chaudhuri said:

Let (R,×,1,+,0)(R, \times,1, +,0) be a rig. Define a rig-labeled graph as functor P ⁣:Free(G)B(R,×,1)P \colon \mathsf{Free}(G) \to B(R, \times, 1). Now, we define a morphism from P ⁣:Free(G)B(R,×,1)P \colon \mathsf{Free}(G) \to B(R, \times, 1) to P ⁣:Free(G)B(R,×,1)P' \colon \mathsf{Free}(G') \to B(R, \times, 1) as a functor Free(f) ⁣:Free(G)Free(G)\mathsf{Free}(f) \colon \mathsf{Free}(G) \to \mathsf{Free}(G') induced from an additive morphism of (R,+,0)(R, +, 0)-labeled graphs f ⁣:GGf \colon G \to G'. Now, I think with these definitions, the collection of rig-labeled graphs will form a category. However, I think it is essential to see how nice this category is!!

I think I figured it out. I am trying to write it down.

Let (R,×,1,+,0)(R, \times,1, +,0) be a rig. I am denoting the category of (R,×,1,+,0)(R, \times,1, +,0) labeled graphs as R+,×GphR_{+,\times}\mathsf{Gph}, whose description is given above. Now, I claim there is an isomorphism of categories between (R,+,0)Gph(R, +, 0)-\mathsf{Gph} and R+,×GphR_{+,\times}\mathsf{Gph} described by the following functor:

F ⁣:(R,+,0)GphR+,×GphF \colon (R, +, 0)-\mathsf{Gph} \to R_{+,\times}\mathsf{Gph}, defined

If I am not misunderstanding anything, then all the nice things that we can do with (R,+,0)Gph,(R, +, 0)-\mathsf{Gph}, we can also do with R+,×GphR_{+,\times}\mathsf{Gph}.

Now, I think we can construct a symmetric lax monoidal pseudofunctor T ⁣:FinSetCat\mathcal{T} \colon \mathsf{FinSet} \to \mathsf{Cat} for (R,+,0)(R, +, 0)- labeled graphs, and thus it would allow us to have a compositional framework for R+,×R_{+,\times}-labeled graphs using decorated cospans.

view this post on Zulip Adittya Chaudhuri (Apr 28 2025 at 05:39):

If I have not made any fundamental mistakes while thinking about rig-labeled graphs above, I am trying to see it from a little general perspective:

Consider a rig (R,×,1,+,0)(R, \times,1, +,0). Now, define a category Gph/(R,×,+)\mathsf{Gph}/(R, \times, +) whose

Note that we could have also replaced P ⁣:Free(G)B(R,×,1)P \colon \mathsf{Free}(G) \to B(R, \times, 1) by P ⁣:Free(G)B(R,+,0)P \colon \mathsf{Free}(G) \to B(R, +, 0) and adjusted the rest accordingly to obtain a corresponding category, but in each case I claim there is an isomorphism of categories between Gph/GR\mathsf{Gph}/GR and Gph/(R,×,+)\mathsf{Gph}/(R, \times, +) described by the following functor:

F ⁣:Gph/GRGph/(R,×,+)F \colon \mathsf{Gph}/GR \to \mathsf{Gph}/(R, \times, +), defined

But in this setting, I think, there is no natural way to use the property of (R,+,0)(R, +,0) in labeling the morphisms in Gph/(R,×,+) \mathsf{Gph}/(R, \times, +). However, we have seen that we can actually use both (R,×,1)(R, \times,1) and (R,+,0)(R, +,0) in labeling the morphisms in R+,×GphR_{+,\times}\mathsf{Gph}.

view this post on Zulip John Baez (Apr 28 2025 at 05:49):

Adittya Chaudhuri said:

Let (R,×,1,+,0)(R, \times,1, +,0) be a rig. Define a rig-labeled graph as functor P ⁣:Free(G)B(R,×,1)P \colon \mathsf{Free}(G) \to B(R, \times, 1). Now, we define a morphism from P ⁣:Free(G)B(R,×,1)P \colon \mathsf{Free}(G) \to B(R, \times, 1) to P ⁣:Free(G)B(R,×,1)P' \colon \mathsf{Free}(G') \to B(R, \times, 1) as a functor Free(f) ⁣:Free(G)Free(G)\mathsf{Free}(f) \colon \mathsf{Free}(G) \to \mathsf{Free}(G') induced from an additive morphism of (R,+,0)(R, +, 0)-labeled graphs f ⁣:GGf \colon G \to G'.

Now, I think with these definitions, the collection of rig-labeled graphs will form a category.

I'm trying to understand the interplay of ×\times and ++ here. Your definition of objects seems to use B(R,×,1)B(R, \times, 1), while your definition of morphism uses (R,+,0)(R, +, 0). That's a bit strange, though interesting.

However, notice that morphisms

P ⁣:Free(G)B(R,×,1)P \colon \mathsf{Free}(G) \to \mathsf{B}(R, \times, 1)

correspond bijectively to functions

P ⁣:EGRP \colon E_G \to R

from the set EGE_G of edges of GG to the underlying set of RR. That's because a functor from Free(G)\mathsf{Free}(G) to B(R,×,1)\mathsf{B}(R, \times, 1) is determined by its value on edges of GG, and these values can be arbitrary since the edges of GG freely generate the category Free(G)\mathsf{Free}(G).

So, unless I'm confused, for any rig RR the category of rig-labeled graphs you just described is equivalent to the one where:

And this in turn is the category of (R,+,0)(R,+,0)-labeled graphs and additive morphisms!

Adittya Chaudhuri said:

I think I figured it out. I am trying to write it down.

Let (R,×,1,+,0)(R, \times,1, +,0) be a rig. I am denoting the category of (R,×,1,+,0)(R, \times,1, +,0) labeled graphs as R+,×GphR_{+,\times}\mathsf{Gph}, whose description is given above. Now, I claim there is an isomorphism of categories between (R,+,0)Gph(R, +, 0)-\mathsf{Gph} and R+,×GphR_{+,\times}\mathsf{Gph} ...

Okay, good, we agree! I realized this while wondering whether your initial definition of (R,×,1,+,0)(R, \times,1, +,0) really used the multiplicative structure. While it did superficially, we've seen that only the additive structure of the rig really matters.

view this post on Zulip Adittya Chaudhuri (Apr 28 2025 at 05:52):

Yes, I also feel that the use of (R,×,1)(R, \times, 1) is superficial, about which I am not feeling that good. But I am also feeling that it serves the purpose of "polarities" in a way.

view this post on Zulip John Baez (Apr 28 2025 at 05:52):

Given that the multiplication of RR plays no essential role in the definition of a rig-valued category, we can wonder what it's good for. Here we should turn to examples and think about what we want to do with them!

For starters consider the rig of polarities Z/2\mathbb{Z}/2, which is actually a ring.

view this post on Zulip John Baez (Apr 28 2025 at 05:55):

Okay, I need to think about this a while.

view this post on Zulip Adittya Chaudhuri (Apr 28 2025 at 06:11):

Somehow, after the semiautomaton example, I started thinking of polarities as a kind of "collection of actions of a monoid (L,×,1)(L, \times, 1) on a set VV" which can be expressed as an LL-labeled category. (Here the action law of the monoid LL on VV is translated into the composition law of the corresponding LL-labeled category.)

Here, I said "collection of actions" because our labeled graphs can contain multiple edges between a pair of vertices.

The above is just an intuitive feeling I developed. I may be completely misunderstanding.
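The semiautomaton intuition above can be made concrete with a tiny toy example (my own, not from the discussion): a transition function gives an action of the free monoid on input letters on a set of states.

```python
# Toy semiautomaton: states {s0, s1}, inputs {a, b}. Feeding a word of
# inputs to a state is exactly a (free) monoid action on the state set.

def run(delta, state, word):
    """Apply the letters of 'word' to 'state' one at a time via delta."""
    for letter in word:
        state = delta[(state, letter)]
    return state

delta = {("s0", "a"): "s1", ("s1", "a"): "s0",
         ("s0", "b"): "s0", ("s1", "b"): "s1"}
print(run(delta, "s0", "aab"))  # s0 -a-> s1 -a-> s0 -b-> s0
```

The action law (running a concatenated word equals running the pieces in order) corresponds to the composition law of the associated labeled category.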

view this post on Zulip John Baez (Apr 28 2025 at 07:27):

I have something to say about that, but it's bed time so I'll just say that around Definition 2.25 I gave a better explanation of both the 0th and 1st homology monoid of a graph. I hadn't said anything about the 0th homology, but while less exciting it's still important, especially later when we talk about Mayer-Vietoris.

view this post on Zulip John Baez (Apr 28 2025 at 07:29):

I've temporarily moved \end{document} forward because Overleaf has decided to reduce the amount of time it allows us to compile a document in the free version! We'll need a better solution in the long run.

view this post on Zulip Adittya Chaudhuri (Apr 28 2025 at 07:52):

John Baez said:

I have something to say about that, but it's bed time so I'll just say that around Definition 2.25 I gave a better explanation of both the 0th and 1st homology monoid of a graph. I hadn't said anything about the 0th homology, but while less exciting it's still important, especially later when we talk about Mayer-Vietoris.

Thank you. I will read the portion (you mentioned) in the Overleaf file.

view this post on Zulip Adittya Chaudhuri (Apr 28 2025 at 08:16):

John Baez said:

I've temporarily moved \end{document} forward because Overleaf has decided to reduce the amount of time it allows us to compile a document in the free version! We'll need a better solution in the long run.

Yes, I agree.

view this post on Zulip Adittya Chaudhuri (Apr 28 2025 at 11:59):

I think I realized certain things about rig-labeled graphs. I am trying to write down my thoughts below:

In short, every "type" of transformation of a graph canonically induces a "similar type" of transformation on the associated graph with polarity.

I am attaching a picture where I picturised the situation.
polaritytransformation.png

view this post on Zulip Adittya Chaudhuri (Apr 28 2025 at 14:47):

John Baez said:

I have something to say about that, but it's bed time so I'll just say that around Definition 2.25 I gave a better explanation of both the 0th and 1st homology monoid of a graph. I hadn't said anything about the 0th homology, but while less exciting it's still important, especially later when we talk about Mayer-Vietoris.

I enjoyed reading the portion. Now, the definitions of the zeroth and first homology monoids look very natural (from the point of view of generalizing from abelian groups to commutative monoids). Only a very minor point: if I am not misunderstanding, I think the exact sequence before Example 2.26 is written in the reverse order.

view this post on Zulip John Baez (Apr 28 2025 at 14:48):

Adittya Chaudhuri said:

John Baez said:

I've temporarily moved \end{document} forward because Overleaf has decided to reduce the amount of time it allows us to compile a document in the free version! We'll need a better solution in the long run.

Yes, I agree.

Do you do Github? Or Dropbox?

view this post on Zulip Adittya Chaudhuri (Apr 28 2025 at 14:50):

I used to use Dropbox. For some years, I have not used it. But I can try Dropbox.

view this post on Zulip Adittya Chaudhuri (Apr 29 2025 at 11:42):

Adittya Chaudhuri said:

John Baez said:

I think of (1) and (2) as serving related purposes, and we have barely begun to explore what we can do with them.

Now for the relation: a cycle in an LL-labeled graph GG has feedback L\ell \in L if and only if there is a Kleisli map from the walking loop with feedback equal to \ell to (G,)(G,\ell), that maps the loop to this cycle.

Today, I was thinking about possible relationships between Kleisli morphisms and directed graph homology. I am trying to write down my thoughts below:

For a monoid RR, a monic Kleisli morphism ϕ ⁣:(X,X)UFree(G,G)\phi \colon (X, \ell_{X} ) \to U \circ \mathsf{Free}(G,\ell_{G} ) determines an occurrence of an (X,X)(X, \ell_{X})-shaped graph in the graph (G,G)(G,\ell_{G} ). Now, from the point of view of causal loop diagrams, we are mostly interested in a particular shape, i.e. when XX is a directed labeled path (graph-theoretic) in GG. Let us denote the set of occurrences of all directed labeled paths in GG by P(G)R\mathsf{P}(G)_{R}.

On the other hand, the monoid C1(G,N)C_1(G, \mathbb{N}) of 1-chains of GG gives us the information of all the 1-chains in GG, which, in particular, also includes the information of all directed paths (graph-theoretic) in GG. Now (when RR is commutative), the labeling map G ⁣:ER\ell_{G} \colon E \to R defines a monoid homomorphism (holonomy) ~G ⁣:C1(G,N)R\tilde{\ell}_{G} \colon C_1(G, \mathbb{N}) \to R, which contains the information of all labeled 1-chains in (G,G)(G, \ell_{G}) and hence, in particular, the information of all labeled directed paths in (G,G)(G, \ell_{G}).

Now I want to explore the relationship between the elements ϕ ⁣:(X,X)(G,G)\phi \colon (X, \ell_{X}) \to (G, \ell_{G}) of P(G)R\mathsf{P}(G)_{R} and the holonomy map ~ ⁣:C1(G,N)R\tilde{\ell} \colon C_1(G, \mathbb{N}) \to R.

More precisely, let us consider a directed labeled path

γ=v0G(e1)v1G(e2)G(en)vn\gamma=v_0 \xrightarrow{\ell_{G}(e_1)} v_1 \xrightarrow{\ell_{G}(e_2)} \cdots \xrightarrow{\ell_{G}(e_n)} v_n.

Hence, by definition, the holonomy ~G(γ)=G(e1)G(e2)G(en)\tilde{\ell}_{G}(\gamma)=\ell_{G}(e_1)\, \ell_{G}(e_2) \cdots \ell_{G}(e_{n}).

Now, consider a monic Kleisli morphism ϕ ⁣:(γ,γ)(G,G)\phi \colon (\gamma, \ell_{\gamma}) \to (G, \ell_{G}), where γ\gamma is considered as a directed subgraph of GG and the labeling map γ\ell_{\gamma} is the restriction of G\ell_{G} on the subgraph γ\gamma. Since ϕ\phi is monic, the image ϕ((γ,γ))\phi\big((\gamma, \ell_{\gamma}) \big) looks like the following:

ϕ((γ,γ)):=v0G(e11)v11G(e12)v1k11G(e1k1)v1vn1G(en1)vn1vnkn1G(enkn)vn\phi\big((\gamma, \ell_{\gamma}) \big):=v_0 \xrightarrow{\ell_{G}(e^{1}_1)}v^{1}_{1} \xrightarrow{\ell_{G}(e^{2}_1)} \cdots v^{k_1-1}_1\xrightarrow{\ell_{G}(e^{k_1}_1)}v_1 \cdots v_{n-1} \xrightarrow{\ell_{G}(e^{1}_{n})}v^{1}_{n} \cdots v^{k_n-1}_n\xrightarrow{\ell_{G}(e^{k_n}_{n})} v_n

such that holonomy remains invariant over each edge eiEe_{i} \in E or, more precisely, the following holds for each i=1,2,,ni=1,2, \ldots, n:

G(ei1)G(ei2)G(eiki1)G(eiki)=G(ei)\ell_{G}(e^{1}_{i}) \ell_{G}(e^{2}_{i}) \cdots \ell_{G}(e^{k_i-1}_{i}) \ell_{G}(e^{k_i}_{i}) = \ell_{G}(e_i).

Now, consider the set hommonic Kleisli((γ,γ),(G,G))\hom_{\text{monic Kleisli}} \Big((\gamma, \ell_{\gamma}), (G, \ell_{G}) \Big), the set of all monic Kleisli morphisms from (γ,γ)(\gamma, \ell_{\gamma}) to (G,G)(G, \ell_{G}) .

The following are my realisations regarding the relationship between homology and Kleisli morphisms:

The above ideas can also be restricted to loops/cycles, and then we can get similar results w.r.t. loops or elements in H1(G,N)H_1(G, \mathbb{N}).

The above description may not be anything interesting (or may contain mistakes). However, today I was thinking about these things.
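The holonomy of a 1-chain described above can also be sketched computationally. This is my own toy encoding (not from the discussion): a 1-chain is a dict of edge multiplicities in N\mathbb{N}, and RR is a commutative monoid written multiplicatively.

```python
# Sketch: holonomy as a monoid homomorphism C_1(G, N) -> R, assuming R
# is commutative. A 1-chain is a dict sending edge names to multiplicities.

def holonomy(edge_labels, chain, one=1):
    """Multiply each edge label to the power of its multiplicity."""
    h = one
    for edge, mult in chain.items():
        h *= edge_labels[edge] ** mult
    return h

labels = {"e1": -1, "e2": -1, "e3": 1}  # polarities in {+1, -1}
print(holonomy(labels, {"e1": 1, "e2": 1}))  # (-1)·(-1) = 1
print(holonomy(labels, {"e1": 3}))           # (-1)^3 = -1
```

Commutativity of RR is what makes this well-defined on chains, since a chain forgets the order of its edges.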

view this post on Zulip John Baez (May 02 2025 at 22:08):

I'm back in action, having spent one night in Edinburgh. I think your idea of relating Kleisli morphisms to holonomies of paths is interesting. We should think about it more and see if there's anything surprising we can do with it.

view this post on Zulip John Baez (May 02 2025 at 22:11):

I also have a bunch of new thoughts on 'emergent cycles' formed when gluing together two directed graphs, which I will try to write down here.

view this post on Zulip John Baez (May 02 2025 at 23:15):

So, let me start. I want to study how "the whole is greater than the sum of the parts" when we combine two systems to form a larger system. But I want to study this in an extremely simple context, where a system is a graph. The vertices of this graph represent entities of various kinds. The edges represent how one entity can directly affect another. Paths represent how one entity can indirectly affect another. Loops represent how one entity can indirectly affect itself, so we sometimes call them 'feedback loops'.

We've talked about a setup to quantify the feedback around a loop in a graph where the edges are labeled by elements of a commutative monoid. I won't recall it here. I'll just talk about loops, and simple loops (which are loops that don't cross themselves, so they can't be broken into smaller loops), and cycles (which are very roughly speaking linear combinations of simple loops - the real definition is given here).

I want to understand the cycles, and also the loops, and also the simple loops, in a graph formed by gluing together two graphs XX and YY along some set of vertices. Some cycles (resp. loops, simple loops) will come from cycles (resp. loops, simple loops) in XX, and some will come from cycles (resp. loops, simple loops) in YY. The rest I'll call emergent, since they appear only when we glue XX and YY together. I want to understand the emergent ones!

A graph is a set of edges EE, a set of vertices VV and source and target maps s,t:EVs, t: E \to V. A map of graphs consists of a map sending edges to edges and a map sending vertices to vertices, compatible with the source and target maps. The resulting category of graphs, Gph\mathsf{Gph}, is a presheaf category.

Let's consider the pushout in Gph\mathsf{Gph} of the following diagram

XiAjY X \xleftarrow{i} A \xrightarrow{j} Y

where AA is a graph with no edges, only vertices, and ii and jj are monic. I'll call this pushout XYX \cup Y and write A=XYA = X \cap Y.

So, intuitively speaking, we have a graph XYX \cup Y that's the union of two subgraphs XX and YY, and their intersection XYX \cap Y has no edges, just vertices.
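In this special case (AA has only vertices, ii and jj monic), the pushout is simply a union of vertex sets and edge sets once shared vertices are given the same names. A minimal sketch of that special case, in my own dict-based encoding rather than the general Gph\mathsf{Gph} pushout:

```python
# Gluing X and Y along shared vertices: since X ∩ Y has no edges, the
# pushout X ∪ Y is the union of vertex sets and the union of edge sets.
# Graphs are dicts with 'V' (vertex set) and 'E' (set of (src, tgt, name)).

def glue(X, Y):
    return {"V": X["V"] | Y["V"], "E": X["E"] | Y["E"]}

X = {"V": {"a", "b"}, "E": {("a", "b", "e")}}
Y = {"V": {"b", "c"}, "E": {("b", "c", "f"), ("c", "b", "g")}}
XY = glue(X, Y)          # here X ∩ Y is the edgeless graph on {b}
print(sorted(XY["V"]))   # ['a', 'b', 'c']
print(len(XY["E"]))      # 3
```

Giving the shared vertices equal names plays the role of the monic maps ii and jj identifying them in the pushout.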

view this post on Zulip John Baez (May 02 2025 at 23:23):

Now, any loop γ\gamma in XYX \cup Y will either

1) stay in XX or stay in YY

or

2) go back and forth between XX and YY some number of times, say nn.

We can think of the first case as the case where n=0n = 0.
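Counting the number nn can be sketched as follows. Since XYX \cap Y has no edges, every edge of a loop lies entirely in XX or entirely in YY, so (a simplifying encoding of my own) a loop can be recorded as a cyclic list of edge sides:

```python
# Sketch: count how many times a loop crosses from X into Y. On a loop
# the number of X→Y transitions equals the number of Y→X transitions.

def crossings(sides):
    """sides: cyclic list with one entry 'X' or 'Y' per edge of the loop."""
    n = len(sides)
    return sum(1 for i in range(n)
               if sides[i] == "X" and sides[(i + 1) % n] == "Y")

print(crossings(["X", "X", "Y", "X", "Y", "Y"]))  # 2
print(crossings(["X", "X", "X"]))                 # 0: the loop stays in X
```

Because the list is read cyclically, the count is independent of where around the loop one starts.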

view this post on Zulip John Baez (May 02 2025 at 23:25):

I want to be more precise and mathematical, and I'd like to generalize this classification of loops to a classification of cycles. But now it's getting late, so I'll quit here.

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 03:39):

John Baez said:

I'm back in action, having spent one night in Edinburgh. I think your idea of relating Kleisli morphisms to holonomies of paths is interesting. We should think about it more and see if there's anything surprising we can do with it.

Nice!! Thanks. Yes!!

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 03:46):

John Baez said:

I'd like to generalize this classification of loops to a classification of cycles.

I find this idea very interesting!! I am thinking in terms of your previous discussion #theory: applied category theory > Graphs with polarities @ 💬.

view this post on Zulip John Baez (May 03 2025 at 08:23):

Let me first start by digging into the classification of loops in X+YX + Y a bit more. Consider any loop

v0e1v1e2envn=v0 v_0 \xrightarrow{e_1} v_1 \xrightarrow{e_2} \cdots \xrightarrow{e_n} v_n = v_0

It will have certain vertices viv_i that lie in the intersection XYX \cap Y (which remember is a graph consisting solely of vertices, no edges). These vertices are of four distinct types:

  1. going from XX to YY: the previous vertex vi1v_{i-1} is in XX but not YY, the next vertex vi+1v_{i+1} is in YY but not XX
  2. going from YY to XX: the previous vertex vi1v_{i-1} is in YY but not XX, the next vertex vi+1v_{i+1} is in XX but not YY
  3. staying in XX: the previous vertex vi1v_{i-1} is in XX but not YY, the next vertex vi+1v_{i+1} is in XX but not YY
  4. staying in YY: the previous vertex vi1v_{i-1} is in YY but not XX, the next vertex vi+1v_{i+1} is in YY but not XX

This takes a little thought to check.

Puzzle. Why is it impossible for vi1v_{i-1} or vi+1v_{i+1} to be in XYX \cap Y?

view this post on Zulip John Baez (May 03 2025 at 08:28):

I don't really care about vertices of type 3 and 4, since we're trying to understand how the loop moves from XX to YY or YY to XX, not how it stays in XX or stays in YY.

view this post on Zulip John Baez (May 03 2025 at 08:34):

Just to make sure you're paying attention:

Puzzle. Why is there an equal number of vertices of type 1 and of type 2?

view this post on Zulip John Baez (May 03 2025 at 08:37):

As we go around the loop, keeping track only of which vertices are of type 1 and which are of type 2, we'll see that they alternate: after a vertex of type 1 there must be one of type 2, and vice versa.

view this post on Zulip John Baez (May 03 2025 at 08:40):

There is more to say about this, but I'm not really interested in analyzing the behavior of one specific loop. I'm more interested in the set of all loops. So let me turn to that.

view this post on Zulip John Baez (May 03 2025 at 08:42):

We've seen the set LL of all loops in X+YX+Y is N\mathbb{N}-graded:

L=nNLn L = \bigcup_{n \in \mathbb{N}} L_n

where loops in LnL_n have nn vertices of type 1 and nn of type 2: that is, they cross from XX into YY nn times, and cross back from YY back into XX nn times.

view this post on Zulip John Baez (May 03 2025 at 08:45):

The loops in LnL_n with n>0n \gt 0 are the 'emergent' loops. The big question is whether we can say anything nontrivial about these. For example: can we find conditions that guarantee that emergent loops exist, or don't exist? Can we find an efficient algorithm to count or otherwise understand the emergent loops?

view this post on Zulip John Baez (May 03 2025 at 08:46):

So far I can only think of an 'obvious' condition that rules out emergent loops.

If there's an emergent loop:

Also:

view this post on Zulip John Baez (May 03 2025 at 09:14):

In fact we can classify loops in XYX \cup Y not only by how many vertices of type 1 and type 2 they have, but by how many times they go through each passage from XX into YY, and how many times they use each passage from YY to XX. So we get a much more refined grading of the set of loops.

view this post on Zulip Jade Master (May 03 2025 at 10:10):

I've been following along by email, and I've realized that the problem you are working on is something I've actually thought about quite a bit before. Are you dealing with Set\mathsf{Set}-enriched graphs? Or perhaps you want other enrichments. Regardless, you can understand the emergent loops in your graph pushout as follows:
Let this graph be GG
image.png
The main insight is that the paths of this graph grade the paths in your pushout:
Paths(X+Y)fPaths(G)A(f) Paths(X+Y) \cong \sum_{f \in Paths(G)} A(f)

For now just think of A(f)A(f) as a dependent set A:Paths(G)SetA : Paths(G) \to \mathsf{Set}. What I like about this result is that it explains why the case of two graphs is not special. For three graphs, you could let GG have three vertices, self loops, and edges going between all pairs (and I think this should generalize to the pushout of nn graphs). Also, this result explains why the emergent paths are graded by N\mathbb{N}: it's because the paths of GG are graded by their length. I know that you are interested in loops specifically and not just paths? Perhaps there is a variant of this result specialized to loops. Now I will do my best to explain what AA actually is: it is the loose morphism component of a double functor A:FGSetMat\mathcal{A} : FG \to \mathsf{SetMat} where

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 11:22):

John Baez said:

Puzzle. Why is it impossible for vi1v_{i-1} or vi+1v_{i+1} to be in XYX \cap Y?

Thanks. I am not able to find where I am making a mistake in the attached example.
counterextotype1234.PNG

In the attached example, both v2v_2 and v3v_3 lie in XYX \cap Y. If I am understanding correctly, then it is an example of a graph where none of the vertices in the intersection are of Type 1,2,3 and 4.

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 11:32):

Jade Master said:

Are you dealing with Set\mathsf{Set}-enriched graphs?

Yes, I think so.

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 11:44):

John Baez said:

So far I can only think of an 'obvious' condition that rules out emergent loops.

If there's an emergent loop:

Also:

Yes, I agree.

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 11:49):

John Baez said:

In fact we can classify loops in XYX \cup Y by how many times they go through each passage from XX into YY, and how many times they use each passage from YY to XX. So we get a much more refined grading of the set of loops.

I agree. If I am not misunderstanding, the grading on simple loops that you discussed before https://categorytheory.zulipchat.com/#narrow/channel/229156-theory.3A-applied-category-theory/topic/Graphs.20with.20polarities/near/513523364 precisely describes this idea?

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 11:51):


Jade Master said:

I've been following along by email, and I've realized that the problem you are working on is something I've actually thought about quite a bit before.

Thank you!! I am trying to understand your ideas!!

view this post on Zulip John Baez (May 03 2025 at 11:54):

Adittya Chaudhuri said:

John Baez said:

Puzzle. Why is it impossible for vi1v_{i-1} or vi+1v_{i+1} to be in XYX \cap Y?

Thanks. I am not able to find where I am making a mistake in the attached example.
counterextotype1234.PNG

In the attached example, both v2v_2 and v3v_3 lie in XYX \cap Y. If I am understanding correctly, then it is an example of a graph where none of the vertices in the intersection are of Type 1,2,3 and 4.

Thanks! You're right, I was confused!

I think I have a better way of thinking about all this stuff now, which evolved today while I was eating eggs Benedict at Kilimanjaro Coffee, my favorite place for breakfast in Edinburgh.

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 11:56):

John Baez said:

I think I have a better way of thinking about all this stuff now, which evolved today while I was eating eggs Benedict at Kilimanjaro Coffee, my favorite place for breakfast in Edinburgh.

Thanks!! The breakfast sounds very tasty!! Is Kilimanjaro a type of coffee? Or a cafe in Edinburgh?

view this post on Zulip John Baez (May 03 2025 at 11:57):

Kilimanjaro is a famous mountain near Kenya, and a lot of good coffee grows in Kenya, and Kilimanjaro Coffee is a cafe in Edinburgh.

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 11:58):

I see.. Thanks!!

view this post on Zulip John Baez (May 03 2025 at 12:01):

I just learned that Kilimanjaro is in Tanzania... but it's visible from a game park in Kenya:

Kilimanjaro viewed from Amboseli

Amazingly it has glaciers, though 80% of them have melted during the 20th century, and people expect them to disappear entirely during this century.

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 12:02):

John Baez said:

Amazingly it has glaciers, though 80% of them have melted during the 20th century, and people expect them to disappear entirely during this century.

This is really sad... I really wish we could change it, maybe by changing our lifestyle?

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 12:03):

John Baez said:

I just learned that Kilimanjaro is in Tanzania... but it's visible from a game park in Kenya:

Interesting!!

view this post on Zulip John Baez (May 03 2025 at 12:09):

Jade Master said:

I've been following along by email, and I've realized that the problem you are working on is something I've actually thought about quite a bit before.

Yes! I mentioned your work a while ago in this mammoth thread. The idea of gluing two graphs together and paths that zig-zag back and forth between the two graphs is visible in our old paper Open Petri Nets, and you developed it enormously after that. I wanted to apply your ideas here. So I'm glad you're joining the conversation.

Are you dealing with Set\mathsf{Set}-enriched graphs?

At first, yes. Then we're looking at Set\mathsf{Set}-enriched graphs with edges labeled by elements of a set LL, and if I'm not confused, these are the same as graphs enriched in SetL\mathsf{Set}^L.

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 12:17):

John Baez said:

Amazingly it has glaciers, though 80% of them have melted during the 20th century, and people expect them to disappear entirely during this century.

For many years, this article https://johncarlosbaez.wordpress.com/2015/03/27/spivak-part-1/ has motivated me in many ways. I really like this line

Can we expect or hope that our species as a whole will make decisions that are healthy, like keeping the temperature down, given the information we have available? Are we in the driver’s seat, or is our ship currently in the process of spiraling out of our control?

When you said that the glaciers at Kilimanjaro will melt entirely by the end of this century, I recalled this article.

view this post on Zulip John Baez (May 03 2025 at 12:46):

Interesting! That article was written by David Spivak, but those questions are ones I wonder about often. Right now I feel that we are spiraling out of control - politically, economically and ecologically. I'm trying to accept the possibility that our civilization may crash. For some reason many of us feel that a history of unending progress is necessary for the universe to be okay: anything else would be tragic. But just as a finite lifetime for an individual is okay, so is a finite history for a species.

Anyway, this is a huge digression! I want to talk about Jade's ideas and how they interact with my thoughts at Kilimanajaro Coffee. But also I want to finish writing the paper.

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 13:12):

John Baez said:

Interesting! That article was written by David Spivak, but those questions are ones I wonder about often. Right now I feel that we are spiraling out of control - politically, economically and ecologically. I'm trying to accept the possibility that our civilization may crash. For some reason many of us feel that a history of unending progress is necessary for the universe to be okay: anything else would be tragic. But just as a finite lifetime for an individual is okay, so is a finite history for a species.

Thanks for explaining!! I am also coming to realise the near-inevitability of the times ahead that you described, although it is very hard to accept!!

view this post on Zulip John Baez (May 03 2025 at 13:18):

People often don't accept big problems until it's too late, or almost too late. Pretending problems aren't real reduces pain in the short term.

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 13:52):

John Baez said:

People often don't accept big problems until it's too late, or almost too late. Pretending problems aren't real reduces pain in the short term.

I agree!!

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 15:45):

Jade Master said:

What I like about this result is that it explains why the case of two graphs is not special. For three graphs, you could let GG have three vertices, self loops, and edges going between all pairs (and I think this should generalize to the pushout of nn graphs).

Interesting. In a way is it saying that the same argument can be extended for structured multicospans? (when we have n3n \geq 3)

view this post on Zulip John Baez (May 03 2025 at 16:26):

Almost everything about structured cospans always generalizes to structured multicospans, but I think that's a bit of a distraction for the present paper. There's a limit to how complicated we want to make things!

view this post on Zulip Jade Master (May 03 2025 at 16:36):

Adittya Chaudhuri said:

Jade Master said:

What I like about this result is that it explains why the case of two graphs is not special. For three graphs, you could let GG have three vertices, self loops, and edges going between all pairs (and I think this should generalize to the pushout of nn graphs).

Interesting. In a way is it saying that the same argument can be extended for structured multicospans? (when we have n3n \geq 3)

Well I don't know what a structured multicospan is exactly, but if you have a diagram in the category of graphs which is a bunch of connected cospans, then this argument will apply to that situation.

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 16:39):

Jade Master said:

Well I don't know what a structured multicospan is exactly, but if you have a diagram in the category of graphs which is a bunch of connected cospans, then this argument will apply to that situation.

Thanks!! I understand your point. I also do not know exactly what a structured multicospan is!! I was thinking of it as a device {L(Ai)B}i=1,2,n\lbrace L(A_{i}) \to B \rbrace_{i=1,2, \cdots n}, where B\mathsf{B} is finitely cocomplete, L ⁣:SetBL \colon \mathsf{Set} \to \mathsf{B} is a finitely cocontinuous functor and BObj(B)B \in \rm{Obj}(\mathsf{B}), which has nn interfaces through which it can glue with nn objects in B\mathsf{B} at the same time. However, the argument you gave perfectly matches my intuition.

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 16:40):

John Baez said:

Almost everything about structured cospans always generalizes to structured multicospans, but I think that's a bit of a distraction for the present paper. There's a limit to how complicated we want to make things!

Thanks. I understand and agree with your point.

view this post on Zulip John Baez (May 03 2025 at 17:04):

Here's something Adittya and I were discussing. Say we have an RR-labeled graph where RR is some rig. Say vv and ww are vertices of this graph. What's the difference in meaning between

and

view this post on Zulip John Baez (May 03 2025 at 17:12):

In the framework we're talking about, an edge from vv to ww labeled by rRr \in R means that vv (or the entity corresponding to vv) has a direct effect on ww of type rr.

Thus, one can argue that no edge means that vv has no direct effect on ww.

But one can also make an argument that an edge labeled by 0R0 \in R means that vv has no direct effect on ww.

This leads to a somewhat annoying situation: we have two ways of saying the same thing. I wouldn't say this is a 'contradiction', it's just a bit strange.

view this post on Zulip John Baez (May 03 2025 at 17:20):

Adittya had a different theory: he says that perhaps no edge from vv to ww simply means that we don't know the effect of vv on ww.

This is certainly believable if we consider applications to biochemical regulatory networks. These graphs can have dozens or hundreds of vertices, one for each chemical under consideration, and we usually haven't checked all pairs of these chemicals to see if one directly affects another.

view this post on Zulip John Baez (May 03 2025 at 17:22):

Let's use "u" to mean "we don't know", or the absence of an edge.

view this post on Zulip John Baez (May 03 2025 at 17:25):

In an RR-labeled graph if we have labeled edges

vrvsv v \xrightarrow{r} v' \xrightarrow{s} v''

the indirect effect of vv on vv'' is given by the product rsrs.

But if there's no edge from vv to vv', what is the indirect effect?

We could try to throw the element uu into a rig, forming a new rig R{u}R \cup \{u\}, and declare that

ur=ru=uu r = r u = u

for all rRr \in R. This amounts to taking RR and giving it a new zero, namely uu.
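To make this concrete, here is a minimal sketch (mine, not from the paper) of adjoining such a "we don't know" element uu to the sign monoid {+,0,}\{+,0,-\}, the multiplicative monoid of a small rig of polarities. The adjoined element is absorbing, so it acts as a new zero, and the old 00 still absorbs everything else:

```python
# A minimal sketch, assuming the sign monoid {+, 0, -} as the label rig.
# We adjoin a new absorbing element u meaning "no edge / unknown effect":
# u*r = r*u = u for all r, including the old zero.

U = "u"  # hypothetical marker for "we don't know"

def times(a, b):
    """Multiply labels in {+, 0, -} extended by the absorbing element u."""
    if a == U or b == U:      # the new element absorbs everything, even 0
        return U
    if a == "0" or b == "0":  # the old zero absorbs the remaining labels
        return "0"
    return "+" if a == b else "-"

def path_effect(labels):
    """Indirect effect along a path: the product of its edge labels."""
    effect = "+"              # multiplicative identity
    for label in labels:
        effect = times(effect, label)
    return effect
```

For example, `path_effect(["+", "-"])` gives `"-"`, while a single unknown edge makes the whole path unknown: `path_effect(["+", U, "-"])` gives `"u"`. This illustrates the worry above: once uu appears anywhere along a path, no amount of further composition recovers definite information.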

view this post on Zulip John Baez (May 03 2025 at 17:26):

I'm not sure this is a good idea, though. (There's a lot to say about this idea, but there are different directions to explore, and I'm not sure any of them are good.)

view this post on Zulip John Baez (May 03 2025 at 17:27):

Instead of going further, I'll stop and let @Adittya Chaudhuri talk.

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 17:29):

Thanks.

I was thinking we usually start an investigation about "how aa and bb are related/influencing each other" when we at least "think" there is a chance that aa and bb can possibly be related in a causal way. To distinguish this situation from the situation where "we do not find any reason to investigate a causal relationship between aa and bb", we are distinguishing 00 from "no edge".

I was thinking of 00 as a correlation... that is, the "zeroth" stage of any causality investigation. My assumption was that "a causality investigation" should start "somewhere"... To me, we start the investigation when we see a "correlation". Then, while understanding the correlation, we may or may not come up with a causal relationship. From this perspective, can we think of "0" as an unknown influence/correlation? On the other hand, when we have an "absence of edge", it may mean that there is no correlation.

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 17:34):

In a way, I want to think of an edge labeled by 00 as one which admits the "existence of a direct influence", but the absence of an edge does not admit that "existence".

view this post on Zulip John Baez (May 03 2025 at 17:36):

I am somewhat confused by all those words of yours.

From this perspective can we think of "0" as an unknown influence/correlation?

Here you seem to be using an edge from vv to ww labeled by 00 to mean "we don't know" if vv has a direct effect on ww.

But then someone creating a biochemical regulatory network needs to start by drawing a complete graph and labeling every edge with 00. I don't think they actually do this!

I was using no edge from vv to ww to mean "we don't know" if vv has a direct effect on ww.

Then, if we discover vv has no direct effect on ww we draw an edge from vv to ww labeled by 00.

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 17:37):

Yes, I agree, I meant to say the existence of a direct influence is marked by 00, when we don't know the type of the said influence.

view this post on Zulip John Baez (May 03 2025 at 17:39):

Now it seems you're drawing a distinction between

and

That's okay. But there must also be a third option:

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 17:41):

Yes, I understand the dilemma now: How to distinguish between (1) and (3)?

view this post on Zulip John Baez (May 03 2025 at 17:42):

I should emphasize that there are 2 issues here:

For example there are some particular interesting choices of RR where RR contains one element meaning "an effect, but we don't know what kind it is" and another meaning "no effect".

view this post on Zulip John Baez (May 03 2025 at 17:42):

However, right now I was trying to understand the general theory of RR-labeled graphs. For example, we might have R=RR = \mathbb{R} or R=Z/2R = \mathbb{Z}/2 or R=BoolR = \textsf{Bool}. I want some general philosophy that applies to all of these cases.

view this post on Zulip John Baez (May 03 2025 at 17:44):

I'll quit here - time for dinner. But we're definitely not done with this!

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 17:46):

John Baez said:

But we're definitely not done with this!

Yes, I agree completely.

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 17:49):

John Baez said:

Now it seems you're drawing a distinction between

and

That's okay. But there must also be a third option:

Maybe we have to come up with the right choice of a monoid which covers all these possibilities?

view this post on Zulip David Egolf (May 03 2025 at 18:12):

I don't know if there's a place for this as well: there may or may not be a direct effect of vv on ww, but regardless we are choosing to ignore any such effect.

This kind of choice can be practically helpful when modelling complex systems.

view this post on Zulip David Egolf (May 03 2025 at 18:15):

So there can be a difference between what we know and what we choose to model.

I don't know if this distinction shows up in the applications you are interested in. I could imagine a setting where one wants to model every direct effect one knows about.

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 18:35):

Another question: If we do not know the type of an influence, then does it actually help in "not forgetting" these kinds of influences? What extra things can we benefit from the information about the "existence of an unknown direct influence" over the "ignorance of its existence" ?

If we ignore the existence of an unknown direct influence, then we are left with only two choices:

However, I think this is again going back to what you discussed here #theory: applied category theory > Graphs with polarities @ 💬 .

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 19:04):

We can also think like this:

If there are no indirect influences (paths) from vv to ww, then we may assume there cannot exist any direct influence from vv to ww, and hence an "absence of edge". However, if there exists at least one indirect influence (path) from vv to ww, then we can draw an edge viwv \xrightarrow{i} w labeled ii to denote the possibility of a direct influence.

Maybe I am still misunderstanding!!

view this post on Zulip John Baez (May 03 2025 at 19:13):

David Egolf said:

So there can be a difference between what we know and what we choose to model.

I don't know if this distinction shows up in the applications you are interested in.

Quite possibly nobody using graphs with polarities is sophisticated enough to have formalized this yet. Indeed, I bet nobody except us talks about an arbitrary monoid of polarities. But we're trying to increase the level of sophistication, so your suggestion is interesting. Maybe we can invent a nice monoid or rig that includes an element meaning "a nonzero effect, but we choose to ignore it". Or other subtle concepts.

view this post on Zulip David Egolf (May 03 2025 at 19:21):

I am wondering if choosing to not put an edge from vv to ww could be a way to indicate we're choosing to not model how vv impacts ww.

view this post on Zulip David Egolf (May 03 2025 at 19:22):

And then one could attempt to use some kind of labelled edge even in the case of "we know there is no effect" or "we don't know if there is an effect".

view this post on Zulip David Egolf (May 03 2025 at 19:24):

Anyways, I'll continue to read this thread with interest! :smile:

view this post on Zulip John Baez (May 03 2025 at 19:42):

David Egolf said:

I am wondering if choosing to not put an edge from vv to ww could be a way to indicate we're choosing to not model how vv impacts ww.

More often, not putting an edge from vv to ww means you haven't had time to think about it! We start with no edges, and drawing each edge takes thought.

view this post on Zulip John Baez (May 03 2025 at 19:45):

But this is problematic, because we might also deliberately not draw an edge to indicate the absence of a direct effect.

view this post on Zulip John Baez (May 03 2025 at 19:47):

So, it's possible that current standard practice is suboptimal.

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 19:57):

John Baez said:

Maybe we can invent a nice monoid or rig that includes an element meaning "a nonzero effect, but we choose to ignore it". Or other subtle concepts.

The closest thing that comes to my mind is to choose "absence of an edge" over an "edge labeled by an absorbing element". In a way, an absorbing element is not adding much information to the system? Non-zero absorbing element?

view this post on Zulip John Baez (May 03 2025 at 20:09):

Btw, a monoid can only have at most one absorbing element. But you can always take a monoid and adjoin an absorbing element! If it already had one, the new one absorbs the old one, and the old one absorbs everything else!

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 20:10):

Yes, I agree.

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 20:15):

When you said "non-zero effect" in #theory: applied category theory > Graphs with polarities @ 💬, did you mean a non-absorbing element then?

view this post on Zulip Adittya Chaudhuri (May 03 2025 at 20:26):

I think my mind is not working properly now. I should probably sleep. In a rig, I think 00 is the only absorbing element. So, there cannot be any non-zero absorbing element in a rig-labeled graph.

view this post on Zulip John Baez (May 03 2025 at 22:43):

When you said "non-zero effect" did you mean non-absorbing element then?

I was talking about concepts in system modeling, not mathematics, so I meant "an effect that differs from no effect at all".

In a rig, I think 00 is the only absorbing element.

Yes, since the 00 in a rig is an absorbing element for its multiplicative monoid:

0x=x0=0 for all x 0 x = x 0 = 0 \text{ for all } x

and since an absorbing element in a monoid is always unique (if it exists at all), the only absorbing element in a rig is 00.

Thus, the concept of 'absorbing element' only deserves a special name when discussing monoids, not the multiplicative monoids of rigs.
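For the record, the uniqueness claim used here has a standard one-line proof (not specific to the paper):

```latex
\text{If } a, b \in M \text{ are both absorbing, then } a = a \cdot b = b,
```

using first that aa is absorbing (so ab=aa \cdot b = a) and then that bb is (so ab=ba \cdot b = b).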

view this post on Zulip Adittya Chaudhuri (May 04 2025 at 04:37):

John Baez said:

I was talking about concepts in system modeling, not mathematics, so I meant "an effect that differs from no effect at all".

I see. Thanks!

view this post on Zulip Adittya Chaudhuri (May 04 2025 at 04:40):

John Baez said:

Thus, the concept of 'absorbing element' only deserves a special name when discussing monoids, not the multiplicative monoids of rigs.

Thanks. Yes, I fully agree.

view this post on Zulip John Baez (May 04 2025 at 06:35):

I added another example of a monoid good for monoid-labeled graphs: the terminal monoid! This is a good excuse to describe some issues of interpreting monoid-labeled graphs. See what you think:

The terminal monoid is a monoid containing just one element. This is also known as the trivial group. In the applications at hand we write the group operation as multiplication and call the one element 11, so that 11=11 \cdot 1 = 1. Thus, we call this monoid {1}\{1\}. Any graph becomes a {1}\{1\}-graph in a unique way, by labeling each edge with 11, and this gives an isomorphism of categories

{1}GphGph. \{1\} \mathsf{Gph} \cong \mathsf{Gph} .

We can use a graph to describe causality in at least two distinct ways:

  1. We can use the presence of an edge from a vertex vv to a vertex ww to indicate a way in which (the entity named by) vv has a direct effect on (the entity named by) ww, and the absence of an edge to indicate that vv has no direct effect on ww. Note that even if there is no edge from vv to ww, vv may still have an indirect effect on ww if there is a path of edges from vv to ww.

  2. We can use the presence of an edge from a vertex vv to a vertex ww to indicate a way in which vv has a direct effect on ww, and the absence of an edge to indicate that vv has no currently known direct effect on ww. This interpretation is useful for situations where we start with a graph having no edges, and add an edge each time we discover that one vertex has a direct effect on another.

We can also take at least three different attitudes to the presence of multiple edges from one vertex to another:

  1. We can treat them as redundant, hence unnecessary, allowing us to simplify any graph so that it has at most one edge from one vertex to another.

  2. We can treat them as indicating different ways in which one vertex directly affects another.

  3. We can use the number of edges from vv to ww to indicate the amount by which vv affects ww.

All these subtleties of interpretation can also arise for LL-graphs where LL is any other monoid. We will not mention them each time, but in applications it can be important to clearly fix an interpretation.

view this post on Zulip Adittya Chaudhuri (May 04 2025 at 07:33):

Thanks!! I feel the example of {1}Gph\lbrace 1 \rbrace \mathsf{Gph} seems a very natural and perfect starting point to argue the reason for generalising from graphs to Set-labeled graphs, and then to further monoid-labeled graphs as you already explained.

As you explained, if I am not misunderstanding, then a directed multigraph comes with the following structural types of basic directed influences among its vertices:

[Of course, I am also assuming the two kinds of interpretations of causalitites and three kinds of attitudes that you described above]

Now, I think since {1}GphGph\lbrace 1 \rbrace \mathsf{Gph} \cong \mathsf{Gph} (as you have shown), it is already equipped to describe the above causalities. However, since there is only 11 element that we can use to label edges, we cannot distinguish between the functional types of directed influences. To capture distinct functional types of directed influences along with the structural types, I think it is now necessary to generalise from {1}Gph\lbrace 1 \rbrace \mathsf{Gph} to LGphL\mathsf{Gph} for a non-singleton set LL. However, since we want to capture indirect/composed influences, i.e. influences which are themselves a product of various influences, we naturally need to put some algebraic structure on the labeling set LL. I think from the point of view of "polarity studies", the most natural generalisation is to make LL a monoid, although we can add other algebraic structures (in addition to the monoid structure) to capture the different ways we can add/compose/subtract different types of directed influences.

However, I think {1}Gph\lbrace 1 \rbrace \mathsf{Gph} tells precisely that we have a valid reason to extend our framework from Gph\mathsf{Gph} to LGphL\mathsf{Gph} for realistic purposes.

view this post on Zulip Adittya Chaudhuri (May 04 2025 at 07:39):

John Baez said:

All these subtleties of interpretation can also arise for LL-graphs where LL is any other monoid. We will not mention them each time, but in applications it can be important to clearly fix an interpretation.

Yes, I fully agree.

view this post on Zulip Adittya Chaudhuri (May 04 2025 at 07:54):

John Baez said:

I added another example of a monoid good for monoid-labeled graphs: the terminal monoid! This is a good excuse to describe some issues of interpreting monoid-labeled graphs. See what you think:

We can use a graph to describe causality in at least two distinct ways:

  1. We can use the presence of an edge from a vertex vv to a vertex ww to indicate a way in which (the entity named by) vv has a direct effect on (the entity named by) ww, and the absence of an edge to indicate that vv has no direct effect on ww. Note that even if there is no edge from vv to ww, vv may still have an indirect effect on ww if there is a path of edges from vv to ww.

  2. We can use the presence of an edge from a vertex vv to a vertex ww to indicate a way in which vv has a direct effect on ww, and the absence of an edge to indicate that vv has no currently known direct effect on ww. This interpretation is useful for situations where we start with a graph having no edges, and add an edge each time we discover that one vertex has a direct effect on another.

We can also take at least three different attitudes to the presence of multiple edges from one vertex to another:

  1. We can treat them as redundant, hence unnecessary, allowing us to simplify any graph so that it has at most one edge from one vertex to another.

  2. We can treat them as indicating different ways in which one vertex directly affects another.

  3. We can use the number of edges from vv to ww to indicate the amount by which vv affects ww.

I really like these interpretations. I think it also says that we can actually express quite a lot of causalities by using only the structural aspects of non-labelled graphs/ and the free categories generated on it.

view this post on Zulip John Baez (May 04 2025 at 07:57):

Great!

I added some examples of how we can use maps between monoids.

Proposition. The following assignments

Fcm ⁣:RRFinGph F_{\mathrm{cm}} \colon R \mapsto R\mathsf{FinGph}

Fcm ⁣:(RϕR)(RFinGphϕRFinGph) F_{\mathrm{cm}} \colon (R \xrightarrow{\phi} R') \mapsto (R\mathsf{FinGph} \xrightarrow{\phi_\ast} R'\mathsf{FinGph})

define a functor

Fcm ⁣:CommMonCat F_{\mathrm{cm}} \colon \mathsf{CommMon} \to \mathsf{Cat}

where CommMon\mathsf{CommMon} is the category of commutative monoids and RFinGphR \mathsf{FinGph} is the category of RR-labeled finite graphs.

The functors ϕ\phi_\ast have many practical applications:

Example. Every commutative monoid RR has a unique homomorphism ϕ ⁣:R{1}\phi \colon R \to \{1\} where {1}\{1\} is the terminal monoid. The resulting functor

ϕ ⁣:RFinGph{1}FinGphFinGph \phi_\ast \colon R\mathsf{FinGph} \to \{1\}\mathsf{FinGph} \cong \mathsf{FinGph}

takes any RR-labeled finite graph and discards the labeling, giving a finite graph. This can be used to discard information about how one vertex directly affects another and merely retain the fact that it does.

Example. There is a homomorphism ϕ ⁣:R{0}{+,}\phi \colon \mathbb{R} - \{0\} \to \{+,-\} from the multiplicative group of the reals to the group {+,}\{+,-\} sending all positive numbers to ++ and all negative numbers to -. The resulting functor ϕ\phi_\ast turns quantitative information about how much one vertex directly affects another into purely qualitative information.

Example. There is a homomorphism ϕ ⁣:R{+,0,}\phi \colon \mathbb{R} \to \{+,0,-\} from the multiplicative monoid of the reals to the monoid {+,0,}\{+,0,-\} sending all positive numbers to ++, all negative numbers to -, and 00 to 00. The resulting functor ϕ\phi_\ast again turns quantitative information into qualitative information.

Example. The homomorphisms in the previous examples all have right inverses. For example, there is a homomorphism ψ ⁣:{+,0,}R\psi \colon \{+,0,-\} \to \mathbb{R} sending ++ to 11, - to 1-1 and 00 to 00, and this has

ϕψ=1. \phi \circ \psi = 1 .

The functor ψ\psi_\ast can be used to convert qualitative information about how one vertex directly affects another into quantitative information in a simple, default manner. Of course this should be taken with a grain of salt: since ψϕ1\psi \circ \phi \ne 1, quantitative information that has been converted into qualitative information cannot be restored.
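These examples can be illustrated with a small sketch (function names are mine, not the paper's) of the functor ϕ\phi_\ast induced by a monoid homomorphism: it relabels every edge of a labeled graph while leaving the underlying graph untouched.

```python
# A hedged sketch of pushing edge labels forward along a monoid homomorphism.
# Edges are (source, target, label) triples; the graph itself never changes.

def phi(x):
    """Homomorphism from the multiplicative monoid of R to {+, 0, -}."""
    return "+" if x > 0 else "-" if x < 0 else "0"

def psi(s):
    """A right inverse of phi: a default quantitative value for each sign."""
    return {"+": 1.0, "0": 0.0, "-": -1.0}[s]

def pushforward(h, graph):
    """Apply the label homomorphism h edgewise: the action of h_* on edges."""
    return [(v, w, h(r)) for (v, w, r) in graph]

G = [("v", "w", 2.5), ("w", "x", -0.3), ("x", "v", 0.0)]
qualitative = pushforward(phi, G)
# [("v", "w", "+"), ("w", "x", "-"), ("x", "v", "0")]
```

Note that `phi(psi(s)) == s` for every sign `s`, but `psi(phi(2.5)) == 1.0`, not `2.5`: exactly the point above that ψϕ1\psi \circ \phi \ne 1, so quantitative information discarded by ϕ\phi_\ast cannot be restored by ψ\psi_\ast.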

view this post on Zulip John Baez (May 04 2025 at 08:03):

By the way, @Jade Master - I do plan to think and talk more about what you said yesterday! I'm just doing a lot of different stuff, and right now I'm writing up some old thoughts.

view this post on Zulip Adittya Chaudhuri (May 04 2025 at 10:01):

Thanks!! These examples are very interesting. I am writing my thoughts below:

I was thinking that analogous to the functor Fcm ⁣:CommMonCat F_{\mathrm{cm}} \colon \mathsf{CommMon} \to \mathsf{Cat} , we also have a functor FMon ⁣:MonCat F_{\mathsf{Mon}} \colon \mathsf{Mon} \to \mathsf{Cat} , which takes a monoid LL to the category Gph/GL\mathsf{Gph}/GL. If I am not misunderstanding, then I think all the examples that you constructed by FcmF_{\mathrm{cm}} can also be constructed via FMonF_{\mathsf{Mon}}. But, of course, the setups would be different due to the presence/absence of additive morphisms. However, an interesting thing that may happen in the latter case is the liberty to use non-commutative monoids.

Now, I was thinking in terms of the semiautomaton example. The relevant non-commutative monoid LL in this case is generated by the set of endomorphisms αa ⁣:VV\alpha_{a} \colon V \to V, indexed by a set of inputs AA. An edge vαa(v)v \to \alpha_{a}(v) in the associated semiautomaton graph GG is labeled by αa\alpha_{a} (the process through which the state vv changes to the state αa(v)\alpha_{a}(v)). Now, if we think of the monoid (N,+,0)(\mathbb{N},+,0) as describing a discrete-time system, as in the Overleaf file, then a homomorphism θ ⁣:L(N,+,0)\theta \colon L \to (\mathbb{N},+,0) induces θ ⁣:Gph/GLGph/(N,+,0)\theta_{*} \colon \mathsf{Gph}/GL \to \mathsf{Gph}/(\mathbb{N},+,0), which in particular maps the semiautomaton graph GG to a graph describing how many units of time the automaton takes to change various states. So, in a way, the homomorphism θ\theta turns qualitative information into quantitative information in a non-obvious way. However, I think we can consider this example only if we use the functor FMonF_{\mathsf{Mon}}.

view this post on Zulip John Baez (May 04 2025 at 10:26):

That sounds right. Perhaps we should include an analogue for categories of monoid-labeled graphs of Propositions 2.23 and 2.28. There's probably also an analogue for the category of monoid-labeled graphs and Kleisli morphisms! But I'm indecisive about this, because I don't want to stuff the paper with boringly similar propositions. Maybe the right thing to do is simply state, after Prop. 2.28, that 2.23 and 2.28 have analogues for other cases.

view this post on Zulip Adittya Chaudhuri (May 04 2025 at 11:54):

John Baez said:

That sounds right. Perhaps we should include an analogue for categories of monoid-labeled graphs of Propositions 2.23 and 2.28. There's probably also an analogue for the category of monoid-labeled graphs and Kleisli morphisms! But I'm indecisive about this, because I don't want to stuff the paper with boringly similar propositions. Maybe the right thing to do is simply state, after Prop. 2.28, that 2.23 and 2.28 have analogues for other cases.

Thanks. Yes, I understand your points and fully agree with your suggestions. I also realised that the content of Proposition 2.5 and Proposition 2.6 is similar in nature. As you suggested, can we club all these results together into a single proposition (or a pair), with remarks about other analogous cases after the proposition? Something with the theme "behaviour under change of labelling systems". The reason I am saying this is that the proof structure in each case is almost the same, although the consequences are a little different.

view this post on Zulip Adittya Chaudhuri (May 04 2025 at 12:42):

John Baez said:

There's probably also an analogue for the category of monoid-labeled graphs and Kleisli morphisms!

This sounds interesting to me!! Especially, in the context of "how identification of motifs in a monoid-labelled graph changes when we change only the labelling system, but keep the underlying graph structure as it is". For example, in the attached file,
Kleislilabel.PNG
I constructed a very simple example where we change the labeling from L:={+,}L:=\lbrace +, -\rbrace to L:={1}L' :=\lbrace 1 \rbrace by the unique homomorphism.

Maybe the attached example is not that interesting, but in some other cases it could be, although I am not sure.

view this post on Zulip Adittya Chaudhuri (May 04 2025 at 14:02):

Another example could be the following (which may or may not be interesting)

Say LL is the semiautomaton monoid generated by the set of endomorphisms αa ⁣:VV\alpha_{a} \colon V \to V, indexed by a set of inputs AA, and let (G,)(G, \ell) be the associated LL-labeled graph. Now let (N,+,0)(\mathbb{N},+,0) be the monoid describing a discrete-time system, and let θ ⁣:L(N,+,0)\theta \colon L \to (\mathbb{N},+,0) be a monoid homomorphism (expressing the time duration of a change of states), which induces θ ⁣:Gph/GLGph/(N,+,0)\theta_{*} \colon \mathsf{Gph}/GL \to \mathsf{Gph}/(\mathbb{N},+,0). Now consider the graph θ(G,)=(G,θ)Gph/(N,+,0) \theta_{*}(G, \ell)=(G, \theta \circ \ell) \in \mathsf{Gph}/(\mathbb{N},+,0). Then I think a monic Kleisli morphism ϕ ⁣:(X,X)(G,θ) \phi \colon (X, \ell_{X}) \to (G, \theta \circ \ell) from the walking feedback loop (X,X) (X, \ell_{X}) with holonomy, say, 55 is the same as finding the various sequences of state changes induced by the semiautomaton that come back to the original state after 55 units of time.

view this post on Zulip John Baez (May 05 2025 at 10:51):

I don't want to talk about semiautomata in this paper, except for the one remark we've already made. I believe automaton theory will tend to confuse readers who are interested in our primary applications, namely causal loop diagrams and regulatory networks.

However, you've convinced me that monoid-labeled graphs deserve a bit more respect. So:

Can you write up an analogue of Propositions 2.23 and 2.28, and proofs, for monoid-labeled graphs and Kleisli morphisms? Namely:

view this post on Zulip John Baez (May 05 2025 at 10:59):

Right now I plan to write up a section on Mayer-Vietoris for the homology monoids of directed graphs. I think my earlier attempts to state this result were a bit suboptimal. I want to see if the ideas described here can be helpful:

Patchkoria has been developing a generalization of homological algebra that works for categories of modules of rigs - e.g. the category of commutative monoids, since commutative monoids are N\mathbb{N}-modules.

(Patchkoria says "semimodule of a semiring" but I say "module of a rig", because I get sick of seeing the prefix "semi-", and I don't think the shorter terminology is confusing.)

view this post on Zulip John Baez (May 05 2025 at 11:06):

Patchkoria defines a concept of chain complex that works for modules over a rig, where instead of maps dn:CnCn1d_n: C_n \to C_{n-1} with dndn+1=0d_n d_{n+1}= 0 you have maps sn,tn:CnCn1s_n,t_n: C_n \to C_{n-1} obeying some conditions.

But we only need a pathetically simple special case of this, since we're dealing with the chain complex associated to a graph, where only C0C_0 and C1C_1 are nonzero!

view this post on Zulip John Baez (May 05 2025 at 11:09):

He shows that a short exact sequence of his generalized chain complexes gives a long exact sequence of homology monoids.

This should imply a Mayer-Vietoris theorem for the homology monoids of graphs. But since the chain complex associated to a graph is so simple - with just two nonzero terms - it should be just as easy to handle this special case "by hand", without explaining all of Patchkoria's machinery. (Of course we should still cite him.)

view this post on Zulip John Baez (May 05 2025 at 13:35):

Oh, actually Patchkoria's results don't directly apply: he is starting with a diagram of modules of a rig

AiBpC A \xrightarrow{i} B \xrightarrow{p} C

where ii is monic, pp is epic and some sort of exactness condition holds (see definition 1.1), saying roughly that the image of ii is the kernel of pp. But we don't have that when we're trying to prove a Mayer-Vietoris theorem for the homology monoids of graphs!

view this post on Zulip Adittya Chaudhuri (May 05 2025 at 13:49):

John Baez said:

I don't want to talk about semiautomata in this paper, except for the one remark we've already made. I believe automaton theory will tend to confuse readers who are interested in our primary applications, namely causal loop diagrams and regulatory networks.

However, you've convinced me that monoid-labeled graphs deserve a bit more respect. So:

Can you write up an analogue of Propositions 2.23 and 2.28, and proofs, for monoid-labeled graphs and Kleisli morphisms? Namely:

Thanks. I understand and agree with your points. Yes, I will write up the Kleisli-morphism analogue of Propositions 2.23 and 2.28, with proofs, as you suggested.

view this post on Zulip John Baez (May 05 2025 at 13:51):

Great, thanks!

Now I'll say a bit about the homology of graphs with natural number coefficients, and the Mayer-Vietoris theorem for this sort of homology. As usual I'll start with the basics and repeat myself a lot... but since I keep understanding things slightly better, I think that's good to do.

If we were working with abelian groups, each graph would give a chain complex, and then we could take the homology of that. But since we're working with commutative monoids, we have to adapt our approach. It actually becomes simpler: instead of chain complexes we work with graph objects!

Here's how:

I'll call the "free commutative monoid on a set" functor N[]\mathbb{N}[-], since it sends each set SS to the commutative monoid N[S]\mathbb{N}[S], which consists of N\mathbb{N}-linear combinations of elements of SS.

By applying this functor to a graph

G=(s,t:EV)G = (s,t: E \to V)

we get a graph object in commutative monoids, namely

N[s],N[t]:N[E]N[V] \mathbb{N}[s], \mathbb{N}[t] : \mathbb{N}[E] \to \mathbb{N}[V]

I will call this graph object N[G]\mathbb{N}[G]. Then:
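At the level of chains this construction is very concrete: an element of ℕ[E] is just a finite multiset of edges, and ℕ[s] applies s to each edge, keeping multiplicities. Here is a minimal Python sketch, where the graph and all names are invented for illustration and ℕ-linear combinations are encoded as `Counter` multisets:

```python
from collections import Counter

def linearize(f, chain):
    """Extend a set map f : E -> V to N[f] : N[E] -> N[V].

    N-linear combinations are represented as Counters (multisets).
    """
    out = Counter()
    for e, n in chain.items():
        out[f[e]] += n
    return out

# A hypothetical graph G = (s, t : E -> V):
s = {"e1": "u", "e2": "v"}
t = {"e1": "v", "e2": "u"}

c = Counter({"e1": 2, "e2": 1})  # the 1-chain 2*e1 + e2 in N[E]
print(linearize(s, c))  # N[s](c) = 2*u + v
print(linearize(t, c))  # N[t](c) = 2*v + u
```

The pair ℕ[s], ℕ[t] computed this way is exactly what makes (ℕ[E], ℕ[V]) into the graph object ℕ[G].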

view this post on Zulip John Baez (May 05 2025 at 13:59):

Now, to study Mayer-Vietoris for the homology of graphs with N\mathbb{N} coefficients, I want to consider the pushout in Gph\mathsf{Gph} of a diagram like this:

XiAjY X \xleftarrow{i} A \xrightarrow{j} Y

where ii and jj are monic. I will often call this pushout XYX \cup Y and write AA as XYX \cap Y, to remind you of the usual Mayer-Vietoris theorem.

So, intuitively speaking, we have a graph XYX \cup Y that's the union of two subgraphs XX and YY, and XYX \cap Y is their intersection.

When I talked about this last I assumed the intersection XYX \cap Y has no edges, just vertices. But we may be able to drop this assumption!

view this post on Zulip John Baez (May 05 2025 at 14:03):

I just explained how the "free commutative monoid on a set" functor

N[]:SetCommMon\mathbb{N}[-] : \mathsf{Set} \to \mathsf{CommMon}

can be used to turn any graph into a graph object in CommMon\mathsf{CommMon}. This process is functorial and I'll call it

N[]:GphGph(CommMon)\mathbb{N}[-] : \mathsf{Gph} \to \mathsf{Gph}(\mathsf{CommMon})

Here Gph(CommMon) \mathsf{Gph}(\mathsf{CommMon}) is my stupid notation for the category of graph objects in CommMon\mathsf{CommMon}.

view this post on Zulip John Baez (May 05 2025 at 14:07):

The free commutative monoid on a set functor

N[]:SetCommMon\mathbb{N}[-] : \mathsf{Set} \to \mathsf{CommMon}

is a left adjoint, and I believe

N[]:GphGph(CommMon)\mathbb{N}[-] : \mathsf{Gph} \to \mathsf{Gph}(\mathsf{CommMon})

is also a left adjoint. If so this should follow from some abstract nonsense, or else we can check it directly. For now let me assume it's true.

Left adjoints preserve pushouts! So, because the graph I'm calling XYX \cup Y is the pushout of

XiXYjYX \xleftarrow{i} X \cap Y \xrightarrow{j} Y

it follows that N[XY]\mathbb{N}[X \cup Y] is the pushout of

N[X]N[i]N[XY]N[j]N[Y]\mathbb{N}[X] \xleftarrow{\mathbb{N}[i]} \mathbb{N}[X \cap Y] \xrightarrow{\mathbb{N}[j]} \mathbb{N}[Y]

view this post on Zulip John Baez (May 05 2025 at 14:15):

We can say this a different way, since a pushout is a coproduct followed by a coequalizer. We have two inclusions of graphs

I,J:XYX+Y I, J : X \cap Y \to X + Y

so we get

N[I],N[J]:N[XY]N[X+Y]\mathbb{N}[I], \mathbb{N}[J] : \mathbb{N}[X \cap Y] \to \mathbb{N}[X + Y]

and the coequalizer of these two maps is N[XY]\mathbb{N}[X \cup Y] .

view this post on Zulip John Baez (May 05 2025 at 14:19):

This fact is the closest analogue to where we start in the usual Mayer-Vietoris theorem, where we have a topological space XYX \cup Y that's the union of open sets XX and YY, and we get a short exact sequence of chain complexes

0C(XY)C(X+Y)C(XY)0 0 \to C(X \cap Y) \to C(X + Y) \to C(X \cup Y) \to 0

view this post on Zulip John Baez (May 05 2025 at 14:27):

Instead of a short exact sequence, we now have a coequalizer diagram! Let me try to argue that this is just as good.

A short exact sequence says 1) some map ff is monic, and 2) the image of ff is the kernel of some map gg, and 3) gg is epic. I think the analogues of these 3 facts are:

1) N[I]\mathbb{N}[I] and N[J]\mathbb{N}[J] are monic. This doesn't follow from N[]\mathbb{N}[-] being a left adjoint, since left adjoints don't always preserve monics, but if you stare at it I think you'll see it's true.

2) N[XY]\mathbb{N}[X \cup Y] is the coequalizer of

N[I],N[J]:N[XY]N[X+Y]\mathbb{N}[I], \mathbb{N}[J] : \mathbb{N}[X \cap Y] \to \mathbb{N}[X + Y]

3) The resulting map

p:N[X+Y]N[XY] p: \mathbb{N}[X+Y] \to \mathbb{N}[X \cup Y]

is epic. This seems obvious directly, but it's also true because coequalizers give epics.

view this post on Zulip John Baez (May 05 2025 at 14:31):

So, this is nice. Now, to get the usual Mayer-Vietoris theorem we take a short exact sequence of chain complexes

0C(XY)C(X+Y)C(XY)00 \to C(X \cap Y) \to C(X + Y) \to C(X \cup Y) \to 0

and get a long exact sequence of homology groups. We want to do something similar here. But we need to adjust our thinking, since we can't talk about kernels and cokernels: we need to talk about equalizers and coequalizers.

view this post on Zulip John Baez (May 05 2025 at 14:31):

This is what I'm trying to figure out now.

view this post on Zulip Adittya Chaudhuri (May 05 2025 at 14:35):

Thanks!! These ideas look very interesting. I need a little more time to fully understand them.

view this post on Zulip Adittya Chaudhuri (May 05 2025 at 14:37):

One short question: Do you want to construct a boundary map (for getting the long exact sequence)? Or do you want to express it in the form of an equalizer? I am asking because some days back #theory: applied category theory > Graphs with polarities @ 💬 you gave a prescription of such a boundary map by changing the coefficients from N\mathbb{N} to Z\mathbb{Z}.

view this post on Zulip John Baez (May 05 2025 at 14:56):

That's a very good question.

When working with commutative monoids, it seems more "honest" to use equalizers and coequalizers of pairs of maps rather than kernels and cokernels of maps. I got a formula for a boundary map in the Mayer-Vietoris long exact sequence by switching coefficients from N\mathbb{N} to Z\mathbb{Z}, but this feels a bit like "cheating". So, right now I'm trying to get two different maps from H1(XY,N)H_1(X \cup Y, \mathbb{N}) to H0(XY,N)H_0(X \cap Y, \mathbb{N}).

Here's one recent attempt. There's an obvious "projection" map

p:C1(XY,N)C1(XY,N) p: C_1(X \cup Y, \mathbb{N}) \to C_1(X \cap Y, \mathbb{N})

which takes any linear combination of edges of XY X \cup Y and kills all the edges that aren't in XYX \cap Y. This restricts to a map

p:H1(XY,N)C1(XY,N) p: H_1(X \cup Y, \mathbb{N}) \to C_1(X \cap Y,\mathbb{N})

since H1(XY,N)H_1(X \cup Y, \mathbb{N}) is actually a submonoid of C1(XY,N)C_1(X \cup Y, \mathbb{N}). Thus, we can define two maps

sp,tp:H1(XY,N)C0(XY,N) s \circ p, t \circ p: H_1(X \cup Y, \mathbb{N}) \to C_0(X \cap Y, \mathbb{N})

and we can compose these with the quotient map

C0(XY,N)H0(XY,N) C_0(X \cap Y, \mathbb{N}) \to H_0(X \cap Y, \mathbb{N})

to get two maps I'll call

σ,τ:H1(XY,N)H0(XY,N) \sigma, \tau : H_1(X \cup Y, \mathbb{N}) \to H_0(X \cap Y, \mathbb{N})

These feel like a pretty obvious substitute for the usual boundary map in the Mayer-Vietoris sequence. But I need to see if they have good properties!
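To make this concrete, here is a small Python sketch of these chain-level maps on a hypothetical triangle whose edges are split between X and Y, with the single edge m lying in X ∩ Y (the graph, the names, and the `Counter` encoding of ℕ-chains are all invented for illustration):

```python
from collections import Counter

def push(f, chain):
    """Apply a vertex-valued map f to an N-linear combination of edges."""
    out = Counter()
    for e, n in chain.items():
        out[f[e]] += n
    return out

# Hypothetical triangle u -> v -> w -> u forming X ∪ Y;
# X contains edges {m, x1}, Y contains {m, y1}, so X ∩ Y contains only m.
s = {"m": "u", "x1": "v", "y1": "w"}
t = {"m": "v", "x1": "w", "y1": "u"}
intersection_edges = {"m"}

def p(chain):
    """The 'projection' killing all edges that aren't in X ∩ Y."""
    return Counter({e: n for e, n in chain.items() if e in intersection_edges})

c = Counter({"m": 1, "x1": 1, "y1": 1})  # a 1-cycle in X ∪ Y
assert push(s, c) == push(t, c)          # equal source and target: c is a cycle
print(push(s, p(c)))  # s∘p applied to c: the vertex u, once
print(push(t, p(c)))  # t∘p applied to c: the vertex v, once
```

In this example composing with the quotient map to H0(X ∩ Y, ℕ) identifies u and v, since they are joined by the edge m lying in X ∩ Y, so the two composites agree on this particular cycle.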

view this post on Zulip Adittya Chaudhuri (May 05 2025 at 15:58):

Thanks!! To me, this construction looks much more natural than the earlier one

"Σ:Z1(X)Z1(Y)Z1(XY)\Sigma : Z_1(X) \oplus Z_1(Y) \to Z_1(X \cup Y) is the equalizer of f,g ⁣:Z1(XY)C0(XY) f, g \colon Z_1(X \cup Y) \to C_0(X \cap Y) if we take fc=scX,gc=tcX fc = sc_X, gc = tc_X "

Furthermore, the fact that it allows edges in the graph XYX \cap Y fits perfectly with the description.

view this post on Zulip Adittya Chaudhuri (May 05 2025 at 16:01):

John Baez said:

These feel like a pretty obvious substitute for the usual boundary map in the Mayer-Vietoris sequence.

I agree.

view this post on Zulip John Baez (May 05 2025 at 18:45):

Thanks! I believe these new maps

σ,τ:H1(XY,N)H0(XY,N)\sigma, \tau : H_1(X \cup Y, \mathbb{N}) \to H_0(X \cap Y, \mathbb{N})

are the same as my old maps

f,g:H1(XY,N)C0(XY,N)f, g: H_1(X \cup Y, \mathbb{N}) \to C_0(X \cap Y, \mathbb{N})

composed with the quotient map

C0(XY,N)H0(XY,N)C_0(X \cap Y, \mathbb{N}) \to H_0(X \cap Y, \mathbb{N})

I'm trying to make them look more elegant.

But I still need to check that they have properties that are somehow analogous to what you'd expect from the exactness of the Mayer-Vietoris exact sequence.

view this post on Zulip John Baez (May 06 2025 at 13:56):

I've been thinking about your idea of a rig containing a "negligible" element, @David Egolf - that is, a nonzero element that we nonetheless decided to neglect.

I was trying to start with some commutative rig RR (think of it as the real numbers if you like), throw in a new element ϵ\epsilon, make up an addition and multiplication for R{ϵ}R \cup \{\epsilon\} that express the idea that ϵ\epsilon is negligibly small compared to all numbers except 00, and then check that this addition and multiplication make R{ϵ}R \cup \{\epsilon\} into a rig.

For example I want ϵ+r=r\epsilon + r = r and ϵr=ϵ\epsilon \cdot r = \epsilon for every nonzero rRr \in R.

I think this works, but I seem to have found another way to say it.

Instead of trying to put in a new negligible element, let's take RR and put in a new element 00 with all the properties that zero must have in a rig! We have to check that this gives a rig, but I believe this is fairly easy.

Now here's the trick: let's call the old zero ϵ\epsilon. This is now our negligible element! Note that it obeys ϵr=rϵ=ϵ\epsilon \cdot r = r \cdot \epsilon = \epsilon and ϵ+r=r\epsilon + r = r for all rr in the original rig RR.

view this post on Zulip John Baez (May 06 2025 at 13:58):

Note also that 0ϵ=ϵ0=00 \cdot \epsilon = \epsilon \cdot 0 = 0, where 00 is the newly adjoined zero.

view this post on Zulip John Baez (May 06 2025 at 13:59):

I will check more carefully that all the commutative rig laws hold, but it's looking good.
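Here is a minimal computational sketch of this trick, under one concrete (and entirely invented) encoding: the original rig R is the floats, the adjoined zero is a fresh sentinel value, and the old zero 0.0 plays the role of the negligible element ϵ.

```python
NEW0 = "new0"  # the freshly adjoined zero (a sentinel, not a float)

def add(a, b):
    """Addition in R ∪ {new0}: the new zero is the additive identity."""
    if a == NEW0:
        return b
    if b == NEW0:
        return a
    return a + b  # old addition in R; note eps + eps = eps

def mul(a, b):
    """Multiplication in R ∪ {new0}: the new zero is absorbing."""
    if a == NEW0 or b == NEW0:
        return NEW0
    return a * b  # old multiplication; eps * r = eps since 0.0 * r = 0.0

eps = 0.0  # the old zero of R, now the negligible element
print(add(eps, 5.0))   # eps + r = r        -> 5.0
print(mul(eps, 5.0))   # eps * r = eps      -> 0.0
print(mul(NEW0, eps))  # the new zero absorbs even eps -> 'new0'
```

The sketch exhibits exactly the three behaviours discussed above: ϵ is additively negligible, multiplicatively absorbing within R, and still killed by the new zero.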

view this post on Zulip John Baez (May 06 2025 at 14:06):

This reminds me of various tricks for taking the real numbers and throwing in an infinitesimal, like the Levi-Civita field or the superreal and surreal numbers. But it's simpler because we're only trying to get a rig!

view this post on Zulip Adittya Chaudhuri (May 06 2025 at 14:39):

The idea is interesting. It seems the old ϵ\epsilon is now an almost absorbing element with respect to the multiplicative monoid of the new rig. I said 'almost' because you wrote ϵr=ϵ\epsilon \cdot r= \epsilon for all rr, but 0ϵ=00 \cdot \epsilon=0. However, I think there might be another condition: rϵ=ϵr \cdot \epsilon= \epsilon for all rr?

view this post on Zulip Adittya Chaudhuri (May 06 2025 at 14:43):

To me, it seems similar to adding an "unknown influence uu" to an already existing rig (R,×,1,+,0)(R, \times, 1, +, 0) of known influences.

view this post on Zulip Adittya Chaudhuri (May 06 2025 at 14:47):

But I think what separates your construction from the "insertion" of an unknown influence is the condition r+ϵ=rr+ \epsilon= r for all rRr \in R. I think this justifies calling ϵ\epsilon a non-zero negligible element. Interesting!!

view this post on Zulip Adittya Chaudhuri (May 06 2025 at 14:53):

Somehow your construction is reminding me of the Extended reals

view this post on Zulip John Baez (May 06 2025 at 14:54):

Adittya Chaudhuri said:

The idea is interesting. It seems the old ϵ\epsilon is now an almost absorbing element with respect to the multiplicative monoid of the new rig. I said 'almost' because, you wrote ϵr=ϵ\epsilon \cdot r= \epsilon for all rr but, 0ϵ=00 \cdot \epsilon=0. However, I think there might be another condition rϵ=ϵr \cdot \epsilon= \epsilon for all rr?

Yes, I want that too. (I may have said 'rig' in a few places, but I wanted all rigs in this discussion to be commutative, mainly to keep things simple, so rϵ=ϵrr \cdot \epsilon = \epsilon \cdot r. The ideas may also work for noncommutative rigs, but even then I want rϵ=ϵr \cdot \epsilon= \epsilon for all rr in the original rig.)

view this post on Zulip John Baez (May 06 2025 at 14:56):

And yes, ϵ\epsilon is "almost absorbing": it's absorbing with respect to the original rig (since it was the zero of the original rig), but 0ϵ=ϵ0=00 \cdot \epsilon = \epsilon \cdot 0 = 0.

view this post on Zulip Adittya Chaudhuri (May 06 2025 at 14:59):

John Baez said:

This reminds me of various tricks for taking the real numbers and throwing in an infinitesimal, like the Levi-Civita field or the superreal and surreal numbers. But it's simpler because we're only trying to get rig!

Every field is by default a rig. So I guess the Levi-Civita field is itself a concrete example (with many more properties than we probably need)?

view this post on Zulip Adittya Chaudhuri (May 06 2025 at 15:03):

Although at the moment I am not able to guess the physical interpretation of elements of the Levi-Civita field.

view this post on Zulip Adittya Chaudhuri (May 06 2025 at 15:13):

If we denote the new rig by Rnew:=R{ϵ}R_{new}:= R \cup \lbrace \epsilon \rbrace, then, I think there is a rig homomorphism ϕ ⁣:RnewR\phi \colon R_{new} \to R , which is identity on RR and sends ϵ\epsilon to 00.

view this post on Zulip Adittya Chaudhuri (May 06 2025 at 15:15):

I think we may see it as a process of forgetting the negligible influence?

view this post on Zulip John Baez (May 06 2025 at 15:22):

Adittya Chaudhuri said:

John Baez said:

This reminds me of various tricks for taking the real numbers and throwing in an infinitesimal, like the Levi-Civita field or the superreal and surreal numbers. But it's simpler because we're only trying to get rig!

Every field is by default a rig. So, I guess Levi-Civita field is itself a concrete example (with much more properties than we probably need)?

The Levi-Civita field and the superreals and the surreals are all examples of hyperreal number systems.

A hyperreal number system is a field that has all the same properties expressible in the first-order language of fields as R\mathbb{R}, but also a nonzero number ϵ\epsilon such that

ϵ<1\epsilon < 1
ϵ+ϵ<1\epsilon + \epsilon < 1
ϵ+ϵ+ϵ<1\epsilon + \epsilon + \epsilon < 1
ϵ+ϵ+ϵ+ϵ<1\epsilon + \epsilon + \epsilon + \epsilon < 1

etcetera. (This property, being an infinite conjunction, cannot be expressed in the first-order language of fields.)

view this post on Zulip John Baez (May 06 2025 at 15:25):

Anyway, I liked David Egolf's idea of taking the real numbers and throwing in just one element that means 'negligible'. In any hyperreal number system we have to put in infinitely many new numbers. For example we have ϵ,ϵ2,ϵ3,\epsilon, \epsilon^2, \epsilon^3, \dots and also huge numbers like 1/ϵ,1/ϵ2,1/ϵ3,1/\epsilon, 1/\epsilon^2, 1/\epsilon^3, \dots

view this post on Zulip Adittya Chaudhuri (May 06 2025 at 15:26):

Thanks. I understand the point now.

view this post on Zulip Adittya Chaudhuri (May 06 2025 at 15:28):

John Baez said:

Anyway, I liked David Egolf's idea of taking the real numbers and throwing in just one element that means 'negligible'.

Yes, I think it is interesting and also useful from the point of view of applications. If we really want to consider some influence as negligible, then we may not need to distinguish between two negligible influences unless it is really necessary.

view this post on Zulip John Baez (May 06 2025 at 15:31):

Right! If something is negligibly small, its square is even smaller, but we are probably happy to call its square negligible and leave it at that! There's been a lot of work on hyperreal number systems, but David's idea feels new to me, and practical, so I'll put it in our paper.

view this post on Zulip Adittya Chaudhuri (May 06 2025 at 15:32):

Yes, it is interesting and a new way of treating negligible influences.

view this post on Zulip Adittya Chaudhuri (May 06 2025 at 15:34):

I was not aware of hyperreal numbers till today. They look very interesting. Thank you for explaining them!!

view this post on Zulip Adittya Chaudhuri (May 06 2025 at 15:40):

I like @David Egolf's idea of choosing some non-zero influence to neglect for a particular modeling purpose. Your construction precisely describes such a choice (the old 00). I think it 'reminds the modeler' that although they have created the model, they have not considered "that particular influence". As you constructed, we can do that by treating "that particular influence" as the old 00 in the new rig.

view this post on Zulip David Egolf (May 06 2025 at 16:25):

I'm glad you are finding this idea helpful! (I didn't really come up with this idea though. I said something related and then @John Baez came up with this idea!)

view this post on Zulip John Baez (May 06 2025 at 16:50):

You said

I am wondering if choosing to not put an edge from vv to ww could be a way to indicate we're choosing to not model how vv impacts ww.

That's true! But I thought about it for so long I forgot what you said! I started thinking about using a special kind of labeled edge to indicate that we're choosing not to model how vv impacts ww. And somehow I interpreted this to mean that we're treating the impact as negligible (though I see now that's different).

My theory is that practitioners don't want to have to think very hard before not putting in an edge. So, the absence of an edge should have a very boring meaning.

view this post on Zulip John Baez (May 06 2025 at 16:58):

Anyway, there's a multiplicity of interpretations of the same math. Here is what I added to the paper just now. I realized most of what I have to say works not just for rigs but for monoids. We introduce rigs later.

Example 2.9. Another important monoid of polarities is the multiplicative monoid of Z/3\mathbb{Z}/3, which we can write as {+,0,}\{+,0,-\}. This contains the 2-element monoid {+,}\{+,-\} of the previous example as a submonoid, but it also contains a new element 00 that is absorbing: 0x=x0=00 \cdot x = x \cdot 0 = 0 for all xx. So, unlike {+,}\{+,-\}, this monoid {+,0,}\{+,0,-\} is not a group.

There are at least three distinct interpretations of {+,0,}\{+,0,-\}-graphs:

Our choice of interpretation affects how we interpret the absence of an edge from one vertex to another. We discuss this in the following example.

Example 2.10. More generally, for any monoid LL we can form a new monoid L{0}L \cup \{0\} where 00 is a new absorbing element. Thus, in L{0}L \cup \{0\} the product of elements of LL is defined as before, but 0=0=0\ell \cdot 0 = 0 \cdot \ell = 0 for all L\ell \in L, and 00=00 \cdot 0 = 0. This new monoid contains LL as a submonoid. As before, there are at least three interpretations of the element 00:

1) there is no direct effect of vv on ww, or

2) there is no currently known direct effect of vv on ww.
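A sketch of this L ∪ {0} construction in Python, taking L to be the sign monoid {+1, −1} under multiplication and using a string sentinel for the adjoined absorbing element (all encoding choices are mine):

```python
ZERO = "0"  # the adjoined absorbing element

def mul(a, b):
    """Multiplication in L ∪ {0}: 0 absorbs, otherwise multiply in L."""
    if a == ZERO or b == ZERO:
        return ZERO
    return a * b  # multiplication in L = {+1, -1}

print(mul(1, -1))       # -1, the product as computed in L
print(mul(-1, ZERO))    # '0': the new element kills everything
print(mul(ZERO, ZERO))  # '0'
```

The original monoid L sits inside as the non-sentinel elements, matching the statement that L is a submonoid of L ∪ {0}.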

view this post on Zulip Adittya Chaudhuri (May 06 2025 at 17:08):

Thanks!! The two examples and interpretations are very nice. I think they clarify the "absence of edge vs 00" situation really well.

view this post on Zulip John Baez (May 06 2025 at 17:34):

Thanks! I've been worrying about what's the difference between 0 and the absence of an edge for a long time. It seems there are a couple of consistent interpretations. Maybe now I can relax.

view this post on Zulip Adittya Chaudhuri (May 06 2025 at 17:38):

I am attaching an example
necessary stimulation.png
where the symbol nsns stands for necessary stimulation. In other words, the presence of AA is necessary for BB to perform its function, namely positively affecting FF, negatively affecting GG, and acting as a necessary stimulant of CC. These notions of necessary stimulation are common in SBGN-AF descriptions (they have a special arrow symbol for it). Can we construct a monoid which can accommodate these kinds of causalities?

view this post on Zulip John Baez (May 06 2025 at 17:44):

Interesting. Maybe you can try to invent a nice monoid that contains ns,+ns, + and -.

view this post on Zulip Adittya Chaudhuri (May 06 2025 at 17:47):

Thanks. I will try.

view this post on Zulip Adittya Chaudhuri (May 06 2025 at 19:13):

I am trying to write down a binary operation suitable to express the meaning of necessary stimulation.

In particular, by ansba \xrightarrow{ns} b, I interpret the statement "aa is necessary for bb to perform" as a kind of positive stimulation of aa on bb, but different from the usual ++.

Claim: The required monoid is N=({+,}{ns},)N=(\lbrace +, - \rbrace \cup \lbrace ns \rbrace, \cdot), where \cdot is defined as follows: nsns acts as the identity (nsx=xns=xns \cdot x = x \cdot ns = x for all xx), and on {+,}\lbrace +, - \rbrace the product is the usual multiplication of signs.

Hence, nsns is the identity element in the monoid NN, and so {+,}\lbrace +,-\rbrace is not a submonoid of NN (it does not contain the identity of NN). However, I think a more general statement is true:

Lemma
For any monoid (L,×L,1)(L,\times_{L},1) and a singleton set {x}\lbrace x \rbrace, there is a unique monoid (Lx,×Lx,x)(L_{x}, \times_{L_{x}}, x) whose underlying set is L{x}L \cup \lbrace x \rbrace, whose identity element is xx, and whose multiplication ×Lx\times_{L_{x}} restricts to ×L\times_{L} on LL.

For our case L=Z/2L= \mathbb{Z}/2.
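The adjoined-identity construction can be sketched the same way as the adjoined-absorbing-element one, again taking L = {+1, −1} under multiplication and a string sentinel for ns (my encoding, purely illustrative):

```python
NS = "ns"  # the adjoined identity element

def mul(a, b):
    """Multiplication in L ∪ {ns}: ns is the new identity."""
    if a == NS:
        return b
    if b == NS:
        return a
    return a * b  # the old multiplication in L = {+1, -1}

print(mul(NS, -1))  # -1: ns acts as the identity on everything
print(mul(1, 1))    # 1: the old identity still works within L
print(mul(NS, NS))  # 'ns'
```

Note that `mul(1, NS)` returns `1`, not `NS`: the old identity no longer acts as an identity on the new element, exactly the "beats the old identity" behaviour described below.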

view this post on Zulip John Baez (May 06 2025 at 20:17):

I see, you're forming a new monoid L{x}L \cup \{x\} by adjoining a new identity element xx to LL. This beats the old identity 1L1 \in L, which still acts like an identity when you multiply it with any element except the new identity.

This is very similar to, but distinct from, our trick of adjoining an absorbing element to a monoid LL.

When we have a rig RR, these tricks get combined: we can adjoin a new element 00 that's absorbing for \cdot and the identity for ++. We've talked about this recently. But now we've separated this idea into two parts. Very nice!

view this post on Zulip John Baez (May 06 2025 at 21:02):

I will add this idea to the paper soon. I also need to get serious about constructing the Mayer-Vietoris exact sequence for homology monoids of graphs. I think my candidate for the connecting homomorphism \partial is screwed up. I want to go back to the usual construction of the connecting homomorphism for homology groups, and see how to adapt it to commutative monoids.

view this post on Zulip Kevin Carlson (May 06 2025 at 22:44):

It’s going to be very cool if you do get a Mayer-Vietoris sequence! In Grandis’ directed topology program, there are “flexible” directed spaces which have a nice van Kampen and Mayer Vietoris theory, ie their homotopy pushouts behave a lot like those of undirected spaces, but also “inflexible” ones for which you can’t prove such theorems. The key distinction is whether you’re allowed to stop partway through a directed path. Graphs are thus inflexible as directed spaces, because you have to get all the way from one vertex to another—there’s no such thing as stopping “in the middle of an edge.” So if there is a good Mayer-Vietoris theorem, it’ll probably be quite particular to the details of graphs, and thus nice and concrete and interesting!

view this post on Zulip Adittya Chaudhuri (May 07 2025 at 04:37):

John Baez said:

When we have a rig RR, these tricks get combined: we can adjoin a new element 00 that's absorbing for \cdot and the identity for ++. We've talked about this recently. But now we've separated this idea into two parts. Very nice!

Thank you so much!! Yes, this looks interesting both from the application perspective and from the Math perspective.

view this post on Zulip Adittya Chaudhuri (May 07 2025 at 04:41):

Kevin Carlson said:

So if there is a good Mayer-Vietoris theorem, it’ll probably be quite particular to the details of graphs, and thus nice and concrete and interesting!

Thank you. Yes, I got your points. Interesting!!

view this post on Zulip Adittya Chaudhuri (May 07 2025 at 04:46):

John Baez said:

I will add this idea to the paper soon.

Thanks!

view this post on Zulip Adittya Chaudhuri (May 07 2025 at 09:24):

Although it may not add anything extra towards the required construction of the boundary map, I am trying to rephrase the construction category-theoretically as follows:

If I am not misunderstanding, then our required boundary map \partial (if we want to see it as a map) is a special map from the equalizer of sXY,tXY ⁣:C1(XY)C0(XY)s_{X \cup Y}, t_{X \cup Y} \colon C_1(X \cup Y) \to C_0(X \cup Y) to the coequalizer of sXY,tXY ⁣:C1(XY)C0(XY)s_{X \cap Y}, t_{X \cap Y} \colon C_1(X \cap Y) \to C_0(X \cap Y). However, I am not able to guess such a map from any universal property involved.
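Concretely, if I'm not mistaken, the coequalizer side is easy to compute: since ℕ[−] is a left adjoint it preserves coequalizers, so the coequalizer of s, t : C1(G, ℕ) → C0(G, ℕ) should be the free commutative monoid on coeq(s, t) in Set, i.e. on the weakly connected components of G. A union-find sketch (names and encoding are mine):

```python
from collections import Counter

def component_reps(vertices, s, t):
    """Union-find computing coeq(s, t) in Set: weakly connected components."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v
    for e in s:
        parent[find(s[e])] = find(t[e])
    return {v: find(v) for v in vertices}

def h0_class(chain, rep):
    """Image of a 0-chain under the quotient C0(G,N) -> N[pi0(G)]."""
    out = Counter()
    for v, n in chain.items():
        out[rep[v]] += n
    return out

# hypothetical graph: one edge u -> v, plus an isolated vertex w
s = {"e": "u"}
t = {"e": "v"}
rep = component_reps({"u", "v", "w"}, s, t)
print(h0_class(Counter({"u": 1, "v": 1}), rep))  # twice the component of u
```

Here u and v land in the same component, while the isolated vertex w stays in its own, so the 0-chain u + v becomes 2 times one generator in the quotient.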

view this post on Zulip John Baez (May 07 2025 at 09:39):

Yes, that's what the source and target of \partial should be. I'm trying to define \partial using a diagram chase similar to the usual one, but with various kernels and cokernels replaced by the appropriate equalizers and coequalizers. I'll report on my progress in a while!

view this post on Zulip John Baez (May 07 2025 at 09:44):

I feel I have a rough mental image of what \partial looks like in examples. It's easier to describe if we're doing homology with Z\mathbb{Z} coefficients. You take a 1-cycle cc on XYX \cup Y and chop it up as the sum of a part cXc_X supported on XX and a part cYc_Y supported on YY. Then cXc_X and cYc_Y are 1-chains but generally not 1-cycles, and we let c\partial c be the homology class of dcX=dcYC0(XY)dc_X = -dc_Y \in C_0(X \cap Y).

It's really helpful to draw a picture. Take a circle cc, chop it in half, and let c\partial c be the two endpoints of one of the intervals, counted with appropriate signs.

There's not a unique way to chop cc into a part supported on XX and a part supported on YY, but different choices should give choices of dcXdc_X that are homologous, i.e. differ by the boundary of something in C1(XY)C_1(X \cap Y).

view this post on Zulip John Baez (May 07 2025 at 09:51):

In the case I used to focus on, where XYX \cap Y is a graph with no edges, there is a unique way to write cC1(XY)c \in C_1(X \cup Y) as the sum of cXC1(X)c_X \in C_1(X) and cYC1(Y)c_Y \in C_1(Y), because

C1(XY)=C1(X)C1(Y) C_1(X \cup Y) = C_1(X) \oplus C_1(Y)

C1(XY)=0C_1(X \cap Y) = 0

view this post on Zulip Adittya Chaudhuri (May 07 2025 at 12:26):

Thank you. I am now thinking along the ideas you explained.

view this post on Zulip Adittya Chaudhuri (May 07 2025 at 13:58):

I have a question(most probably, I am misunderstanding)

You already proved the following

Σ:Z1(X)Z1(Y)Z1(XY)\Sigma : Z_1(X) \oplus Z_1(Y) \to Z_1(X \cup Y) is the equalizer of f,g ⁣:Z1(XY)C0(XY) f, g \colon Z_1(X \cup Y) \to C_0(X \cap Y) if we take fc=scX,gc=tcX fc = sc_X, gc = tc_X . here #theory: applied category theory > Graphs with polarities @ 💬

With new terminology, we have

Σ:H1(X,N)H1(Y,N)H1(XY,N)\Sigma : H_1(X, \mathbb{N}) \oplus H_1(Y, \mathbb{N}) \to H_1(X \cup Y, \mathbb{N}) is the equalizer of f,g ⁣:H1(XY,N)H0(XY,N) f, g \colon H_1(X \cup Y, \mathbb{N}) \to H_0(X \cap Y, \mathbb{N}) if we take fc=scX,gc=tcX fc = sc_X, gc = tc_X .

Now we have a map θ ⁣:H1(XY,N)H1(XY,N)\theta \colon H_1(X \cap Y, \mathbb{N}) \to H_1(X \cup Y, \mathbb{N}) given by c2cc \mapsto 2c. So we have f(2c)=g(2c)f(2c)=g(2c) by the definition of ff and gg. Now, by the universal property of the equalizer, we have a unique map η ⁣:H1(XY,N)H1(X,N)H1(Y,N)\eta \colon H_1(X \cap Y, \mathbb{N}) \to H_1(X, \mathbb{N}) \oplus H_1(Y, \mathbb{N}), c(c,c)c \mapsto (c,c).

Now, what would be the problem if we consider the following as our Mayer Vietoris?

0H1(XY,N)ηH1(X,N)H1(Y,N)ΣH1(XY,N)H0(XY,N)00 \to H_1(X \cap Y, \mathbb{N}) \xrightarrow{\eta} H_1(X, \mathbb{N}) \oplus H_1(Y, \mathbb{N}) \xrightarrow{\Sigma} H_1(X \cup Y, \mathbb{N}) \rightrightarrows H_0(X \cap Y, \mathbb{N}) \to 0, where the double arrows denote ff and gg.

view this post on Zulip John Baez (May 07 2025 at 14:13):

Something like this might work. If this were a conventional exact sequence of abelian groups, you would be claiming that the image of η\eta is the kernel of Σ\Sigma. But these are commutative monoids. So what analogue of exactness are you claiming for η\eta and Σ\Sigma?

view this post on Zulip Adittya Chaudhuri (May 07 2025 at 14:47):

Thank you. I got your point. At the moment I am not sure about the exactness condition of Σ\Sigma and η\eta. However, I feel if my construction is correct, then an exactness condition can be framed in terms of equalizer and coequalizer. I am thinking about this point.

view this post on Zulip John Baez (May 07 2025 at 20:04):

It's possible that instead of working with the single map

H1(XY,N)ηH1(X,N)H1(Y,N) H_1(X \cap Y, \mathbb{N}) \xrightarrow{\eta} H_1(X, \mathbb{N}) \oplus H_1(Y, \mathbb{N})

we should focus a lot of attention on the two obvious inclusions

i1,i2 ⁣:H1(XY,N)H1(X,N)H1(Y,N)i_1, i_2 \colon H_1(X \cap Y, \mathbb{N}) \xrightarrow{} H_1(X, \mathbb{N}) \oplus H_1(Y, \mathbb{N})

When working with commutative monoids instead of abelian groups, we often need to replace single maps by pairs of maps, kernels by equalizers and cokernels by coequalizers.

view this post on Zulip John Baez (May 07 2025 at 20:07):

Of course η\eta followed by the two projections from H1(X,N)H1(Y,N) H_1(X, \mathbb{N}) \oplus H_1(Y, \mathbb{N}) onto its summands gives i1i_1 and i2i_2.

view this post on Zulip Adittya Chaudhuri (May 08 2025 at 04:09):

Thanks. Yes, I got your point.

view this post on Zulip John Baez (May 08 2025 at 09:16):

I've been trying to develop an analogue of 'exact sequence' for commutative monoids and I think maybe I have. It's good to start by thinking about exact sequences of abelian groups (or objects in any abelian category, but let's keep it concrete).

Suppose we have a strict nn-category AA internal to AbGp\mathsf{AbGp}. This has an abelian group AiA_i for each i=0,,ni = 0, \dots, n and source and target maps

s,t:AiAi1 s, t: A_i \to A_{i-1}

for each i=1,,ni = 1, \dots , n, obeying the usual laws of a [[globular set]]:

ss=ts,st=tt ss = ts, \qquad st = tt

Of course the nn-category AA also has composition maps, and identities, but I believe all of these can be reconstructed starting from just the source and target maps and addition in the abelian groups AiA_i!

That may seem surprising, but it's well-known for n=1n = 1, see for example HDA6 where it's shown that a category AA internal to Vect\mathsf{Vect} can be reconstructed from its underlying graph internal to Vect\mathsf{Vect}, meaning the pair of linear maps

s,t:A1A0 s, t: A_1 \to A_0

The argument works equally well for categories internal to AbGp\mathsf{AbGp}.

But in fact we can go further: up to equivalence we can reconstruct a category internal to AbGp\mathsf{AbGp} from a single map

d:A1A0 d: A_1 \to A_0

defined by

d=st d = s - t

This was also explained in HDA6.

view this post on Zulip Adittya Chaudhuri (May 08 2025 at 09:21):

Thank you!! I am trying to understand your ideas.

view this post on Zulip Adittya Chaudhuri (May 08 2025 at 10:13):

I do not want to distract from your explanation. I just wanted to share what I was thinking this morning after seeing your post about using double arrows. I am probably fully wrong. I just have a mental picture (which could be wrong), with which I will try to make a general statement and then try to justify it in our framework.

Definition
Let CC be a category with equalizers and coequalizers. Then, I define an exact sequence in CC as the following diagram:

A1A2A3A4A5A6A7An1An A_1 \rightarrow A_2 \rightrightarrows A_3 \rightarrow A_4 \rightarrow A_5 \rightrightarrows A_6 \rightarrow A_7 \cdots \rightrightarrows A_{n-1}\rightarrow A_n

such that A1A_1 is the equalizer of A2A3A_2 \rightrightarrows A_3, A4A_{4} is the coequalizer of A2A3A_2 \rightrightarrows A_3, A4A_4 is the equalizer of A5A6A_5 \rightrightarrows A_6 , A7A_7 is the coequalizer of A5A6A_5 \rightrightarrows A_6 , and so on!!

My claim is the following:

00H1(XY,N)ηH1(X,N)H1(Y,N)ΣH1(XY,N)H0(XY,N)pcoeq(f,g)idcoeq(f,g)coeq(f,g)idcoeq(f,g)0 \rightarrow 0 \rightrightarrows H_1(X \cap Y, \mathbb{N}) \xrightarrow{\eta} H_1(X, \mathbb{N}) \oplus H_1(Y, \mathbb{N}) \xrightarrow{\Sigma} H_1(X \cup Y, \mathbb{N}) \rightrightarrows H_0(X \cap Y, \mathbb{N}) \xrightarrow{p} \mathrm{coeq}(f,g) \xrightarrow{id} \mathrm{coeq}(f,g) \rightrightarrows \mathrm{coeq}(f,g) \xrightarrow{id} \mathrm{coeq}(f,g)

is the exact sequence (as per my definition). Here, f,gf, g are as you defined before, and η\eta takes cc to (c,c)(c,c). (This is my proposed Mayer-Vietoris)

The above definition is motivated by the definition you constructed for H1(G,N)H_1(G, \mathbb{N}) and H0(G,N)H_0(G, \mathbb{N}).

Let me justify why I think so!

H1(G,N)C1(G,N)C0(G,N)H0(G,N) H_1(G, \mathbb{N}) \rightarrow C_1(G, \mathbb{N}) \rightrightarrows C_0(G, \mathbb{N}) \rightarrow H_0(G, \mathbb{N}), which is satisfying my definition of an exact sequence in the category of commutative monoids.
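For a concrete sanity check of this exactness pattern, here is a hedged Python sketch on a hypothetical graph (a 2-cycle uvuu \to v \to u plus an isolated vertex ww, not from the discussion): the equalizer of the source-sum and target-sum maps C1(G,N)C0(G,N)C_1(G, \mathbb{N}) \rightrightarrows C_0(G, \mathbb{N}) picks out the N\mathbb{N}-cycles, and the coequalizer collapses C0(G,N)C_0(G, \mathbb{N}) onto the free commutative monoid on the connected components.

```python
from itertools import product

# Hypothetical example graph: e1: u -> v, e2: v -> u, plus isolated vertex w.
vertices = ["u", "v", "w"]
edges = {"e1": ("u", "v"), "e2": ("v", "u")}

def boundary_sums(chain):
    """For an N-chain (dict edge -> n), return the source and target 0-chains."""
    src = {x: 0 for x in vertices}
    tgt = {x: 0 for x in vertices}
    for e, n in chain.items():
        s, t = edges[e]
        src[s] += n
        tgt[t] += n
    return src, tgt

# Equalizer side: chains whose source 0-chain equals their target 0-chain.
cycles = []
for n1, n2 in product(range(3), repeat=2):
    src, tgt = boundary_sums({"e1": n1, "e2": n2})
    if src == tgt:
        cycles.append((n1, n2))
# Every cycle found is a multiple of (1, 1): the loop u -> v -> u.

# Coequalizer side: identify the source and target vertex of each edge;
# H0 is then the free commutative monoid on the connected components.
def components():
    parent = {x: x for x in vertices}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for s, t in edges.values():
        parent[find(s)] = find(t)
    return {find(x) for x in vertices}
# components() has 2 elements ({u, v} collapses, w stays), so H0 ≅ N^2 here.
```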

Now, another claim:

For a category CC with nice properties (like the category of abelian groups), there is a one-to-one correspondence between

A1A2A3A4A5A6A7An1AnA_1 \rightarrow A_2 \rightrightarrows A_3 \rightarrow A_4 \rightarrow A_5 \rightrightarrows A_6 \rightarrow A_7 \cdots \rightrightarrows A_{n-1}\rightarrow A_n

and usual exact sequences.

view this post on Zulip John Baez (May 08 2025 at 10:27):

Our ideas may converge. My ideas lead to a somewhat different concept of exact sequence in CommMon\mathsf{CommMon}, but we may be able to reconcile them. I'll continue and try to explain my idea.

HDA6 gives the (previously known) argument that up to equivalence we can reconstruct a category internal to AbGp\mathsf{AbGp} from a single map

d:A1A0d: A_1 \to A_0

defined by

d=std = s - t

In fact more generally I believe this - I don't know if anyone has written a proof, so I'll call it a conjecture:

Conjecture. Strict nn-categories in an abelian category C\mathsf{C} are equivalent to chain complexes in C\mathsf{C}.

This conjecture is a bit vague because I haven't chosen a definition of 'equivalence' here. I believe there's a very concrete procedure for extracting the underlying chain complex from a strict nn-category in C\mathsf{C}, and also a very concrete procedure for taking a chain complex and building a strict nn-category in C\mathsf{C} from it. We can just do one and then the other, and see what happens. For example, if we start with a chain complex, I believe the round trip will get us a chain complex that's chain homotopy equivalent.

However, this conjecture won't apply to strict nn-categories in CommMon\mathsf{CommMon}, because we used subtraction to reduce the source and target to a single map:

d=std = s - t

view this post on Zulip John Baez (May 08 2025 at 10:43):

If we don't go so far as to replace ss and tt with d=std = s - t, we have the important concept of a 'globular object'. A globular object of height nn in a category C\mathsf{C} is a sequence of objects A0,A1,,AnA_0, A_1, \dots, A_n with maps

s,t:AiAi1 s, t : A_i \to A_{i-1}

obeying

ss=ts,st=tt s s = t s , s t = t t

Every strict nn-category has an underlying globular object of height nn, where we keep the source and target maps but forget about composition and identities. I believe this:

Conjecture. Strict nn-categories in an abelian category C\mathsf{C} are equivalent to globular objects of height nn in C\mathsf{C}.

view this post on Zulip John Baez (May 08 2025 at 11:16):

I'm afraid this too may use subtraction! Let's look at the case n=1n = 1 and take C=AbGp\mathsf{C} = \mathsf{AbGp}. A globular object of height 11 in C\mathsf{C} is just a graph object

s,t:A1A0 s, t: A_1 \to A_0

How do we define composition of 1-cells starting with a graph object in AbGp\mathsf{AbGp}? Given fA1f \in A_1 with s(f)=x,t(f)=ys(f) = x, t(f) = y let's write f:xyf : x \to y. Let's define the arrow part of ff by

f=f1x\vec{f} = f - 1_x

Notice that

f:0yx\vec{f} : 0 \to y - x

Here we've taken the morphism ff and 'translated it back to the origin', as we often do when explaining vector addition in a basic course, where it's important to explain the difference between a vector starting from an arbitrary point in space and a vector starting from the origin. In fact that stuff is an example of what we're doing here!

Given f:xyf: x \to y and g:yzg : y \to z here's how we define gf:xzg \circ f : x \to z.

gf=f+g g \circ f = f + \vec{g}

Notice that

s(gf)=s(f)+s(g)=s(f)=x s(g \circ f) = s(f) + s(\vec{g}) = s(f) = x

while

t(gf)=t(f)+t(g)=y+zy=z t(g \circ f) = t(f) + t(\vec{g}) = y + z - y = z

so indeed

gf:xz g \circ f : x \to z

as desired.

I did all this just to remind myself why we need subtraction in our abelian groups to define composition of morphisms starting from a graph object or more general globular object in AbGp\mathsf{AbGp}. This won't work in the case I'm actually interested in: a globular object in CommMon\mathsf{CommMon}. :cry:
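The composition formula above can be checked mechanically in a toy case. The following Python sketch uses illustrative choices that are my own, not from HDA6 itself: A0=ZA_0 = \mathbb{Z}, A1=Z2A_1 = \mathbb{Z}^2, with a 1-cell (x,b)(x, b) read as an arrow xx+bx \to x + b. It verifies that gf=f+gg \circ f = f + \vec{g} has the expected source and target.

```python
# Toy category internal to AbGp, built from s, t and the group structure.
# A 1-cell is a pair (x, b) standing for an arrow x -> x + b.

def s(f): return f[0]            # source
def t(f): return f[0] + f[1]     # target
def ident(x): return (x, 0)      # identity 1-cell on the object x

def arrow_part(f):
    """f minus the identity on its source: 'translate f back to the origin'."""
    return (0, f[1])             # an arrow 0 -> t(f) - s(f)

def compose(g, f):
    """g o f = f + (g - 1_{s(g)}), defined whenever t(f) == s(g)."""
    assert t(f) == s(g)
    gp = arrow_part(g)
    return (f[0] + gp[0], f[1] + gp[1])

f = (1, 2)          # f : 1 -> 3
g = (3, 4)          # g : 3 -> 7
h = compose(g, f)   # h == (1, 6), i.e. h : 1 -> 7, matching s(g∘f)=s(f), t(g∘f)=t(g)
```

Associativity comes for free from associativity of addition in Z2\mathbb{Z}^2, which is the point of the reconstruction.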

view this post on Zulip John Baez (May 08 2025 at 11:31):

Nonetheless I'll try to explain my conjecture that exact sequences in AbGp\mathsf{AbGp} are the same thing as contractible globular objects in AbGp\mathsf{AbGp}.

And - here finally is the point of all this stuff I'm saying! - I'm hoping that the right generalization of exact sequences to CommMon\mathsf{CommMon} may be contractible globular objects in CommMon\mathsf{CommMon}.

(I could have just started with the definition of these, but I wanted to explain what I'm actually thinking. I hadn't expected this to take so long.)

Okay: say AA is a globular object in a category C\mathsf{C}. We call AiA_i the object of ii-cells. As before, let's write

f:xy f : x \to y

whenever ff is an ii-cell with s(f)=x,t(f)=ys(f) = x, t(f) = y.

We say AA is contractible if

1) given any 0-cells x,yx,y there exists a 1-cell ff with f:xyf : x \to y

2) given any ii-cells f,g:xyf,g: x \to y for i>0i \gt 0, there exists an (i+1)(i+1)-cell α\alpha with α:fg\alpha: f \to g

Intuitively, 1) means that AA is connected and 2) means that the ii th homotopy group of AA vanishes because we can fill in any ii-sphere.

view this post on Zulip John Baez (May 08 2025 at 11:33):

Conjecture. Suppose AA is a globular object in AbGp\mathsf{AbGp} and CC is its underlying chain complex. Then CC is exact iff AA is contractible.

Note that this conjecture, unlike my previous ones, is quite precisely stated, at least if you remember that the procedure for getting a chain complex CC from a globular object AA in AbGp\mathsf{AbGp} is to let Ci=AiC_i = A_i and d=std = s - t.

view this post on Zulip John Baez (May 08 2025 at 11:35):

Why do I believe this conjecture? The idea is that the 'vanishing of homotopy groups' intuitively captured by the contractibility of AA is nothing other than the vanishing of homology groups for CC. But a chain complex has vanishing homology groups iff it's exact.

view this post on Zulip John Baez (May 08 2025 at 11:51):

So now I'll make a guess, which as we've seen is not completely supported by the evidence. What's the correct analogue for commutative monoids of an exact sequence of abelian groups?

Guess: it's a globular object of commutative monoids

A0A1A2 A_0 \leftleftarrows A_1 \leftleftarrows A_2 \leftleftarrows \cdots

that is contractible.

view this post on Zulip John Baez (May 08 2025 at 12:26):

That was quite a lot of talk to motivate a concrete suggestion! So, I'm suggesting that our 'Mayer-Vietoris exact sequence of commutative monoids', or any exact sequence of commutative monoids, should actually be a list of commutative monoids AiA_i together with maps

s,t:AiAi1 s, t : A_i \to A_{i-1}

obeying the globular identities

ss=ts,st=tt s s = t s, \qquad s t = t t

together with contractibility:

if s(f)=s(g)=xs(f) = s(g) = x and t(f)=t(g)=yt(f) = t(g) = y then there exists α\alpha with s(α)=f,t(α)=gs(\alpha) = f, t(\alpha) = g.
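As a toy illustration of condition 1) at height 1, here is a Python sketch. The monoid ({0,1},max)(\{0,1\}, \max) and the choice of projections for ss and tt are my own illustrative assumptions, not part of the proposal.

```python
from itertools import product

# Height-1 globular object in CommMon: A0 = ({0,1}, max), A1 = A0 x A0,
# with s and t the two projections (illustrative choices).
A0 = [0, 1]
A1 = list(product(A0, A0))     # 1-cells are pairs (x, y)

def s(f): return f[0]
def t(f): return f[1]

def add1(f, g):                # monoid operation on A1, componentwise max
    return (max(f[0], g[0]), max(f[1], g[1]))

# s and t are monoid homomorphisms:
homs = all(s(add1(f, g)) == max(s(f), s(g)) and
           t(add1(f, g)) == max(t(f), t(g))
           for f in A1 for g in A1)

# Condition 1): every pair of 0-cells is connected by some 1-cell.
connected = all(any(s(f) == x and t(f) == y for f in A1)
                for x in A0 for y in A0)
# Both checks pass for this toy example
```

No subtraction is used anywhere, which is why this kind of condition still makes sense in CommMon\mathsf{CommMon}.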

view this post on Zulip John Baez (May 08 2025 at 12:30):

I plan to keep studying this, but now I'll try to understand what you did. You seem to be using a more ad hoc generalization of exact sequence, which may be what we actually see in this example. It's possible that in your diagram

00H1(XY,N)ηH1(X,N)H1(Y,N)ΣH1(XY,N)H0(XY,N)pcoeq(f,g)0 \rightarrow 0 \rightrightarrows H_1(X \cap Y, \mathbb{N}) \xrightarrow{\eta} H_1(X, \mathbb{N}) \oplus H_1(Y, \mathbb{N}) \xrightarrow{\Sigma} H_1(X \cup Y, \mathbb{N}) \rightrightarrows H_0(X \cap Y, \mathbb{N}) \xrightarrow{p} \text{coeq}(f,g)

when you have a single arrow between commutative monoids this arrow is both my ss and my tt.

view this post on Zulip John Baez (May 08 2025 at 12:31):

I have already argued that perhaps your η\eta should be treated as two arrows, which I called i1i_1 and i2i_2.

view this post on Zulip John Baez (May 08 2025 at 12:32):

Anyway, I'm ready to stop thinking about grandiose abstract nonsense for a while (unless anyone has anything interesting to say about my conjectures!) and work on our specific problem, which really should be rather simple.

view this post on Zulip Adittya Chaudhuri (May 08 2025 at 12:51):

John Baez said:

That was quite a lot of talk to motivate a concrete suggestion!

Thank you!! I am trying to understand your motivation.

view this post on Zulip John Baez (May 08 2025 at 13:10):

Most of it is closely connected to old ideas from HDA6; I don't know if you've read that paper. Actually most of those ideas were not original to HDA6; they can be found in the work of Grothendieck and others. The idea of contractible globular objects is also not new.

view this post on Zulip Adittya Chaudhuri (May 08 2025 at 15:14):

John Baez said:

Most of it is closely connected to old ideas from HDA6; I don't know if you've read that paper. Actually most of those ideas were not original to HDA6; they can be found in the work of Grothendieck and others. The idea of contractible globular objects is also not new.

Thanks. Although I have not read your paper HDA6 in detail, I used your notion of 2-vector spaces to represent the strict Lie 2-algebra of a strict Lie 2-group, as well as fibres in VB-groupoids, in our papers https://arxiv.org/pdf/2107.13747 and https://arxiv.org/pdf/2309.05355.

view this post on Zulip Adittya Chaudhuri (May 08 2025 at 15:59):

John Baez said:

Conjecture. Strict nn-categories in an abelian category C\mathsf{C} are equivalent to chain complexes in C\mathsf{C}.

I just went through the first part of the proof of Lemma 6 in HDA6 (the construction of the composition law from s,t,is,t,i and the underlying group structure of the vector spaces). From the construction, it seems to be the n=1n=1 case of constructing a 1-category (internal to Vect\mathsf{Vect}) from the data of a [[globular set]] of height 1 in Vect\mathsf{Vect}. Now, since, as you explained in your post and as proved in Lemma 6 of HDA6, the construction is possible from the single map d=std=s-t, your conjecture seems very much true and natural from my point of view. I feel that what a globular set of height 1 is to a 1-category, a globular set of height 2 is to a strict 2-category (at least judging from the definition and laws of globular sets). Thus your conjecture seems a very natural nn-level generalisation of Lemma 6 in HDA6.

view this post on Zulip Adittya Chaudhuri (May 08 2025 at 16:21):

Let GG be a graph in Set\mathsf{Set}. Then, if we apply the UndUnd and Free\mathsf{Free} functors, UndFree(G)Und \circ \mathsf{Free}(G) has all the data of Free(G)\mathsf{Free}(G), because we define the composition by concatenation of paths in UndFree(G)Und \circ \mathsf{Free}(G).
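The point that UndFree(G)Und \circ \mathsf{Free}(G) still determines composition can be illustrated by representing morphisms of Free(G)\mathsf{Free}(G) as edge-paths, composed by concatenation. The example graph below is a hypothetical choice, not from the paper.

```python
# Sketch: morphisms of Free(G) as paths, composition as concatenation.
edges = {"e1": ("u", "v"), "e2": ("v", "w")}   # hypothetical graph G

def paths_up_to(k):
    """All paths of length <= k, as (start, tuple_of_edges, end)."""
    vertices = {x for st in edges.values() for x in st}
    result = [(v, (), v) for v in vertices]          # identity paths
    frontier = result
    for _ in range(k):
        frontier = [(a, es + (e,), t)
                    for (a, es, b) in frontier
                    for e, (s, t) in edges.items() if s == b]
        result += frontier
    return result

def compose(p, q):
    """Concatenate paths p: a -> b and q: b -> c to get a path a -> c."""
    (a, es, b), (b2, fs, c) = p, q
    assert b == b2
    return (a, es + fs, c)

p = ("u", ("e1",), "v")
q = ("v", ("e2",), "w")
# compose(p, q) == ("u", ("e1", "e2"), "w"): composition is recoverable
# from the path data alone, as the text says.
```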

However, if we consider an arbitrary category CC in Set\mathsf{Set}, then from the underlying graph of CC it is not possible to reconstruct the category (For example, consider a category with a single object and only identity arrow).

Now, let CC be category in Vect\mathsf{Vect} or a 2-vector space as in HDA6. If I assume there is an internal free-forgetful version of FreeUnd\mathsf{Free}-\mathsf{Und} adjunction between 2Vect2\mathsf{Vect} (Category of 2-vector spaces) and GphVect\mathsf{Gph}_\mathsf{Vect} (Graph internal to vector spaces), then my question is the following:

Is Free(Und(C))\mathsf{Free}(\mathsf{Und}(C)) equivalent to CC? Or may we recover some "extra morphisms" beyond what is already present in CC? Maybe I am thinking in the wrong direction (about a relation between the construction of CC from its underlying globular set of height 1 in Vect\mathsf{Vect}, as in Lemma 6 of HDA6, and the possible construction of CC from Und(C)\mathsf{Und}(C) via the Free\mathsf{Free} functor).

view this post on Zulip Adittya Chaudhuri (May 08 2025 at 19:59):

John Baez said:

That was quite a lot of talk to motivate a concrete suggestion! So, I'm suggesting that our 'Mayer-Vietoris exact sequence of commutative monoids', or any exact sequence of commutative monoids, should actually be a list of commutative monoids AiA_i together with maps

s,t:AiAi1 s, t : A_i \to A_{i-1}

obeying the globular identities

ss=ts,st=tt s s = t s, \qquad s t = t t

together with contractibility:

if s(f)=s(g)=xs(f) = s(g) = x and t(f)=t(g)=xt(f) = t(g) = x then there exists α\alpha with s(α)=f,t(α)=gs(\alpha) = f, t(\alpha) = g.

Interesting!! I think I got your motivation now!! Thanks!! Your 3rd conjecture (which seems very reasonable from your definition of contractibility) says that if CC is the underlying chain complex of a globular object AA in AbGp\mathsf{AbGp}, then CC is exact if and only if AA is contractible. I feel this is a kind of restriction of the equivalence given by combining your first two conjectures, i.e. between chain complexes in an abelian category CC and globular objects in CC, which to me is a sort of globular version of the [[Dold-Kan correspondence]].

Now, although the "exactness condition" does not directly make sense without a suitable definition of a chain complex in the category of commutative monoids, interestingly (as you explained), from the point of view of vanishing of homotopy groups (your contractibility definition), the notion of a contractible globular object in the category of commutative monoids still makes perfect sense.

Thus I feel your proposed definition of exact sequence of commutative monoids is very natural and reasonable. I find this change of perspective (from defining an exactness condition on chain complexes to defining a contractibility condition on globular objects, in order to generalise the notion of an exact sequence from abelian groups to commutative monoids) very interesting!!!

view this post on Zulip Adittya Chaudhuri (May 09 2025 at 10:43):

Now I am trying to work out a possible relation between my definition of exact sequences of commutative monoids and your contractible globular objects in commutative monoids.

Consider the following diagram in a category CC.

A:=A1A2A3A4A5A6A7An1An A:= A_1 \rightarrow A_2 \rightrightarrows A_3 \rightarrow A_4 \rightarrow A_5 \rightrightarrows A_6 \rightarrow A_7 \cdots \rightrightarrows A_{n-1}\rightarrow A_n .

Let fi,gif_{i}, g_{i} denote the parallel arrows AiAi+1 A_i \rightrightarrows A_{i+1}, and ϕi ⁣:AiAi+1\phi_i \colon A_i \to A_{i+1} denote the single arrows.

Then, I call AA a chain complex in CC if for all ii, we have

Now, consider a category CC with equalizers and coequalizers. Then, I define an exact sequence in CC to be a chain complex AA in CC

A:=A1A2A3A4A5A6A7An1An A:= A_1 \rightarrow A_2 \rightrightarrows A_3 \rightarrow A_4 \rightarrow A_5 \rightrightarrows A_6 \rightarrow A_7 \cdots \rightrightarrows A_{n-1}\rightarrow A_n

such that A1A_1 is the equalizer of A2A3A_2 \rightrightarrows A_3, A4A_{4} is the coequalizer of A2A3A_2 \rightrightarrows A_3, A4A_4 is the equalizer of A5A6A_5 \rightrightarrows A_6 , A7A_7 is the coequalizer of A5A6A_5 \rightrightarrows A_6 , and so on.

Now, by combining your first two conjectures, we get a one-to-one correspondence between chain complexes in an abelian category CC and globular objects in CC. If I am not misunderstanding, you have not defined a notion of chain complex in the category of commutative monoids; instead you defined the (probably equivalent) notion in terms of globular objects in the category of commutative monoids, to avoid the problem of "exactness".

However, I have two questions:

1) Is there a one-to-one correspondence between an appropriate notion of chain complexes in the category of commutative monoids and globular objects in the category of commutative monoids?

2) What is the appropriate notion of a chain complex in the category of commutative monoids?

I have an intuitive feeling that via questions (1) and (2) our ideas may converge.

In other words, I feel that my definition of chain complex may answer questions (1) and (2). Moreover, maybe my exactness condition on chain complexes will produce your contractibility conditions on globular objects, and vice versa.

The above discussion is just a feeling. I know I have to make my statements more concrete. I am working on it.

Also, it is very possible that the above discussion leads nowhere.

view this post on Zulip James Deikun (May 09 2025 at 13:31):

Just as a kernel pair describes the failure of an arrow to be monic, and a cokernel pair its failure to be epic, one could have a pair that describes the failure of a pair to be jointly monic/epic. I wonder if this idea is of use in this context.

view this post on Zulip John Baez (May 09 2025 at 16:46):

Adittya Chaudhuri said:

However, I have two questions:

1) Is there a one-to-one correspondence between an appropriate notion of chain complexes in the category of commutative monoids and globular objects in the category of commutative monoids?

2) What is the appropriate notion of a chain complex in the category of commutative monoids?

I was trying to argue that the appropriate notion of chain complex in the category of commutative monoids is precisely the notion of globular object in the category of commutative monoids.

I conjectured that globular objects in the category of abelian groups are equivalent to chain complexes in the category of abelian groups, where we define d=std = s - t. (I think checking this conjecture is just a matter of generalizing some calculations in HDA6.)

But we can't subtract morphisms between commutative monoids, in general. So I believe we should avoid the temptation to work with chain complexes, and simply use globular objects - which are, I believe, more fundamental!

view this post on Zulip John Baez (May 09 2025 at 16:53):

But here's another idea: simplicial objects in the category of abelian groups are also equivalent to chain complexes of abelian groups: that's the [[Dold-Kan theorem]].

Maybe simplicial objects in the category of commutative monoids are even better than globular objects!

However, what I want to do now is think about your ideas regarding Mayer-Vietoris for the homology monoid of a graph. A graph is such a simple thing that many of the fancy ideas I'm talking about become much simpler in this case. We don't need to figure out the general theory of how to generalize homological algebra to commutative monoids to finish our paper!

(A graph is a very simple sort of globular set, and a reflexive graph is a very simple sort of simplicial set.)

view this post on Zulip Adittya Chaudhuri (May 09 2025 at 18:03):

John Baez said:

(A graph is a very simple sort of globular set, and a reflexive graph is a very simple sort of simplicial sets.)

Thanks! Yes, I agree!!

view this post on Zulip Adittya Chaudhuri (May 09 2025 at 18:04):

John Baez said:

Maybe simplicial objects in the category of commutative monoids are even better than globular objects!

Interesting!!

view this post on Zulip Adittya Chaudhuri (May 09 2025 at 18:14):

John Baez said:

(a reflexive graph is a very simple sort of simplicial set.)

Interestingly, it seems a graph in our sense can also be seen as a simplicial set of dimension 1\leq 1, as in Directed Graphs as Simplicial Sets; however, morphisms in our sense are different from morphisms in their sense. Hence our categories and their categories may be different.

view this post on Zulip Adittya Chaudhuri (May 09 2025 at 18:16):

James Deikun said:

Just as a kernel pair describes the failure of an arrow to be monic, and a cokernel pair its failure to be epic, one could have a pair that describes the failure of a pair to be jointly monic/epic. I wonder if this idea is of use in this context.

Interesting! Thank you!

view this post on Zulip Adittya Chaudhuri (May 09 2025 at 18:19):

John Baez said:

A graph is such a simple thing that many of the fancy ideas I'm talking about become much simpler in this case. We don't need to figure out the general theory of how to generalize homological algebra to commutative monoids to finish our paper!

I agree!!

view this post on Zulip Adittya Chaudhuri (May 10 2025 at 11:07):

I think we are now thinking about two (probably independent) problems in the context of the emergence of feedback loops in directed graphs when gluing graphs along vertices, or along vertices and edges.

Now, I think, I found something in the direction of (2), although in a much simpler case (at the moment), that is when XYX \cap Y only contain vertices.

Since we have shown

Description of the idea:

Let X,YGphX, Y \in \mathsf{Gph} be such that XYX \cap Y contains only vertices. Now, let Path(X)\mathsf{Path}(X) and Path(Y)\mathsf{Path}(Y) denote the set of paths in XX and the set of paths in YY, respectively. Let V(X)V(X) and V(Y)V(Y) denote the vertices of XX and of YY, respectively.

Now, let us define the following:

Now, let PX,Y:=i=0nPiX,Y=i=0nPiX,Y\mathsf{P}^{X,Y}:= \sqcup^{n}_{i=0}\mathsf{P}^{X,Y}_{i}= \cup^{n}_{i=0}\mathsf{P}^{X,Y}_{i}.

Claim:
There is a category HX,Y\mathsf{H}_{X,Y} whose

Proof: There are obvious source, target and unit maps. Composition is defined by concatenation, which can be seen to be associative. Hence, HX,Y=[PX,YP0X,Y]H_{X,Y}=[\mathsf{P}^{X,Y} \rightrightarrows \mathsf{P}^{X,Y}_{0}] is a category.

Definition (Emergent paths)
Let X,YGphX, Y \in \mathsf{Gph} such that XYX \cap Y contains only vertices. Then, for each ii, define PiX,YP^{X,Y}_i as the set of emergent paths of degree ii in the graph XYX \cup Y.

Definition (Emergent loops)
Let X,YGphX, Y \in \mathsf{Gph} such that XYX \cap Y contains only vertices. Then for each vV(X)V(Y)v \in V(X) \cup V(Y), define the automorphism monoid Aut(v){\rm{Aut}}(v) in the category HX,Y\mathsf{H}_{X,Y} as the set of emergent loops based at the vertex vv in the graph XYX \cup Y. Furthermore, define the set-theoretic intersection PiAut(v)\mathsf{P}_{i} \cap {\rm{Aut}}(v) as the set of emergent loops of degree ii based at vv in the graph XYX \cup Y.

Now, it may be interesting to see how holonomy comes into the picture, i.e. emergent holonomy. More precisely, for a commutative monoid (L,×,1)(L, \times, 1), let (X,X)(X, \ell_{X}) and (Y,Y)(Y, \ell_{Y}) be two LL-labeled graphs such that XYX \cap Y contains only vertices. Now, we have induced holonomy maps

which I think naturally induce a functor X,Y ⁣:HX,YBL\ell_{X,Y} \colon \mathsf{H}_{X,Y} \to BL, which I call the emergent holonomy functor with respect to the gluing of the graphs XX and YY along vertices.

Inspiration of the above idea:

I used a similar idea to introduce a notion of parallel transport functor along Haefliger paths on a principal Lie 2-group bundle over a Lie groupoid (a groupoid object in the category of principal bundles) in Sections 4 and 5 of my paper PARALLEL TRANSPORT ON A LIE 2-GROUP BUNDLE OVER A LIE GROUPOID ALONG HAEFLIGER PATHS. I think the idea of Haefliger paths goes back to André Haefliger (for example, see Section 4.1.3 of CLOSED GEODESICS ON ORBIFOLDS). Because of this, if my construction above is correct, I would like to call my category HX,YH_{X,Y} the Haefliger path category of emergence in XYX \cup Y.

view this post on Zulip John Baez (May 10 2025 at 11:35):

This looks interesting. Let's see if I can explain the main idea simply using fewer symbols. I try to avoid formulas until the main idea has been explained in words. It's much easier to understand complex formulas if you already know what they say!

Here's what I guess you're saying:

Suppose we have a graph that's the union of two subgraphs XX and YY that have no edges in common, only vertices. Then there's a category where objects are vertices in XYX \cap Y and morphisms are paths between such vertices.

The set of morphisms is thus the disjoint union of sets Pi\mathsf{P}_i, where Pi\mathsf{P}_i consists of paths such that....

... here I'm confused. I would have guessed that Pi\mathsf{P}_i should consist of paths that go back and forth between XX and YY ii times. Then:

But you seem to be saying something else. First, you say P0\mathsf{P}_0 consists of vertices in XYX \cap Y, which are objects. So the morphisms start with P1\mathsf{P}_1, and you say

P1X,Y:=(Path(X)×tX,XY,s(Y)Path(Y))(Path(Y)×tY,XY,sXPath(X))\mathsf{P}^{X,Y}_{1}:= \Big( \mathsf{Path}(X) \times _{t_{X}, X \cap Y, s(Y)}\mathsf{Path}(Y) \Big) \cup \Big( \mathsf{Path}(Y) \times _{t_{Y}, X \cap Y, s_X} \mathsf{Path}(X) \Big)

I don't see any need for those superscripts X,YX,Y but that's a minor point. More importantly: what do all these symbols say?

You seem to be describing the set of paths that either start in XX and then go into YY, or start in YY and then go into XX. So this is exactly what I would have called P1\mathsf{P}_1. Okay, good!

So the only difference between what you're saying and what I would have guessed is that:

1) you're ignoring the set of paths that stay in XX or stay in YY, which I would call P0\mathsf{P}_0

2) you're instead using P0\mathsf{P}_0 to mean the set of morphisms.

Is that correct?

view this post on Zulip Adittya Chaudhuri (May 10 2025 at 11:38):

Thanks!! Yes, we need another layer as you said. Somehow I forgot to consider.

view this post on Zulip John Baez (May 10 2025 at 11:39):

Okay, great!

view this post on Zulip Adittya Chaudhuri (May 10 2025 at 11:40):

The objects of HX,YH_{X,Y} are V(X)V(Y)V(X) \cup V(Y), and the morphisms are what I described, together with your "paths that lie entirely either in XX or in YY".

view this post on Zulip Adittya Chaudhuri (May 10 2025 at 11:40):

John Baez said:

Okay, great!

Thanks!

view this post on Zulip Adittya Chaudhuri (May 10 2025 at 11:44):

John Baez said:

Suppose we have a graph that's the union of two subgraphs XX and YY that have no edges in common, only vertices. Then there's a category where objects are vertices in XYX \cap Y and morphisms are paths between such vertices.

As objects, I meant V(X)V(Y)V(X) \cup V(Y) not vertices in XYX \cap Y.

view this post on Zulip John Baez (May 10 2025 at 11:45):

Now, it would be really great if the composite of a morphism in Pi\mathsf{P}_i and a morphism in Pj\mathsf{P}_j was a morphism in Pi+j\mathsf{P}_{i+j}, since then we'd have a category enriched in N\mathbb{N}-graded sets, but that's not true with this definition, since:

but

I think we can fix this by writing Pi\mathsf{P}_i as the disjoint union of two subsets, say QiQ_i and RiR_i. QiQ_i consists of paths in Pi\mathsf{P}_i whose edges start in XX, while RiR_i consists of paths in Pi\mathsf{P}_i whose edges start in YY.

Then we have rules like: composing a path in Q1Q_1 and a path in R1R_1, we get a path in Q2Q_2. The rules are a bit complicated but we could probably express them in a nice way after some thought.
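One way to see why the grading can't simply add is to track only which subgraph each edge of a path lies in. This Python sketch (with hypothetical paths, not from the discussion) shows that the number of X/Y alternations of a composite depends on whether the paths meet on matching sides.

```python
# Represent a path only by which subgraph each edge lies in ('X' or 'Y'),
# and count how often it switches sides.

def alternations(sides):
    return sum(1 for a, b in zip(sides, sides[1:]) if a != b)

p = ["X", "X", "Y"]   # a path whose edges start in X and end in Y: 1 switch
q = ["Y", "X"]        # a path starting in Y, ending in X: 1 switch
r = ["X", "Y"]        # a path starting in X, ending in Y: 1 switch

# Composing p with q: q starts on the side where p ends, so no new switch:
assert alternations(p + q) == alternations(p) + alternations(q)
# Composing p with r: r starts in X but p ends in Y, adding one extra switch:
assert alternations(p + r) == alternations(p) + alternations(r) + 1
# So the grade of a composite depends on the start/end sides, not just the
# two grades -- hence a grading monoid bigger than N is needed.
```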

view this post on Zulip John Baez (May 10 2025 at 11:46):

Adittya Chaudhuri said:

For the objects, I meant V(X)V(Y)V(X) \cup V(Y), not vertices in XYX \cap Y.

Okay, good.

view this post on Zulip Adittya Chaudhuri (May 10 2025 at 11:49):

John Baez said:

Then we have rules like: composing a path in Q1Q_1 and a path in R1R_1, we get a path in Q2Q_2. The rules are a bit complicated but we could probably express them in a nice way after some thought.

Thank you!! This idea looks very interesting!! Then we can actually do algebraic operations on the emergent paths.

view this post on Zulip John Baez (May 10 2025 at 11:50):

Right! I hope we get a way of grading the homsets where the set of grades is some monoid MM, and composing a morphism in Pm\mathsf{P}_m with one in Pn\mathsf{P}_{n} gives a morphism in Pm+n\mathsf{P}_{m+n} where m,nMm, n \in M.

But this monoid is more complicated than N\mathbb{N}. As a set it's N+N\mathbb{N}+\mathbb{N}, but it has a funny addition.

view this post on Zulip Adittya Chaudhuri (May 10 2025 at 11:52):

Thanks.. yes I agree.

view this post on Zulip John Baez (May 10 2025 at 11:52):

Anyway, I need to spend time continuing to think about Mayer-Vietoris. My progress has been very slow, and my excuse is that I'm getting started in my job at the University of Edinburgh, doing a lot of stuff like getting a library card, taking required online courses, meeting the category theory grad students, etc.

view this post on Zulip Adittya Chaudhuri (May 10 2025 at 11:54):

No no... it's completely fine. I will then try to think about grading the homsets in the way you suggested.

view this post on Zulip Adittya Chaudhuri (May 10 2025 at 17:41):

Let me recall the category HX,YH_{X,Y} as I defined it before (along with the paths that lie entirely in XX or in YY).

Now, let P:=i=0Pi=i=0Pi\mathsf{P}:= \sqcup^{\infty}_{i=0}\mathsf{P}_{i}= \cup^{\infty}_{i=0}\mathsf{P}_{i}.

Then, as I explained, we can define a category HX,Y\mathsf{H}_{X,Y} whose objects are V(X)V(Y)V(X) \cup V(Y) and whose morphisms are the paths described above.

Now, you said

I hope we get a way of grading the homsets where the set of grades is some monoid MM, and composing a morphism in Pm\mathsf{P}_m with one in Pn\mathsf{P}_{n} gives a morphism in Pm+n\mathsf{P}_{m+n} where m,nMm, n \in M.

Claim:

Below I am trying to show that such a monoid cannot exist (when the grading is based on the number of times a path in XYX \cup Y makes a transition from XX to YY or from YY to XX as a result of gluing XX and YY along vertices).

Let us assume that such a monoid MM exists. In that case, note that by the way I defined composition in HX,Y\mathsf{H}_{X,Y} (i.e. by concatenation), we need to have distinct grading elements for

Let's check associativity!!

(x+y)+z=z+z(x+y)+z= z +z as x+y=zx+y=z (by the way we compose two elements in P1\mathsf{P}_1 in the category HX,Y\mathsf{H}_{X,Y}.)

Now, consider x+(y+z)=x+zx+(y+z)=x+z as y+z=zy+z=z (by the way we compose an element in P1\mathsf{P}_1 and an element in P2\mathsf{P}_2 in the category HX,Y\mathsf{H}_{X,Y}). Now, x+z=zx+z=z (by the same reasoning).

Hence, x+(y+z)=zx+(y+z)=z. However, from the way we compose two elements of P2\mathsf{P}_2 in the category HX,Y\mathsf{H}_{X,Y}, we have z+zzz+z \neq z. Thus, the binary operation ++ is not associative, and hence MM cannot be a monoid. (Proved)

Now, I will try to show that we can still get a nice grading monoid (well behaved with respect to the concatenation of paths) based on the holonomy functor.

Let me first recall the set up.

For a commutative monoid (L,×,1)(L, \times, 1), let (X,X)(X, \ell_{X}) and (Y,Y)(Y, \ell_{Y}) be two LL-labeled graphs such that XYX \cap Y contains only vertices. Now, we have induced holonomy maps

which induces a functor X,Y ⁣:HX,YBL\ell_{X,Y} \colon \mathsf{H}_{X,Y} \to BL.

Now, I am defining an LL-grading which works perfectly with the composition law in the category HX,Y\mathsf{H}_{X,Y} (as X,Y\ell_{X,Y} is a functor).

For each morphism ll in BLBL, let Pl=X,Y1(l)\mathsf{P}_{l}=\ell_{X,Y}^{-1}(l) be the set of morphisms in HX,Y\mathsf{H}_{X,Y} whose holonomy is ll. Note that lBLX,Y1(l)=Mor(HX,Y)\cup_{l \in BL}\ell_{X,Y}^{-1}(l)={\rm{Mor}}(\mathsf{H}_{X,Y}), and thus this defines an LL-grading.
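Here is a tiny Python sketch (all names hypothetical) of why this grading is automatically well behaved: the holonomy of a path is just the product of its edge labels in the monoid, so the grade of a concatenation is the product of the grades.

```python
# A sketch of the holonomy L-grading: the grade of a path is the product
# of its edge labels in the monoid L, so the grading is automatically
# compatible with concatenation of paths. All names here are made up.

def holonomy(path, mult, unit):
    """Product of the labels along a path (a list of labels) in the monoid L."""
    g = unit
    for label in path:
        g = mult(g, label)
    return g

# Example: L = the sign monoid ({+1, -1}, *), as for graphs with polarities.
sign_mult = lambda a, b: a * b

p = [+1, -1, -1]   # a path whose edges are labeled +, -, -
q = [-1, +1]       # another path, composable with p

# Functoriality: the grade of the concatenation p.q equals
# the product of the grades of p and q.
lhs = holonomy(p + q, sign_mult, +1)
rhs = sign_mult(holonomy(p, sign_mult, +1), holonomy(q, sign_mult, +1))
```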

Thus, using the category HX,Y\mathsf{H}_{X,Y}, we can now grade the paths in the graph XYX \cup Y in two different ways: (1) by the number of transitions between XX and YY, and (2) by holonomy.

As I have shown, grading (1) is not well behaved with respect to the concatenation of paths, but grading (2) is well behaved.

However, I think this ill behaviour of (1) with respect to the composition law of the category HX,Y\mathsf{H}_{X,Y} tells us precisely that the quantity "the number of times a path in XYX \cup Y transitions from XX to YY or from YY to XX as a result of gluing XX and YY along vertices" is indeed an emergent phenomenon. On the contrary, the good behaviour of (2) with respect to the composition law of the category HX,Y\mathsf{H}_{X,Y} tells us that the holonomy of paths is not an emergent phenomenon when we glue XX and YY along vertices.

view this post on Zulip Adittya Chaudhuri (May 11 2025 at 06:54):

I am writing down somethings I noticed:

When XYX \cap Y contains only vertices, I think the category HX,YH_{X,Y} is the same as Free(XY)\mathsf{Free}(X \cup Y). However, interestingly, the description of HX,YH_{X,Y} defines an N\mathbb{N}-grading P0,P1,P2,\mathsf{P}_0, \mathsf{P}_1, \mathsf{P}_2, \ldots on the hom\hom-sets of Free(XY)\mathsf{Free}(X \cup Y), based on the number of times a path in XYX \cup Y transitions from XX to YY or from YY to XX as a result of gluing XX and YY along vertices.

Thus, from the above discussion, I think the functor X,Y ⁣:HX,YBL\ell_{X,Y} \colon \mathsf{H}_{X,Y} \to BL is the same as the functor Free(XY)BL\mathsf{Free}(X \cup Y) \to BL. Basically, I think it tells us that for a monoid LL, horizontal composition of open LL-labeled graphs goes to horizontal composition of open LL-labeled categories in a functorial way.

Hence, I think we can interpret every holonomy map F ⁣:Free(XY)BL\mathsf{F} \colon \mathsf{Free}(X \cup Y) \to BL as an LL-grading on the set Mor(Free(XY))\rm{Mor}(\mathsf{Free}(X \cup Y)) (hence, on the set of paths in XYX \cup Y) induced by the holonomies of paths in XX and of paths in YY via the monoid LL (a grading which is well behaved with respect to the concatenation of paths) when we glue XX and YY along vertices.

view this post on Zulip John Baez (May 11 2025 at 09:14):

That sounds right! By the way, this stuff is closely connected to @Jade Master's work and we must not forget that.

view this post on Zulip Adittya Chaudhuri (May 11 2025 at 11:51):

Thanks. Yes, definitely!! I never thought about any relationship between grading and emergence before you shared @Jade Master's idea on it.

view this post on Zulip Adittya Chaudhuri (May 11 2025 at 12:00):

Jade Master said:

The theorem I want to claim without proof is that
AF(X+Y) \int \mathcal{A} \cong F(X + Y)
where \int is a variant of Grothendieck construction, or more accurately the displayed category construction, and F(X+Y)F(X + Y) is the free category on the pushout of XX and YY.

@John Baez Although I have not gone through every detail of Jade's idea, it now feels like my Haefliger path category HX,YH_{X,Y} might be the same as A \int \mathcal{A}, because I also claimed HX,YFree(XY)H_{X,Y} \cong \mathsf{Free}(X \cup Y). Maybe Jade's idea is more general (I think she is also considering the case where XYX \cap Y contains edges?)

view this post on Zulip John Baez (May 11 2025 at 12:20):

I'll have to review Jade's work! I'm less familiar with the Grothendieck construction aspect and more familiar with the F(X+ZY)F(X +_Z Y) aspect - the free category on a pushout of graphs.

Here are some preliminaries to get ready for that second aspect.

First, a little theorem: whenever we have a monoidal category with coproducts where the tensor product distributes over the coproducts, the free monoid on an object XX is

n0Xn \sum_{n \ge 0} X^{\otimes n}

so it's automatically N\mathbb{N}-graded.

For example in (Set,×)(\mathsf{Set}, \times) this says that the free monoid on a set XX is

n0Xn \sum_{n \ge 0} X^n

This consists of 'words' in the 'alphabet' XX, where XnX^n is the set of words of length nn. The product in this monoid comes from the obvious maps

Xn×XmXn+m X^n \times X^m \to X^{n+m}
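In code, this N\mathbb{N}-grading is just the fact that concatenating words adds lengths — a tiny sketch (hypothetical names):

```python
# The free monoid on a set X as words (tuples of letters), with its
# automatic N-grading by word length. Hypothetical minimal encoding.

def concat(w1, w2):
    """Multiplication in the free monoid: concatenation of words."""
    return w1 + w2

w1 = ('a', 'b', 'a')   # a word in X^3, for the alphabet X = {'a', 'b'}
w2 = ('b', 'b')        # a word in X^2

# The product lands in X^{3+2}: the map X^n x X^m -> X^{n+m}.
w = concat(w1, w2)
```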

I forget if this little theorem applies directly to the free category on a graph, but I'm thinking maybe it does, as follows:

There is a bicategory Span\mathbf{Span} of sets, spans of sets, and maps of spans, where composition of spans is done via pullback. A graph is the same thing as an endospan: a span from a set to itself. What's a category in these terms?

Since a one-object bicategory is a monoidal category, the category Span(V,V)\mathbf{Span}(V,V) of spans from a fixed set VV to itself is a monoidal category. So there's an interesting monoidal structure on the category of graphs with a fixed set VV of vertices.

And a monoid in this monoidal category is just a category with VV as its set of objects! This is fun to check.

I believe Span(V,V)\mathbf{Span}(V,V) has coproducts and the tensor product (composition of endospans) distributes over coproducts. If so, the free category on a graph XSpan(V,V)X \in \mathbf{Span}(V,V) is

n0Xn \sum_{n \ge 0} X^{\otimes n}
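Here is a minimal Python sketch of this (the encoding is my own, hypothetical): a graph on a fixed vertex set as a set of edges, with the tensor product given by composing endospans, so that tensor powers enumerate paths.

```python
# A graph on a vertex set V, encoded (hypothetically) as a set of triples
# (source, name, target); names are tuples so that they concatenate into
# path names under composition.

def tensor(G, H):
    """Composition of endospans of sets: all composable pairs of edges/paths."""
    return {(s1, n1 + n2, t2)
            for (s1, n1, t1) in G
            for (s2, n2, t2) in H
            if t1 == s2}

X = {('u', ('e',), 'v')}   # one edge e : u -> v
Y = {('v', ('f',), 'u')}   # one edge f : v -> u
G = X | Y                  # the coproduct X + Y in Span(V, V)

# (X + Y)^{tensor 2}: the length-2 paths. Here only the cross-terms
# X tensor Y and Y tensor X survive, since e and f are not composable
# with themselves.
paths2 = tensor(G, G)
```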

view this post on Zulip John Baez (May 11 2025 at 12:25):

@Jade Master generalized these ideas and applied them to study emergence.

view this post on Zulip Adittya Chaudhuri (May 11 2025 at 13:04):

Thanks! These ideas sound very interesting, and much more general in nature than what I was thinking with respect to Haefliger GG-paths (inspired by the notion of paths in Lie groupoids, cf. "The fundamental group(oid) of a Lie groupoid"). I am trying to understand Jade's ideas in more detail.

view this post on Zulip Jade Master (May 11 2025 at 13:37):

Thanks @Adittya Chaudhuri and @John Baez I'll take a look at Haefliger G-paths, I promise to explain myself in more detail as well. I hope to convince you that the Grothendieck construction perspective is worth it if you put the time in to understand. More soon!

view this post on Zulip Adittya Chaudhuri (May 11 2025 at 14:13):

Thank you so much @Jade Master

view this post on Zulip Adittya Chaudhuri (May 11 2025 at 14:33):

John Baez said:

And a monoid in this monoidal category is just a category with VV as its set of objects! This is fun to check.

I believe Span(V,V)\mathbf{Span}(V,V) has coproducts and the tensor product (composition of endospans) distributes over coproducts. If so, the free category on a graph XSpan(V,V)X \in \mathbf{Span}(V,V) is

n0Xn \sum_{n \ge 0} X^{\otimes n}

Interesting!! I think the composition law in the category associated to a monoid in the monoidal category coming from the 1-object bicategory is the same as the monoid operation, and this monoid operation is the same as the horizontal composition law of the one-object bicategory. Then, I think the category you described is precisely the underlying 1-category of the 1-object bicategory: its object set is a singleton and its morphisms are the horizontal morphisms of the 1-object bicategory.

Hence, as you explained, the free category on a graph XSpan(V,V)X \in \mathbf{Span}(V,V) is the same as the free monoid on XX, defined as n0Xn \sum_{n \ge 0} X^{\otimes n}.

Although I have yet to check this rigorously, I am more or less convinced that n0(XY)n \sum_{n \ge 0} (X \cup Y)^{\otimes n} is the same as my HX,YH_{X,Y} (maybe with some small changes).

view this post on Zulip John Baez (May 11 2025 at 14:56):

The framework I outlined was designed for applications where you have a single graph with VV as its set of vertices. In our discussions XX and YY are usually graphs with different (but possibly nondisjoint) sets of vertices, and disjoint sets of edges.

However I believe we can handle that situation as follows. Suppose XX and YY have sets of vertices VXV_X and VYV_Y, and disjoint sets of edges. Throw more vertices into each graph so that now they both have the same vertex set V=VXVYV = V_X \cup V_Y, but still disjoint sets of edges.

Then we have two graphs - I'll still call them XX and YY - both of which are objects in Span(V,V)\mathbf{Span}(V,V). The graph you're calling XYX \cup Y is, I believe, the coproduct X+YX + Y in the category Span(V,V)\mathbf{Span}(V,V). Thus, the free category on this graph is

n0(X+Y)n\sum_{n \ge 0} (X + Y)^{\otimes n}

and we can expand out (X+Y)n(X + Y)^{\otimes n} using the distributive law, but remembering that \otimes is not symmetric. E.g.:

(X+Y)2X2+XY+YX+Y2 (X + Y)^{\otimes 2} \cong X^{\otimes 2} + X \otimes Y + Y \otimes X + Y^{\otimes 2}

view this post on Zulip John Baez (May 11 2025 at 15:04):

From this we can easily see that

n0(X+Y)n=n0Xn+n0Yn+cross-terms \sum_{n \ge 0} (X + Y)^{\otimes n} = \sum_{n \ge 0} X^{\otimes n} + \sum_{n \ge 0} Y^{\otimes n} + \text{cross-terms}

where the cross-terms come from the 'emergent' paths that are not paths in XX and not paths in YY.

view this post on Zulip John Baez (May 11 2025 at 15:07):

All this is very much like material in Jade's thesis. I believe we see here that it's easier to study 'emergent paths' than to only study 'emergent loops'. It's similar to how the fundamental groupoid of a space is easier to understand (in some ways) than the fundamental group.

view this post on Zulip John Baez (May 11 2025 at 15:14):

By the way, from the calculations above I think it's easy to see that the homsets in the category

n0(X+Y)n \sum_{n \ge 0} (X + Y)^{\otimes n}

are graded by the free monoid on two generators xx and yy. For example, all the morphisms in XYXXYX Y X X Y have grade xyxxyx y x x y.
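A small Python sketch of this grading (my own hypothetical encoding): tag each edge of X+YX + Y by which subgraph it comes from, so a path's grade is its word of tags in the free monoid on {x, y}, and the 'emergent' paths are exactly those whose grade uses both generators.

```python
# Grading paths in X + Y by words in the free monoid on {x, y}:
# each edge carries a tag recording which subgraph it comes from,
# and the grade of a path is its word of tags. Hypothetical encoding.

from itertools import product

def grade(path_tags):
    """The grade of a path: its word of tags, an element of {x,y}*."""
    return ''.join(path_tags)

def is_emergent(word):
    """A path is emergent iff its grade uses both generators."""
    return 'x' in word and 'y' in word

# All length-2 words of tags, i.e. the expansion of (X + Y)^{tensor 2}
# into X^2 + XY + YX + Y^2:
words = [grade(p) for p in product('xy', repeat=2)]
cross_terms = [w for w in words if is_emergent(w)]
```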

view this post on Zulip Jade Master (May 11 2025 at 15:49):

John Baez said:

From this we can easily see that

n0(X+Y)n=n0Xn+n0Yn+cross-terms \sum_{n \ge 0} (X + Y)^{\otimes n} = \sum_{n \ge 0} X^{\otimes n} + \sum_{n \ge 0} Y^{\otimes n} + \text{cross-terms}

where the cross-terms come from the 'emergent' paths that are not paths in XX and not paths in YY.

So if you squint at this formula you can see how the terms in the sum correspond to paths in the graph GG pictured below:
shapegraph.PNG
First define a function A:Edges(G)Span(V,V)A : \mathbf{Edges}(G) \to \mathbf{Span}(V,V) by

Are you with me so far? So then there is a bijection
n0(X+Y)nfmorFGA(f) \sum_{n \ge 0} (X + Y)^{\otimes n} \cong \sum_{f \in \mathsf{mor} FG} \mathfrak{A}(f)
where if ff is the composite f=f0;;fnf = f_0 ; \ldots ; f_n then A(f)=A(f0)A(fn)\mathfrak{A}(f) = A(f_0) \cdot \ldots \cdot A(f_n).

view this post on Zulip John Baez (May 11 2025 at 15:57):

I'm with you! By the way, I suspect everything I just explained is just a slight variant of stuff in your thesis.

view this post on Zulip Jade Master (May 11 2025 at 16:08):

John Baez said:

I'm with you! By the way, I suspect everything I just explained is just a slight variant of stuff in your thesis.

Well nowadays I think about this stuff a little bit differently from what I wrote back then. I hope my new ways are better, but also I've probably just forgotten some of what I wrote there :laughing: it was a while ago.

view this post on Zulip Adittya Chaudhuri (May 11 2025 at 16:24):

John Baez said:

By the way, from the calculations above I think it's easy to see that the homsets in the category

n0(X+Y)n \sum_{n \ge 0} (X + Y)^{\otimes n}

are graded by the free monoid on two generators xx and yy. For example, all the morphisms in XYXXYX Y X X Y have grade xyxxyx y x x y.

To me, these ideas look very beautiful, elegant and interesting!! Thanks to both of you!! Regarding this grading, I have one question.

I agree with the grading you defined; however, would it be more interesting if we could somehow (by quotienting or something) identify xm1yn1xm2yn2xmk x^{m_1}y^{n_1}x^{m_2}y^{n_2} \cdots x^{m_{k}} with xyxyyxktimes\underbrace{xyxy \cdots yx}_{k- \text{times}} for all mi,nim_i, n_i? This quotiented grading is precisely what I had in HX,YH_{X, Y}. However, I think this quotiented grading does not behave well with the concatenation of paths. I explained it in the claim portion here #theory: applied category theory > Graphs with polarities @ 💬

view this post on Zulip Adittya Chaudhuri (May 11 2025 at 16:40):

Now I understand that the grading by two generators xx and yy is much better than the quotiented grading I had in HX,YH_{X, Y}, because your grading also gives information on the lengths and sequence of the portions of a path in X+YX+Y that lie in XX and of the portions that lie in YY. Interesting!!

view this post on Zulip Adittya Chaudhuri (May 11 2025 at 16:45):

Thanks @Jade Master . I am trying to understand your ideas.

view this post on Zulip John Baez (May 11 2025 at 16:48):

Adittya Chaudhuri said:

John Baez said:

By the way, from the calculations above I think it's easy to see that the homsets in the category

n0(X+Y)n \sum_{n \ge 0} (X + Y)^{\otimes n}

are graded by the free monoid on two generators xx and yy. For example, all the morphisms in XYXXYX Y X X Y have grade xyxxyx y x x y.

To me, these ideas look very beautiful, elegant and interesting!! Thanks to both of you!! Regarding this grading, I have one question.

I agree with the grading you defined, however, would it be more interesting if we could somehow (by quotienting or something) identify xm1yn1xm2yn2xmk x^{m_1}y^{n_1}x^{m_2}y^{n_2} \cdots x^{m_{k}} with xyxyyxktimes\underbrace{xyxy \cdots yx}_{k- \text{times}} for all mi,nim_i, n_i? This quotiented grading is precisely what I had in HX,YH_{X, Y}.

With this quotiented grading, we are treating

n0(X+Y)n \sum_{n \ge 0} (X + Y)^{\otimes n}

as graded over the free monoid on two idempotents xx and yy.

This monoid is a quotient of the free monoid on two generators, where we impose the additional relations x2xx^2 \sim x, y2yy^2 \sim y.

However, I think this quotiented grading does not behave well with concatenation of paths.

I think it does in the situation I'm talking about where the set of edges of XX is disjoint from the set of edges of YY.

view this post on Zulip John Baez (May 11 2025 at 16:51):

Let me check.

view this post on Zulip John Baez (May 11 2025 at 16:55):

Suppose

vevev v \xrightarrow{e} v' \xrightarrow{e'} v''

is an edge path where ee is an edge in XX and ee' is an edge in YY. So this edge path has grade xyx y.

Suppose

vevev v'' \xrightarrow{e''} v''' \xrightarrow{e'''} v''''

is an edge path where ee'' is an edge in YY and ee''' is an edge in XX. This edge path has grade yxy x.

Now let's compose them:

vevevevev v \xrightarrow{e} v' \xrightarrow{e'} v'' \xrightarrow{e''} v''' \xrightarrow{e'''} v''''

This edge path should have grade xyyxx y y x. But since we're now treating yy as idempotent, xyyx=xyxx y y x = x y x. So this edge path should have grade xyxx y x. And this corresponds to it starting out in XX, then going into YY, and then going back into XX.

It seems okay.
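Here is a quick brute-force check in Python (a sketch) that this idempotent grading really is compatible with composition: collapsing repeated letters commutes with concatenation of words.

```python
# In the free monoid on two idempotents x, y (relations x^2 ~ x, y^2 ~ y),
# the normal form of a word collapses runs of repeated letters. The check:
# reducing before or after concatenating gives the same grade.

from itertools import groupby, product

def collapse(word):
    """Normal form: collapse runs of repeated letters, e.g. 'xyyx' -> 'xyx'."""
    return ''.join(letter for letter, _ in groupby(word))

# Exhaustively check compatibility for all pairs of words of length <= 3.
ok = all(
    collapse(u + v) == collapse(collapse(u) + collapse(v))
    for m in range(1, 4) for u in map(''.join, product('xy', repeat=m))
    for n in range(1, 4) for v in map(''.join, product('xy', repeat=n))
)
```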

view this post on Zulip Adittya Chaudhuri (May 11 2025 at 17:05):

Thanks!! I agree. I think I made a mistake while grading the elements of P2P_2 in #theory: applied category theory > Graphs with polarities @ 💬, as I graded all the elements of P2P_2 by the same monoid element. But, in your case xyyxxy \neq yx.

view this post on Zulip John Baez (May 11 2025 at 18:20):

I see - yes, my more refined grading works with the composition of paths.

view this post on Zulip Adittya Chaudhuri (May 11 2025 at 22:21):

Thanks. Yes, I agree.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 11:46):

@John Baez and I were discussing in DM about the construction of a rig that captures the idea of necessary stimulation #theory: applied category theory > Graphs with polarities @ 💬 in a directed labeled graph where the labels can be both added (describing the cumulative influences) and multiplied (describing the indirect influences). In this post, I am trying to gather the relevant ideas that we already discussed in this direction.

First, let me recall the constructions of two different monoids obtained by adjoining a single element to an existing monoid: (1) adjoining an absorbing element, and (2) adjoining an identity element.

Now, let's see what happens when we apply (1) to the multiplicative monoid (R,×,1)(R, \times, 1) of a rig (R,×,1,+,0)(R, \times, 1, +,0).

Thus, let me adjoin an absorbing element ϵ\epsilon to the monoid (R,×,1)(R, \times, 1). Since in any rig the additive identity 00 is the only absorbing element, ϵ\epsilon must become our new additive identity in the (hopefully new) rig (R,×,1,+,0){ϵ}(R, \times, 1, +,0) \cup \lbrace \epsilon \rbrace. But, interestingly, this also means that we have implicitly applied (2) to the additive monoid (R,+,0)(R, +, 0). Thus, we obtain two new monoids (R,×,1){ϵ}(R, \times, 1) \cup \lbrace \epsilon \rbrace and (R,+,0){ϵ}(R, +, 0) \cup \lbrace \epsilon \rbrace, which we think combine (via the distributive law) into a new rig (R,×,1,+,0){ϵ}(R, \times, 1, +,0) \cup \lbrace \epsilon \rbrace.

However, I am yet to verify the distributive law.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 11:51):

For context, we were mostly discussing the commutative ring Z/3\mathbb{Z}/3 (which is in particular a rig).

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 12:47):

I just verified the distributive law. Hence, (R,×,1,+,0){ϵ}(R, \times, 1, +,0) \cup \lbrace \epsilon \rbrace is a rig.
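For concreteness, here is a brute-force verification in Python (a sketch, my own encoding) of this construction for R = Z/3: adjoin a fresh element ε that is absorbing for multiplication and becomes the new additive identity, while the old 0 becomes an ordinary element.

```python
# Brute-force check that adjoining an element eps to Z/3, absorbing for x
# and the new identity for +, yields a rig. Hypothetical encoding.

EPS = 'eps'
elements = [0, 1, 2, EPS]     # Z/3 together with the new element

def add(a, b):
    if a == EPS: return b     # eps is the new additive identity
    if b == EPS: return a
    return (a + b) % 3

def mul(a, b):
    if a == EPS or b == EPS:  # eps is multiplicatively absorbing
        return EPS
    return (a * b) % 3

def is_rig():
    for a in elements:
        for b in elements:
            if add(a, b) != add(b, a): return False
            for c in elements:
                if add(add(a, b), c) != add(a, add(b, c)): return False
                if mul(mul(a, b), c) != mul(a, mul(b, c)): return False
                if mul(a, add(b, c)) != add(mul(a, b), mul(a, c)): return False
                if mul(add(a, b), c) != add(mul(a, c), mul(b, c)): return False
    # additive identity eps, multiplicative identity 1, eps absorbing:
    return all(add(EPS, a) == a and mul(1, a) == a and mul(EPS, a) == EPS
               for a in elements)
```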

view this post on Zulip John Baez (May 15 2025 at 13:01):

Great! So now let's see if I can understand the interpretation of this rig when R=Z/3R = \mathbb{Z}/3. I became confused thinking about it a while ago. Our new rig Z/3{ϵ}\mathbb{Z}/3 \cup \{\epsilon\} has four elements, and we are trying to use those as polarities meaning positive effect, zero effect, negative effect and 'necessary stimulus'. Which one is zero effect and which one is necessary stimulus?

view this post on Zulip John Baez (May 15 2025 at 13:03):

I suppose you should tell me names for the 4 elements - they're a bit confusing since the new absorbing element could be called 00 or ϵ\epsilon.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 13:03):

Thanks!! I think there is an issue (in the context of necessary stimulus) here for the case {+,,0}{ϵ} \lbrace +,-, 0 \rbrace \cup \lbrace \epsilon \rbrace, i.e. Z/3{ϵ}\mathbb{Z}/3 \cup \lbrace \epsilon \rbrace. Since 0=+(+)0= + (+) -, which in the sense of the application means "positive and negative are added to indeterminate" (as you explained before). However, from the above construction ϵ0=0ϵ=ϵ\epsilon \cdot 0= 0 \cdot \epsilon=\epsilon, which is a bit strange if we think of ϵ\epsilon as necessary stimulus.

view this post on Zulip John Baez (May 15 2025 at 13:04):

I don't think we should be talking about "indeterminate" here at all - right? The 4-element rig containing "indeterminate" is a completely different rig, right?

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 13:05):

Yes, I agree. But then, how do we interpret the addition of ++ and - in the rig capturing necessary stimulus?

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 13:08):

But the general mathematical construction (constructing a new rig by adjoining an absorbing element to the multiplicative monoid of the old rig) seems correct.

view this post on Zulip John Baez (May 15 2025 at 13:15):

Okay, so you're saying the problem is that with the current interpretation we get

no effect \cdot necessary stimulus = necessary stimulus

This is crazy so we cannot use this rig to handle "necessary stimulus".

view this post on Zulip John Baez (May 15 2025 at 13:18):

Let's think about what's going on. Multiplication is more important than addition in our framework so let's see if there is a reasonable multiplicative monoid

{positive effect, negative effect, no effect, necessary stimulus}

We know how to multiply everything except "necessary stimulus", e.g. we know we want

positive effect \cdot negative effect = negative effect

and so on. So: do you have a theory about what these should be?

necessary stimulus \cdot positive effect =
necessary stimulus \cdot negative effect =
necessary stimulus \cdot no effect =
necessary stimulus \cdot necessary stimulus =

I have good guesses about some but not others!

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 13:20):

John Baez said:

Okay, so you're saying the problem is that with the current interpretation we get

no effect \cdot necessary stimulus = necessary stimulus

This is crazy so we cannot use this rig to handle "necessary stimulus".

Yes, I completely agree.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 13:23):

According to the general construction,

necessary stimulus \cdot positive effect = necessary stimulus
necessary stimulus \cdot negative effect = necessary stimulus
necessary stimulus \cdot no effect = necessary stimulus
necessary stimulus \cdot necessary stimulus = necessary stimulus

As ϵ\epsilon is absorbing w.r.t. (R,×,1)(R, \times, 1).

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 13:24):

Which I think is crazy.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 13:25):

However, if we consider only a monoid (not a rig), as here #theory: applied category theory > Graphs with polarities @ 💬, then I think the idea makes sense.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 13:26):

I am now wondering what happens if we adjoin ϵ\epsilon to the additive monoid instead of the multiplicative monoid.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 13:28):

I think this makes sense because of the following:

necessary stimulus ++ positive effect = necessary stimulus
necessary stimulus ++ negative effect = necessary stimulus
necessary stimulus ++ no effect = necessary stimulus
necessary stimulus ++ necessary stimulus = necessary stimulus

which I think is not crazy by meaning of "necessary stimulus"

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 13:31):

I mean, although aa positively affects bb, unless aa activates bb (necessary stimulus), there is no activity of bb at all (hence, the positive effect does not count).

view this post on Zulip John Baez (May 15 2025 at 13:32):

According to the general construction,

necessary stimulus \cdot positive effect = necessary stimulus
necessary stimulus \cdot negative effect = necessary stimulus
necessary stimulus \cdot no effect = necessary stimulus
necessary stimulus \cdot necessary stimulus = necessary stimulus

Which I think is crazy.

I'm not interested in that construction now, because it's giving crazy results. I'm asking about biochemical reality. What would biochemists say the answers to these questions should be?

necessary stimulus \cdot positive effect =
necessary stimulus \cdot negative effect =
necessary stimulus \cdot no effect =
necessary stimulus \cdot necessary stimulus =

That's all I care about now.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 13:35):

I am not completely sure what biochemists think about it, however, I would expect something like this

necessary stimulus \cdot positive effect = positive
necessary stimulus \cdot negative effect = negative
necessary stimulus \cdot no effect = no effect
necessary stimulus \cdot necessary stimulus = necessary stimulus.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 13:40):

and I expect this

necessary stimulus ++ positive effect = necessary stimulus
necessary stimulus ++ negative effect = necessary stimulus
necessary stimulus ++ no effect = necessary stimulus
necessary stimulus ++ necessary stimulus = necessary stimulus

Hence, necessary stimulus is the absorbing element of the additive monoid.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 13:46):

And the necessary stimulus is an almost-identity element of the multiplicative monoid.
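Taking these two tables literally (my own hypothetical encoding: the elements of Z/3 stand for +, -, 0, and N stands for necessary stimulus, additively absorbing and acting as an identity for multiplication), a quick brute-force check suggests they do not combine into a rig, since distributivity fails:

```python
# A sketch: N ('necessary stimulus') is absorbing for + and acts as an
# identity for x, over Z/3 = {+, -, 0}. Search for distributivity failures.

N = 'N'
elements = [0, 1, 2, N]

def add(a, b):
    if a == N or b == N:      # N is additively absorbing
        return N
    return (a + b) % 3

def mul(a, b):                # N acts as a multiplicative identity
    if a == N: return b
    if b == N: return a
    return (a * b) % 3

def distributivity_failures():
    """Triples (a, b, c) with a(b + c) != ab + ac."""
    return [(a, b, c)
            for a in elements for b in elements for c in elements
            if mul(a, add(b, c)) != add(mul(a, b), mul(a, c))]
```

For instance, with a = +, b = N, c = +: the left side is + · N = +, while the right side is + + + = -, so this particular combination of tables cannot give a rig.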

view this post on Zulip John Baez (May 15 2025 at 13:46):

Adittya Chaudhuri said:

I am not completely sure what biochemists think about it, however, I would expect something like this

necessary stimulus \cdot positive effect = positive
necessary stimulus \cdot negative effect = negative
necessary stimulus \cdot no effect = no effect
necessary stimulus \cdot necessary stimulus = necessary stimulus.

Okay. Two of those match what I expect, but two of them are more tricky and I think I can argue that

necessary stimulus \cdot positive effect = necessary stimulus
necessary stimulus \cdot negative effect = necessary stimulus

Briefly, it may be that in a chain of causation if one step is necessary for the next to occur, it's necessary for the whole chain to occur.

view this post on Zulip John Baez (May 15 2025 at 13:46):

However, I'm afraid biochemists may think differently.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 13:48):

It sounds reasonable and interesting, but I am also not so sure what biochemists would say.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 13:52):

Personally, I am very much convinced (thinking about chains of causation) that

necessary stimulus \cdot positive effect = necessary stimulus
necessary stimulus \cdot negative effect = necessary stimulus

I think necessary stimulus is a particular "type of causation", applicable not only in biochemistry but anywhere.

view this post on Zulip John Baez (May 15 2025 at 13:56):

One question about the concept of "necessary stimulus" is whether it's allowed to have two edges

AB A \to B

AB A' \to B

both labeled "necessary stimulus". If we do, does this mean that both A and A' must be present for B to be formed?

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 13:57):

Yes, it does. Basically, in SBGN diagrams they use an AND operator to join two biochemical entities like AA and AA'.

view this post on Zulip John Baez (May 15 2025 at 13:58):

Of course we don't have an AND operator in our formalism. I guess we're studying a simplified version of the full formalism.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 13:58):

Yes, I also think so.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 14:02):

Another question:

Regarding chains of causation, how does the logic of "necessary stimulus" differ from that of an "unknown effect"? I am not able to see much difference in the multiplicative monoid.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 14:04):

except unknown \cdot necessary = unknown. I think I understood.

view this post on Zulip John Baez (May 15 2025 at 14:06):

Interesting. Of course it's possible for the same monoid to have many different meanings. We would see those only when we chose a semantics for our syntax. This paper is all about syntax - we talk in words about what our labeled graphs might mean, but we never discuss any functors that map labeled graphs to their meanings.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 14:07):

Thanks. Yes, I agree and realised this now (after your last post)

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 14:11):

Although homology can be thought of as a semantics (if we want)

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 14:12):

Or the graded emergent paths.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 14:14):

Also, holonomy

view this post on Zulip John Baez (May 15 2025 at 14:16):

Yes, those are nice semantics for open graphs. But none of them really know about whether one chemical promotes another!

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 14:16):

I agree.

view this post on Zulip John Baez (May 15 2025 at 14:16):

There could be some ODE semantics, or Markov process semantics, etc.

But I don't want to think about those now... that's another paper.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 14:17):

Yes, I understand and agree to your point.

view this post on Zulip John Baez (May 15 2025 at 14:28):

By the way, maybe someday you could write a paper on semantics for regulatory networks.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 15:07):

Thanks!! I would love to, but I would love much more if we can jointly write the paper on the semantics of graphs with polarities focussing on regulatory networks.

view this post on Zulip Adittya Chaudhuri (May 15 2025 at 20:22):

I think I just discovered an interesting way to describe your four-element rig $S = \lbrace +, 0, -, i \rbrace$ (Example 7.3 in the file) with the additive and multiplicative operations exactly as you defined them. By "interesting", I mean a formal way to construct the rig $S$ from some known basic monoid/rig.

Below I am sketching my ideas:

Consider the multiplicative monoid of $\mathbb{Z}/2$, i.e., in our notation, $\lbrace +, - \rbrace$. Then there is a rig $P(\lbrace +, - \rbrace) := (P(\mathbb{Z}/2), \cup, *)$, whose addition is union, with $\emptyset$ as the additive identity, and whose multiplication is the elementwise product of subsets.

Now, the fact that $P(\lbrace +, - \rbrace)$ is indeed a rig follows from the fact that $\mathbb{Z}/2$ is a (multiplicative) monoid (by Example 1.10 of Jonathan S. Golan, "Semirings and their Applications", Springer Netherlands, 1999).

Then I verified that the addition and multiplication tables of the underlying additive and multiplicative monoids of the rig $P(\lbrace +, - \rbrace)$ are precisely "what you described" as the addition and multiplication tables for your rig $S$ in Example 7.3 in the file.

Hence, I claim your rig $S$ is the same as the rig $P(\lbrace +, - \rbrace)$.

It also precisely matches our intuition:

$\emptyset$ means no effect, $\lbrace +, - \rbrace$ means indeterminate, $\lbrace + \rbrace$ means positive effect, and $\lbrace - \rbrace$ means negative effect.
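
Since everything here is finite, this identification is easy to machine-check. Below is a small Python sanity check (not from the paper; encoding $+$ and $-$ as $\pm 1$ is my own choice): subsets of $\lbrace +, - \rbrace$, with union as addition and elementwise product as multiplication, satisfy the rig laws and reproduce the sample entries of the tables from Example 7.3.

```python
from itertools import product

# Elements of the multiplicative monoid Z/2, written as signs: + is 1, - is -1.
# Subsets of {+,-} are the elements of the power-set rig P({+,-}):
#   frozenset()        ~ 0  (no effect)
#   frozenset({1})     ~ +  (positive effect)
#   frozenset({-1})    ~ -  (negative effect)
#   frozenset({1,-1})  ~ i  (indeterminate)
elements = [frozenset(s) for s in ([], [1], [-1], [1, -1])]

def add(a, b):
    """Addition in the power-set rig is union."""
    return a | b

def mul(a, b):
    """Multiplication is the elementwise product of subsets."""
    return frozenset(x * y for x in a for y in b)

# Rig laws that depend on the monoid structure: associativity of *,
# distributivity of * over the union-addition, and 0 = {} annihilating.
for a, b, c in product(elements, repeat=3):
    assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
    assert mul(mul(a, b), c) == mul(a, mul(b, c))
for a in elements:
    assert mul(a, frozenset()) == frozenset()

# Sample entries of the tables from Example 7.3:
assert add(frozenset([1]), frozenset([-1])) == frozenset([1, -1])     # + plus - is i
assert mul(frozenset([-1]), frozenset([-1])) == frozenset([1])        # - times - is +
assert mul(frozenset([1, -1]), frozenset([1])) == frozenset([1, -1])  # i times + is i
print("power-set rig checks passed")
```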

view this post on Zulip John Baez (May 16 2025 at 09:59):

Adittya Chaudhuri said:

Thanks!! I would love to, but I would love much more if we can jointly write the paper on the semantics of graphs with polarities focussing on regulatory networks.

Thanks, but I have too many different things I want or need to do. I have a few books I want to finish writing, I need to get back to work on software for epidemiology, etc. I think our current paper is a great first step and you could easily continue working on this sort of subject if you want to. The big question is: what will be the best thing for you to work on, to develop your career? It may be that more practical work in systems biology is better.

view this post on Zulip John Baez (May 16 2025 at 10:04):

I think I just discovered an interesting way to describe your four-element rig $S = \lbrace +, 0, -, i \rbrace$ (Example 7.3 in the file) with the additive and multiplicative operations exactly as you defined them.

That's GREAT! I really like this construction because it reminds me of the conjecture, made here, that there's a systematic way to turn any hyperring into a rig, by taking the subset of the power set of the hyperring generated by the singletons and the hyperring operations. It's a different construction but again it uses the power set.

view this post on Zulip John Baez (May 16 2025 at 11:56):

One big difference is that Golan is using union as addition.

view this post on Zulip Adittya Chaudhuri (May 16 2025 at 12:25):

John Baez said:

Thanks, but I have too many different things I want or need to do. I have a few books I want to finish writing, I need to get back to work on software for epidemiology, etc. I think our current paper is a great first step and you could easily continue working on this sort of subject if you want to. The big question is: what will be the best thing for you to work on, to develop your career? It may be that more practical work in systems biology is better.

Yes, I completely understand the point you made about your ongoing and upcoming academic engagements. For so many years and in so many ways, your work and your vision have inspired me: your papers on 2-group gauge theory during my PhD, your blog posts and your ACT papers during my postdoctoral phase, and your vision of 'green mathematics', to name only a few. I am very glad, happy and honored to have gotten the opportunity to work with you over the last few months on building some interesting mathematics with real-world applications.

Yes, I will try my best to write the paper on meaningful (from the point of view of systems biology) functorial semantics of regulatory networks. I would highly appreciate your feedback on my attempts. I really enjoyed the style of working in public on Zulip, for all the reasons you mentioned in your very first post in this thread; additionally, it gives me an opportunity to look back at the process of how an interesting idea emerges from preliminary ones. I look forward to continuing my future work with this philosophy in mind.

As of now, my goal is to develop mathematics (based on ACT) that is interesting to both mathematicians and systems biologists at the "same time". To be more precise, from the point of view of mathematics, I want to focus on "mathematical characterisations of various failures of ACT-based compositionality", and from the point of view of real-world applications, I want to explore how "the said study of emergence" can be useful in systems biology as well as in some other important areas of the real world. I think in our paper we did a similar thing for graphs with polarities. To be very precise, at the moment, I want to write more papers like the one we are writing now.

view this post on Zulip Adittya Chaudhuri (May 16 2025 at 12:33):

John Baez said:

That's GREAT! I really like this construction because it reminds me of the conjecture, made here, that there's a systematic way to turn any hyperring into a rig, by taking the subset of the power set of the hyperring generated by the singletons and the hyperring operations. It's a different construction but again it uses the power set.

Thank you so much!! Yes, I remember your conjecture on "doubly distributive hyperrings". I agree that Golan's construction and yours are similar in spirit, but there are some differences. I think I realised the following special case/variant of your conjecture via my construction above:

For every non-trivial monoid $M$, the power set $P(M)$ can be seen as a hyperring. Then, by Golan's construction, we can make this hyperring into a rig whose multiplicative monoid coincides with $M$.

view this post on Zulip John Baez (May 16 2025 at 13:03):

That sounds right. (This is true even for the trivial monoid - it's bad to discriminate against trivial objects!)

I noticed that Golan's construction is closely related to the idea of a [[group ring]] or [[group algebra]] or monoid algebra, as follows:

For a monoid $M$ and commutative rig $R$ we can define the monoid rig $R[M]$ to consist of finite linear combinations of elements of $M$ with coefficients in $R$, made into a rig in the obvious way:

$\displaystyle{\left(\sum_{m \in M} r_m m\right) + \left(\sum_{m \in M} s_m m \right) = \sum_{m \in M} (r_m + s_m)\, m}$

$\displaystyle{\left(\sum_{m \in M} r_m m\right) \cdot \left(\sum_{n \in M} s_n n\right) = \sum_{m,n \in M} (r_m s_n)\, mn}$

view this post on Zulip John Baez (May 16 2025 at 13:08):

When $M$ is finite and $R = \mathbb{B}$ is the boolean rig, $R[M]$ is isomorphic to the set of all subsets of $M$, made into a rig using Golan's construction!

(For $M$ infinite we need to use the fact that $\mathbb{B}$ is not just a rig but a quantale, which means that all infinite sums in $\mathbb{B}$ converge. Then we can construct a rig of infinite linear combinations of elements of $M$ with coefficients in $\mathbb{B}$, and this matches Golan's construction.)
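
Here is a quick Python illustration of that isomorphism for the two-element monoid (again just a sanity check; the encoding is ad hoc): a finite linear combination over $\mathbb{B}$ is stored as a dict $M \to \lbrace \text{False}, \text{True} \rbrace$, and the two displayed formulas become union and elementwise product of the corresponding subsets.

```python
from itertools import product

# A finite linear combination sum_m r_m m over the boolean rig B is just the
# subset {m : r_m = True}; we store it as a dict m -> bool.
M = (1, -1)  # the multiplicative monoid {+,-}, with + as 1 and - as -1

def b_add(r, s):
    """(sum_m r_m m) + (sum_m s_m m) = sum_m (r_m or s_m) m."""
    return {m: r[m] or s[m] for m in M}

def b_mul(r, s):
    """(sum_m r_m m)(sum_n s_n n) = sum over m, n of (r_m and s_n) mn."""
    out = {m: False for m in M}
    for m, n in product(M, repeat=2):
        if r[m] and s[n]:
            out[m * n] = True
    return out

def to_subset(r):
    return frozenset(m for m in M if r[m])

# Check that B[M] with these operations is the power-set rig: addition is
# union and multiplication is the elementwise product of subsets.
vectors = [{1: a, -1: b} for a in (False, True) for b in (False, True)]
for r, s in product(vectors, repeat=2):
    assert to_subset(b_add(r, s)) == to_subset(r) | to_subset(s)
    assert to_subset(b_mul(r, s)) == frozenset(
        x * y for x in to_subset(r) for y in to_subset(s))
print("B[M] matches Golan's power-set rig on this example")
```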

view this post on Zulip John Baez (May 16 2025 at 13:11):

By the way, we can say this stuff more elegantly: for any commutative rig $R$ the 'free $R$-module' functor

$R[-] : \mathsf{Set} \to R\mathsf{Mod}$

is symmetric monoidal, sending products in $\mathsf{Set}$ to tensor products in $R\mathsf{Mod}$. Thus, it sends monoids in $\mathsf{Set}$ to monoids in $R\mathsf{Mod}$, which deserve to be called $R$-algebras. This is a nice way to avoid writing these formulas:

$\displaystyle{\left(\sum_{m \in M} r_m m\right) + \left(\sum_{m \in M} s_m m \right) = \sum_{m \in M} (r_m + s_m)\, m}$

$\displaystyle{\left(\sum_{m \in M} r_m m\right) \cdot \left(\sum_{n \in M} s_n n\right) = \sum_{m,n \in M} (r_m s_n)\, mn}$

but the formulas are hiding in this abstract nonsense.

view this post on Zulip John Baez (May 16 2025 at 13:13):

I'll add some of these ideas to our paper now, but I'll try to resist saying too much.

view this post on Zulip Adittya Chaudhuri (May 16 2025 at 13:13):

Thank you!! I am now trying to understand the ideas you just posted!!

view this post on Zulip Adittya Chaudhuri (May 16 2025 at 13:23):

Thanks!! These ideas look very beautiful, and I find them a very clean way of generalising Golan's construction. I really like the construction with the Boolean rig and a finite monoid. Indeed, the description via the $R[-]$ functor is very elegant. I feel (although I am not sure) that instead of Golan's construction, we could mention $R[M]$, the functor $R[-]$ evaluated at the monoid $M$, where $R$ is the Boolean rig and $M$ is the multiplicative monoid of $\mathbb{Z}/2$, for Example 7.3.

view this post on Zulip Adittya Chaudhuri (May 16 2025 at 13:29):

John Baez said:

That sounds right. (This is true even for the trivial monoid - it's bad to discriminate against trivial objects!)

Thanks. Yes, I agree with your point.

view this post on Zulip Adittya Chaudhuri (May 17 2025 at 05:56):

I was thinking about whether there is a way to capture the notion of the Boolean network model of gene regulatory networks just by choosing a suitable labeling monoid.

More formally, I am talking about the following set-up:

I think the above example provides a temporal description (over discrete time intervals $\mathbb{N}$) of which influences (both direct and indirect) are "on" and which are "off". The general Boolean network modelling is more complicated than what I described above (there the states of the vertices are updated over time by a Boolean function which depends on the previous states of all the vertices of the graph). However, my construction is inspired by the following intuition:

Consider the labeled graph in the attached file
dynamiclabeledgraph.PNG

Now, we define

$f_{1} \colon \mathbb{N} \to \lbrace 0,1 \rbrace$,

$n \mapsto 1$ if $n$ is even, and

$n \mapsto 0$ if $n$ is odd.

Let $f_2$ and $f_3$ be the constant functions defined by $n \mapsto 1$ for all $n \in \mathbb{N}$.

On the other hand, we define

$g \colon \mathbb{N} \to \lbrace 0,1 \rbrace$,

$n \mapsto 0$ if $n$ is even, and

$n \mapsto 1$ if $n$ is odd.

Then the $\mathbb{Z}/2 \times (\mathbb{B}, \cdot, 1)^{\mathbb{N}}$-labeled graph in the attached file gives a temporal description of the underlying $\mathbb{Z}/2$-labeled graph that alternates between a positive feedback loop and a negative feedback loop over time.

An interpretation:

One may also define the $f_i$ and $g$ in a suitable way so that they model the use and non-use of an inhibitor drug on a patient, affecting the regulatory network (described in the attached file) over time.
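
Since the attached figure isn't visible in this archive, here is a hypothetical Python sketch of the idea (the particular edges and loops are made up for illustration): labels live in $\mathbb{Z}/2 \times \mathbb{B}^{\mathbb{N}}$, composition multiplies signs and pointwise-multiplies schedules, and with the $f_i$ and $g$ above one loop is "on" at even times with sign $+$ while another is "on" at odd times with sign $-$.

```python
# Labels live in Z/2 x B^N: a sign together with an on/off schedule N -> {0,1}.
def f1(n): return 1 if n % 2 == 0 else 0   # on at even times
def f2(n): return 1                        # always on
def g(n):  return 0 if n % 2 == 0 else 1   # on at odd times

def compose(*labels):
    """Compose labels along a path: multiply the signs, AND the schedules."""
    sign = 1
    for s, _ in labels:
        sign *= s
    def schedule(n):
        out = 1
        for _, f in labels:
            out *= f(n)
        return out
    return (sign, schedule)

# Two hypothetical loops at a common vertex: a positive branch gated by f1 and
# a negative branch gated by g, each followed by an always-on edge.
pos_loop = compose((+1, f1), (+1, f2))
neg_loop = compose((-1, g), (+1, f2))

for n in range(4):
    active = [sign for sign, sched in (pos_loop, neg_loop) if sched(n) == 1]
    print(n, active)   # even n: only the + loop is on; odd n: only the - loop
```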

view this post on Zulip John Baez (May 17 2025 at 22:03):

I don't understand this business about switching back and forth between positive and negative feedback depending on whether the time is even or odd. Is this just an assumption?

view this post on Zulip John Baez (May 17 2025 at 22:10):

I'll admit I have trouble thinking about new issues, because I'd like to finish the paper before May 26th, when other things start happening. I've added material to Section 8 explaining the symmetric monoidal double category $\mathbb{O}\mathbf{pen}(L\mathsf{Gph})$ of open graphs with edges labeled by elements of a set $L$. This is an old result that I've proved twice before!

Now we can say $L$ is a monoid $M$ and not change anything at all: we can still use the double category $\mathbb{O}\mathbf{pen}(L\mathsf{Gph})$ defined in exactly the same way.

But:

  1. I believe we want to prove there's a double functor sending open monoid-labeled graphs to open monoid-labeled categories, and
  2. We can if we want try to define a double category of open monoid-labeled graphs and Kleisli morphisms between them.

view this post on Zulip John Baez (May 17 2025 at 22:16):

I guess 2 is best done by first doing 1, and 1 relies on this:

There's a (symmetric monoidal) double category of open monoid-labeled categories.

view this post on Zulip John Baez (May 17 2025 at 22:21):

I don't want to drown in all the possible results we could state - there's too much here - but another good one might be this.

  3. Since for any commutative monoid $A$ we've shown that $A[-]$ preserves colimits, there should be a double functor sending open graphs to their (open) chain complexes.

view this post on Zulip John Baez (May 17 2025 at 22:23):

1 could be a warmup to studying emergent paths (or emergent loops), and 3 could be a warmup to studying emergent cycles and Mayer-Vietoris.

view this post on Zulip John Baez (May 17 2025 at 22:24):

But I want to get to the interesting stuff we've discovered fairly efficiently, without too much boring formalism.

view this post on Zulip John Baez (May 17 2025 at 22:32):

Regarding the future:

To be very precise, at the moment, I want to write more papers like the one we are writing now.

Great! But our paper is quite theoretical, and I suspect biologists won't appreciate it unless it is used to do something more practical. For this you might want to develop and/or use some software in AlgebraicJulia or CatColab. CatColab already has general software for finding motifs in monoid-labeled graphs. So far @Evan Patterson has illustrated this for the monoids $\mathbb{Z}/2$ here, $\mathbb{N} \times \mathbb{Z}/2$ here, and $\mathbb{Z}/3$ here. However, I don't think it's actually been used to do research in biochemistry. You might talk to your colleagues about that. They may suggest projects I could never imagine, since I'm not a biologist.

view this post on Zulip Evan Patterson (May 17 2025 at 23:00):

FWIW, I'd love to get feedback from your biologist colleagues about what might be useful, even if it's quite far from this stuff!

view this post on Zulip Kevin Carlson (May 18 2025 at 00:26):

Those aren’t quite the right monoids, by the way. In our delayed CLDs, the composite of two “slow” edges is just “slow”; there’s no notion of $n$-fold slow. Jason Brown and Xiaoyan Li are working on a theory with $n$-fold delays like you were thinking of right now. Less importantly, the indeterminate diagrams do have signs from the multiplicative monoid of $\mathbb{Z}/3$, if that’s what you meant.

view this post on Zulip Adittya Chaudhuri (May 18 2025 at 08:21):

John Baez said:

I don't understand this business about switching back and forth between positive and negative feedback depending on whether the time is even or odd. Is this just an assumption?

I admit that "switching back and forth between positive and negative feedback depending on whether the time is even or odd" sounds crazy from the point of view of applications; I think it happened due to my oversimplification. What I meant is that we may be able to define suitable functions $f \colon \mathbb{N} \to \lbrace 0,1 \rbrace$ which reflect the fact that over time a positive feedback loop can change into a negative feedback loop and vice versa. This may be interesting because over time some influences can "cease to exist" or "re-exist", which affects the overall causal structure of the causal loop diagram. But I admit that, in the context of our paper, this example may not be that important at the moment.

view this post on Zulip Adittya Chaudhuri (May 18 2025 at 08:25):

John Baez said:

I'll admit I have trouble thinking about new issues, because I'd like to finish the paper before May 26th when other things start happening.

Yes, I fully understand your point.

view this post on Zulip Adittya Chaudhuri (May 18 2025 at 08:27):

John Baez said:

I've added material to Section 8 explaining the symmetric monoidal double category $\mathbb{O}\mathbf{pen}(L\mathsf{Gph})$ of open graphs with edges labeled by elements of a set $L$. This is an old result that I've proved twice before!

Now we can say $L$ is a monoid $M$ and not change anything at all: we can still use the double category $\mathbb{O}\mathbf{pen}(L\mathsf{Gph})$ defined in exactly the same way.

Thank you. Yes, I agree.

view this post on Zulip Adittya Chaudhuri (May 18 2025 at 08:32):

John Baez said:

  1. There's a (symmetric monoidal) double category of open monoid-labeled categories.

Yes, it should follow from the fact that the category of $L$-labeled categories is cocomplete and that we have a left adjoint functor $\mathsf{Set} \to L\mathsf{Cat}$, defined as the composite of the two left adjoints $\mathsf{Disc} \colon \mathsf{Set} \to L\mathsf{Gph}$ and $\mathsf{Free} \colon L\mathsf{Gph} \to L\mathsf{Cat}$, both of which I think we have already verified.

view this post on Zulip Adittya Chaudhuri (May 18 2025 at 08:39):

John Baez said:

  1. I believe we want to prove there's a double functor sending open monoid-labeled graphs to open monoid-labeled categories.

I think it should follow from the construction of the functor $L \colon \mathsf{Set} \to L\mathsf{Cat}$ as the composite $\mathsf{Free} \circ \mathsf{Disc}$, together with a theorem that you have already proved in your older papers on structured cospans.

view this post on Zulip Adittya Chaudhuri (May 18 2025 at 08:41):

John Baez said:

  2. We can if we want try to define a double category of open monoid-labeled graphs and Kleisli morphisms between them.

Yes, it sounds interesting. I am thinking about it.

view this post on Zulip Adittya Chaudhuri (May 18 2025 at 08:42):

John Baez said:

1 could be a warmup to studying emergent paths (or emergent loops), and 3 could be a warmup to studying emergent cycles and Mayer-Vietoris.

Yes, I agree.

view this post on Zulip Adittya Chaudhuri (May 18 2025 at 08:51):

John Baez said:

I don't want to drown in all the possible results we could state - there's too much here -

It reminds me of a result we already discussed before, about how the corresponding symmetric monoidal double categories behave under a change of labeling monoids.

I am attaching a diagram which I already shared with you in DM a couple of months back.
commutative diagram of double functors.png

Can we add this? (As we are working with several labeling monoids throughout the paper.)

view this post on Zulip Adittya Chaudhuri (May 18 2025 at 08:54):

John Baez said:

  3. Since for any commutative monoid $A$ we've shown that $A[-]$ preserves colimits, there should be a double functor sending open graphs to their (open) chain complexes.

Yes, it sounds interesting. Are you defining the chain complex of a graph as a globular object in the category of commutative monoids (as you explained before)?

view this post on Zulip Adittya Chaudhuri (May 18 2025 at 08:56):

John Baez said:

But I want to get to the interesting stuff we've discovered fairly efficiently, without too much boring formalism.

Yes, I fully agree with your point.

view this post on Zulip Adittya Chaudhuri (May 18 2025 at 09:06):

John Baez said:

Regarding the future:

To be very precise, at the moment, I want to write more papers like the one we are writing now.

Great! But our paper is quite theoretical, and I suspect biologists won't appreciate it unless it is used to do something more practical. For this you might want to develop and/or use some software in AlgebraicJulia or CatColab. CatColab already has general software for finding motifs in monoid-labeled graphs. So far Evan Patterson has illustrated this for the monoids $\mathbb{Z}/2$ here, $\mathbb{N} \times \mathbb{Z}/2$ here, and $\mathbb{Z}/3$ here. However, I don't think it's actually been used to do research in biochemistry. You might talk to your colleagues about that. They may suggest projects I could never imagine, since I'm not a biologist.

Thank you!! Yes, I fully agree that my goal (writing papers interesting to both mathematicians and biologists) is a two-step process, i.e. writing interesting mathematical papers and then following up with the development of user-friendly software. It could of course be a very interesting project if software based on AlgebraicJulia and CatColab could incorporate all the causalities, like necessary stimulus, logical operators, etc., used in SBGN diagrams. I am looking forward to such a project, and I will talk to my biologist colleagues about ideas for it.

view this post on Zulip Adittya Chaudhuri (May 18 2025 at 09:10):

Evan Patterson said:

FWIW, I'd love to get feedback from your biologist colleagues about what might be useful, even if it's quite far from this stuff!

Thanks. Yes, I will ask them.

view this post on Zulip Adittya Chaudhuri (May 18 2025 at 09:12):

Kevin Carlson said:

Those aren’t quite the right monoids, by the way. In our delayed CLDs, the composite of two “slow” edges is just “slow”; there’s no notion of $n$-fold slow. Jason Brown and Xiaoyan Li are working on a theory with $n$-fold delays like you were thinking of right now.

Thanks!! That's really interesting!!

view this post on Zulip Adittya Chaudhuri (May 18 2025 at 09:15):

Kevin Carlson said:

Less importantly, the indeterminate diagrams do have signs from the multiplicative monoid of $\mathbb{Z}/3$, if that’s what you meant.

Yes, I admit $\mathbb{Z}/3$ has the privilege of accommodating an "indeterminate" effect, but I am also thinking about our recent construction of the power set rig over $\mathbb{Z}/2$ #theory: applied category theory > Graphs with polarities @ 💬 . Also, I would like to see if causalities like "necessary stimulus" or "logical operators" can be incorporated into causal loop diagrams.

view this post on Zulip John Baez (May 18 2025 at 09:15):

Kevin Carlson said:

Those aren’t quite the right monoids, by the way. In our delayed CLDs, the composite of two “slow” edges is just “slow”; there’s no notion of $n$-fold slow.

Oh, huh - I think Evan told me otherwise, but I could easily be confused.

Jason Brown and Xiaoyan Li are working on a theory with $n$-fold delays like you were thinking of right now.

Great! I've sort of heard about that.

Less importantly, the indeterminate diagrams do have signs from the multiplicative monoid of $\mathbb{Z}/3$, if that’s what you meant.

That's what I meant - clearly if you just say $\mathbb{Z}/3$ people are going to think of the additive group. In my paper with Adittya we sometimes call the multiplicative monoid $\{+,0,-\}$.

view this post on Zulip John Baez (May 18 2025 at 09:20):

Can we add this? (As we are working with several labeling monoids through out the paper.)

We can. This is the sort of "obvious extension of the basic ideas" that I don't want to let dominate the paper - such extensions tend to make a paper a bit dry, like a paper written by category theorists who aren't mainly interested in how graphs with polarities are used in applications. However, it's a nice pushout of two ideas we've already mentioned.

view this post on Zulip Adittya Chaudhuri (May 18 2025 at 09:22):

Thanks. Yes, I agree with the point you made.

view this post on Zulip John Baez (May 18 2025 at 09:37):

Yes, I fully agree that according to my goal (writing papers interesting to both Mathematicians and Biologists) is a two step process, i.e. writing interesting mathematical papers and then follow it up with the development of user friendly software.

Great! For me, at least, the second step is much harder than the first. I could write interesting mathematical papers endlessly while locked in a room with nothing but a laptop with a good internet connection - but creating software that's interesting to biologists would require talking to biologists a lot, teaching them some category theory so they can understand me, learning biology so I can understand them, and either programming myself or (better) finding people who are good at that and who want to work on this project, and working with them.

If you're at all like me, then, you need to start the second step before you finish the first step, because it's much slower, and it may take several tries.

view this post on Zulip Adittya Chaudhuri (May 18 2025 at 10:30):

Thank you!! Yes, I think for me too the first step only requires a bare minimum, like "a peaceful room, a laptop with a good internet connection, and pen and paper", but yes, the second step is way harder than that. I understand and agree with the point you made.

view this post on Zulip Adittya Chaudhuri (May 18 2025 at 19:17):

John Baez said:

  2. We can if we want try to define a double category of open monoid-labeled graphs and Kleisli morphisms between them.

If we are thinking about using structured cospans for this purpose, we need to show that the Kleisli category has finite colimits. I came across a discussion in this direction: https://mathoverflow.net/questions/37965/completeness-and-cocompleteness-of-the-kleisli-category. I am not completely sure how to conclude the cocompleteness of our Kleisli categories from the MO question.

view this post on Zulip Adittya Chaudhuri (May 19 2025 at 07:41):

According to Todd Trimble's comment on the MO question, it seems that if the monad is idempotent then cocompleteness is ensured. However, I guess our monad is not idempotent, as I think $U \colon \mathsf{Cat}/BL \to \mathsf{Gph}/GL$ is not a full functor. However, I admit that what I just said adds nothing to my initial cocompleteness question.

view this post on Zulip Kevin Carlson (May 19 2025 at 08:07):

Even when the labeling monoid is trivial, I'm almost certain the Kleisli category (which in this case is the category of free categories) doesn't have pushouts. Consider the pushout of the categories $0<1<2$ and $0<1'<2$ along the two inclusions of the long arrow $0<2$. In general categories, this would be a commutative square, which is not free; there's more to be said to prove there's no pushout at all (which is why Todd used the split idempotents in his example at your link, but that won't work here), but of the cocones I can list, including the projection $1=1'$ onto $0<1<2$ and the four projections onto the arrow, none is universal.

view this post on Zulip Adittya Chaudhuri (May 19 2025 at 08:31):

I see. Thank you.

view this post on Zulip John Baez (May 19 2025 at 08:44):

I guess Adittya is wanting cocompleteness because there's a well-known theorem that there's a symmetric monoidal double category of 'structured cospans' whenever we have categories $A$ and $X$ with finite colimits and a functor $F: A \to X$ that preserves finite colimits. The loose 1-cells in this structured cospan category look like

$F(a) \to x \leftarrow F(b)$

We compose them using pushouts in $X$ and tensor them using coproducts in both $A$ and $X$, and the fact that $F$ preserves them.

But the hypotheses here are stronger than necessary. Kenny Courser's thesis pointed out that it's enough for $A$ to have finite coproducts, $X$ to have finite coproducts and pushouts, and $F$ to preserve finite coproducts (Theorem 3.2.3 here). The pushouts are only needed in $X$.

And I believe even this is more than we need. To compose the loose 1-cells we don't need $X$ to have all pushouts, only pushouts of diagrams like

$x \leftarrow F(b) \to y$

where the object in the middle is of the form $F(b)$ for some $b \in A$.

If we take $A = \mathsf{Set}$ and $X = \mathsf{FreeCat}$ (the category of categories that are free on some graph, and functors between these), where $F : A \to X$ sends any set to the discrete category on that set, do all pushouts of this form exist?

I'm pretty sure that the pushout of

$x \leftarrow F(b) \to y$

exists if the arrows are monic. This may be good enough for defining a symmetric monoidal double category of 'monic structured cospans', I believe. I still need to check that composing two cospans of this form with monic arrows gives another cospan of that form with monic arrows.

If we drop the monicity constraint, I'm afraid we run into problems.
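
For what it's worth, the pushout used to compose cospans can be sketched very concretely at the level of plain finite sets (ignoring all category structure): it's the quotient of the disjoint union of $x$ and $y$ by the relations $f(b) \sim g(b)$. A minimal Python illustration, with ad hoc encodings:

```python
def pushout(x, y, b, f, g):
    """Pushout of  x <-f- b -g-> y  in Set: quotient of the disjoint union of
    x and y by the relations f(e) ~ g(e).  Here x, y, b are sets of labels,
    and f, g are dicts b -> x and b -> y."""
    # work in the disjoint union: tag elements by which set they come from
    parent = {("x", e): ("x", e) for e in x}
    parent.update({("y", e): ("y", e) for e in y})

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    for e in b:  # glue f(e) in x to g(e) in y
        parent[find(("x", f[e]))] = find(("y", g[e]))

    reps = {find(v) for v in parent}
    return {frozenset(v for v in parent if find(v) == r) for r in reps}

# Glue two two-element "open" sets along a one-point foot:
x, y, b = {0, 1}, {2, 3}, {"*"}
f, g = {"*": 1}, {"*": 2}
classes = pushout(x, y, b, f, g)
print(len(classes))   # 3 classes: {0}, {1 ~ 2}, {3}
```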

view this post on Zulip Adittya Chaudhuri (May 19 2025 at 10:45):

Thanks, you are right: I wanted the cocompleteness of the Kleisli category in order to apply that "well known theorem" on structured cospans. Thanks for pointing me to the more general version. However, what you said, namely

"we only need pushouts of diagrams like $x \leftarrow F(b) \to y$"

and

"I'm pretty sure that the pushout of $x \leftarrow F(b) \to y$ exists if the arrows are monic",

is actually very interesting, because I think occurrences of motifs are characterised by monic Kleisli morphisms, not by general Kleisli morphisms. So from the point of view of applications too, I think the subcategory of monic Kleisli morphisms is more relevant than the whole Kleisli category.

view this post on Zulip John Baez (May 19 2025 at 10:50):

But here I'm just suggesting that it may help to require that the feet of an open free category be monically included in its set of objects.

view this post on Zulip Adittya Chaudhuri (May 19 2025 at 10:52):

Yes, I got your point. But I am saying, from the perspective of motifs: if we just focus on the subcategory of monic Kleisli morphisms (and not the whole Kleisli category), are we losing much? I admit we may not need the "monic condition" on the other morphisms, but I was trying to think from the point of view of motifs. I may be misunderstanding.

view this post on Zulip Adittya Chaudhuri (May 19 2025 at 11:11):

In that case, the 2-morphisms in the double category (associated to monic morphisms) will characterise the occurrences of motifs.

I think, then the horizontal composition law of 2-morphisms should characterise a compatibility condition between

view this post on Zulip John Baez (May 19 2025 at 12:36):

I don't love the idea of requiring that motifs be monic Kleisli morphisms between labeled graphs. I understand that this requirement may help match biologists' intuitive ideas about motifs. Monics are always appealing, because a monic $f: A \to B$ is like a "way of putting a picture of $A$ inside $B$, without squashing $A$ down at all". But I would be more convinced that monicness is a useful requirement for motifs if I knew some theorem saying monic Kleisli morphisms are better in some way.

view this post on Zulip John Baez (May 19 2025 at 12:36):

Anyway, I have a lot of writing to do and I want to get a lot done today, so I won't think about this much right now!

view this post on Zulip Adittya Chaudhuri (May 19 2025 at 12:45):

John Baez said:

But I would be more convinced that monicness is a useful requirement for motifs if I knew some theorem saying monic Kleisli morphisms are better in some way.

I understand your point. Although I may not be able to convince you mathematically of why "monic" would be preferred, if we consider this paper, where I think the concept of defining motifs in terms of Kleisli morphisms first appeared, then Definition 2.8 and the discussion thereafter up to Proposition 2.9 assume the monic condition for describing occurrences of motifs.

view this post on Zulip John Baez (May 19 2025 at 15:16):

Good point! They just say it "excludes degenerate instances" of motifs. Being a mathematician I say it's bad to discriminate against degenerate cases.... unless there's a very good reason. Excluding degenerate cases is psychologically appealing - the degenerate cases "look funny" - but it tends to cause trouble later on.

view this post on Zulip Adittya Chaudhuri (May 19 2025 at 15:59):

Now I got the point you are making. Thanks !! Yes, it fully makes sense.

view this post on Zulip Kevin Carlson (May 19 2025 at 17:09):

Yes, I agree that pushouts $x \leftarrow F(b) \to y$ of two free categories over a discrete free category exist. In fact, I don't see the need for these arrows to be monic. The pushout in the full category of categories is described by taking the pushout of object sets and generating morphisms by the images of the generating morphisms of $x$ and of $y$, subject to the images of relations from $x$ and from $y$; but there are no such relations in this case, so I believe the inclusion of free categories in categories creates any pushout of this form.

view this post on Zulip John Baez (May 19 2025 at 17:17):

I was afraid that doing a pushout of that sort with a non-monic morphism might take a free category $x$ and identify two of its objects, making it non-free. But I hadn't actually sat down and checked.

view this post on Zulip Kevin Carlson (May 19 2025 at 17:19):

It can do that, but my claim is that gluing objects of a free category doesn't make it unfree!

view this post on Zulip John Baez (May 19 2025 at 17:46):

Yes, I was foolishly thinking that taking $\bullet \to \bullet$ and identifying two objects would make a bunch of new morphisms spring into existence, which it does, and that this would be bad... which it's not! Freeness can only be destroyed by identifying morphisms.

view this post on Zulip Adittya Chaudhuri (May 19 2025 at 18:36):

Let $G_{b}$ be the discrete graph on the set $b$, and let $\mathsf{Free}(G)=x$ and $\mathsf{Free}(G')=y$. Then I think, associated to any diagram $\mathsf{Free}(G) \xleftarrow{o} F(b) \xrightarrow{i} \mathsf{Free}(G')$ in $\mathsf{Cat}$, we get a diagram $G \xleftarrow{o} G_{b} \xrightarrow{i} G'$ in $\mathsf{Gph}$ (irrespective of whether $i, o$ are monic or not), since $\mathsf{Free}(G_{b}) \cong F(b)$ is a discrete category. Now, the pushout of the diagram $G \xleftarrow{o} G_{b} \xrightarrow{i} G'$ exists in $\mathsf{Gph}$. If we apply the left adjoint functor $\mathsf{Free}$ to this diagram, we get "our pushout", i.e. $\mathsf{Free}(G+_{G_b}G') \cong \mathsf{Free}(G) +_{F(b)} \mathsf{Free}(G')$, and hence the pushout exists and is free.

Am I missing something?
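The gluing argument above can be sketched concretely in the simple special case where the shared vertex set $b$ is literally contained, by name, in both graphs; the dict representation and the function name below are illustrative, not from the paper.

```python
# Minimal sketch of the pushout G +_{G_b} G' of graphs along a discrete
# graph G_b, in the simple special case where the shared vertices are
# included by name in both graphs. The dict representation and names are
# illustrative, not from the paper.

def pushout(g, g_prime, shared):
    """Glue g and g' along the common vertex set `shared`."""
    assert shared <= set(g['vertices']) and shared <= set(g_prime['vertices'])
    # Vertices are merged by name; since G_b is discrete it contributes no
    # edges, so the edge sets are simply placed side by side.
    return {'vertices': set(g['vertices']) | set(g_prime['vertices']),
            'edges': list(g['edges']) + list(g_prime['edges'])}

g = {'vertices': {'a', 'b'}, 'edges': [('a', 'b')]}
g2 = {'vertices': {'b', 'c'}, 'edges': [('b', 'c')]}
p = pushout(g, g2, {'b'})
assert p['vertices'] == {'a', 'b', 'c'}
assert len(p['edges']) == 2   # no relations are imposed, matching the argument
```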

view this post on Zulip Kevin Carlson (May 19 2025 at 20:07):

That's roughly a correct argument, but you've written $o, i$ for the "same" morphisms in two different categories, which obscures the point: the argument works because $o, i$ are in the image of $\mathsf{Free}$, which is the key special property of discrete categories you're using here.

view this post on Zulip Adittya Chaudhuri (May 19 2025 at 21:27):

Thanks. Yes, I agree with your argument, and I agree I should have used different notation to denote the corresponding $i$ and $o$ in $\mathsf{Cat}$.

view this post on Zulip John Baez (May 25 2025 at 21:50):

Adittya Chaudhuri said:

Apart from simple combinatorial examples like "how many possible routes" etc., I am not able to find many real-life examples which model indirect influences with multiplication in $\mathbb{R}$.

Sterman discusses them in his book Business Dynamics when explaining causal loop diagrams. He says we draw

$x \xrightarrow{+} y$

when

$\displaystyle{ \frac{\partial y}{\partial x} > 0}$

But we could be more quantitative and use $(\mathbb{R}, \times)$-labeled graphs instead of $\{+,-\}$-labeled graphs, and write

$x \xrightarrow{a} y$

when

$\displaystyle{\frac{\partial y}{\partial x} = a}$

Then when we have

$x \xrightarrow{a} y \xrightarrow{b} z$

Sterman says this implies

$\displaystyle{ \frac{\partial z}{\partial x} = \frac{\partial z}{\partial y} \frac{\partial y}{\partial x} = a b }$

so we're using the multiplicative monoid of $\mathbb{R}$.

All of this is being rather sloppy about which variables we're holding fixed when taking partial derivatives. But there should be some truth to it.
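Sterman's chain-rule observation is exactly composition of edge labels in the multiplicative monoid; a minimal sketch (the function name is illustrative):

```python
# Minimal sketch: in an (R, x)-labeled graph, the net influence along a
# path is the product of the edge labels, mirroring Sterman's chain rule.

def compose(path_labels):
    """Product of the labels along a path; the empty path gives 1."""
    result = 1.0   # identity of the multiplicative monoid (R, x, 1)
    for a in path_labels:
        result *= a
    return result

# x --a--> y --b--> z gives dz/dx = a * b
assert compose([2.0, 3.0]) == 6.0
# two inhibitory (negative) edges compose to a net positive influence
assert compose([-1.0, -2.0]) == 2.0
```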

view this post on Zulip Adittya Chaudhuri (May 26 2025 at 06:23):

Thanks very much for explaining!! Yes, now I am realising the practical relevance of using $(\mathbb{R}, \times, 1)$ as a labeling monoid.

view this post on Zulip Adittya Chaudhuri (May 29 2025 at 14:33):

Hi! I think I observed another interpretation of the power set monoid on a two-element set. I am writing down my thoughts below:

Let $d$ mean delay, $f$ mean fast, $t$ mean on time and $i$ mean indeterminate. The multiplication table is: $t$ is the identity, $d \cdot d = d$, $f \cdot f = f$, $d \cdot f = f \cdot d = i$, and $i \cdot x = x \cdot i = i$ for every $x$.

Consider the power set monoid generated on the two-element set $\lbrace d,f \rbrace$ with set-theoretic union as the binary operation. Then, if we denote the elements of the power set $P(\lbrace d,f \rbrace)$ as $t = \emptyset$, $d = \lbrace d \rbrace$, $f = \lbrace f \rbrace$ and $i = \lbrace d,f \rbrace$, then I think we get a monoid describing "delays and quickening" in a qualitative way.

I think the additive monoid $(\mathbb{R}, +, 0)$ gives the quantitative version of the above qualitative monoid. However, unlike the multiplicative case, at the moment I am not able to see a nice way to turn quantitative into qualitative via a monoid homomorphism from $(\mathbb{R}, +, 0)$ to $P(\lbrace d,f \rbrace)$.
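A minimal sketch of this monoid, assuming the natural denotation of its four elements by the letters above ($\emptyset$ as on time, $\lbrace d,f \rbrace$ as indeterminate); the names are illustrative:

```python
# Minimal sketch of the power set monoid P({d, f}) under union, assuming
# the natural denotation of its four elements by the letters above:
# {} = on time (t), {d} = delayed, {f} = fast, {d, f} = indeterminate (i).

ON_TIME = frozenset()
DELAYED = frozenset({'d'})
FAST = frozenset({'f'})
INDETERMINATE = frozenset({'d', 'f'})

def combine(x, y):
    """Monoid operation: set-theoretic union; the identity is the empty set."""
    return x | y

assert combine(ON_TIME, DELAYED) == DELAYED        # t is the identity
assert combine(DELAYED, FAST) == INDETERMINATE     # conflicting reports
assert combine(DELAYED, DELAYED) == DELAYED        # union is idempotent
```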

view this post on Zulip John Baez (May 29 2025 at 14:38):

That's interesting, thanks! I'm not used to thinking of time qualitatively rather than quantitatively, but I guess it does indeed work.

view this post on Zulip Adittya Chaudhuri (May 29 2025 at 14:39):

Thank you.

view this post on Zulip John Baez (May 29 2025 at 14:46):

In preparing my talk for Monday, I started reading this very nice book:

It studies 7 small labeled graphs which correspond to fundamental problems that can happen in systems. We can think of these as 'motifs', but they call them 'archetypes'.

The edges of these graphs are labeled with elements of the monoid $\{+,-\} \times \textrm{Bool}$, where the second factor describes whether or not there's a 'delay'.
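Composition in this monoid multiplies signs and combines delay flags; the sketch below assumes the Bool factor composes by "or" (a path is delayed if any edge on it is), which is one natural reading, not a claim from the book:

```python
# Sketch of composition in the monoid {+,-} x Bool labeling archetype
# edges. The sign components multiply; for the Bool component this sketch
# assumes "or" as the operation (a path is delayed if any edge on it is).

def compose(e1, e2):
    (s1, d1), (s2, d2) = e1, e2
    sign = '+' if s1 == s2 else '-'   # usual multiplication of signs
    return (sign, d1 or d2)

assert compose(('+', False), ('-', True)) == ('-', True)
assert compose(('-', False), ('-', False)) == ('+', False)
```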

view this post on Zulip John Baez (May 29 2025 at 14:47):

Since these 'archetypes' describe problems, I would like to study gene regulatory networks and see how often these archetypes occur in those networks.

view this post on Zulip John Baez (May 29 2025 at 14:48):

My hope is that they are rare.

view this post on Zulip Adittya Chaudhuri (May 29 2025 at 14:50):

Thanks. These look interesting. I will read about "archetypes" from the book you shared.

view this post on Zulip Adittya Chaudhuri (May 29 2025 at 15:47):

John Baez said:

That's interesting, thanks! I'm not used to thinking of time qualitatively rather than quantitatively, but I guess it does indeed work.

I think the product monoid $\lbrace 1 \rbrace \times P(\lbrace d,f \rbrace)$ may reasonably describe the "status of trains/flights" between various destinations connected by air or train routes. By status, I mean whether the flight/train is delayed or arriving before time, once we fix the source and target destinations. By "reasonably", I mean in an approximate way, i.e. qualitatively. However, for accurate information, we may need to use the product monoid $\lbrace 1 \rbrace \times (\mathbb{R}, +, 0)$. More precisely, negative real numbers will tell the "delay", positive real numbers will tell the "quickening", and $0$ will say "on time".
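In the quantitative version, delays along a multi-leg route compose by addition in $(\mathbb{R}, +, 0)$; a tiny sketch with the sign convention just stated (the names are illustrative):

```python
# Sketch of the quantitative version: delays along a multi-leg route
# compose by addition in (R, +, 0), with the sign convention just stated
# (negative = delay, positive = quickening, 0 = on time).

def net_delay(legs):
    total = 0.0   # identity: an empty route is on time
    for d in legs:
        total += d
    return total

assert net_delay([-10.0, 4.0]) == -6.0   # 10 min lost, then 4 min made up
assert net_delay([]) == 0.0
```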

view this post on Zulip Adittya Chaudhuri (May 30 2025 at 15:54):

John Baez said:

We can think of these as 'motifs', but they call them 'archetypes'.

The edges of these graphs are labeled with elements of the monoid $\{+,-\} \times \textrm{Bool}$, where the second factor describes whether or not there's a 'delay'.

I was reading the book. These pathways are really interesting and thought-provoking in general. I realised delays play such pivotal roles in human behaviour. I was wondering whether our immune system also behaves "like us" when it fights against diseases. In its case, maybe it's not "delay" but some "other distracting factor" produced by the disease to deceive our immune system.

view this post on Zulip John Baez (May 30 2025 at 15:57):

I'm wondering how much the troublesome nature of those 7 system archetypes arises from having two paths from one vertex to another, with opposite signs, one with a delay.

view this post on Zulip Adittya Chaudhuri (May 30 2025 at 15:59):

It's like a "delayed version" of the incoherent feedforward loop?

view this post on Zulip John Baez (May 30 2025 at 15:59):

Yes!

view this post on Zulip John Baez (May 30 2025 at 16:00):

How does biology use incoherent feedforward loops?

view this post on Zulip Adittya Chaudhuri (May 30 2025 at 16:01):

As far as I know, for "good" purposes only. But I will read up on it to say more precisely.

view this post on Zulip Adittya Chaudhuri (May 30 2025 at 16:02):

I feel "delay" is a distraction. In biology, it may be some biochemical serving the purpose of a delay, "fooling our immune system" as it fights back against a disease.

view this post on Zulip John Baez (May 30 2025 at 16:02):

Biology uses incoherent feedforward for good, yes - but how is it good?

view this post on Zulip Adittya Chaudhuri (May 30 2025 at 16:04):

John Baez said:

For good, yes - but how is it good?

At the moment, I can not answer properly. I will read on it and tell.

view this post on Zulip John Baez (May 30 2025 at 16:04):

I need to check, but now I think maybe the 'bad' only happens when one has two loops based at one point, with opposite signs, one with a delay.

view this post on Zulip Adittya Chaudhuri (May 30 2025 at 16:05):

Yes, true, but I think "the bad" is happening because we (humans) often relate a delay with "non-existence". As a result we focus on those loops which do not have a delay. But it is just a thought. I will read about the role of incoherent feedforward loops in regulatory networks.

view this post on Zulip John Baez (May 30 2025 at 16:09):

I agree that we humans tend to treat a delayed signal as nonexistent. But I'm hoping this is visible in a purely mathematical way from the fact that we have two feedback loops of opposite sign, one with a delay. So the system starts acting one way and then 'too late' starts acting the opposite way.

view this post on Zulip Adittya Chaudhuri (May 30 2025 at 16:13):

Another question is "why should the delayed loop always be the harmful one"? Unless we assume an external virus is causing such a delayed loop in our immune system response.

view this post on Zulip Adittya Chaudhuri (May 30 2025 at 16:18):

John Baez said:

I agree that we humans tend to treat a delayed signal as nonexistent. But I'm hoping this is visible in a purely mathematical way from the fact that we have two feedback loops of opposite sign, one with a delay. So the system starts acting one way and then 'too late' starts acting the opposite way.

Thanks. I think I got your point. We can express the "non-existence" by a "sufficient delay". I agree.

view this post on Zulip Adittya Chaudhuri (May 30 2025 at 17:43):

John Baez said:

Biology uses incoherent feedforward for good, yes - but how is it good?

I found some "good" applications of incoherent delayed feedforward loops in section 3.7 of Uri Alon's book An Introduction to Systems Biology. I am explaining one such application.

Let $X \xrightarrow{+} Z$ and $X \xrightarrow{+} Y \xrightarrow{-} Z$. Now, let's look at the concentration level of $Z$. At first the concentration of $Z$ increases due to direct stimulation by $X$. At the same time the concentration of $Y$ also increases due to stimulation by $X$. Once the concentration of $Y$ reaches a threshold value, $Y$ starts strongly repressing $Z$ and over time reduces the concentration of $Z$ to $0$. As a result it produces pulse-like dynamics for the concentration of $Z$. (Here, the "delay in the inhibition of $Z$ by $X$" happens because the concentration of $Y$ must reach a threshold value before $Y$ can act as an inhibitor of $Z$.)

Uri Alon said that such a pulse can be seen in the system that signals mammalian cells to divide in response to the proliferation signal EGF.
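The pulse can be seen in a toy Euler simulation of the loop just described; all rate constants, the threshold, and the step size below are invented for illustration, not taken from Alon's book:

```python
# Toy Euler simulation of the incoherent feedforward loop described above:
# X -+-> Z directly, and X -+-> Y --|> Z once Y passes a threshold.
# All rate constants, the threshold, and the step size are invented for
# illustration; they are not taken from Alon's book.

def simulate(steps=2000, dt=0.01, threshold=0.5):
    x, y, z = 1.0, 0.0, 0.0   # X is switched on at t = 0
    z_history = []
    for _ in range(steps):
        dy = 1.0 * x - 0.5 * y                 # X stimulates Y; Y decays
        # Y represses Z only after its concentration passes the threshold
        repression = 0.0 if y < threshold else 5.0 * y
        dz = 1.0 * x - 0.2 * z - repression * z
        y += dy * dt
        z += dz * dt
        z_history.append(z)
    return z_history

zs = simulate()
peak = max(zs)
assert peak > zs[0]           # Z first rises...
assert zs[-1] < 0.5 * peak    # ...then is repressed back down: a pulse
```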

view this post on Zulip Adittya Chaudhuri (May 30 2025 at 18:19):

I am attaching a screenshot from figure 1 of the paper Functional motifs in Biochemical networks (describing various information-processing systems in a cell), where I marked a portion in red. I think, like this marked portion, this figure may contain some interesting motifs.
motifsincell.PNG

view this post on Zulip David Corfield (May 31 2025 at 07:12):

I wonder how much of psychodynamic theory could be seen in this light, both individual and couples. Books like Schopenhauer's Porcupines are pointing to dysfunctional dynamics. The escalation archetype and the shifting the burden/addiction archetypes would seem the most obvious. E.g., for the former, feel the need for attachment, try to get close to someone, this raises the fear of abandonment, hence withdrawal, and repeat.

view this post on Zulip David Corfield (May 31 2025 at 07:47):

John Baez said:

Here's a different idea. Cancer and autoimmune diseases seem to be 'system failures' of some sort, so maybe they could be detected very abstractly by detecting more 'bad motifs' activated in the gene regulatory network.

You'd imagine there would be a need for a hierarchical approach to say when a system is behaving well or badly. From the perspective of the cancer, things are going well. From the perspective of the cell, in apoptosis things are going badly. I wonder if such a perspective is open to ACT treatment.

As a biologist thinking along these lines, Michael Levin,

Biological systems employ hierarchical regulatory networks to maintain structure and function at multiple scales, from molecules to whole organisms,

who is no stranger to ACT circles, as with his talk to the Topos Institute, has expressed such hopes.

view this post on Zulip Adittya Chaudhuri (May 31 2025 at 13:15):

David Corfield said:

You'd imagine there would be a need for a hierarchical approach to say when a system is behaving well or badly. From the perspective of the cancer, things are going well. From the perspective of the cell, in apoptosis things are going badly. I wonder if such a perspective is open to ACT treatment.

I find this perspective of "hierarchical systems" very interesting to think about; in particular, can we use ACT to talk about how inter-molecular interactions give rise to

I think that dealing with cancer involves both (1) and (2), and in an inter-related way.

I have been thinking in the direction of (1) in toy-setups:

For example:

In the attached file, let $X$ represent an interaction between entities $a$ and $b$, and $Y$ represent an interaction between entities $c$ and $d$.
communityinteraction.PNG

I was trying to see $X$ and $Y$ as bigger units than $a$, $b$, $c$ or $d$ individually. Now, a graph represents an interconnection between its vertices, but a natural question arises: how do we represent an interconnection between various subgraphs of a graph (when we know that such subgraphs can individually be treated as entities) in a way such that this higher interaction does not forget the interactions between vertices? This question can be intuitively understood if we think of $a$, $b$, $c$ and $d$ as individuals, $X$, $Y$ as two groups, and $Z$ as an interconnection between the group $X$ and the group $Y$.

So, in the hierarchical order, the level 1 graphs are $X$, $Y$ and $Z$, but $X \cup Y \cup Z$ represents a level 2 graph.

Now, consider the symmetric monoidal double category $D$ whose loose 1-cells are open $\mathbb{Z}/2 \times \text{Bool}$-labeled graphs. I define the set $G^{2}$ as a subset of $C \colon= \lbrace (G, H, G') \colon G, H$ and $G'$ are a composable triple of loose 1-cells in $D \rbrace$.

Then, I was thinking about defining a tuple $\big( G^{2}, p \colon G^{2} \to M \big)$ (for a monoid $M$) as an $M$-$\mathbb{Z}/2 \times \text{Bool}$-labeled graph, where level 1 is a $\mathbb{Z}/2 \times \text{Bool}$-labeled graph, but there is another "causal loop diagram kind of structure" (with a different labeling system $M$) which corresponds to influences among subgraphs or collective entities.

The most interesting part is a morphism $F \colon \big( G^{2}, p \colon G^{2} \to M \big) \to \big( G'^{2}, p' \colon G'^{2} \to M \big)$, which I want to define as a sequence of horizontally composable triples of 2-morphisms in $D$ compatible with the new labeling systems $p \colon G^{2} \to M$ and $p' \colon G'^{2} \to M$.

I have been thinking about this (in several directions) for some weeks.

view this post on Zulip Adittya Chaudhuri (May 31 2025 at 13:30):

David Corfield said:

As a biologist thinking along these lines, Michael Levin,

Biological systems employ hierarchical regulatory networks to maintain structure and function at multiple scales, from molecules to whole organisms,

who is no stranger to ACT circles, as with his talk to the Topos Institute, has expressed such hopes.

Thanks for sharing the talk. I watched the whole talk today. It is really very interesting!! I am very curious about a possible way to explore the concept of "self" (the way to incorporate properties in a system that can represent future states of the system, where interestingly, every state transition will depend on this set of representations) in a mathematical way.

view this post on Zulip John Baez (May 31 2025 at 16:44):

David Corfield said:

You'd imagine there would be a need for a hierarchical approach to say when a system is behaving well or badly. From the perspective of the cancer, things are going well. From the perspective of the cell, in apoptosis things are going badly. I wonder if such a perspective is open to ACT treatment.

People do work with 'hierarchical' Petri nets and hierarchical stock-flow models. Their ideas should be clarified, developed more generally, and applied to other kinds of networks, like graphs with polarities. This should let us take a system and view it as a hierarchy, or view it as made of separate 'agents'.

But it fascinates me that in molecular biology the molecules don't seem to know which side they're on. So we could also take a gene regulatory network and try to figure out what it signifies, just by staring at it and thinking.

This could be harder. But right now I'm excited by the fact that the monoid-labeled graphs in System Archetype Basics are supposed to be intrinsic signs of trouble. Some are supposed to be signs of a system that's taking the easy way out and pursuing short-term solutions that cause trouble in the long run. One is supposed to be a sign of a system that's divided into two subsystems engaged in an 'arms race'.

It may well be overoptimistic to think these simple monoid-labeled graphs always have such clear and interesting meanings. But I think it would be good to go out on a limb and test hypotheses like this.

view this post on Zulip John Baez (May 31 2025 at 17:00):

David Corfield said:

I wonder how much of psychodynamic theory could be seen in this light, both individual and couples. Books like Schopenhauer's Porcupines are pointing to dysfunctional dynamics. The escalation archetype and the shifting the burden/addiction archetypes would seem the most obvious. E.g., for the former, feel the need for attachment, try to get close to someone, this raises the fear of abandonment, hence withdrawal, and repeat.

I agree: someone should try to apply System Archetype Basics to psychodynamics, and in a way I'd be surprised if nobody has. Another relevant archetype is the one called "escalation", which describes a scenario where two agents keep upping their level of some quantity to win some sort of arms race:

view this post on Zulip John Baez (May 31 2025 at 17:04):

Adittya Chaudhuri said:

I found some "good" applications of incoherent delayed feedforward loops in section 3.7 of Uri Alon's book An Introduction to Systems Biology. I am explaining one such application.

Let $X \xrightarrow{+} Z$ and $X \xrightarrow{+} Y \xrightarrow{-} Z$. Now, let's look at the concentration level of $Z$. At first the concentration of $Z$ increases due to direct stimulation by $X$. At the same time the concentration of $Y$ also increases due to stimulation by $X$. Once the concentration of $Y$ reaches a threshold value, $Y$ starts strongly repressing $Z$ and over time reduces the concentration of $Z$ to $0$. As a result it produces pulse-like dynamics for the concentration of $Z$. (Here, the "delay in the inhibition of $Z$ by $X$" happens because the concentration of $Y$ must reach a threshold value before $Y$ can act as an inhibitor of $Z$.)

Nice! That makes sense.

view this post on Zulip Nathaniel Osgood (May 31 2025 at 17:21):

John Baez said:

David Corfield said:

I wonder how much of psychodynamic theory could be seen in this light, both individual and couples. Books like Schopenhauer's Porcupines are pointing to dysfunctional dynamics. The escalation archetype and the shifting the burden/addiction archetypes would seem the most obvious. E.g., for the former, feel the need for attachment, try to get close to someone, this raises the fear of abandonment, hence withdrawal, and repeat.

I agree: someone should try to apply System Archetype Basics to psychodynamics, and in a way I'd be surprised if nobody has. Another relevant archetype is the one called "escalation", which describes a scenario where two agents keep upping their level of some quantity to win some sort of arms race:

I would caution that "System Archetype Basics" is just one part of a broad literature in System Dynamics that routinely refers to or builds on ideas of system archetypes contributed over decades. I recall prominent attention to system archetypes starting at least as far back as 1990 (indeed, they were a central and well-developed part of the theory articulated in Peter Senge's exceptionally popular book "The Fifth Discipline", which was published in 1990). Similarly, just off the top of my head, I recall a number of System Dynamics models focusing on relationship dynamics and psychodynamics from that era and beyond. A broad and expanding set of system archetypes -- going well beyond those in "System Archetype Basics" -- are so tightly tied in with System Dynamics practice and so much part of the routine body of knowledge of that area that a very large literature weaves them into System Dynamics analyses of systems in diverse application areas. From my recollection of some teaching material in the 1990s, I'm very confident that this includes attention to psychodynamics -- although, as is normal, such contributions would commonly be made without explicitly mentioning the phrase "system archetype" within the published work.

view this post on Zulip Nathaniel Osgood (May 31 2025 at 17:30):

For those interested in exploring additional system archetypes visually, I'd suggest exploring the collection at the InsightMaker platform: https://insightmaker.com/tag/archetype?page=1. An important subset of these are contributed by my close colleague and longtime collaborator Dr. Geoff McDonnell MD, who over the years has been a prolific contributor of InsightMaker models involving Health Care (https://insightmaker.com/tag/health-care) and -- critically -- Health more broadly: https://insightmaker.com/tag/health; many of these include system archetypes of relevance to my work in computational public health and health care. Geoff's Insights -- which are often accompanied by unfolding stories -- are accessible at https://insightmaker.com/user/3xqay3rAaMCoKDZTgE670H.

This site is a great testimonial to the storytelling power of causal loop diagrams and system structure diagrams.

view this post on Zulip John Baez (May 31 2025 at 17:32):

Is there someplace one can find, neatly listed, a bunch of causal loop diagrams that people have considered "system archetypes"? That's what I'd really like to see! (Oh: while I was writing this you seem to have provided a location.)

I will mention Senge's book in my talk, but personally I found it quite verbose, full of somewhat gaseous wisdom like "You can have your cake and eat it too - but not all at once" - and I couldn't even locate the causal loop diagrams.

System Archetype Basics is nice because it's crisp, the way a mathematician might like. Kim's 31-page Systems archetypes I is even more distilled.

view this post on Zulip Nathaniel Osgood (May 31 2025 at 17:41):

John Baez said:

Is there someplace one can find, neatly listed, a bunch of causal loop diagrams that people have considered "system archetypes"? That's what I'd really like to see! (Oh: while I was writing this you seem to have provided a location.)

I will mention Senge's book in my talk, but personally I found it quite verbose, full of somewhat gaseous advice like "You can have your cake and eat it too - but not all at once" - and I couldn't even locate the causal loop diagrams.

Because archetypes are contributed over time to the literature, I'm not aware of any collection that has aspired -- and succeeded! -- in maintaining an up-to-date collection. A reasonable place to start seems to me to browse the collection at InsightMaker https://insightmaker.com/tag/archetype. That being said, that is cluttered with clones & elaborations of various insights, and I'd be surprised if that collection included more than 20-30% of those contributed in the literature, as part of teaching System Dynamics, etc.

Sadly, I agree with your assessment of Senge's work. Alas, based on my purely anecdotal experience, it's my perception that the same shortcoming afflicts many books aimed at management/business audiences.

view this post on Zulip Adittya Chaudhuri (May 31 2025 at 19:08):

John Baez said:

People do work with 'hierarchical' Petri nets and hierarchical stock-flow models. Their ideas should be clarified, developed more generally, and applied to other kinds of networks, like graphs with polarities. This should let us take a system and view it as a hierarchy, or view it as made of separate 'agents'.

This sounds very interesting. I will read the paper.

view this post on Zulip Adittya Chaudhuri (May 31 2025 at 19:37):

John Baez said:

But it fascinates me that in molecular biology the molecules don't seem to know which side they're on. So we could also take a gene regulatory network and try to figure out what it signifies, just by staring at it and thinking.

I also find it fascinating. I am trying to write down (from my basic understanding) how "uncontrolled proliferation may take place"

Step 1:
A ligand carrying a signal binds to a receptor molecule in the cell membrane.

Step 2:

The receptor changes its configuration to activate signal transduction pathways in the cell (a sequence of intra-cellular molecular events). Interestingly, the initial signal is often amplified or repressed during the process by an appropriate inhibitor or stimulator, and there are also interconnections between pathways via crosstalk.

An interesting thing to note here is that in a signal transduction pathway, when a signal is passed from a node $a$ to a node $b$, it activates $b$ (or switches on the function of $b$). For example, in the MAPK/ERK pathway, the "switch on" phenomenon happens via the addition of a phosphate group. In general, phosphorylation is temporary. To flip proteins back into their non-phosphorylated state (the switched-off state), cells have enzymes called phosphatases, which remove a phosphate group from their targets. However, due to a mutation, some switch may fail to turn off and remain on, and hence some pathway remains active throughout.

Step 3:
The signal transduction network ends in producing a function of the cell like "cell proliferation", etc. However, due to the mutation, some pathway remains active, so proliferation continues and results in uncontrolled proliferation, until we can externally inhibit the pathway by chemotherapy, targeted therapy, etc.

What I think is that these mutations, coupled with crosstalk between signal transduction pathways, create certain "bad motifs" over time (as you conjectured), resulting in system failure.

What I just described is only for one cell. But for cancer we may have to deal with thousands and thousands of cells.

view this post on Zulip Adittya Chaudhuri (May 31 2025 at 19:41):

John Baez said:

It may well be overoptimistic to think these simple monoid-labeled graphs always have such clear and interesting meanings. But I think it would be good to go out on a limb and test hypotheses like this.

I agree.

view this post on Zulip Adittya Chaudhuri (May 31 2025 at 19:51):

John Baez said:

This could be harder. But right now I'm excited by the fact that the monoid-labeled graphs in System Archetype Basics are supposed to be intrinsic signs of trouble. Some are supposed to be signs of a system that's taking the easy way out and pursuing short-term solutions that cause trouble in the long run. One is supposed to be a sign of a system that's divided into two subsystems engaged in an 'arms race'.

I find this point of view very interesting. I think the System Dynamics literature is written from the point of view of how "human-made systems remain stable or unstable", but the Systems Biology literature is written from the point of view of "how natural biological systems remain stable". I think this is one of the many reasons why we find only "good motifs" in the literature. Maybe, unlike in System Dynamics, people have not put much effort into finding "bad motifs" in biological systems, and explaining a system failure story via the presence of such bad motifs in our signal transduction networks. I feel (as you said) maybe one has to create a list of Biological System Archetypes representing harmful motifs in biological systems, and then explain system failures like cancer via these Biological System Archetypes.

view this post on Zulip Adittya Chaudhuri (May 31 2025 at 19:54):

John Baez said:

Nice! That makes sense.

Thank you.

view this post on Zulip Adittya Chaudhuri (May 31 2025 at 19:57):

Nathaniel Osgood said:

This site is a great testimonial to the storytelling power of causal loop diagrams and system structure diagrams.

Thanks very much for sharing the site. It looks super interesting!!

view this post on Zulip Adittya Chaudhuri (May 31 2025 at 19:58):

John Baez said:

System Archetype Basics is nice because it's crisp, the way a mathematician might like. Kim's 31-page Systems archetypes I is even more distilled.

Nice. I will read.

view this post on Zulip David Corfield (Jun 02 2025 at 11:21):

Listening now to @Nathaniel Osgood's very interesting talk at TACT has me wonder whether there's anything to be said about how patterns, such as the system archetypes we've been talking about, translate across different logical frameworks.

Perhaps to consider things in the other direction, is it possible that archetypes emerge in the richer setting of stock and flow diagrams that we wouldn't see with mere causal loop diagrams?

view this post on Zulip Adittya Chaudhuri (Jun 02 2025 at 12:45):

David Corfield said:

Listening now to Nathaniel Osgood's very interesting talk at TACT has me wonder whether there's anything to be said about how patterns, such as the system archetypes we've been talking about, translate across different logical frameworks.

I am trying to understand your question in a very simple context i.e if we change the labeling system:

Let $\phi \colon \mathbb{Z}/2 \times \mathsf{Bool} \to M$ be a monoid homomorphism. Then there is a functor $F$ from the category of $\mathbb{Z}/2 \times \mathsf{Bool}$-labeled graphs to the category of $M$-labeled graphs. Now, due to the change in labeling systems, I think the logical structures of causal loop diagrams in these two categories should be distinct. Now, if $X$ is an archetype in a $\mathbb{Z}/2 \times \mathsf{Bool}$-labeled graph $G$, then one may ask

"Is $F(X)$ also an interesting archetype for the logical structure represented by $M$-labeled graphs?"

In general, I do not know the answer. However, in a particular case it may be interesting, as I discussed before here #theory: applied category theory > Graphs with polarities @ 💬 in the context of semiautomata.
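The induced functor FF can be sketched in a few lines of Python (a hypothetical encoding, not the paper's definitions: a labeled graph is a vertex set plus a list of (source, target, label) triples, and a monoid homomorphism acts edgewise):

```python
# Hypothetical minimal encoding of labeled graphs: an L-labeled graph is a
# vertex set plus a list of (source, target, label) edges. A monoid
# homomorphism phi: L -> M induces a functor on labeled graphs that keeps
# the underlying graph fixed and relabels each edge by phi.

def relabel(graph, phi):
    """Apply phi to every edge label, leaving vertices and edges unchanged."""
    vertices, edges = graph
    return (vertices, [(s, t, phi(lab)) for (s, t, lab) in edges])

# Example: labels in Z/2 x Bool written as pairs (sign, delayed), sent to
# the sign monoid {+1, -1} by forgetting the delay component.
forget_delay = lambda lab: lab[0]

G = ({"A", "B"}, [("A", "B", (-1, True)), ("B", "A", (+1, False))])
relabeled = relabel(G, forget_delay)
```

Here forget_delay plays the role of a homomorphism ϕ\phi discarding the Bool\mathsf{Bool} factor; since only the labels change and the underlying graph is untouched, the construction is visibly functorial.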

view this post on Zulip David Corfield (Jun 02 2025 at 12:57):

Right. The general phenomenon of entities transmuting across different settings is utterly widespread in mathematics, such as the splitting of a prime on change of ring. E.g., the prime 5Z5 \in \mathbb{Z} splits as the primes (2+i)(2+i) and (2i)(2-i) in Z[i]\mathbb{Z}[i].

view this post on Zulip David Corfield (Jun 02 2025 at 14:05):

So @John Baez just now shows that my original question and your response are close, where the shift from causal loop diagrams to causal loop diagrams with delay is just changing the monoid that labels the graph. That fairly clearly introduces new archetypes.

view this post on Zulip David Corfield (Jun 02 2025 at 14:10):

Back to psychotherapy meets systems archetypes, Success to the successful
image.png
is precisely what causes the Jungian mid-life need to rebalance. One has devoted oneself to developing one set of capacities to get on with the world, until the neglected capacities cry out for attention later.

view this post on Zulip Adittya Chaudhuri (Jun 02 2025 at 14:17):

David Corfield said:

So John Baez just now shows that my original question and your response are close, where the shift from causal loop diagrams to causal loop diagrams with delay is just changing the monoid that labels the graph. That fairly clearly introduces new archetypes.

Thanks. Yes, precisely!! Such "change" produced all those 7 harmul archetypes in that book, as `delay' played a vital role in each of those archetype.

view this post on Zulip Adittya Chaudhuri (Jun 02 2025 at 19:36):

David Corfield said:

John Baez said:

Here's a different idea. Cancer and autoimmune diseases seem to be 'system failures' of some sort, so maybe they could be detected very abstractly by detecting more 'bad motifs' activated in the gene regulatory network.

You'd imagine there would be a need for a hierarchical approach to say when a system is behaving well or badly. From the perspective of the cancer, things are going well. From the perspective of the cell, in apoptosis things are going badly. I wonder if such a perspective is open to ACT treatment.

I was wondering whether a generalised version of the archetype 'tragedy of the commons'
tragedyofthecommon.png would be relevant here.

Regulatory networks describe how various biomolecules influence each other. However, from the causal loop diagram structure we usually do not study "when two or more regulatory networks are cooperating or non-cooperating". For example, in the attached file cumulative.PNG, the regulatory networks XX and YY are performing well from their respective perspectives. However, the way the network XX is interconnected to the network YY via the network ZZ represents a non-cooperation between XX and YY. In a way, I feel the network ZZ represents an incompatibility between XX and YY. We may represent it as XZYX \xrightarrow{Z} Y. Alternatively, we may also think that ZZ affects both XX and YY negatively, i.e. XZYX \xleftarrow{-}Z \xrightarrow{-}Y.

I was reading the paper Cancer across the tree of life: cooperation and cheating in multicellularity, where the authors very interestingly argued about the following:

Multicellularity is characterized by cooperation among cells for the development, maintenance and reproduction of the multicellular organism. Cancer can be viewed as cheating within this cooperative multicellular system.

So, in the context of my attached diagram, one may interpret the network ZZ as representing a form of "cheating".

view this post on Zulip Adittya Chaudhuri (Jun 02 2025 at 19:55):

Another question: what are some archetypes other than "tragedy of the commons" where the success of individual subunits leads to a failure of the whole unit?

view this post on Zulip Adittya Chaudhuri (Jun 03 2025 at 06:22):

Below, I am trying to understand and compare the effect of delay in causal loop diagrams in system dynamics and system biology.

From the point of systems dynamics:

According to my basic understanding of those 7 archetypes, it seems to me that human behaviour secretly prefers directed edges which are not labeled by the "delay" element in the monoid Bool\mathsf{Bool}. I think this idea is misused by many organisations for their profits. In particular, delay may misguide us into thinking that "delay" is the same as "non-existence". However, I think the "temptation to obtain something very precious after a long period of time" often leads people to prefer delay (example: investment in insurance companies, etc.). I wonder whether there exists any archetype which shows that human behaviour overall prefers delay naturally.

From the point of systems biology:

I think many times delays are naturally preferred for producing desired outcomes, like creating pulse-like dynamics, which acts as a signal for cell division (as I explained here #theory: applied category theory > Graphs with polarities @ 💬 ). Thus, here it seems delayed signals are an integral part of the natural preferences in biological systems. However, in faulty systems like cancer, autoimmune diseases, etc., these "delays" can often be interpreted as "ways to ruin" our natural healthy biological systems, which I think may be similar to the perspective of system dynamics.

view this post on Zulip David Corfield (Jun 03 2025 at 06:26):

Adittya Chaudhuri said:

Alternatively, we may also think that ZZ is affecting negatively to both XX and YY, i.e. XZYX \xleftarrow{-}Z \xrightarrow{-}Y.

This puts me in mind of when I was bringing up my kids, and the effect of another child coming to visit on their interpersonal dynamics. Some seemed to be able to bring out the best in them and left them harmonious; others left them completely disgruntled and at odds with each other.

There must be countless examples of such structures in economics too.

view this post on Zulip David Corfield (Jun 03 2025 at 08:30):

Presumably, following John's talk yesterday, there's a [[Mayer-Vietoris]] homological story to tell about how the introduction of a new graph component can adversely affect existing components.

view this post on Zulip Adittya Chaudhuri (Jun 03 2025 at 09:42):

David Corfield said:

This puts me in mind of when I was bringing up my kids, and the effect of another child coming to visit on their interpersonal dynamics. Some seemed to be able to bring out the best in them and left them harmonious; others left them completely disgruntled and at odds with each other.

There must be countless examples of such structures in economics too.

Thanks. This sounds very exciting. It sounds like "we can have various cospan-like structures from XX to YY". More precisely, there can be a set of graphs Z1,Z2,,ZnZ_1, Z_2, \cdots ,Z_n which represent various influences (with types divided into cooperative and non-cooperative) between the graph XX and the graph YY.

view this post on Zulip Adittya Chaudhuri (Jun 03 2025 at 09:54):

David Corfield said:

Presumably, following John's talk yesterday, there's a [[Mayer-Vietoris]] homological story to tell about how the introduction of a new graph component can adversely affect existing components.

Yes, there is indeed an interesting story surrounding Mayer-Vietoris. Gluing two presheaf graphs XX and YY along vertices gives us a commutative monoid version of the Mayer-Vietoris sequence, whose suitably modified boundary map would give us information about the "new emergent feedback loops" produced as "a consequence of such gluing".

However, I think in the diagram I drew (attached
cumulative.PNG
),

the graph XX and the graph YY do not intersect along vertices but "along another graph, namely ZZ"

view this post on Zulip David Corfield (Jun 03 2025 at 10:17):

You can consider joining more than two spaces, such as discussed here.

view this post on Zulip Adittya Chaudhuri (Jun 03 2025 at 10:49):

Thanks very much!! These look very much relevant to the case I was talking about.

view this post on Zulip Adittya Chaudhuri (Jun 03 2025 at 11:12):

I was actually thinking about the following direction:

Step 1:

By experimental and other studies we have constructed the regulatory network XX and the regulatory network YY. We realised everything is going fine with them.

Step 2:

After some time we realise something is wrong, as both XX and YY are mysteriously malfunctioning.

Step 3:

Again, by experimental and other studies we realise that both regulatory networks are malfunctioning because of the presence of another regulatory network, namely ZZ. To actually understand this step we may use the "generalised Mayer-Vietoris" as you suggested, to discover emergent feedback loops.

Step 4: (This is where I am emphasizing)

We may create another causal loop diagram kind of structure whose vertices are causal loop diagrams themselves. I am calling it 2-level graphs with polarities. A simple instance looks like this: XZYX \xleftarrow{-}Z \xrightarrow{-}Y. Observe that we actually transformed the initial causal loop diagram G=XZYG= X \cup Z \cup Y into another causal loop diagram H=XZY H=X \xleftarrow{-}Z \xrightarrow{-}Y (by additional experimental and other studies). Now, I was wondering whether such a transformation GHG \to H would give us a meaningful functorial semantics which tells (in a toy setup) how interactions between biomolecules give rise to interactions between regulatory networks, which may indicate a system failure in some way. For example, we may simply say GG represents a system failure because the causal loop diagram structure of HH is XZYX \xleftarrow{-}Z \xrightarrow{-}Y (in a way it forgets the details of the tragedy of the commons diagram, and only remembers the "basic cause" of the visible system failure).

view this post on Zulip John Baez (Jun 03 2025 at 12:35):

Adittya Chaudhuri said:

Yes, precisely!! Such a "change" produced all those 7 harmful archetypes in that book, as 'delay' played a vital role in each of those archetypes.

Interestingly 'delay' does not appear in the archetype 'success to the successful'. (David kindly shared the picture of this one.) I believe that's the only one where it doesn't appear.

But David wrote:

One has devoted oneself to developing one set of capacities to get on with the world, until the neglected capacities cry out for attention later.

This suggests either some explicit 'delay', or (perhaps better) a bigger causal loop diagram where as the success of B diminishes, it eventually causes some other (bad) effect.

view this post on Zulip David Corfield (Jun 03 2025 at 13:07):

Yes. It's like the pay-off from continuing to allocate to A decreases (e.g., already sufficiently wealthy from investment banking), while the call to B increases (e.g., the neglected artistic talent), until a tipping point flips things.

Of course, in your case, you've wisely balanced your mathematical and musical talents!

view this post on Zulip Adittya Chaudhuri (Jun 03 2025 at 14:24):

John Baez said:

Interestingly 'delay' does not appear in the archetype 'success to the successful'. (David kindly shared the picture of this one.) I believe that's the only one where it doesn't appear.

But David wrote:

One has devoted oneself to developing one set of capacities to get on with the world, until the neglected capacities cry out for attention later.

This suggests either some explicit 'delay', or (perhaps better) a bigger causal loop diagram where as the success of B diminishes, it eventually causes some other (bad) effect.

Thanks. Yes, I agree.

view this post on Zulip Adittya Chaudhuri (Jun 03 2025 at 14:46):

John Baez said:

or (perhaps better) a bigger causal loop diagram where as the success of B diminishes, it eventually causes some other (bad) effect.

This is interesting!! Say both AA's success and BB's success are needed in a bigger causal loop diagram (as you said) for the system to perform well. However, due to the presence of a success to the successful archetype, the whole system collapses after a delay (the time needed for BB's success to fall below a threshold due to the archetype). Thus, in an indirect way, the delay feature is present in the bigger causal loop diagram in the disguise of the success to the successful archetype.

view this post on Zulip John Baez (Jun 05 2025 at 22:27):

I've been distracted for a while, but I did a tiny bit of work on the paper today, as a warmup for more: I changed Example 6.7 so that it gives an example of a graph whose 1st homology monoid is not a free commutative monoid. We'd already seen that it was not free on its set of minimal elements, so the only extra thing to do is note that any free commutative monoid is free on its set of minimal elements.

view this post on Zulip Adittya Chaudhuri (Jun 06 2025 at 06:29):

That's completely fine!! Thank you. I just checked Example 6.7. Yes, it looks great.

view this post on Zulip John Baez (Jun 06 2025 at 11:27):

Good! Someone at my talk in TACT said our homology monoid of a graph was an example of 'functor homology', and they pointed me to this paper:

But it actually doesn't seem to have our construction as a special case, even though it contains some of the same words.

view this post on Zulip David Corfield (Jun 06 2025 at 12:10):

Is there any intention to carry over your generalized sign monoids, these polarities, to the Petri nets with signed links of

view this post on Zulip David Corfield (Jun 06 2025 at 12:12):

In your TACT talk you brought up the electric circuit analysis of Kirchhoff's laws. This is often done in terms of cohomology. Is there a reason to be looking at homology?

view this post on Zulip John Baez (Jun 06 2025 at 12:49):

The best analysis of Kirchhoff's laws, going back to Weyl and later Smale and others, uses both homology and cohomology:

It's good to think of current as a 1-cycle and voltage as a 1-cocycle; the pairing of 1-cycles and 1-cocycles applied to current and voltage gives power, i.e. the rate at which energy is getting turned into heat.
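A toy numerical check of this pairing (a hypothetical example, not from the paper): take a current that is a 1-cycle (it satisfies Kirchhoff's current law) and a voltage that is the coboundary of node potentials; their pairing eIeVe\sum_e I_e V_e then vanishes, which is the homological content of Tellegen's theorem.

```python
# Toy check of the homology/cohomology pairing on a graph: a current
# satisfying Kirchhoff's current law is a 1-cycle, and a voltage arising
# from node potentials is a 1-coboundary. Tellegen's theorem says their
# pairing sum_e I_e * V_e is zero.

edges = [("a", "b"), ("b", "c"), ("c", "a")]        # a triangle circuit
current = {("a", "b"): 2.0, ("b", "c"): 2.0, ("c", "a"): 2.0}  # a 1-cycle
potential = {"a": 0.0, "b": 5.0, "c": 1.0}          # arbitrary node potentials
voltage = {(s, t): potential[t] - potential[s] for (s, t) in edges}

power = sum(current[e] * voltage[e] for e in edges)
# power == 0.0: the pairing of a 1-cycle with a 1-coboundary vanishes
```

For a real circuit the current and voltage are not of this pure form, and the same pairing instead computes the dissipated power.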

view this post on Zulip John Baez (Jun 06 2025 at 12:55):

David Corfield said:

Is there any intention to carry over your generalized sign monoids, these polarities, to the Petri nets with signed links of

I hadn't thought about that. Right now I'm just struggling to finish the paper on monoid-labeled graphs, which has grown larger than I'd like.

At TACT I talked to my former student @Jade Master, who had worked on open Petri nets, and is currently developing a formalism for agent-based models using colored Petri nets. Since I'm really wanting to figure out the best category-based framework for agent-based models, I hope she comes to Edinburgh while our ICMS group is working on agent-based models here at Maxwell's house!

view this post on Zulip John Baez (Jun 06 2025 at 12:56):

(There may be no one best category-based framework for agent-based models (ABMs), since ABMs use such a diverse crowd of modeling techniques. Still, we can try to bring a little order to the wild west.)

view this post on Zulip David Corfield (Jun 06 2025 at 13:16):

Interesting. And you have a 14-part series on ABMs!

view this post on Zulip John Baez (Jun 06 2025 at 13:25):

Right. The first 8 posts were preliminary floundering:

The second batch of stuff was all very nice (in my humble opinion), but alas not sufficiently general for the models we want to create.

Then @Kris Brown came up with a more general framework here:

and I explained this starting in Part 9 of my series.

At our meeting we are trying to combine ABMs as described using Kris' framework with dynamical systems. Our goal is to use this to create ABMs for gestational type II diabetes and shocks in the labor market. Kris has an approach described here:

and we have been trying to follow that. But our economist @Owen Haaga has already bumped his head against the limitations of this approach.

view this post on Zulip David Corfield (Jun 06 2025 at 13:50):

Great, thanks! I have an idea: I'd like to see whether some such formalism can model psychodynamics, along the lines of

Screenshot 2025-06-06 14.07.29.png

view this post on Zulip John Baez (Jun 06 2025 at 13:58):

Cool! Depending on how complicated your model is, we might have the software available for you to program it in and run it already, or it might take some months. As usual I recommend starting with a ridiculously oversimplified toy model, and then gradually, judiciously adding bells and whistles.

view this post on Zulip Adittya Chaudhuri (Jun 06 2025 at 18:38):

John Baez said:

Good! Someone at my talk in TACT said our homology monoid of a graph was an example of 'functor homology', and they pointed me to this paper:

But it actually doesn't seem to have our construction as a special case, even though it contains some of the same words.

Although I have not read the paper in detail, if I am not mistaken, the main difference between them and us is that they focus only on directed acyclic graphs, while our focus lies on directed graphs which contain cycles. I think Theorem 3.8 is their main theorem, where they computed their homology `groups' under the assumption of acyclic quivers. I have a feeling that their paper is more inclined towards an application to topological ordering, as a topological ordering is possible if and only if the graph has no directed cycles, that is, if it is a directed acyclic graph. On the contrary, our paper is inclined towards applications to causal loop diagrams and regulatory networks, where I think in all non-trivial cases we usually have cycles. So, it seems to me that the purposes of their homology groups and our homology monoids are very distinct.

view this post on Zulip Adittya Chaudhuri (Jun 06 2025 at 19:02):

David Corfield said:

Great, thanks! I have an idea I'd like to see if I can make some such formalism model psychodynamics, along the lines of

Screenshot 2025-06-06 14.07.29.png

Very interesting!! I feel that there is an autonomy in each hierarchical layer, like molecules ---> cells ---> tissues ---> organs ---> and so on, which lets them retain their layerwise identities. From this perspective, I think a homeostatic system may possibly model such autonomy, and then such a homeostatic system becomes a collective whole. Now, for integration, maybe a controller is needed (as Mark Solms commented in your post). This reminds me of the notion of hyperstructures by Baas (please see page 3, point (d)), where I think Baas explained such a controller in the context of "elections in democracies". I think Baas termed such a controller a globalizer (point (IV) on page 2).

view this post on Zulip John Baez (Jun 06 2025 at 19:28):

So, it seems to me that the purpose of their homology groups and our homology monoids are very distinct.

Good. Some young mathematician rather confidently told me that this paper was doing the same thing as us, or that our work was a subset of these ideas.

view this post on Zulip Adittya Chaudhuri (Jun 06 2025 at 19:36):

Thanks! I do not see how the results in that paper justify their claim.

view this post on Zulip John Baez (Jun 06 2025 at 20:03):

I think they were being a bit overconfident.

view this post on Zulip Adittya Chaudhuri (Jun 06 2025 at 20:29):

I also feel so.

view this post on Zulip Adittya Chaudhuri (Jun 07 2025 at 04:43):

Hi! I observed something which I am discussing below:

We have used the left adjoints LG ⁣:SetGphL_{\mathsf{G}} \colon \mathsf{Set} \to \mathsf{Gph} and LC ⁣:SetCatL_{\mathsf{C}} \colon \mathsf{Set} \to \mathsf{Cat} (which produce discrete graphs and discrete categories respectively) to glue open graphs along vertices and open categories along objects. Now, in the context of graphs, I think it is natural to ask whether we can glue graphs along edges and paths. I realised we may already have the necessary technical materials in our paper to accommodate such gluing. In particular, we have the functor Free ⁣:GphCat\mathsf{Free} \colon \mathsf{Gph} \to \mathsf{Cat}, which is left adjoint to Und ⁣:CatGph\mathsf{Und} \colon \mathsf{Cat} \to \mathsf{Gph}, and both Gph\mathsf{Gph} and Cat\mathsf{Cat} are finitely cocomplete.

Thus, if XX and YY are graphs of the form vewv \xrightarrow{e} w or v1e1v2e2v3e3envnv_1\xrightarrow{e_1} v_2 \xrightarrow{e_2} v_3 \xrightarrow{e_3} \cdots \xrightarrow{e_n} v_{n}, then a cospan of the form Free(X)iFree(G)oFree(Y)\mathsf{Free}(X) \xrightarrow{i} \mathsf{Free}(G) \xleftarrow{o} \mathsf{Free}(Y) in Cat\mathsf{Cat} (where we may assume ii and oo are monic) may possibly allow us to glue the graph GG along edges and paths, and of course along vertices in the special case.
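The vertex-only case of this gluing is easy to sketch in Python (the encoding below is hypothetical and just illustrates the disjoint-union-then-identify construction of the pushout of discrete(S) → G and discrete(S) → H in Gph\mathsf{Gph}):

```python
# Sketch of gluing two graphs along shared vertices: the pushout of
# two inclusions of a discrete graph (a bare vertex set) into G and H.

def glue_on_vertices(G, H, shared):
    """Disjoint union of G and H, identifying vertices listed in `shared`.

    G, H: (vertices, edges) with edges given as (source, target, label).
    shared: list of pairs (g_vertex, h_vertex) to be identified.
    """
    gv, ge = G
    hv, he = H
    # Tag vertices so the union is disjoint, then merge the shared pairs.
    rep = {("G", v): ("G", v) for v in gv}
    rep.update({("H", v): ("H", v) for v in hv})
    for g, h in shared:
        rep[("H", h)] = ("G", g)          # identify h with g
    vertices = set(rep.values())
    edges = [(rep[("G", s)], rep[("G", t)], l) for (s, t, l) in ge]
    edges += [(rep[("H", s)], rep[("H", t)], l) for (s, t, l) in he]
    return (vertices, edges)

X = ({"a", "b"}, [("a", "b", "+")])
Y = ({"c", "d"}, [("c", "d", "-")])
# Glue b ~ c: the result has 3 vertices and a length-two path through b=c.
Z = glue_on_vertices(X, Y, [("b", "c")])
```

Gluing along edges or paths, as in the cospan above, would identify edges as well as vertices, but the shape of the construction is the same.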

view this post on Zulip John Baez (Jun 07 2025 at 07:32):

Yes, we can do such more general pushouts. The structured cospan philosophy is to restrict the allowed pushouts to get pushouts that are simpler and easier to understand; for example, we will show that when we push out two graphs along monomorphisms of graphs that have no edges, the homology behaves in a fairly simple way - but still complicated enough to write a section all about it.

view this post on Zulip Adittya Chaudhuri (Jun 07 2025 at 11:17):

Thank you. Yes, I got your point.

view this post on Zulip David Corfield (Jun 07 2025 at 17:04):

John Baez said:

Cool! Depending on how complicated your model is, we might have the software available for you to program it in and run it already, or it might take some months. As usual I recommend starting with a ridiculously oversimplified toy model, and then gradually, judiciously adding bells and whistles.

I could imagine requiring plenty of bells and whistles. For one thing, can DPO-rewriting cope with a certain amount of latitude in its pattern matching? An agent might not find an exact match for its LL component.

But this must also be needed in biology, where some protein binds to a site on a membrane if the site has more-or-less the right shape, no?

In the case of psychodynamics, there's also the powerful effect of projection, where one forces the external world into a pattern, e.g., sees something to get irate about when unnecessary. And then the further stage, projective identification, where one behaves in such a way as to generate the sought-for pattern in another.

But, as you say, start simple!

view this post on Zulip Adittya Chaudhuri (Jun 07 2025 at 19:32):

David Corfield said:

But this must also be needed in biology, where some protein binds to a site on a membrane if the site has more-or-less the right shape, no?

Yes, it is true. The relationship between ligand and binding partner is a function of charge, hydrophobicity and molecular structure as mentioned here.

view this post on Zulip Adittya Chaudhuri (Jun 07 2025 at 19:43):

David Corfield said:

But this must also be needed in biology, where some protein binds to a site on a membrane if the site has more-or-less the right shape, no?

From your description, I am now starting to imagine intracellular signal transduction as a string diagram in a nice monoidal category (which at the same time can be represented as a sort of regulatory network), where the wires on the left of the 'big box' represent the signals entering the cell, and the wires on the right side of the 'big box' represent the signals transmitted by the "cell" (modelled as the 'big box') to its environment. Maybe I am oversimplifying the setup. Although, I think equivalently, a cell can also be seen as a structured cospan/decorated cospan, if we can define an appropriate nice category in which every object aa models the whole intracellular signal transduction pathway inside a cell aˉ\bar{a}.

view this post on Zulip John Baez (Jun 07 2025 at 22:53):

@David Corfield wrote:

I could imagine requiring plenty of bells and whistles. For one thing, can DPO-rewriting cope with a certain amount of latitude in its pattern matching?

In DPO rewriting a pattern either matches some part of the state of the world or it does not. In our system, if it matches, it has a certain probability of being applied, where this probability is an arbitrarily specified function of time.

But you can write a lot of rules for a lot of different patterns, specifying different probabilities for each of these rules to be applied. You can create the effect of "latitude" this way. And if you have general ideas about which patterns count as similar, you can write code that automates this process of creating lots of rules for similar patterns, and assigning probabilities to each of them.

Of course the code will run more slowly if it has to look through hundreds or thousands of rules.

In general people find that a fairly small and fairly simple collection of rules is enough to create surprising and thought-provoking effects. So it's always good to start with simple models before diving into something complicated.
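A toy sketch of this idea in Python (entirely hypothetical; this is not the actual ABM software): each rule carries a matching predicate, a rewrite, and a time-dependent firing probability, and "latitude" comes from listing several similar rules with their own probabilities.

```python
import random

# Toy probabilistic rule application: a rule is a triple
# (pattern-predicate, rewrite, probability-as-function-of-time).
# At each tick, every rule that matches fires with its own probability.

def step(state, rules, t, rng=random):
    for matches, rewrite, prob in rules:
        if matches(state) and rng.random() < prob(t):
            state = rewrite(state)
    return state

# Example: an agent's stress decays; two similar rules, different odds.
rules = [
    (lambda s: s["stress"] >= 2,
     lambda s: {**s, "stress": s["stress"] - 2},
     lambda t: 0.5),
    (lambda s: s["stress"] >= 1,
     lambda s: {**s, "stress": s["stress"] - 1},
     lambda t: 0.9),
]

state = {"stress": 5}
for t in range(10):
    state = step(state, rules, t)
```

As John notes, one can generate many such near-duplicate rules programmatically, at the cost of a slower search through the rule list.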

view this post on Zulip Adittya Chaudhuri (Jun 08 2025 at 17:44):

Below I am discussing some thoughts on my perception of "good or bad motifs":

I think in causal loop diagrams used in system dynamics, the idea of "good and bad" comes from the "human meaning of the nodes" and from how those good or bad motifs affect other nodes of a bigger causal loop diagram of which the motif is a sub-causal loop diagram.

However, I think in the regulatory networks in biological systems, the idea of a "good or bad motif" is mostly related to functions of a higher level structural organisations like "functions of biological cells".

view this post on Zulip Adittya Chaudhuri (Jun 08 2025 at 18:43):

I think the idea of a good or bad motif in a regulatory network (G,)(G, \ell) is relative to cellular functions, and may be seen as a function f ⁣:M×C{+,,0}f \colon M \times C \to \lbrace +,- , 0 \rbrace, where MM is the set of all motifs in (G,)(G, \ell) and CC is the set of all cellular functions. If f(m,c)=+f(m,c)=+, then mm is good with respect to cc; if f(m,c)=f(m,c)=-, then mm is bad with respect to cc; and f(m,c)=0f(m,c)=0 if the effect of mm on cc is unknown.
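Concretely, such an ff is just a partial lookup table; a tiny illustration in Python (the motif and function names below are made up for illustration, not taken from any dataset):

```python
# The relative "good/bad" judgment as a lookup table
# f : M x C -> {+, -, 0}, with 0 recording "effect unknown".

f = {
    ("negative-feedback-loop", "homeostasis"): "+",
    ("success-to-the-successful", "apoptosis"): "-",
    ("feedforward-loop", "proliferation"): "0",
}

def judge(motif, function):
    """Look up f(m, c); pairs not yet studied default to 0 (unknown)."""
    return f.get((motif, function), "0")
```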

view this post on Zulip John Baez (Jun 08 2025 at 22:01):

Where does the function ff come from - do we just have to figure it out ourselves? To me the interest of "good or bad motifs" is highest when we have an algorithm for spotting them in a causal loop diagram without adding extra information to that diagram.

The simplest example of what I mean would be this: we define the 7 causal loop diagrams in the book Systems Archetype Basics to be 'bad motifs'. In fact I'd like to do better and find common features of these causal loop diagrams which make them count as 'bad', and I think that should be possible. But at least we can write a program that takes any causal loop diagram XX and seeks Kleisli morphisms from these 7 causal loop diagrams to XX: this does not require that we add additional information to XX 'by hand'.
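A crude prototype of such a program: brute-force search for sign-preserving graph homomorphisms from a small signed motif into a larger signed graph. (This is only a stand-in for the Kleisli morphisms, which can also send edges to longer paths; the encoding is hypothetical.)

```python
from itertools import product

# Brute-force search for sign-preserving graph homomorphisms from a small
# signed motif into a larger signed graph. No extra information beyond the
# diagrams themselves is needed, which is the point of the algorithm.

def homomorphisms(motif, graph):
    mv, me = motif
    gv, ge = graph
    edge_set = set(ge)
    mv, gv = list(mv), list(gv)
    found = []
    for images in product(gv, repeat=len(mv)):
        h = dict(zip(mv, images))
        if all((h[s], h[t], sign) in edge_set for (s, t, sign) in me):
            found.append(h)
    return found

# A 2-cycle with opposite signs (schematically, two feedback loops in
# conflict) searched for inside a larger signed graph.
motif = ({"x", "y"}, [("x", "y", "+"), ("y", "x", "-")])
graph = ({"A", "B", "C"},
         [("A", "B", "+"), ("B", "A", "-"), ("B", "C", "+")])
hits = homomorphisms(motif, graph)
```

For real regulatory networks one would replace the exhaustive search with a proper subgraph-matching algorithm, but the specification is the same.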

view this post on Zulip John Baez (Jun 08 2025 at 22:06):

Of course it's an open question whether this concept or any other concept of 'intrinsically bad motif' makes sense. But I find it to be a very exciting possibility, for various reasons, so I want to explore it.

One reason is that I know an ecologist who is trying to determine the health of ecosystems based on empirical data summarized as causal loop diagrams.

view this post on Zulip Adittya Chaudhuri (Jun 09 2025 at 04:33):

John Baez said:

Where does the function ff come from - do we just have to figure it out ourselves? To me the interest of "good or bad motifs" is highest when we have an algorithm for spotting them in a causal loop diagram without adding extra information to that diagram.

Yes, in my definition of ff, the function ff (at the moment) has to be figured out by ourselves through experiments and other studies. However, I am also equally excited about "the possibility of producing an algorithm" that takes in causal loop diagrams as inputs and gives out good, bad and indeterminate motifs as outputs with respect to a particular cellular function.

view this post on Zulip Adittya Chaudhuri (Jun 09 2025 at 04:40):

John Baez said:

The simplest example of what I mean would be this: we define the 7 causal loop diagrams in the book Systems Archetype Basics to be a 'bad motif'. In fact I'd like to do better and find common features of these causal loop diagrams which make them count as 'bad', and I think that should be possible. But at least we can write a program that takes any causal loop diagram XX and seek Kleisli morphisms from these 7 causal loop diagrams to XX: this does not require that we add additional information to XX 'by hand'.

This sounds very interesting: a concrete approach to the problem, and an exciting project towards characterising "bad motifs" in graphs with polarities. At least with respect to system dynamics, we are aware that those 7 causal loop diagrams are mostly bad. So we could find such motifs in regulatory networks in biological systems by building the software you mentioned, and then communicate with biologists about possible interpretations of those motifs from the point of view of systems biology. I feel we may come up with some interesting conclusions in this direction if we manage to find a single such causal loop diagram in regulatory networks or pathways.

view this post on Zulip Adittya Chaudhuri (Jun 09 2025 at 04:56):

So, at the moment, are you conjecturing something like this?

Any motif which the system dynamics community considers `bad' will also possibly be considered 'bad' by the systems biology community with respect to some cellular function like apoptosis, proliferation, etc. However, evolution acts as an external force to keep the occurrences of such motifs minimal in regulatory networks of biological systems.

view this post on Zulip Adittya Chaudhuri (Jun 09 2025 at 05:09):

John Baez said:

Of course it's an open question whether this concept or any other concept of 'intrinsically bad motif' makes sense. But I find it to be a very exciting possibility, for various reasons, so I want to explore it.

One reason is that I know an ecologist who is trying to determine the health of ecosystems based on empirical data summarized as a causal loop diagrams.

This sounds super exciting!! It makes me ask whether you are expecting something like this:

For any `important system' (like a biological system, social system, ecological system, etc.) represented as a graph with polarities (talking in terms of general labeling monoids), there exists a collection of motifs whose occurrences indicate that something bad is going to happen to the system over time.

view this post on Zulip James Deikun (Jun 09 2025 at 07:09):

I don't think you'll really find something like "inherently bad motifs" and here's why: if "bad" elements of a system like diseases or cancer cells are afflicted by "bad" motifs, then that is good for the system as a whole. In particular, biological systems are probably riddled with "traps" that lie in wait for cell lines that start to escape the more basic restrictions on proliferation, starting with telomeres.

view this post on Zulip Adittya Chaudhuri (Jun 09 2025 at 07:22):

Please correct me if I am wrong:

Now, I am also starting to believe that many failures of large systems (including biological systems) may possibly be characterised by various patterns of interaction of their sub-systems. Similarly, many successes of large systems (including biological systems) may possibly be characterised by various patterns of interaction of their sub-systems.

So, I am now feeling that motifs and system archetypes are more like tools for systems biology and system dynamics respectively, to characterise faulty systems, stable systems, etc.

view this post on Zulip David Corfield (Jun 09 2025 at 07:42):

John Baez said:

Of course it's an open question whether this concept or any other concept of 'intrinsically bad motif' makes sense. But I find it to be a very exciting possibility, for various reasons, so I want to explore it.

One reason is that I know an ecologist who is trying to determine the health of ecosystems based on empirical data summarized as causal loop diagrams.

I guess there's an obvious way for ecologists to consider healthy systems and how bad motifs may feature in, say, trophic cascades. But I wonder whether the patterns themselves are intrinsically 'bad'.

When a headteacher institutes school policies to encourage and reward kind and considerate behaviour over rudeness, bullying, etc., there might have been a dynamic mix of good and bad behaviour before, but then the system is steered to a 'monoculture' of considerateness, like the moderators have achieved on this Zulip chat. :slight_smile:

Couldn't we consider this in terms of the 'success to the successful' archetype?

view this post on Zulip David Corfield (Jun 09 2025 at 07:46):

The same point here as @James Deikun is making.

Do we look on any event or process in the history of life on Earth as intrinsically bad, at least until humans come into the mix? If not, that's surely telling us something. We surely don't lament, say, the spread of grasslands millions of years ago, even if it was bad news for many species.

view this post on Zulip John Baez (Jun 09 2025 at 08:59):

David Corfield said:

Couldn't we consider this in terms of the 'success to the successful' archetype?

I've always been suspicious of this being on the list of 'bad' motifs because it's not like most of the others. Most of the others seem to involve 2 feedback loops with opposite sign, one with a delay. That seems to give a system that's in conflict with itself, or indecisive.

view this post on Zulip Adittya Chaudhuri (Jun 09 2025 at 09:11):

Although I am not sure, the 'success to the successful' archetype also reminds me of the way one trains neural networks (which is not bad, I think): reward/gain if it identifies correctly and punishment/loss if it identifies incorrectly.

view this post on Zulip David Corfield (Jun 09 2025 at 09:24):

Interesting. So here,

image.png

the circuit on the left has 3 +s, while the circuit on the right is -+-. So both form positive feedback loops.
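A quick sanity check of this sign bookkeeping (a sketch; the `{+1, -1}` encoding of polarities is my own, not notation from the thread):

```python
# The polarity of a feedback loop is the product of its edge signs,
# computed in the monoid {+1, -1}.  A loop is a positive (reinforcing)
# feedback loop exactly when the product is +1.
from math import prod

def loop_polarity(signs):
    """signs: the +1/-1 labels on the edges around a loop."""
    return prod(signs)

left = [+1, +1, +1]    # the circuit with 3 +s
right = [-1, +1, -1]   # the circuit labelled -+-

assert loop_polarity(left) == +1   # positive feedback loop
assert loop_polarity(right) == +1  # also a positive feedback loop
```

So two minus signs cancel, just as in the sign monoid used to label graphs with polarities.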

view this post on Zulip Adittya Chaudhuri (Jun 09 2025 at 09:26):

John Baez said:

That seems to give a system that's in conflict with itself, or indecisive.

I also feel so. I agree that "a human-made system may become indecisive because of humans' nature/character/etc.".

My question is: why does a natural system like a "regulatory network" become indecisive? Can we think of the occurrence of some mutations, etc., as a system that's in conflict with itself, or indecisive?

Or rather, in general, do "such bad things happen" only when things go in an unanticipated way? Human-made causal loop diagrams are fine (because we can think of ourselves as capable of anticipation). However, it also makes me wonder whether biological systems like cells are anticipatory systems too, and whether usual regulatory networks are "a sort of manifestation" of such anticipations. In this context, when certain mutations are not anticipated by the system of cells, we get uncontrolled proliferations leading to cancer, etc.

view this post on Zulip Adittya Chaudhuri (Jun 09 2025 at 09:34):

David Corfield said:

Interesting. So here,

image.png

the circuit on the left has 3 +s, while the circuit on the right is -+-. So both form positive feedback loops.

I am feeling the key factor is "the limited pool of resources to be shared among many, and thus the need to remove some". I think the ideology of Apoptosis is also like that (killing unwanted cells). Here, Apoptosis is used for good.

view this post on Zulip David Corfield (Jun 09 2025 at 09:34):

I only just noticed the little 's's and 'o's! Continuing to

image.png

these are both negative loops. Surely this is common in many self-regulating systems.

Hmm, it seems like some of the descriptive phrases on these archetypes are loaded with positive/negative connotations to encourage us to interpret them in a certain way.

view this post on Zulip Adittya Chaudhuri (Jun 09 2025 at 09:40):

I am feeling that archetypes like Escalation describe a way to remove certain entities from systems with limited resources. Sometimes the consequence is good and sometimes it is bad, depending on the context.

view this post on Zulip David Corfield (Jun 09 2025 at 10:04):

We seem to be led to think of things like arms races or the outbreak of war, but isn't the same pattern there in the rivalry that makes us work or train harder, or the tech company that improves its product to keep ahead of the field? The term 'threat' is doing a lot of work.

view this post on Zulip Adittya Chaudhuri (Jun 09 2025 at 10:49):

David Corfield said:

We seem to be led to think of things like arms races or the outbreak of war, but isn't the same pattern there in the rivalry that makes us work or train harder, or the tech company that improves its product to keep ahead of the field? The term 'threat' is doing a lot of work.

I agree. It seems these "archetypes" have certain specific functional roles with respect to some context. If that specific functional role seems beneficial to the context, we may say it is a good archetype; otherwise it is bad. In biology, delays are also used to do something good, like "creating pulse-like dynamics to signal for a cell division", but all the archetypes with delay I know in system dynamics are mostly bad, I think.

However, in biology, such a delay was anticipated by the cell, but in the systems dynamics such a delay was not anticipated (please see #theory: applied category theory > Graphs with polarities @ 💬 ).

view this post on Zulip John Baez (Jun 09 2025 at 12:19):

David Corfield said:

I only just noticed the little 's's and 'o's!

s means "same" which means +.
o means "opposite" which means -.

Apparently a lot of non-mathematician practitioners of system dynamics get confused by the symbols + and - and think that, for example, an edge $x \xrightarrow{-} y$ means that $y$ always gets decreased. In fact it means something more like $\partial y / \partial x < 0$: if $x$ gets bigger $y$ gets smaller, but if $x$ gets smaller $y$ gets bigger. So, to make life easier (?), some system dynamicists have switched to using s and o.

("Something more like", because this is just one possible semantics: for it the quantities involved need to be real numbers; a more qualitative semantics is also good.)
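A tiny numeric illustration of that semantics (the linear law $y = 10 - 2x$ is made up purely for illustration):

```python
# A "-" edge x -> y read as dy/dx < 0, using a made-up law y = 10 - 2x.
def y(x):
    return 10 - 2 * x

assert y(3) < y(2)   # if x gets bigger, y gets smaller
assert y(1) > y(2)   # if x gets smaller, y gets bigger
assert y(1) > 0      # but y is not simply "always decreased"
```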

view this post on Zulip John Baez (Jun 09 2025 at 12:20):

TL;DR: s means +, o means -

view this post on Zulip David Corfield (Jun 10 2025 at 07:17):

Adittya Chaudhuri said:

However, in biology, such a delay was anticipated by the cell, but in the systems dynamics such a delay was not anticipated

This is a very interesting topic, no doubt involving great complexity. In one sense, I take it that this anticipation has something to do with the idea of design, what something is designed to do and to cope with. So we might say of an engineered product when it fails that it was not anticipated that the product would be exposed to such conditions.

Biology increases the complexity with the unfolding of the organism's phenotype and "anticipated" developmental shifts in its functioning. Then add in all the compromises of "design" arising from its evolutionary history.

view this post on Zulip Adittya Chaudhuri (Jun 10 2025 at 07:54):

David Corfield said:

This is a very interesting topic, no doubt involving great complexity. In one sense, I take it that this anticipation has something to do with the idea of design, what something is designed to do and to cope with. So we might say of an engineered product when it fails that it was not anticipated that the product would be exposed to such conditions.

Thanks!! I find your idea of "relating anticipation with designing" a very interesting and natural way of looking at these things !!

view this post on Zulip Adittya Chaudhuri (Jun 10 2025 at 08:02):

I am not completely sure I understand the `evolution part'. Are you saying that "evolution" is a kind of mechanism to upgrade the existing biological design to fix the unanticipated problems it is currently showing?

I find it relatable to the way phone companies upgrade the software on their phones: problems unanticipated in the previous version get debugged in the current version, so those unanticipated problems of the previous version become anticipated problems of the current version. However, our experience shows there is never a "final version" and we always need to upgrade after a period of time. Interestingly, if we choose not to upgrade, then even normal phone features will eventually stop performing well, which I am relating to species extinction, etc.

view this post on Zulip Adittya Chaudhuri (Jun 10 2025 at 08:21):

David Corfield said:

Biology increases the complexity with the unfolding of the organism's phenotype and "anticipated" developmental shifts in its functioning. Then add in all the compromises of "design" arising from its evolutionary history.

I think I understood your idea of relating "increment of complexity with the unfolding of the organism's phenotype and 'anticipated' developmental shifts in its functioning" via evolution. Very interesting!! Thank you.

view this post on Zulip David Corfield (Jun 10 2025 at 08:23):

I was hinting at the following, but your response is interesting too.

I was thinking of the difference between a designed entity and an evolved one, where component parts and their functioning are far from optimised. It is easier to think in the former case of harmonious functioning of parts according to a design and a clear notion of anticipation, whereas in the latter case there will be built-in competition of parts and competing anticipations.

But, now I come to think of it, that feature of the latter, which one might expect would lead to a vulnerability when novel conditions arise, is seen as a strength in the Michael Levin article I mentioned above: The struggle of the parts: how competition among organs in the body contributes to morphogenetic robustness.

(It was that article that prompted me to wonder whether competition between Solms's 7 emotional drives could lead to a special mental robustness in humans here.)

view this post on Zulip John Baez (Jun 10 2025 at 11:01):

I'm very sympathetic to all these speculations. I'm very eager to test them with data, and @Evan Patterson has created CatColab software for finding motifs in causal loop diagrams with delays. But I don't know databases of these - with delays, that is!

view this post on Zulip Adittya Chaudhuri (Jun 10 2025 at 11:23):

David Corfield said:

I was hinting at the following, but your response is interesting too.

I was thinking of the difference between a designed entity and an evolved one, where component parts and their functioning are far from optimised. It is easier to think in the former case of harmonious functioning of parts according to a design and a clear notion of anticipation, whereas in the latter case there will be built-in competition of parts and competing anticipations.

But, now I come to think of it, that feature of the latter, which one might expect would lead to a vulnerability when novel conditions arise, is seen as a strength in the Michael Levin article I mentioned above: The struggle of the parts: how competition among organs in the body contributes to morphogenetic robustness.

(It was that article that prompted me to wonder whether competition between Solms's 7 emotional drives could lead to a special mental robustness in humans here.)

Interesting point of view!! If we relate this to how "a big company works, with thousands of workers performing various tasks in different hierarchies", which is ultimately beneficial for the whole company, then your point of view seems a natural way to think about it.

view this post on Zulip David Corfield (Jun 10 2025 at 16:17):

There's a lot of ACT work out there on cellular sheaves and graph Laplacians, which seems to get close to some ideas from this thread, such as first and second [edit: Zeroth] (co)homology of networks. E.g.,

But I guess these are exploring more how stable distributions can be achieved, rather than locating certain forms of loop. Even if the former considers directed paths, these are in quivers. Is it that in causal loop diagrams, there's no role for diffusion?

view this post on Zulip John Baez (Jun 10 2025 at 16:38):

You can do diffusion on a graph, e.g. writing a version of the heat equation using a 'graph Laplacian'.
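For instance, here is a minimal sketch of that (the triangle graph and the step size are invented for illustration):

```python
# A discrete heat equation du/dt = -L u on a graph, where L = D - A is
# the graph Laplacian (degree matrix minus adjacency matrix).
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]                      # adjacency matrix of a triangle
deg = [sum(row) for row in A]
L = [[(deg[i] if i == j else 0) - A[i][j] for j in range(3)] for i in range(3)]

u = [1.0, 0.0, 0.0]                  # heat concentrated at one vertex
dt = 0.01
for _ in range(10_000):              # forward Euler: u <- u - dt * L u
    Lu = [sum(L[i][j] * u[j] for j in range(3)) for i in range(3)]
    u = [u[i] - dt * Lu[i] for i in range(3)]

assert abs(sum(u) - 1.0) < 1e-9      # diffusion conserves total heat
assert all(abs(x - 1/3) < 1e-6 for x in u)   # and converges to the uniform state
```

The convergence to a boring uniform equilibrium is exactly the behavior discussed a few messages below.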

view this post on Zulip David Corfield (Jun 11 2025 at 05:06):

Yes, but then does this mark a difference between the work I mentioned and yours, where they're after heat equations that converge to steady state solutions via positive semi-definite linear operators, and you're interested in unstable network patterns?

view this post on Zulip John Baez (Jun 11 2025 at 05:41):

By the way, in our paper we're interested primarily in the syntax of causal loop diagrams, i.e. various (double) categories of (open) graphs with edges labeled by elements of some monoid, not the functorial semantics of these diagrams, i.e. (double) functors from these (double) categories to others. We discuss semantics informally, saying in words what the labels might mean, but we don't go into the functorial semantics.

view this post on Zulip John Baez (Jun 11 2025 at 05:48):

So, we don't at all study any sort of 'heat equation semantics'. But this would be quite interesting to do, so thanks for the idea!

If I did study this, I would not only study the steady state solutions of these heat equations, but the time-dependent solutions and the equations themselves. It seems to be a general rule that the equations describing open dynamical systems are nicely compositional when one describes them using decorated cospans - while the solutions themselves are not, except for steady-state solutions with given boundary conditions.

I realized this the hard way, by writing a couple of papers on open Markov processes with Brendan Fong, Blake Pollard and Kenny Courser.

view this post on Zulip John Baez (Jun 11 2025 at 05:56):

But anyway: yeah, there could be some linear differential equations with more lively dynamics that one could define using a signed graph. There's something a bit depressing about studying the heat equation and Markov processes, where solutions tend to converge to an equilibrium as time passes. While the math is quite pretty, something in me objects to a world that becomes ever more boring with the passage of time. (This is presumably part of why people get so worked up about the 'heat death' of the universe, and possible ways out, like the Poincare recurrence time.)

view this post on Zulip David Corfield (Jun 11 2025 at 08:04):

You might imagine that Nature would exploit this livelier dynamics, like aeronautical engineers reducing stability for faster control on fighter jets.

view this post on Zulip John Baez (Jun 11 2025 at 08:20):

Sure, heat equations and diffusion equations only describe the pathetically dull approach to stasis. Real-world physics and biology are vastly more peppy.

view this post on Zulip David Corfield (Jun 11 2025 at 08:30):

But then isn't it extremely likely that these so-called "bad motifs" will appear as necessary components of well-functioning systems?

view this post on Zulip John Baez (Jun 11 2025 at 17:03):

Note that System Dynamics Archetypes also concerns systems that are dynamic, not necessarily approaching equilibrium. If equilibrium were one's goal, one should never allow any positive feedback loop, since that causes exponential blowup unless it's counteracted by enough negative feedback. You'd expect a vertex with a + edge from itself to itself to be in that book.
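As a toy illustration of that blowup (reading a signed self-loop as the linear ODE $\dot{x} = kx$, which is just one possible semantics, not the paper's):

```python
import math

# A vertex with a + self-loop, read as dx/dt = +x, blows up exponentially;
# a - self-loop, read as dx/dt = -x, decays toward equilibrium.
def simulate(k, x0=1.0, dt=0.001, steps=5000):
    x = x0
    for _ in range(steps):
        x += dt * k * x        # forward Euler step for dx/dt = k*x
    return x

grow = simulate(+1.0)          # roughly e^5
decay = simulate(-1.0)         # roughly e^-5

assert grow > 100                                    # exponential blowup
assert decay < 0.01                                  # approach to equilibrium
assert math.isclose(grow, math.exp(5), rel_tol=0.05)
```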

view this post on Zulip John Baez (Jun 11 2025 at 17:05):

But that book isn't telling us to avoid exponential blowup. It seems to be telling us to avoid "wishy-washy" behavior where on short time scales we have a feedback loop of one polarity, and with a delay we have a feedback loop of the opposite polarity.

view this post on Zulip John Baez (Jun 11 2025 at 17:08):

But I really want to analyze some data and see if biology agrees or disagrees with that book. I won't be shocked if biosystems exploit some causal loop diagrams that the book regards as bad... but I'll be interested.

view this post on Zulip Adittya Chaudhuri (Jun 11 2025 at 17:33):

I think the archetypes in that book involve "some meanings" associated to the nodes of the causal loop diagrams. I feel these meanings are general in nature, but special enough for us to identify similar situations in various contexts in our life by using our inbuilt causality.

However, nodes in regulatory networks are just biochemicals/genes. The only data associated to these nodes are concentration levels or expression levels. Inhibitors/stimulators often reduce/stimulate the concentration/expression levels at the nodes, which often causes delays or "faster than usual" situations. But, naturally, I am not able to see a human causal structure in biological networks. So, there may be something else that says biomolecule $a$ should stimulate/inhibit the expression of gene $g$. I think "that something else" is performing a cellular function, like cell death, etc., in a controlled way. In a way, I think the cellular functions form the underlying causal structure of regulatory networks, just as the human mind's causal structure lies underneath the causal loop diagrams in system dynamics.

The above are just my thoughts. I may be completely wrong.

view this post on Zulip Kevin Carlson (Jun 11 2025 at 18:36):

John Baez said:

But anyway: yeah, there could be some linear differential equations with more lively dynamics that one could define using a signed graph. There's something a bit depressing about studying the heat equation and Markov processes, where solutions tend to converge to an equilibrium as time passes. While the math is quite pretty, something in me objects to a world that becomes ever more boring with the passage of time. (This is presumably part of why people get so worked up about the 'heat death' of the universe, and possible ways out, like the Poincare recurrence time.)

We have linear ODE semantics for CLDs and stock-flow diagrams in CatColab right now! They’re based on the well known mass action kinetics for Petri nets. Maybe that’s interesting to y’all.

view this post on Zulip John Baez (Jun 11 2025 at 18:50):

It's definitely interesting. I suppose we could try to run the linear ODE semantics for CLDs containing 'bad motifs', and those without, and compare the qualitative features of the dynamics, and see if there's anything to say.

view this post on Zulip John Baez (Jun 11 2025 at 18:55):

Adittya Chaudhuri said:

I think the archetypes in that book involve "some meanings" associated to the nodes of the causal loop diagrams. I feel these meanings are general in nature, but special enough for us to identify similar situations in various contexts in our life by using our inbuilt causality.

Yes, that's possible. If so, the names of the nodes in the causal loop diagrams are conveying crucial extra information that's too subtle for us to have fully formalized, but which we easily understand in an intuitive way.

The above are just my thoughts. I may be completely wrong.

My guess that the 'bad motifs' in System Archetype Basics are actually objectively bad in some way seems inherently unlikely - nothing involving the word 'bad' should ever be that simple - but at least there's a chance to test this hypothesis. I really just want to look at some regulatory networks and see what they have to say.

view this post on Zulip Adittya Chaudhuri (Jun 11 2025 at 19:21):

John Baez said:

Yes, that's possible. If so, the names of the nodes in the causal loop diagrams are conveying crucial extra information that's too subtle for us to have fully formalized, but which we easily understand in an intuitive way.

Thanks. I find your point of view very interesting.

view this post on Zulip Adittya Chaudhuri (Jun 11 2025 at 19:26):

John Baez said:

My guess that the 'bad motifs' in System Archetype Basics are actually objectively bad in some way seems inherently unlikely - nothing involving the word 'bad' should ever be that simple - but at least there's a chance to test this hypothesis. I really just want to look at some regulatory networks and see what they have to say.

True. I got your point.

view this post on Zulip John Baez (Jun 11 2025 at 19:30):

I will actually start looking at some regulatory networks when I get a bit of free time. Right now I want to finish up our paper, and I have to run this ICMS workshop.

view this post on Zulip Adittya Chaudhuri (Jun 11 2025 at 19:33):

Thanks. That would be really nice. I am thinking of looking at the pathways in KEGG from "the point of crosstalk", i.e. finding a feedback loop by combining different pathways. Since pathways are a bit "acyclic" in nature, it may be hard to find a loop unless we go for crosstalk. Although I am not sure.

view this post on Zulip Adittya Chaudhuri (Jun 11 2025 at 19:35):

John Baez said:

Right now I want to finish up our paper, and I have to run this ICMS workshop.

Yes, I got your point.

view this post on Zulip Adittya Chaudhuri (Jun 11 2025 at 19:39):

John Baez said:

Yes, that's possible. If so, the names of the nodes in the causal loop diagrams are conveying crucial extra information that's too subtle for us to have fully formalized, but which we easily understand in an intuitive way.

I think "the point of crucial extra information" is actually true for regulatory networks in biology. I think "the labeling of every edge" in a regulatory network has "an experimental study" or "an analytical study" in the backdrop, which tells us whether the influence is indeed positive or negative, etc. I feel that in biology such experimental studies (crucial extra information) are often non-trivial.

view this post on Zulip Adittya Chaudhuri (Jun 11 2025 at 19:47):

I think this can be explained in a nice way as follows:

To check the validity of the statement Effort $\xrightarrow{+}$ quality of work, we just use our "common sense"/"experience", which manifests as our intuition. However, if I say chemical A $\xrightarrow{+}$ chemical B, my intuition here is likely to be insufficient, and we may need an extra experimental/analytical study to verify this.

view this post on Zulip Adittya Chaudhuri (Jun 12 2025 at 06:42):

I was thinking about the "extra information on nodes" about which you said:

In system dynamics, a general prototype statement may look like this:

Symptomatic solutions $\xrightarrow{-}$ Problem symptoms. This statement is general enough for many modelers to choose their own customised problems for the nodes, modelling their problems using the same causal loop diagram: Symptomatic solutions $\xrightarrow{-}$ Problem symptoms.

Now, in regulatory networks:

These prototypes may be constructed using "various classes of biochemicals which are in some way similar". Then, a biologist may use as a node any biochemical (suited to the purpose of investigation) which belongs to that class. However, it may happen that a biochemical is present in more than one class.

In our context, if your hypothesis is true, then instead of human biologists, I think it is "evolution/Nature" which treats certain classes of biochemicals as similar, and builds archetypes accordingly.

view this post on Zulip Adittya Chaudhuri (Jun 12 2025 at 07:06):

Somehow, this gives me the feeling that our usual regulatory networks are a kind of "stratified version" of some other set of regulatory networks that are yet to be discovered, and that to find the appropriate archetypes for regulatory networks in biological systems, which may then actually be considered good or bad in a reasonable way (as you conjectured), we may need to first "unstratify" the existing regulatory networks in a suitable sense. By unstratification, I mean at the "level of nodes".

view this post on Zulip Adittya Chaudhuri (Jun 16 2025 at 15:49):

@John Baez and I were discussing the construction of Mayer-Vietoris for directed graphs with coefficients in a commutative monoid $C$. We realised certain issues in the construction of the boundary $\partial$ in our earlier proposed method for the Mayer-Vietoris construction #theory: applied category theory > Graphs with polarities @ 💬 . @John Baez pointed out that although one way to resolve the issue is to pass to the Grothendieck group construction of the commutative monoids, doing so is not favourable, and should be our last option if we cannot manage to construct the Mayer-Vietoris otherwise.

In this context, below I am proposing an alternative way to construct the Mayer-Vietoris sequence in certain special cases. My construction comes from the point of view of seeing "the minimal elements of our first homology monoids with coefficients in $\mathbb{N}$" as "homology classes of simple loops", about which we already discussed thoroughly here #theory: applied category theory > Graphs with polarities @ 💬 .

Let us consider two graphs $X$ and $Y$ which

Let $H_{sloops}(G)$ denote the set of homology classes of simple loops in $G$.

Let $\tau \colon H_{sloops}(G) \to H_0(\text{disc}(B), \mathbb{N})$ be a function defined as

$\tau\big([c = a_1 \xrightarrow{e_1} a_2 \xrightarrow{e_2} a_3 \xrightarrow{e_3} \cdots a_n \xrightarrow{e_n} a_1]\big) := a_{i+1}$, where $a_{i+1}$ is a vertex in $\text{disc}(B)$ such that the edges $a_i \xrightarrow{e_i} a_{i+1}$ and $a_{i+1} \xrightarrow{e_{i+1}} a_{i+2}$ lie in $X$ and $Y$ respectively, or in $Y$ and $X$ respectively, but not both in $X$ or both in $Y$. If no such $a_{i+1}$ exists, then $\tau(c) := 0$. [In the definition of $\tau$, I have used the Axiom of Choice.]

Now, we know $H_{sloops}(G) \cong \mathrm{Min}\big(H_1(G, \mathbb{N})\big)$, where $\mathrm{Min}\big(H_1(G, \mathbb{N})\big)$ denotes the set of minimal elements of $H_1(G, \mathbb{N})$.

Using (2), it is clear that any map

$\partial_{X,Y} \colon \mathrm{Min}\big(H_1(G, \mathbb{N})\big) \to H_0(\text{disc}(B), \mathbb{N})$ can be uniquely extended to a morphism of commutative monoids:

$\delta \colon H_1(G, \mathbb{N}) \to H_0(\text{disc}(B), \mathbb{N})$.

Hence, in particular, the map $\tau \colon H_{sloops}(G) \to H_0(\text{disc}(B), \mathbb{N})$ defines a unique morphism of commutative monoids $\delta_{\tau} \colon H_1(G, \mathbb{N}) \to H_0(\text{disc}(B), \mathbb{N})$.

Then, I think the following is true:

Claim:
$c \in \mathrm{Ker}(\delta_{\tau})$ if and only if $c \in H_1(X, \mathbb{N}) \oplus H_1(Y, \mathbb{N})$

If my claim is true, then I propose a commutative monoid analogue of the Mayer-Vietoris sequence in the special cases when

as the following:

$H_1(X, \mathbb{N}) \oplus H_1(Y, \mathbb{N}) \xrightarrow{i} H_1(X +_{\text{disc}(B)} Y, \mathbb{N}) \xrightarrow{\delta_{\tau}} H_0(\text{disc}(B), \mathbb{N})$,

where $i \colon H_1(X, \mathbb{N}) \oplus H_1(Y, \mathbb{N}) \to H_1(X +_{\text{disc}(B)} Y, \mathbb{N})$ is given by $i = i_X + i_Y$, induced from the inclusions $X \to X +_{\text{disc}(B)} Y$ and $Y \to X +_{\text{disc}(B)} Y$.

view this post on Zulip John Baez (Jun 16 2025 at 19:06):

Thanks for raising this issue and a potential solution, @Adittya Chaudhuri! I keep getting this stuff wrong. Let me try another approach. I want this approach to work whenever we use coefficients from a cancellative commutative monoid $C$, i.e. one with

$x + z = y + z \implies x = y$

Of course this includes the case $C = \mathbb{N}$, which is the most important case for us.

First some review of notation:

For any graph $G$ we write $C_1(G,C)$ for the commutative monoid of $C$-linear combinations of edges of the graph $G$, and $C_0(G,C)$ for the commutative monoid of $C$-linear combinations of vertices. We call $C_i(G,C)$ the commutative monoid of $i$-chains. We have source and target homomorphisms

$s, t \colon C_1(G,C) \to C_0(G,C)$

We call a 1-chain $c$ with

$s(c) = t(c)$

a 1-cycle and we write $H_1(G,C)$ for the commutative monoid of 1-cycles (which are the same as 1st homology classes in this context).

In other words, $H_1(G,C)$ is the equalizer of $s,t \colon C_1(G,C) \to C_0(G,C)$. Similarly, we define $H_0(G,C)$ to be their coequalizer.

Any map of graphs induces homomorphisms on 0-chains and 1-chains, and these homomorphisms commute with $s$ and $t$, so they also induce maps on 1-cycles.
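Here is a small concrete check of these definitions with $C = \mathbb{N}$, representing chains as multisets (the two-edge graph is invented for illustration):

```python
from collections import Counter

# N-linear combinations of edges/vertices represented as multisets (Counter).
# A graph with two edges: e: a -> b and f: b -> a.
src = {'e': 'a', 'f': 'b'}
tgt = {'e': 'b', 'f': 'a'}

def s(chain):
    """Source homomorphism C_1 -> C_0."""
    out = Counter()
    for edge, coeff in chain.items():
        out[src[edge]] += coeff
    return out

def t(chain):
    """Target homomorphism C_1 -> C_0."""
    out = Counter()
    for edge, coeff in chain.items():
        out[tgt[edge]] += coeff
    return out

c = Counter({'e': 1, 'f': 1})   # the loop e + f
assert s(c) == t(c)             # so c is a 1-cycle: s(c) = t(c)

d = Counter({'e': 1})           # the edge e alone
assert s(d) != t(d)             # not a 1-cycle
```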

Now consider two graphs $X$ and $Y$ and monomorphisms from a discrete graph $\text{disc}(B)$ into $X$ and $Y$. Let me make up some intuitive notation:

$X \cup Y = X +_{\text{disc}(B)} Y$
$X \cap Y = \text{disc}(B)$

$C_i(X,C) \oplus C_i(Y,C)$ is the [[biproduct]] of the commutative monoids $C_i(X,C)$ and $C_i(Y,C)$, so we have inclusions

$i_X \colon C_i(X,C) \to C_i(X,C) \oplus C_i(Y,C)$

$i_Y \colon C_i(Y,C) \to C_i(X,C) \oplus C_i(Y,C)$

and projections

$p_X \colon C_i(X,C) \oplus C_i(Y,C) \to C_i(X,C)$

$p_Y \colon C_i(X,C) \oplus C_i(Y,C) \to C_i(Y,C)$

obeying the usual biproduct axioms. All 4 of these maps are compatible with the source and target maps, e.g. $p_X \circ s = s \circ p_X$. So, they induce maps on $H_1$ which we call by the same names:

$i_X \colon H_1(X,C) \to H_1(X,C) \oplus H_1(Y,C)$

$i_Y \colon H_1(Y,C) \to H_1(X,C) \oplus H_1(Y,C)$

and projections

$p_X \colon H_1(X,C) \oplus H_1(Y,C) \to H_1(X,C)$

$p_Y \colon H_1(X,C) \oplus H_1(Y,C) \to H_1(Y,C)$

These too obey the usual biproduct axioms.

view this post on Zulip John Baez (Jun 16 2025 at 19:13):

Let $A$ be the Grothendieck group of the cancellative monoid $C$. I want to describe a map

$\partial \colon H_1(X \cup Y, C) \to H_0(X \cap Y, A)$

whose kernel is the image of the map

$\iota \colon H_1(X,C) \oplus H_1(Y,C) \to H_1(X \cup Y, C)$

induced by the map of graphs

$X + Y \to X \cup Y$

(the canonical map from the coproduct to the pushout). This will be a baby version of the Mayer-Vietoris theorem, restricted to the case of graphs, but generalized to commutative monoid coefficients.

view this post on Zulip John Baez (Jun 16 2025 at 19:38):

After the warmup here's how the construction goes (if I'm correct). First note that

$C_1(X \cup Y, C) \cong C_1(X,C) \oplus C_1(Y,C)$

since by our assumptions every edge of $X \cup Y$ is either an edge of $X$ or an edge of $Y$, but not both. I will identify $C_1(X \cup Y, C)$ with $C_1(X,C) \oplus C_1(Y,C)$ using this isomorphism.

We thus have maps

$p_X \colon C_1(X \cup Y, C) \to C_1(X,C)$
$p_Y \colon C_1(X \cup Y, C) \to C_1(Y,C)$

I now define preliminary versions of the maps $S$ and $T$. We can take

$s \circ p_X \colon C_1(X \cup Y, C) \to C_0(X,C)$

$t \circ p_X \colon C_1(X \cup Y, C) \to C_0(X,C)$

and restrict these to $H_1 \subseteq C_1$ to get maps I'll call

$S \colon H_1(X \cup Y, C) \to C_0(X,C)$

$T \colon H_1(X \cup Y, C) \to C_0(X,C)$

view this post on Zulip John Baez (Jun 16 2025 at 19:41):

Claim 1. The equalizer of $S$ and $T$ is

$\iota \colon H_1(X,C) \oplus H_1(Y,C) \to H_1(X \cup Y, C)$

(Here we need $C$ to be cancellative.)

view this post on Zulip John Baez (Jun 16 2025 at 19:45):

Claim 2. The range of

ST:H1(XY,C)C0(X,A)S - T : H_1(X \cup Y, C) \to C_0(X,A)

is contained in

C0(XY,A)C0(X,A)C_0(X \cap Y, A) \subseteq C_0(X,A)

where AA is the free abelian group on the commutative monoid CC, sometimes called its Grothendieck group, and the inclusion is the obvious one.
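For concreteness, the Grothendieck group construction can be sketched in a few lines of Python for C=NC = \mathbb{N}, whose Grothendieck group is Z\mathbb{Z} (the names `diff_eq` and `diff_add` are illustrative): elements are formal differences (p,n)(p, n), identified when p+m=q+np + m = q + n.

```python
# An element of the Grothendieck group of a cancellative commutative
# monoid C is a formal difference (p, n), read as p - n.

def diff_eq(x, y):
    """(p, n) ~ (q, m) iff p + m == q + n; cancellativity makes ~ transitive."""
    (p, n), (q, m) = x, y
    return p + m == q + n

def diff_add(x, y):
    """Add componentwise: (p, n) + (q, m) = (p + q, n + m)."""
    return (x[0] + y[0], x[1] + y[1])

# (3, 1) and (5, 3) both represent 2 in ℤ, the Grothendieck group of ℕ:
assert diff_eq((3, 1), (5, 3))

# Every element now has an inverse: (3, 1) + (1, 3) ~ (0, 0).
assert diff_eq(diff_add((3, 1), (1, 3)), (0, 0))
```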

I'll try to prove these later. But given these, notice:

By claim 2, STS - T gives a map I'll abusively call

ST:H1(XY,C)C0(XY,A)S - T: H_1(X \cup Y, C) \to C_0(X \cap Y, A)

But since XYX \cap Y is a discrete graph, the quotient map

C0(XY,A)H0(XY,A)C_0(X \cap Y, A) \to H_0(X \cap Y, A)

is an isomorphism. So, composing STS - T with this isomorphism we get a map

:H1(XY,C)H0(XY,A)\partial : H_1(X \cup Y,C) \to H_0(X \cap Y,A)

with the same kernel as STS - T.

view this post on Zulip John Baez (Jun 16 2025 at 19:48):

So, we're done... if we can prove the two claims!

view this post on Zulip John Baez (Jun 16 2025 at 20:07):

I think I'll quit here for now: I hadn't realized how long the warmup would be, and my own calculations are very intuitive and informal so it will take even longer to formalize them. I'll try that later.

view this post on Zulip John Baez (Jun 16 2025 at 20:30):

Well, let me do a bit more stage-setting. We have

C1(XY,C)C1(X,C)C1(Y,C)C_1(X \cup Y, C) \cong C_1(X,C) \oplus C_1(Y,C)

so we get maps

πX=iXpX:C1(XY,C)C1(XY,C) \pi_X = i_X \circ p_X : C_1(X \cup Y, C) \to C_1(X \cup Y, C)

πY=iYpY:C1(XY,C)C1(XY,C) \pi_Y = i_Y \circ p_Y : C_1(X \cup Y, C) \to C_1(X \cup Y, C)

I hope it's clear what they do: πX\pi_X takes any linear combination of edges in XYX \cup Y and kills off all the edges in YY while leaving those in XX alone, and similarly πY\pi_Y kills off all the edges in XX. Given any

cC1(XY,C) c \in C_1(X \cup Y, C)

define

cX=πX(c) c_X = \pi_X (c)

cY=πY(c) c_Y = \pi_Y (c)

Claim 3. cC1(XY,C)c \in C_1(X \cup Y, C) is in the image of

ι:H1(X,C)H1(Y,C)H1(XY,C)\iota : H_1(X,C) \oplus H_1(Y,C) \to H_1(X \cup Y, C)

if and only if

scX=tcX  and  scY=tcY s c_X = t c_X \; \text{and} \; s c_Y = t c_Y

Temporarily assuming this, we get

Claim 4. cH1(XY,C)c \in H_1(X \cup Y, C) is in the image of

ι:H1(X,C)H1(Y,C)H1(XY,C)\iota : H_1(X,C) \oplus H_1(Y,C) \to H_1(X \cup Y, C)

if and only if

scX=tcX s c_X = t c_X

Proof of Claim 4 from Claim 3. Since cH1(XY,C)c \in H_1(X \cup Y, C) we have

sc=tc s c = t c

or in other words

scX+scY=tcX+tcY s c_X + s c_Y = t c_X + t c_Y

Now, if

scX=tcX s c_X = t c_X

then by cancellativity in H1(XY,C)H_1(X \cup Y, C), which follows from cancellativity in CC, we also get

scY=tcY s c_Y = t c_Y

so by Claim 3 we see cc is in the image of ι\iota.

Conversely, if cc is in the image of ι\iota, Claim 3 says

scX=tcX s c_X = t c_X

and

scY=tcY s c_Y = t c_Y \qquad \qquad \qquad \blacksquare

view this post on Zulip John Baez (Jun 16 2025 at 21:45):

I made a bunch of mistakes which I've tried to fix. I should probably work this out on paper and then write it up somewhere else. The idea is supposed to be simple, but it's not looking simple.

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 03:52):

Thanks very much!! I am now trying to understand your ideas.

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 05:22):

John Baez said:

the map

ι:H1(X,C)H1(Y,C)H1(XY,C) \iota : H_1(X,C) \oplus H_1(Y,C) \to H_1(X \cup Y, C)

induced by the map of graphs

X+YXY X + Y \to X \cup Y

(the canonical map from the coproduct to the pushout).

A small doubt here:

By definition, H1(XY,C)H_1(X \cup Y, C) is the equalizer of the maps

C1[s],C1[t]:C1(XY,C)C0(XY,C)C_1[s], C_1[t] \colon C_1(X \cup Y, C) \to C_0(X \cup Y, C).

Thus, by the universal property of the equalizer, there is a unique map

i:H1(X,C)H1(Y,C)H1(XY,C)i : H_1(X,C) \oplus H_1(Y,C) \to H_1(X \cup Y, C) such that the necessary diagram commutes.

Are you saying i=ιi = \iota ?

view this post on Zulip John Baez (Jun 17 2025 at 07:29):

I think so. In general, we expect H1(,C)H_1(-,C) should be a functor from graphs to commutative monoids, so any map of graphs f:GHf: G \to H induces a map on first homology H1(f,C):H1(G,C)H1(H,C)H_1(f,C) : H_1(G,C) \to H_1(H,C), and it sounds like you're describing how that functor works. It comes from the universal property of the equalizer, along with functoriality of C0(,C)C_0(-,C) and C1(,C)C_1(-,C).

view this post on Zulip John Baez (Jun 17 2025 at 07:43):

Btw, if you look at my outline, you'll see I wound up introducing the Grothendieck group, or group of differences, AA, of the cancellative commutative monoid CC, so that I could form the difference of maps STS - T. I didn't do that in the first draft of my posts; the first draft had a serious mistake, so you may need to reread the posts to see what I mean.

I don't really love this, and I think I see a way to avoid it, but as long as we need the hypothesis that CC is cancellative we might as well use AA and the difference STS - T.

I didn't actually explain the ideas in my argument, so let me try doing that now!

Claim 1 says that S(c)S(c) and T(c)T(c) are equal iff the 1-cycle cH1(XY,C)c \in H_1(X \cup Y, C) comes from an element of H1(X,C)H1(Y,C)H_1(X,C) \oplus H_1(Y,C). The basic idea is that 1-cycles coming from H1(X,C)H1(Y,C)H_1(X,C) \oplus H_1(Y,C) don't "cross over from XX to YY" - they're the sum of a 1-cycle that lives in XX and a 1-cycle that lives in YY. So, in this case, when we take the part of cc that's in XX by forming pX(c)p_X(c), we still get a 1-cycle.

We can then use cancellativity to show that pY(c)p_Y(c) is also a 1-cycle, since c=pX(c)+pY(c)c = p_X(c) + p_Y(c).

Claim 2 says that the maps

S,T:H1(XY,C)C0(X,A)S,T : H_1(X \cup Y, C) \to C_0(X,A)

produce 0-chains that differ only on the subgraph XYX \cap Y. That is, for any
cH1(XY,C)c \in H_1(X \cup Y, C), the 0-chains S(c)S(c) and T(c)T(c) are linear combinations of vertices of XX, but the coefficients can only be different for vertices in XYX \cap Y.

Maybe you can draw an example of how this works for a choice of cc that comes from H1(X,C)H1(Y,C)H_1(X,C) \oplus H_1(Y,C), and for one that doesn't. Or maybe I'll do it! There's no way I could invent these arguments without having a picture in my mind.

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 09:03):

John Baez said:

I think so. In general, we expect H1(,C)H_1(-,C) should be a functor from graphs to commutative monoids, so any map of graphs f:GHf: G \to H induces a map on first homology H1(f,C):H1(G,C)H1(H,C)H_1(f,C) : H_1(G,C) \to H_1(H,C), and it sounds like you're describing how that functor works. It comes from the universal property of the equalizer, along with functoriality of C0(,C)C_0(-,C) and C1(,C)C_1(-,C).

Thank you. Yes, I understand your argument!!

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 09:06):

Thanks very much for the explanation of your ideas. Overall, I understood your approach. I need a little more time to realise and understand the details of your ideas.

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 09:08):

John Baez said:

Maybe you can draw an example of how this works for a choice of cc that comes from H1(X,C)H1(Y,C)H_1(X,C) \oplus H_1(Y,C), and for one that doesn't. Or maybe I'll do it! There's no way I could invent these arguments without having a picture in my mind.

Thanks. Yes, I will draw some examples!

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 09:09):

John Baez said:

I didn't do that in the first draft of my posts; the first draft had a serious mistake, so you may need to reread the posts to see what I mean.

Thanks!! Yes, I will read/reread the whole thing in detail.

view this post on Zulip John Baez (Jun 17 2025 at 14:02):

Okay, I've finally proved something. As often the case with homological algebra, most of the work is just setting up the framework in the right way. Since we're dealing with commutative monoids rather than abelian groups we need to be a bit careful.

As before we have two graphs XX and YY that are subgraphs of the graph XYX \cup Y, whose intersection XYX \cap Y is a discrete graph - i.e. a graph with no edges. And as before we have a cancellative commutative monoid CC.

I figured out how to lessen the amount of boring notation a bit. Unfortunately, the process of lessening the amount of boring notation is itself boring. But it's worthwhile! Here goes:

We'll treat H1(X,C)C1(X,C),H1(Y,C)C1(Y,C)H_1(X,C) \subseteq C_1(X,C), H_1(Y,C) \subseteq C_1(Y,C) and H1(XY,C)H_1(X \cup Y,C) as submonoids of C1(XY,C)C_1(X \cup Y,C). Since every edge of XYX \cup Y is either an edge of XX or of YY, but not both, we have

C1(XY,C)=C1(X,C)C1(Y,C) C_1(X \cup Y,C) = C_1(X,C) \oplus C_1(Y,C)

where we write an equals sign because this is an 'internal direct sum': every element cC1(XY,C)c \in C_1(X \cup Y,C) can be uniquely written as a sum of elements cXC1(X,C)c_X \in C_1(X,C) and cYC1(Y,C)c_Y \in C_1(Y,C).

Let's define monoid homomorphisms

pX,pY:C1(XY,C)C1(XY,C)p_X, p_Y :C_1(X \cup Y, C) \to C_1(X \cup Y,C)

by

pX(c)=cX,pY(c)=cY. p_X(c) = c_X, \qquad p_Y(c) = c_Y .

It is also convenient to treat C0(X,C),C0(Y,C)C_0(X,C), C_0(Y,C) and C0(XY,C)C_0(X \cap Y,C) as submonoids of C0(XY,C)C_0(X \cup Y,C).

For all our graphs we have source and target maps ss and tt sending edges to vertices, and let's abbreviate their actions on CC-linear combinations of edges

C[s],C[t]:C1(XY,C)C0(XY,C) C[s], C[t] : C_1(X \cup Y, C) \to C_0(X \cup Y,C)

simply as ss and tt. We can also use these notations for the maps

C[s],C[t]:C1(X,C)C0(X,C) C[s], C[t] : C_1(X,C) \to C_0(X,C)

and

C[s],C[t]:C1(Y,C)C0(Y,C) C[s], C[t] : C_1(Y,C) \to C_0(Y,C)

without confusion, since these are restrictions of the maps ss and tt defined on all of C1(XY,C)C_1(X \cup Y,C).

view this post on Zulip John Baez (Jun 17 2025 at 14:13):

Okay, now for the actual work! "You can wake up now", as an incredibly rude friend of mine once announced to the audience before giving his talk at a conference.

The natural map from the disjoint union X+YX + Y to the union (really pushout) XYX \cup Y induces a map on homology

ι ⁣:H1(X,C)H1(Y,C)H1(XY,C). \iota \colon H_1(X , C) \oplus H_1(Y,C) \to H_1(X \cup Y, C) .

This map ι\iota sends any pair (cX,cY)(c_X, c_Y) to the sum cX+cYc_X + c_Y, but it is not an isomorphism since there may be 'emergent cycles'. The following Mayer--Vietoris-like lemma clarifies the situation:

Lemma. If CC is a cancellative commutative monoid and X,YX,Y are subgraphs of a graph XYX \cup Y whose intersection XYX \cap Y is a discrete graph, then the following is an equalizer diagram in the category of commutative monoids:

H1(X,C)H1(Y,C)ιH1(XY,C)C0(X,C) H_1(X,C) \oplus H_1(Y,C) \xrightarrow{\iota} H_1(X \cup Y, C) \stackrel{\to}{\to} C_0(X, C)

where the two arrows I'm unable to label are spX s p_X and tpX.t p_X.

Proof. First we show that spX=tpXsp_X = tp_X on the image of ι\iota. Any element in the image of ι\iota is of the form cX+cYc_X + c_Y with cXH1(X,C)c_X \in H_1(X,C) and cYH1(Y,C).c_Y \in H_1(Y,C). Since pXcX=cXp_X c_X = c_X and pXcY=0p_X c_Y = 0, we have

spX(cX+cY)=scX=tcX=tpX(cX+cY). s p_X(c_X + c_Y) = s c_X = tc_X = t p_X(c_X + c_Y).

Next we show that any cH1(XY,C)c \in H_1(X \cup Y, C) with spXc=tpXcs p_X c = t p_X c is in the image of ι\iota. This equation says that pXcH1(X,C)p_X c \in H_1(X, C) . Since cc is a 1-cycle we have sc=tcs c = t c and thus

spXc+spYc=tpXc+tpYc. s p_X c + s p_Y c = t p_X c + t p_Y c .

Since CC is cancellative so is C0(XY,C)C_0(X \cup Y,C), so we can subtract the equation spXc=tpXcs p_X c = t p_X c from the above equation and conclude

spYc=tpYc. s p_Y c = t p_Y c .

Thus pYcH1(Y,C)p_Y c \in H_1(Y,C), so c=pXc+pYc=ι(pXc,pYc)c = p_X c + p_Y c = \iota(p_X c, p_Y c), and cc is in the image of ι\iota. \qquad \qquad \blacksquare
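To make the Lemma concrete, here is a small Python check (my own sketch: the graph, edge names, and helpers like `zero_chain` are illustrative) over the cancellative monoid ℕ, with XX and YY two loops glued along the shared vertices aa and bb:

```python
# X has edges e1: a -> b, e2: b -> a; Y has f1: a -> b, f2: b -> a.
# X ∩ Y is the discrete graph on {a, b}. Coefficients live in ℕ.
src = {"e1": "a", "e2": "b", "f1": "a", "f2": "b"}
tgt = {"e1": "b", "e2": "a", "f1": "b", "f2": "a"}
X_edges = {"e1", "e2"}

def zero_chain(c, vert_of):
    """Push a 1-chain (dict edge -> coefficient) to a 0-chain along src or tgt."""
    out = {}
    for e, a in c.items():
        v = vert_of[e]
        out[v] = out.get(v, 0) + a
    return out

def is_cycle(c):
    return zero_chain(c, src) == zero_chain(c, tgt)

def p_X(c):
    """Kill the coefficients on edges of Y, keeping those on X."""
    return {e: a for e, a in c.items() if e in X_edges}

# A cycle coming from H1(X) ⊕ H1(Y): its X-part is again a cycle,
# so it satisfies the equalizer condition s p_X c = t p_X c.
c = {"e1": 1, "e2": 1, "f1": 1, "f2": 1}
assert is_cycle(c) and is_cycle(p_X(c))

# An 'emergent cycle' crossing from X to Y: a cycle on X ∪ Y whose
# X-part is not a cycle, hence not in the image of ι.
c2 = {"e1": 1, "f2": 1}
assert is_cycle(c2) and not is_cycle(p_X(c2))
```

Here `is_cycle(p_X(c))` is exactly the equalizer condition spXc=tpXcs p_X c = t p_X c from the Lemma.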

view this post on Zulip John Baez (Jun 17 2025 at 14:21):

I would really like to see a counterexample when CC is not cancellative - or even better, a proof that there's no counterexample! A counterexample would amount to this: a cycle cXC1(X,C)c_X \in C_1(X,C) and a chain cYC1(Y,C)c_Y \in C_1(Y,C) that is not a cycle, such that cX+cYC1(XY,C)c_X + c_Y \in C_1(X \cup Y, C) is a cycle.

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 14:32):

John Baez said:

Lemma. If CC is a cancellative commutative monoid and X,YX,Y are subgraphs of a graph XYX \cup Y whose intersection XYX \cap Y is a discrete graph, then the following is an equalizer diagram in the category of commutative monoids:

Thanks!! I find the proof very nice!! I am trying to construct a counterexample, or to prove that there is none!!

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 14:34):

John Baez said:

Okay, now for the actual work! "You can wake up now", as an incredibly rude friend of mine once announced to the audience before giving his talk at a conference.

Interesting!! :)

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 14:43):

John Baez said:

Claim 2. The range of

ST:H1(XY,C)C0(X,A)S - T : H_1(X \cup Y, C) \to C_0(X,A)

is contained in

C0(XY,A)C0(X,A)C_0(X \cap Y, A) \subseteq C_0(X,A)

where AA is the free abelian group on the commutative monoid CC, sometimes called its Grothendieck group, and the inclusion is the obvious one.

I find this claim very interesting!! I was trying to prove this. Although I am yet to prove the general statement, I find this statement true in all the examples I worked out today. I like the idea that whenever there is something like aebfca \xrightarrow{e}b \xrightarrow{f}c in the graph XX, we have S(b)T(b)S(b) - T(b) of the form [α,α](b)=[0,0][\alpha, \alpha](b)=[0,0] in the Grothendieck group. I feel that all the vertices which do not lie in the intersection XYX \cap Y need to have this property when we start with an element of H1(XY,C)H_1(X \cup Y, C).

When pXcp_Xc is itself a cycle in XX, it trivially becomes [0,0][0,0], and hence lies in C0(XY,A)C_0(X \cap Y, A).

view this post on Zulip John Baez (Jun 17 2025 at 14:52):

I'm trying to prove this Claim 2 now, and also state it in a way that avoids subtraction.

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 14:54):

Ok. Then I am trying to find a counterexample, or to prove that there is none. [Lemma]

view this post on Zulip John Baez (Jun 17 2025 at 15:04):

Good, that's the main mystery. Start with a simple non-cancellative monoid like {T,F}\{T,F\} with "xx or yy" as the monoid operation, or maybe {0,1,2}\{0,1,2\} with min(x+y,2)\min(x + y, 2).

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 15:04):

Thanks!! Yes, I am trying.

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 15:14):

John Baez said:

All 4 of these maps are compatible with the source and target maps, e.g. pXs=spX p_X \circ s = s \circ p_X . So, they induce maps on H1H_1 which we call by the same names:

Somehow, I am not able to see why the property pXs=spX p_X \circ s = s \circ p_X is true. However, I did not find its use in inducing the map to H1H_1. I am not sure whether you have used this property anywhere in your construction. I tried to construct a counterexample.
IMG_0350.PNG

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 15:24):

I think I might have interpreted it wrongly!! It would actually be pXs(a+b+c+b)=pX((a+b)+(c+b))=(a+b)p_{X} s (a+b +c+b)= p_{X}((a+b)+ (c+b))= (a+b) ?

view this post on Zulip John Baez (Jun 17 2025 at 15:35):

I will look at your counterexample in a little while. First:

I worried about spX=pXss \circ p_X = p_X \circ s. In the text you quoted I was thinking of both of these as maps

C1(X,C)C1(Y,C)C0(X,C) C_1(X ,C) \oplus C_1(Y, C) \to C_0(X, C)

Beware: I'm using the same notation pXp_X in a different way in the paper now.

Since spXs \circ p_X and pXsp_X \circ s are monoid homomorphisms it suffices to check they're equal on an element of C1(X,C)C1(Y,C)C_1(X,C) \oplus C_1(Y,C) that's either of the form

(ae,0) (a e, 0)

where ee is an edge of XX and aCa \in C, or of the form

(0,af) (0, a f)

where ff is an edge of YY. All elements are sums of these two kinds.

Consider the first case:

spX(ae,0)=s(ae)=as(e) s p_X (a e, 0) = s(a e) = a s(e)

On the other hand

pXs(ae,0)=pX(as(e),0)=as(e) p_X s (ae, 0) = p_X(a s(e), 0) = a s(e)

Next consider the second case:

spX(0,af)=0 s p_X (0, a f) = 0

On the other hand

pXs(0,af)=pX(0,af)=0 p_X s (0, a f) = p_X(0, a f) = 0

So I think this is fine. But I don't think I'm using it in the paper now.

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 15:38):

Thanks !! I got your argument. Somehow, I got confused. My counterexample does not make much sense!!

view this post on Zulip John Baez (Jun 17 2025 at 15:39):

I think in your counterexample you are not treating pXp_X as a map

pX:Ci(X)Ci(Y)Ci(X) p_X: C_i(X) \oplus C_i(Y) \to C_i(X)

as I was when I made that claim pXs=spXp_X \circ s = s \circ p_X. You seem to be treating it as a map

pX:Ci(XY)Ci(X) p_X : C_i(X \cup Y) \to C_i(X)

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 15:41):

Thanks!! Yes!!

view this post on Zulip John Baez (Jun 17 2025 at 15:41):

If you look at our paper now, or the place where I actually proved something, you'll see I defined pXp_X differently than I did earlier. So this is potentially confusing. But the new approach is more useful.

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 15:42):

Thanks!! I understand your point. I am checking the paper.

view this post on Zulip John Baez (Jun 17 2025 at 15:44):

I'm hoping you can find a non-cancellative commutative monoid and a 1-cycle on XX and a 1-chain on YY that's not a 1-cycle, whose sum is a 1-cycle on XYX \cup Y. The \infty-shaped graph you just drew may be useful here.

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 15:44):

Thanks!! I am trying!

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 17:04):

I think I have a counter example with the Boolean monoid (attached).
counterexample.PNG
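Since the attachment is an image, here is a hedged Python reconstruction of a counterexample of this shape (my own version over the Boolean monoid; it may differ in details from the one in the picture). XX and YY are two loops through the shared vertices aa and bb, forming an \infty-shaped graph:

```python
def b_or(x, y):
    """The Boolean monoid B = ({0, 1}, or); not cancellative since 1 + 1 = 1."""
    return x | y

# X has edges e1: a -> b, e2: b -> a; Y has f1: a -> b, f2: b -> a.
src = {"e1": "a", "e2": "b", "f1": "a", "f2": "b"}
tgt = {"e1": "b", "e2": "a", "f1": "b", "f2": "a"}
Y_edges = {"f1", "f2"}

def zero_chain(c, vert_of):
    """Push a 1-chain to a 0-chain along src or tgt, adding in B."""
    out = {}
    for e, a in c.items():
        v = vert_of[e]
        out[v] = b_or(out.get(v, 0), a)
    return out

def is_cycle(c):
    return zero_chain(c, src) == zero_chain(c, tgt)

# c is the cycle e1 + e2 of X plus the single edge f1 of Y.
c = {"e1": 1, "e2": 1, "f1": 1}
c_Y = {e: a for e, a in c.items() if e in Y_edges}

assert is_cycle(c)        # c is a 1-cycle on X ∪ Y (1 + 1 = 1 absorbs f1) ...
assert not is_cycle(c_Y)  # ... but its Y-part f1 is not a 1-cycle on Y
```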

view this post on Zulip John Baez (Jun 17 2025 at 17:44):

WOW, THAT'S EXCELLENT!

I mean, it's sad that certain results depend on taking a cancellative commutative monoid, but it's great that you've figured out what can go wrong otherwise.

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 17:51):

Thanks very much!!! Your suggestion of the Boolean monoid works here. I would not have tried the Boolean monoid if you had not suggested it!! So, thanks a lot!!

view this post on Zulip John Baez (Jun 17 2025 at 17:55):

I've made some more progress. First, recall that I proved this:

Lemma. If CC is a cancellative commutative monoid and X,YX,Y are subgraphs of a graph XYX \cup Y whose intersection XYX \cap Y is a discrete graph, then the following is an equalizer diagram in the category of commutative monoids:

H1(X,C)H1(Y,C)ιH1(XY,C)C0(X,C)H_1(X,C) \oplus H_1(Y,C) \xrightarrow{\iota} H_1(X \cup Y, C) \stackrel{\to}{\to} C_0(X, C)

where the two arrows I'm unable to label are spXs p_X and tpX.t p_X.

The annoying thing about this is that it uses C0(X,C)C_0(X,C) where the usual Mayer-Vietoris sequence would use H0(XY,C)H_0(X \cap Y, C). So that's what I will now fix.

view this post on Zulip John Baez (Jun 17 2025 at 18:00):

First, note that there is a monoid homomorphism

q ⁣:C0(X,C)C0(XY,C) q \colon C_0(X,C) \to C_0(X \cap Y,C)

given by

q(v{vertices of X}cvv)=v{vertices of XY}cvv. q \Big( \sum_{v \in \{\text{vertices of } X \}} c_v \, v \Big) = \sum_{v \in \{\text{vertices of } X \cap Y\}} c_v \, v .

That is, qq kills off all vertices of XX that are not also in YY.

Now, a priori H0(XY,C)H_0(X \cap Y,C) is a quotient of C0(XY,C)C_0(X \cap Y, C), but XYX \cap Y has no edges so the quotient map is an isomorphism C0(XY,C)H0(XY,C)C_0(X \cap Y,C) \stackrel{\sim}{\to} H_0(X \cap Y, C) . So let's use this isomorphism to identify these two monoids, and treat qq as a monoid homomorphism

q ⁣:C0(X,C)H0(XY,C). q \colon C_0(X,C) \to H_0(X \cap Y,C).

This allows us to state the Mayer--Vietoris lemma in a nicer way:

Theorem. If CC is a cancellative commutative monoid and X,YX,Y are subgraphs of a graph XYX \cup Y whose intersection XYX \cap Y is a discrete graph, then the following is an equalizer diagram in the category of commutative monoids:

H1(X,C)H1(Y,C)ιH1(XY,C)H0(XY,C)H_1(X,C) \oplus H_1(Y,C) \xrightarrow{\iota} H_1(X \cup Y, C) \stackrel{\to}{\to} H_0(X \cap Y, C)

where now the two arrows I'm unable to label here are qspXq s p_X and qtpXq t p_X.

Proof. By the Lemma it suffices to show that a cycle cH1(XY,C)c \in H_1(X \cup Y,C) has spXc=tpXcs p_X c = t p_X c if and only if qspXc=qtpXcq s p_X c = q t p_X c. One direction of the implication is obvious, so we suppose qspXc=qtpXcq s p_X c = q t p_X c and aim to show that spXc=tpXcs p_X c = t p_X c.

We let

c=e{edges of XY}cee. c = \sum_{e \in \{\text{edges of } X \cup Y\}} c_e e .

Since cc is a cycle we have

e{edges of XY}ces(e)=e{edges of XY}cet(e). \sum_{e \in \{\text{edges of } X \cup Y\}} c_e s(e) = \sum_{e \in \{\text{edges of } X \cup Y\}} c_e t(e) .

There are three mutually exclusive choices for a vertex in XYX \cup Y: it is either

1) in XX but not YY,
2) in XYX \cap Y or
3) YY but not in XX.

In case 1) we say the vertex is in XYX - Y and in case 3) we say the vertex is in YXY - X, merely by way of abbreviation. The above equation thus implies three equations:

e{edges of XY whose source is in XY}ces(e)= \displaystyle{ \sum_{e \in \{ \text{edges of } X \cup Y \text{ whose source is in } X - Y\} } c_e s(e) \qquad = }
e{edges of XY whose target is in XY}cet(e) \displaystyle{ \qquad \sum_{e \in \{\text{edges of } X \cup Y \text{ whose target is in } X - Y\}} c_e t(e) }

e{edges of XY whose source is in XY}ces(e)= \displaystyle{ \sum_{e \in \{ \text{edges of } X \cup Y \text{ whose source is in } X \cap Y \}} c_e s(e) \qquad = }
e{edges of XY whose target is in XY}cet(e) \displaystyle{ \qquad \sum_{e \in \{\text{edges of } X \cup Y \text{ whose target is in } X \cap Y\}} c_e t(e) }

e{edges of XY whose source is in YX}ces(e)= \displaystyle{ \sum_{e \in \{ \text{edges of } X \cup Y \text{ whose source is in } Y - X \}} c_e s(e) \qquad = }
e{edges of XY whose target is in YX}cet(e). \displaystyle{ \qquad \sum_{e \in \{\text{edges of } X \cup Y \text{ whose target is in } Y - X\}} c_e t(e) . }

Since an edge of XYX \cup Y whose source is in XYX - Y must be an edge of XX, the first equation is equivalent to this:

e{edges of X whose source is in XY}ces(e)= \displaystyle{ \sum_{e \in \{ \text{edges of } X \text{ whose source is in } X - Y\} } c_e s(e) \qquad = }
e{edges of X whose target is in XY}cet(e). \displaystyle{ \qquad \sum_{e \in \{\text{edges of } X \text{ whose target is in } X - Y\}} c_e t(e) .}

Since qspXc=qtpXcq s p_X c = q t p_X c we also know that

e{edges of X whose source is in XY}ces(e)= \displaystyle{ \sum_{e \in \{\text{edges of } X \text{ whose source is in } X \cap Y\}} c_e s(e) = }
e{edges of X whose target is in XY}cet(e). \displaystyle{ \sum_{e \in \{\text{edges of } X \text{ whose target is in } X \cap Y\}} c_e t(e) .}

Adding the last two equations we get

e{edges of X}ces(e)=e{edges of X}cet(e). \displaystyle{ \sum_{e \in \{\text{edges of } X\}} c_e s(e) =} \displaystyle{ \sum_{e \in \{\text{edges of } X\}} c_e t(e) .}

This says spXc=tpXcs p_X c = t p_X c, as desired! \qquad \qquad \blacksquare
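As a sanity check on the Theorem, here is a small Python sketch (the graph and helper names are mine, not from the paper) over ℕ. XX is a triangle axbaa \to x \to b \to a with a vertex xx outside YY, YY is a single edge bab \to a, and XYX \cap Y is the discrete graph on {a,b}\{a, b\}:

```python
# X has edges e1: a -> x, e2: x -> b, e3: b -> a; Y has f1: b -> a.
src = {"e1": "a", "e2": "x", "e3": "b", "f1": "b"}
tgt = {"e1": "x", "e2": "b", "e3": "a", "f1": "a"}
X_edges = {"e1", "e2", "e3"}
shared = {"a", "b"}              # vertices of the discrete graph X ∩ Y

def q_s_or_t(c, vert_of):
    """q s p_X (or q t p_X): restrict c to X, push along src/tgt, kill v ∉ X ∩ Y."""
    out = {}
    for e, a in c.items():
        if e in X_edges and vert_of[e] in shared:
            v = vert_of[e]
            out[v] = out.get(v, 0) + a
    return out

# A cycle lying entirely in X passes the criterion q s p_X c = q t p_X c
# (the X-only vertex x is killed by q, so it causes no spurious failure).
c_good = {"e1": 1, "e2": 1, "e3": 1}
assert q_s_or_t(c_good, src) == q_s_or_t(c_good, tgt)

# An emergent cycle through both X and Y fails the criterion, so it is
# correctly detected as lying outside the image of ι.
c_bad = {"e1": 1, "e2": 1, "f1": 1}
assert q_s_or_t(c_bad, src) != q_s_or_t(c_bad, tgt)
```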

view this post on Zulip John Baez (Jun 17 2025 at 18:05):

Please carefully check my logic here!

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 18:06):

Thanks!! I am trying to understand your ideas.

view this post on Zulip John Baez (Jun 17 2025 at 18:28):

I just fixed some typos in those 3 huge sums, which would have made those equations completely false.

view this post on Zulip John Baez (Jun 17 2025 at 18:31):

It's funny how my purely intuitive understanding of what's going on, based on mental pictures, became rather complicated looking when I finally wrote it up precisely (after a month of mistakes). I hope it's finally correct.

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 18:32):

As of now the proof looks great !! I am reading !!

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 19:24):

I just read the proof. I found it really great!!! It looks correct to me. I really love the idea of dividing the vertex set into mutually exclusive classes. I find the ending argument (XY)(XY)=X(X-Y) \cup (X \cap Y) = X really crisp and very beautiful!! I also very much like the idea of using the previous lemma to boil down the complicated statement of the theorem to "a simple statement".

I found only one point a little odd: if I understand the proof correctly, I could not find the use of the "2nd and 3rd equations", i.e. for the cases of XYX \cap Y and YXY - X. Although I agree that the case of XYX \cap Y is addressed when you applied qq.

view this post on Zulip John Baez (Jun 17 2025 at 19:40):

Great, I'm glad you like this argument. In the actual paper I left out the 2nd and 3rd equations, though I still mention that 3 equations exist.

view this post on Zulip John Baez (Jun 17 2025 at 19:41):

I could shorten the argument even more by not mentioning all 3 cases.

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 19:47):

John Baez said:

Great, I'm glad you like this argument. In the actual paper I left out the 2nd and 3rd equations, though I still mention that 3 equations exist.

Thanks!! I see! I will read the portion from the paper.

view this post on Zulip Adittya Chaudhuri (Jun 17 2025 at 19:54):

John Baez said:

I could shorten the argument even more by not mentioning all 3 cases.

I just read the relevant portion from the paper. I find it great!! (You mentioned there that 1 is the important one). I feel mentioning the 3 cases may be a better idea (from the perspective of readers) as it is clarifying the situation in a more vivid way. So, I think "the current version" in the paper is great!!

view this post on Zulip James Deikun (Jun 17 2025 at 20:10):

H1(X,C)H1(Y,C)ιH1(XY,C)qspXqtpXH0(XY,C)H_1(X,C) \oplus H_1(Y,C) \xrightarrow{\iota} H_1(X \cup Y, C) {{\displaystyle\xrightarrow{qsp_X}} \atop {\displaystyle \xrightarrow[qtp_X]{}}} H_0(X \cap Y, C)

view this post on Zulip John Baez (Jun 17 2025 at 20:12):

:mind-blown:

Thanks!

Now I just need to remember that this:

$${{\displaystyle\xrightarrow{f}} \atop {\displaystyle \xrightarrow[g]{}}} $$

gives this:

fg{{\displaystyle\xrightarrow{f}} \atop {\displaystyle \xrightarrow[g]{}}}

Luckily I can 'star' this message and save it for later.

view this post on Zulip John Baez (Jun 17 2025 at 21:02):

@Adittya Chaudhuri - I added your counterexample to the paper as Example 9.3. Maybe you can see if it gives a counterexample to Lemma 9.3 or Theorem 9.4.

view this post on Zulip John Baez (Jun 17 2025 at 21:04):

I want to finish a draft of this paper very soon and show it to the world, like tomorrow or the next day. After some feedback we can put it on the arXiv. Later, after some more feedback, we can submit it for publication.

I'm starting to work on the Conclusions. So far I've listed a couple of math problems I hope someone solves.

view this post on Zulip Adittya Chaudhuri (Jun 18 2025 at 04:05):

John Baez said:

Adittya Chaudhuri - I added your counterexample to the paper as Example 9.3. Maybe you can see if it gives a counterexample to Lemma 9.3 or Theorem 9.4.

Thank you. Yes, I will check "whether Example 9.3 gives a counterexample to Lemma 9.3 or Theorem 9.4".

view this post on Zulip Adittya Chaudhuri (Jun 18 2025 at 04:08):

John Baez said:

I want to finish a draft of this paper very soon and show it to the world, like tomorrow or the next day. After some feedback we can put it on the arXiv. Later, after some more feedback, we can submit it for publication.

That sounds great!!

view this post on Zulip Adittya Chaudhuri (Jun 18 2025 at 04:10):

John Baez said:

I'm starting to work on the Conclusions. So far I've listed a couple of math problems I hope someone solves.

Thanks!! I will read the conclusion portion.

view this post on Zulip Adittya Chaudhuri (Jun 18 2025 at 05:24):

John Baez said:

Adittya Chaudhuri - I added your counterexample to the paper as Example 9.3. Maybe you can see if it gives a counterexample to Lemma 9.3 or Theorem 9.4.

I think the counterexample in Example 9.3 should work for Theorem 9.4 also, because spXc=tpXcsp_Xc=tp_Xc implies qspXc=qtpXcqsp_{X}c=qtp_{X}c. The rest remains the same.

view this post on Zulip John Baez (Jun 18 2025 at 07:46):

I'm not awake enough to see why the counterexample in Example 9.3 should also give a counterexample to Theorem 9.4, but I'll try to figure that out.

view this post on Zulip Adittya Chaudhuri (Jun 18 2025 at 09:57):

In the attcached file I tried to explain my argument
CounterexampleforTheorem 9.4.PNG

Note that the only additional change in the construction is the "presence of the map qq". This minor change is working because XX and YY are intersecting at both aa and bb and there are only two vertices in the whole diagram namely again aa and bb.

view this post on Zulip Adittya Chaudhuri (Jun 18 2025 at 18:43):

Feedback loops are important in both regulatory networks in Systems Biology and causal loop diagrams in Systems dynamics. We explored how feedback loops can be understood as elements of our 1st homology monoids.

Now, a question came to my mind:
In regulatory networks, systems biologists find not only feedback loops but also feed-forward loops interesting. By their definition, we cannot think of them as elements of our 1st homology monoids. However, they are very important because of their roles in biology. So, my question:

What would be a suitable mathematical theory "in the spirit of our homology monoids" which can capture feedforward loops in directed graphs?

view this post on Zulip John Baez (Jun 18 2025 at 22:23):

This is a great question, but I think you should study it in your next paper. Right now I'm solely focused on finishing this paper.

Please don't forget this question.

view this post on Zulip John Baez (Jun 18 2025 at 22:27):

I expanded Lemma 9.2 to include a new version that doesn't require cancellativity, while keeping the old version. The new version says

H1(X,C)H1(Y,C)ιH1(XY,C)(spX,spY)(tpX,tpY)C0(X,C)C0(Y,C)H_1(X,C) \oplus H_1(Y,C) \stackrel{\iota}{\to} H_1(X \cup Y, C) {{\displaystyle\xrightarrow{(sp_X,sp_Y)}} \atop {\displaystyle \xrightarrow[(tp_X,tp_Y)]{}}} C_0(X,C) \oplus C_0(Y,C)

is an equalizer even if CC isn't cancellative.
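One can check on the Example-9.3-style cycle that the new pairwise condition does the right thing without cancellativity; here is a quick Python sketch (the setup and names are mine) over the Boolean monoid:

```python
def b_or(x, y):          # Boolean monoid: not cancellative, 1 + 1 = 1
    return x | y

# X is the loop e1: a -> b, e2: b -> a; Y is the single edge f1: a -> b.
src = {"e1": "a", "e2": "b", "f1": "a"}
tgt = {"e1": "b", "e2": "a", "f1": "b"}
X_edges, Y_edges = {"e1", "e2"}, {"f1"}

def zero_chain(c, vert_of, edges):
    """s (or t) applied to the part of c supported on the given edge set."""
    out = {}
    for e, a in c.items():
        if e in edges:
            v = vert_of[e]
            out[v] = b_or(out.get(v, 0), a)
    return out

c = {"e1": 1, "e2": 1, "f1": 1}   # a 1-cycle on X ∪ Y over B

# The X-part satisfies s p_X c = t p_X c, but the Y-part fails
# s p_Y c = t p_Y c, so c is excluded from the equalizer even though
# B is not cancellative, as the new version of the lemma requires.
assert zero_chain(c, src, X_edges) == zero_chain(c, tgt, X_edges)
assert zero_chain(c, src, Y_edges) != zero_chain(c, tgt, Y_edges)
```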

view this post on Zulip Adittya Chaudhuri (Jun 19 2025 at 06:14):

John Baez said:

I expanded Lemma 9.2 to include a new version that doesn't require cancellativity, while keeping the old version. The new version says

H1(X,C)H1(Y,C)ιH1(XY,C)(spX,spY)(tpX,tpY)C0(X,C)C0(Y,C)H_1(X,C) \oplus H_1(Y,C) \stackrel{\iota}{\to} H_1(X \cup Y, C) {{\displaystyle\xrightarrow{(sp_X,sp_Y)}} \atop {\displaystyle \xrightarrow[(tp_X,tp_Y)]{}}} C_0(X,C) \oplus C_0(Y,C)

is an equalizer even if CC isn't cancellative.

I read through the portion you suggested. New Lemma 9.2 looks interesting!!

If I understand correctly, the first equalizer (when the coefficients come from a commutative monoid CC that is not necessarily cancellative) says that if a cycle cc in the pushout graph (XY)(X \cup Y) is made up of a cycle cxc_x in the graph XX and a cycle cyc_y in the graph YY, then cc must lie in the image of ι\iota and vice versa, which is true both mathematically (as you showed) and also intuitively.

While the second equalizer (when CC is assumed to be cancellative) says that if a cycle cc in the pushout graph (XY)(X \cup Y) is made up of a cycle cxc_x in the graph XX and a chain cyc_y in the graph YY, then cyc_{y} must be a cycle in YY. (A similar statement holds if we consider the graph YY instead of the graph XX.) This is also intuitive, because I think when we draw an example of a directed graph, we secretly assume that coefficients are from N\mathbb{N}, which is a cancellative monoid.

However, as we see in Example 9.3, our intuition for the second equalizer is not correct: we saw a counterexample for the non-cancellative monoid B\mathbb{B}, which says that (when coefficients are from a non-cancellative monoid) there can exist a cycle cc in the pushout graph (XY)(X \cup Y) which is made up of a cycle cXc_{X} in the graph XX and a chain (which is not a cycle) in YY.

view this post on Zulip Adittya Chaudhuri (Jun 19 2025 at 06:30):

John Baez said:

This is a great question, but I think you should study it in your next paper. Right now I'm solely focused on finishing this paper.

Please don't forget this question.

Thanks very much!! I am very glad that you find my question interesting!! I will definitely work on this idea.

view this post on Zulip John Baez (Jun 19 2025 at 09:50):

Adittya Chaudhuri said:

If I understand correctly, the first equalizer (when the coefficients are coming from not necessarily a commutative monoid CC) says that if a cycle cc in the pushout graph (XY)(X \cup Y) is made up of a cycle cxc_x in the graph XX and a cycle cyc_y in the graph YY, then cc must lie in the image of ι\iota and vice versa, which is true both mathematically (as you showed) and also intuitively.

Right. This result is very simple so I skipped it at first. But then I decided it was bad to skip the only result I know that holds for non-cancellative commutative monoids. In a sense working with cancellative commutative monoids is "cheating" because they are almost like abelian groups: they embed in their Grothendieck group.

While the second equalizer (when CC is assumed to be cancellative) says that if a cycle cc in the pushout graph (XY)(X \cup Y) is made up of a cycle cxc_x in the graph XX and a chain cyc_y in the graph YY then, cyc_{y} must be a cycle in YY. (Similar statement holds if we consider the graph YY instead of graph XX). This is also intuitive, because I think when we draw an example of a directed graph, we secretly assume that coefficients are from N\mathbb{N}, which is a cancellative monoid.

However, as we see in Example 9.3, our intuition for the second equalizer is not correct: there is a counterexample for the non-cancellative monoid B\mathbb{B}, which shows that (when the coefficients are from a non-cancellative monoid) there can exist a cycle cc in the pushout graph (XY)(X \cup Y) which is made up of a cycle cXc_{X} in the graph XX and a chain (which is not a cycle) in YY.

Right. I'd say our intuition is correct if we're working with a cancellative monoid, but not otherwise.
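As a concrete aside on the Grothendieck group remark above: the sketch below (my own code, with the Boolean monoid B\mathbb{B} as before) checks that 1 and 0 get identified in the Grothendieck group of B\mathbb{B}, so the canonical map fails to be injective there, while no such collapse happens for N\mathbb{N}:

```python
# The Grothendieck group of a commutative monoid M consists of pairs
# (a, b), read as "a - b", with (a,b) ~ (c,d) iff a+d+k = c+b+k for
# some k in M.  The map a -> (a, 0) is injective iff M is cancellative.

def equivalent(p, q, add, elements):
    """Test (a,b) ~ (c,d), with k ranging over the given elements
    (for an infinite monoid like N this is only a finite window)."""
    a, b = p
    c, d = q
    return any(add(add(a, d), k) == add(add(c, b), k) for k in elements)

B = [0, 1]  # Boolean monoid, addition = max (not cancellative)
# 1 and 0 become identified in the Grothendieck group of B:
assert equivalent((1, 0), (0, 0), max, B)  # max(1, 1) == max(0, 1)

N = list(range(5))  # a finite window of N, addition = +
assert not equivalent((1, 0), (0, 0), lambda x, y: x + y, N)
```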

view this post on Zulip Adittya Chaudhuri (Jun 19 2025 at 10:07):

John Baez said:

Right. This result is very simple so I skipped it at first. But then I decided it was bad to skip the only result I know that holds for non-cancellative commutative monoids. In a sense working with cancellative commutative monoids is "cheating" because they are almost like abelian groups: they embed in their Grothendieck group.

Thanks!! Yes, I agree!!

view this post on Zulip Adittya Chaudhuri (Jun 19 2025 at 10:10):

John Baez said:

Right. I'd say our intuition is correct if we're working with a cancellative monoid, but not otherwise.

Yes, I agree. Maybe "when we draw something" our brain has a default coefficient system, which is a kind of cancellative commutative monoid. I know what I just said may not make sense from the point of view of biology.

view this post on Zulip John Baez (Jun 20 2025 at 09:00):

Our paper is done - we look forward to comments and corrections!

Abstract. In fields ranging from business to systems biology, directed graphs with edges labeled by signs are used to model systems in a simple way: the nodes represent entities of some sort, and an edge indicates that one entity directly affects another either positively or negatively. Multiplying the signs along a directed path of edges lets us determine indirect positive or negative effects, and if the path is a loop we call this a positive or negative feedback loop. Here we generalize this to graphs with edges labeled by a monoid, whose elements represent 'polarities' possibly more general than simply 'positive' or 'negative'. We study three notions of morphism between graphs with labeled edges, each with its own distinctive application: to refine a simple graph into a complicated one, to transform a complicated graph into a simple one, and to find recurring patterns called 'motifs'. We construct three corresponding symmetric monoidal double categories of 'open' graphs. We study feedback loops using a generalization of the homology of a graph to homology with coefficients in a commutative monoid. In particular, we describe the emergence of new feedback loops when we compose open graphs using a variant of the Mayer-Vietoris exact sequence for homology with coefficients in a commutative monoid.
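For readers skimming the abstract, the first idea — multiplying polarities along a directed path — is easy to sketch in a few lines of code. The example graph here is hypothetical, not from the paper:

```python
# Multiply edge polarities along a directed path to get the indirect
# effect; a loop with product +1 is a positive feedback loop.
from functools import reduce
from operator import mul

# Hypothetical signed graph: edge (u, v) labeled +1 or -1.
edges = {('stress', 'cortisol'): +1,
         ('cortisol', 'sleep'): -1,
         ('sleep', 'stress'): -1}

def path_polarity(path):
    """Product of the labels along consecutive edges of the path."""
    labels = [edges[(u, v)] for u, v in zip(path, path[1:])]
    return reduce(mul, labels, 1)

loop = ['stress', 'cortisol', 'sleep', 'stress']
print(path_polarity(loop))  # (+1)(-1)(-1) = +1: a positive feedback loop
```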

view this post on Zulip David Corfield (Jun 23 2025 at 14:02):

I've started the article. A few typos:

nonnegataive (p. 9); Note however that it also give (p. 9); affects the vertex νn\nu_n (p. 10, should be νm\nu_m)

Also on p. 11, that γ\gamma should be ff, or the other way.

positve (p. 12); seein (p. 27)

Might there be something approximating the universal coefficient theorem in the case of commutative monoids? I guess what you're doing through pp. 33-35 comes closest.

view this post on Zulip David Corfield (Jun 23 2025 at 14:11):

Above, John said about System Archetypes theory,

It seems to be telling us to avoid "wishy-washy" behavior where on short time scales we have a feedback loop of one polarity, and with a delay we have a feedback loop of the opposite polarity.

I'm still dubious as to whether this structure is intrinsically bad, but leaving that aside, using the resources of the new paper, how do we pick out such wishy-washiness?

Presumably, we are working with some monoid of delays, say {+1,1}×{T,F}\{+1, -1\} \times \{T, F\}. Then find all feedback cycles through each point using homology calculations, and then see if any pair of cycles has the appropriate conflicting labels, e.g., (+1,T)(+1, T) and (1,F)(-1, F)?
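One way to prototype this (my own rough sketch; the example graph, the choice of OR for composing delay flags, and the "conflict" test are all assumptions, not anything from the paper):

```python
# Labels live in the product monoid {+1,-1} x {T,F}: multiply signs,
# OR the delay flags.  "Wishy-washy" here means: some vertex lies on
# two feedback cycles with opposite signs and opposite delay flags.
from itertools import product

edges = {  # hypothetical labeled graph: (u, v) -> (sign, delayed)
    ('a', 'b'): (+1, False),
    ('b', 'a'): (+1, True),
    ('a', 'a'): (-1, False),
}

def compose(l1, l2):
    return (l1[0] * l2[0], l1[1] or l2[1])

def simple_cycles_through(v):
    """Labels of the simple directed cycles passing through v (DFS)."""
    results = []
    def dfs(u, label, seen):
        for (s, t), lab in edges.items():
            if s != u:
                continue
            if t == v:
                results.append(compose(label, lab))
            elif t not in seen:
                dfs(t, compose(label, lab), seen | {t})
    dfs(v, (+1, False), {v})
    return results

for v in ['a', 'b']:
    labels = simple_cycles_through(v)
    conflict = any(l1[0] != l2[0] and l1[1] != l2[1]
                   for l1, l2 in product(labels, labels))
    print(v, labels, 'conflict' if conflict else 'ok')
```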

view this post on Zulip Adittya Chaudhuri (Jun 24 2025 at 16:56):

David Corfield said:

I've started the article. A few typos:

nonnegataive (p. 9); Note however that it also give (p. 9); affects the vertex νn\nu_n (p. 10, should be νm\nu_m)

Also on p. 11, that γ\gamma should be ff, or the other way.

positve (p. 12); seein (p. 27)

Thanks very much!! Yes we need to fix these typos.

view this post on Zulip Adittya Chaudhuri (Jun 24 2025 at 17:02):

David Corfield said:

Then find all feedback cycles through each point using homology calculations, and then see if any pair of cycles has the appropriate conflicting labels,

Thanks!! I find this perspective super interesting!! I am thinking about it !!

view this post on Zulip Adittya Chaudhuri (Jun 24 2025 at 17:08):

David Corfield said:

I'm still dubious as to whether this structure is intrinsically bad

I also feel the same. That is why I want to broaden the question a bit:

What kind of patterns (other than the ones we already know) in a causal loop diagram should be worth studying? In this regard, I find your view of considering feedback cycles locally, i.e. "Then find all feedback cycles through each point using homology calculations", very relevant. I will think about this perspective.

view this post on Zulip David Corfield (Jun 25 2025 at 09:04):

Adittya Chaudhuri said:

What kind of patterns (other than the ones we already know) in a causal loop diagram should be worth studying?

This is an important question. It seems that the systems archetypes approach is to take instances of known dysfunctional organisations and then to extract a common set of problematic patterns. I'm dubious that taking the latter in some uninterpreted, structural way and locating them inside the causal loop diagram representation of some other kind of organisation will guarantee locating dysfunction in the latter. Still worth exploring though.

I'm reminded of debates as to the value of different kinds of evidence in medical decision-making. On the face of it, the Evidence-Based Medicine (EBM) movement was looking to replace anecdote and common experience of the sense of how things go in patients' bodies, often informed by a story-like account of biological mechanisms, by pristine gold standard meta-analyses of well-run double-blinded randomized controlled trials. E.g., here

image.png

Of course, all kinds of mechanistic understanding seep into any reasonable approach to working with such trials, but what I want to draw attention to is one of the arguments given by EBM-ers for distrust of mechanistic knowledge. The concern is that almost always one will have only a partial account of the total mechanism, and that this often brings about misleading expectations of the results of interventions. Hence the need for empirical tests on the whole system.

It's possible then that something looking locally dysfunctional may play a useful role in the larger system.

view this post on Zulip Adittya Chaudhuri (Jun 25 2025 at 18:36):

David Corfield said:

This is an important question. It seems that the systems archetypes approach is to take instances of known dysfunctional organisations and then to extract a common set of problematic patterns. I'm dubious that taking the latter in some uninterpreted, structural way and locating them inside the causal loop diagram representation of some other kind of organisation will guarantee locating dysfunction in the latter. Still worth exploring though.

Thank you!! Yes, I agree to your point of view!!

view this post on Zulip Adittya Chaudhuri (Jun 25 2025 at 18:45):

David Corfield said:

but what I want to draw attention to is one of the arguments given by EBM-ers for distrust of mechanistic knowledge. The concern is that almost always one will have only a partial account of the total mechanism, and that this often brings about misleading expectations of the results of interventions. Hence the need for empirical tests on the whole system.

Thanks!! Yes!! I got your point!!

view this post on Zulip Adittya Chaudhuri (Jun 25 2025 at 19:18):

David Corfield said:

It's possible then that something looking locally dysfunctional may play a useful role in the larger system.

I find this perspective interesting not only from the point of view of the intuition I developed after seeing those 7 archetypes from that book, but also from the point of view of mathematics. I am trying to write down my thoughts. I may be misunderstanding many things, so please correct me if I am making any mistakes!!

I will be using similar notation style as in our paper.

For a commutative monoid CC, let (G, ⁣:EC)(G, \ell \colon E \to C) be a CC-labeled graph. Now, I am defining the following:

Now, I want to see your idea

"It's possible then that something looking locally dysfunctional may play a useful role in the larger system."

in terms of l~v ⁣:Aut(v)C\tilde{l}|_{v} \colon \rm{Aut}(v) \to C for each vv.

Next question: if we know l~v ⁣:Aut(v)C\tilde{l}|_{v} \colon \rm{Aut}(v) \to C for each vv, what can we say about ~ ⁣:H1(G,N)C\tilde{\ell} \colon H_1(G, \mathbb{N}) \to C? Is it fully determined, or is it determined up to something?

view this post on Zulip Adittya Chaudhuri (Jun 25 2025 at 20:05):

Now, note that the homology classes of simple loops passing through vv are contained in Aut(v){\rm{Aut}}(v). Thus, by our already established bijection between homology classes of simple loops and minimal elements of H1(G,N)H_1(G, \mathbb{N}), we have a map lˉv ⁣:Minv(H1(G,N))C\bar{l}|_{v} \colon \rm{Min}_{v}(H_1(G, \mathbb{N})) \to C defined by restricting ~v\tilde{\ell}|_{v} to the set of minimal elements of H1(G,N)H_1(G, \mathbb{N}) corresponding to the homology classes of simple loops passing through vv. Since the minimal elements generate H1(G,N)H_1(G, \mathbb{N}), we can recover the whole ~ ⁣:H1(G,N)C\tilde{\ell} \colon H_1(G, \mathbb{N}) \to C.

Hence, if I am not making any mistakes, then, yes, @David Corfield I think your claim

"It's possible then that something looking locally dysfunctional may play a useful role in the larger system"

makes sense (from the point of view of mathematics too), and thus it will probably be sufficient for us to focus our attention locally to understand the global behaviour.

view this post on Zulip Adittya Chaudhuri (Jun 25 2025 at 20:18):

So, from the above arguments it seems (if I am not making any mistakes) that I have an answer to this question:

Next question: if we know l~v ⁣:Aut(v)C\tilde{l}|_{v} \colon \rm{Aut}(v) \to C for each vv, what can we say about ~ ⁣:H1(G,N)C\tilde{\ell} \colon H_1(G, \mathbb{N}) \to C? Is it fully determined, or is it determined up to something?

Ans: Yes, it is fully determined locally.

view this post on Zulip David Corfield (Jun 26 2025 at 06:38):

Interesting.

Adittya Chaudhuri said:

Ans: Yes, it is fully determined locally.

So maybe a better way to frame the EBM concern I mentioned

David Corfield said:

The concern is that almost always one will have only a partial account of the total mechanism, and that this often brings about misleading expectations of the results of interventions. Hence the need for empirical tests on the whole system

is that we often don't have full local information at a vertex. If we only have a sub-causal loop diagram of the "true" diagram, we can be misled.

But then in this form, it hardly seems surprising.

view this post on Zulip David Corfield (Jun 26 2025 at 06:46):

I guess then it could be the case that at a vertex vv we have all of its connecting edges from the "true" diagram but not all of its loops. We could lack knowledge of a distant edge that partakes in a crucial loop at vv.

view this post on Zulip Adittya Chaudhuri (Jun 26 2025 at 13:23):

David Corfield said:

Interesting.

Thank you!!

view this post on Zulip Adittya Chaudhuri (Jun 26 2025 at 13:26):

David Corfield said:

I guess then it could be the case that at a vertex vv we have all of its connecting edges from the "true" diagram but not all of its loops. We could lack knowledge of a distant edge that partakes in a crucial loop at vv.

Thanks!! This is interesting!! I tried to imagine what you said. Please see the attached diagram.
missinglink.png

view this post on Zulip David Corfield (Jun 26 2025 at 13:55):

Right. And the missing link could be a long way away from vv. I wonder whether people have looked into the stability of networks under rewiring at the periphery.

I'm reminded of work I read years ago done at the Santa Fe Institute by Stuart Kauffman. There was a rationale for why gene networks shouldn't be too interconnected.

view this post on Zulip Adittya Chaudhuri (Jun 26 2025 at 14:03):

David Corfield said:

Right. And the missing link could be a long way away from vv. I wonder whether people have looked into the stability of networks under rewiring at the periphery.

I'm reminded of work I read years ago done at the Santa Fé Institute by Stuart Kauffman. There was a rationale for why gene networks shouldn't be too inter-connected.

Thanks!! This is very interesting!! Do you have a reference for the Kauffman work you mentioned?

view this post on Zulip Adittya Chaudhuri (Jun 26 2025 at 14:05):

David Corfield said:

And the missing link could be a long way away from vv. I wonder whether people have looked into the stability of networks under rewiring at the periphery.

I find this situation and the question very important and natural.

view this post on Zulip Adittya Chaudhuri (Jun 26 2025 at 14:17):

From your point of view, I am now imagining the situation as a mathematical question:

For a commutative monoid CC, let (G, ⁣:EC)(G, \ell \colon E \to C) and (G, ⁣:E{e}C)(G', \ell' \colon E - \lbrace e \rbrace \to C) be two CC-labeled graphs such that GG and GG' differ only by the presence or absence of an edge ee.

Question: how does ~ ⁣:H1(G,N)C\tilde{ \ell} \colon H_1(G, \mathbb{N}) \to C differ from ~ ⁣:H1(G,N)C \tilde {\ell'} \colon H_1(G', \mathbb{N}) \to C?

I know that the above question is too general.

So, I want to know: "what type of edge can we remove from GG so that ~ ⁣:H1(G,N)C \tilde{\ell} \colon H_1(G, \mathbb{N}) \to C and ~ ⁣:H1(G,N)C \tilde{\ell}' \colon H_1(G', \mathbb{N}) \to C are in some way similar"?

view this post on Zulip David Corfield (Jun 26 2025 at 14:25):

Adittya Chaudhuri said:

Do you have a reference for the Kauffman work you mentioned?

It was from a seriously long time ago, maybe 30 years, when the "edge of chaos" was in vogue. Let's see. In this article from 2005 they're considering Kauffman networks, and on the first page they mention the phase between frozen and chaotic phases. It requires a certain degree of connectivity.

But there must have been so much more done since then.

view this post on Zulip Adittya Chaudhuri (Jun 26 2025 at 15:54):

Thanks a lot!! I will read the paper.

view this post on Zulip Adittya Chaudhuri (Jun 26 2025 at 15:55):

David Corfield said:

But there must have been so much more done since then.

I see. Thanks!! I will search for the recent works in this direction.

view this post on Zulip David Corfield (Jun 27 2025 at 06:54):

Adittya Chaudhuri said:

So, I want to know: "what type of edge can we remove from GG so that ~ ⁣:H1(G,N)C\tilde{\ell} \colon H_1(G, \mathbb{N}) \to C and ~ ⁣:H1(G,N)C\tilde{\ell}' \colon H_1(G', \mathbb{N}) \to C are in some way similar"?

Presumably this rests on how many minimal loops of GG contain ee.
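That quantity is easy to compute in small examples (my own sketch; the example graph is made up): the simple directed cycles containing e = (u, v) correspond to simple paths from v back to u, each closed up by e itself.

```python
def loops_through_edge(edges, e):
    """Count the simple directed cycles containing edge e = (u, v):
    each is a simple path v -> ... -> u with no repeated vertex,
    closed by e itself."""
    u, v = e
    count = 0
    def dfs(w, seen):
        nonlocal count
        if w == u:          # path has reached u: the cycle closes via e
            count += 1
            return
        for (s, t) in edges:
            if s == w and t not in seen:
                dfs(t, seen | {t})
    dfs(v, {v})
    return count

# Hypothetical graph with two cycles through ('a','b'):
edges = [('a', 'b'), ('b', 'c'), ('c', 'a'), ('b', 'a')]
print(loops_through_edge(edges, ('a', 'b')))  # a->b->a and a->b->c->a: 2
```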