@Adittya Chaudhuri and I are writing a paper on "Graphs with polarities", and I thought it would be nice to have our conversations about this paper here. I've always liked doing research in public forums, like blogs. It's a good way to make sure we're not missing big ideas or making dumb mistakes, since people can comment. It's a good way to publicize the research. And for others, it's entertaining to watch - and helpful for students who are just starting research and haven't seen how it's done.
Anyone wanting to get the basic idea of what we'll be discussing can check out my blog series
(This links to part 5, but you can easily click back to earlier articles.)
But I'll just dive in and start talking to @Adittya Chaudhuri.
Any graph freely generates a category .
In the paper we talk about graphs whose edges are labeled by elements of a monoid . We show that any such -labeled graph freely generates a category over the one-object category .
I want to make some expository changes but also I have a more substantial idea.
In terms of exposition, we currently define an -labeled graph to be a graph with a functor . This makes the concept seem too complicated. It's just a graph with edges labeled by elements of . We'd defined graphs with edges labeled by elements of a set in Definition 2.1, so we can just apply that here.
It should be a little proposition, not a definition, that when is a monoid an -labeled graph gives a functor (and the -labeled graph can be recovered from this).
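For concreteness, the little proposition might read as follows (a sketch in notation I'm choosing here - the letters $M$, $\ell$, $\mathsf{B}M$, $\mathsf{F}G$ are my assumptions, not necessarily the paper's): an $M$-labeled graph is a graph $G$ with a function $\ell : E(G) \to M$, and it determines a functor
$$\hat{\ell} : \mathsf{F}G \to \mathsf{B}M$$
from the free category on $G$ to the one-object category on $M$, sending an edge path to the product in $M$ of its edge labels; conversely $\ell$ is recovered by evaluating $\hat{\ell}$ on single edges.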
I'll make this change - I won't list all the expository changes I want to make; it's quicker just to make them - but I want to emphasize my philosophy: definitions should be simple and easy to understand whenever possible. They should not be impressive, especially in applied category theory where non-category-theorists will be trying to understand them.
Just to clarify, did you refer to Definition 2.1 as in the current overleaf file?
(As Defn 2.1 is the definition of -labeled graph).
Yes: that's where we define graphs with edges labeled by elements of a set, and labeling edges by elements of a monoid works the exact same way.
But here's the more substantial idea. We're calling elements of 'polarities' and using them to describe different ways in which one thing affects another. For example if is the group , an edge labeled by + means one vertex positively affects another, while an edge labeled by - means one vertex negatively affects another.
But we've seen that the absence of an edge is also a kind of polarity, meaning 'no effect'.
This has been confusing me for a long time, but I think I've figured it out.
Are you talking about the multiplicative monoid of ?
Let me take my time and explain things... it will take a while but it'll become clear.
Sure.
So, we want a formalism where the absence of an edge is on the same footing as a labeled edge. And I realized such a formalism already exists! If is a monoidal category people talk about categories enriched in , or -categories for short, which have a set of objects and for each pair of objects an object , and composition and identity-assigning maps obeying the usual properties.
But some people (who? I've seen it somewhere!) also define -graphs, which are like -categories without the composition and identity-assigning maps. A -graph is just a set of vertices and for each pair of vertices an object . That's all.
People do this because there's something like a monad on the category of -graphs whose algebras are -categories: the 'free enriched category on an enriched graph' monad.
But if we take to be a mere monoid, thought of as a monoidal category with only identity morphisms, then a -graph is like a graph with polarities!
If we take to be the boolean monoid $\{T, F\}$ then a -graph is just a graph, where means the presence of an edge and means the absence of an edge.
But if we take to be the multiplicative monoid of , then a -graph is the same as a -labeled graph where - just as you said.
I'm now thinking that -graphs are fundamentally more important than -labeled graphs, in part because they connect so nicely to enriched category theory.
Any monoid gives a monoid with an extra element that's 'absorptive': anything times is - and this lets us turn -labeled graphs into -graphs.
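Here is a tiny illustration of that conversion in code (my own sketch, not anything from the paper; it assumes at most one edge between each ordered pair of vertices, so it sidesteps the multiple-edge issue that comes up below):
```python
# Sketch: turn an edge labeling valued in a monoid into a 'V-graph' style
# hom-assignment, using an adjoined absorptive element for missing edges.
# Here the monoid is {+1, -1} under multiplication and the absorptive
# element is 0, giving V = {+1, -1, 0}.

def v_graph(vertices, labeled_edges, zero=0):
    """labeled_edges: dict (source, target) -> monoid element.
    Assumes at most one edge between each ordered pair of vertices."""
    return {(a, b): labeled_edges.get((a, b), zero)
            for a in vertices for b in vertices}

hom = v_graph(["X", "Y"], {("X", "Y"): +1, ("Y", "X"): -1})
print(hom[("X", "Y")], hom[("X", "X")])   # 1 0  (0 = 'no effect')
```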
However, there are also monoids that don't have an absorptive element, so the theory of -graphs is strictly more general than the theory of -labeled graphs. I have not thought of any examples of how this could be useful in the study of polarities.
Right now, my reason for getting interested in -graphs is not so much the greater generality, but the fact that it wouldn't be good to be studying a special case of enriched category theory without even noticing it!
Okay, now I'm fired up... I will go to the gym and later today I will start working on the exposition of the paper. (The other reason I like doing research in public is that it's more exciting and it makes me work faster!)
Thanks!! I am thinking about the perspective of -graphs.
Great! Maybe you can find some aspects of our paper that would be clearer from this perspective. Notice that the 0 in the monoid is naturally the additive identity when we give a ring structure. I.e., adding a non-edge to a labeled edge does nothing to the labeled edge.
Do people think about categories enriched in a ring or rig?
My question is "why do we need the full rig structure in the hom Set?" Is'nt the commutative monoid structure not sufficient?
I meant categories enriched in commutative monoids?
I think we're getting a bit mixed up here, since doesn't need to be a commutative monoid, and we should think of its operation as multiplication, and an -labeled graph gives a -graph where is a not necessarily commutative monoid.
The rig structure plays no role in any of this.
It comes later, and I probably shouldn't have mentioned it yet.
Yes, now I understand your point.
Ignore me if this is distracting, but I thought of you two sometime in the last couple of weeks when I was doing some -enriched calculation and I found I wanted to understand maps for varying and an enriched category . I found myself drawing such maps as arrows from to labeled by a little . Also, you can compose such a “-arrow” with a “-arrow” to get a -arrow! You just compose with in .
So, it feels like there’s some nice functor from -categories into categories labeled by the monoid of objects of , up to truth since the objects of aren’t quite a monoid in general. Just an idle thought!
The concept of a -graded category captures this intuition precisely (every -enriched category induces a -graded category where the -graded morphisms from to are given by the morphisms ). Rory Lucyshyn-Wright's recent paper V-graded categories and V-W-bigraded categories: Functor categories and bifunctors over non-symmetric bases is a nice introduction to these ideas.
Ah, lovely, I'm glad it's a known thing!
(deleted)
One problem is that our is of the form , for a monoid . So the only arrows are identity morphisms.
An -labeled graph is , which means for every there can be multiple labeled edges between and .
How does the notion of -graph capture "multiple labelled edges" between two vertices?
I feel like for a monoid , there should be a functor from the category of -labeled graphs to the category of -graphs? (where is the graph with one vertex and an edge for each in ). Of course, to make such an association, we may need additional algebraic structure on (like a rig, etc.).
John Baez said:
Adittya Chaudhuri and I are writing a paper on "Graphs with polarities", and I thought it would be nice to have our conversations about this paper here. I've always liked doing research in public forums, like blogs. It's a good way to make sure we're not missing big ideas or making dumb mistakes, since people can comment. It's a good way to publicize the research. And for others, it's entertaining to watch - and helpful for students who are just starting research and haven't seen how it's done.
Nice idea! A very naive question: isn't this (labeling edges of a graph by elements in a monoid) motivated - among other things - by path integrals in discrete integral calculus in dimensions? The label associated to an edge from to could be the potential difference . When integrating, you add the weights associated to the edges which constitute your path. But you may have contexts where it is sensible to multiply those weights... You also have cases where you deal with (directed) graphs whose edges are labelled by linear operators, which you can compose.
For me, a motivation for multiplication of weights comes from regulatory networks in biological systems (e.g. the paper https://arxiv.org/pdf/2301.01445, where the case = is considered), and they are called signed graphs (signed categories): say stimulates and inhibits ; then, composing, inhibits .
However, I think if we consider the multiplicative monoid of the tropical semiring https://ncatlab.org/nlab/show/tropical+semiring, then a product is actually an addition.
According to the definition of -graph that @John Baez gave above, it seems it is like a monoid-valued matrix , where is the set of vertices of the graph - something like the "adjacency matrix" of such a -graph? I feel like the concept of -matrices in @Jade Master's thesis https://arxiv.org/pdf/2105.12905 is the quantale analogue of the -graphs that you defined.
But is this notion capturing "multiple labeled edges" as in our definition of -labeled graphs, i.e. objects of the slice category ?
However, I completely agree that the absence of edges is nicely captured by -graphs unlike -labeled graphs.
Kevin Carlson said:
So, it feels like there’s some nice functor from -categories into categories labeled by the monoid of objects of , up to truth since the objects of aren’t quite a monoid in general. Just an idle thought!
Nice! And of course every monoidal category is equivalent to a strict one so you can have a monoid of objects if you want... at the cost of making it obscenely large. (This is probably not a good idea, but you can do it.)
So far in our work on 'polarities' @Adittya Chaudhuri and I are only thinking about the case where actually is a monoid, i.e. a discrete monoidal category. We haven't thought about 'morphisms between polarities'. But it might be a natural thing to do - we could dip our toe in this water by letting be a monoidal poset.
Jorge Soto-Andrade said:
A very naive question: isn't this (labeling edges of a graph by elements in a monoid) motivated - among other things - by path integrals in discrete integral calculus in dimensions? The label associated to an edge from to could be the potential difference . When integrating, you add the weights associated to the edges which constitute your path. But you may have contexts where it is sensible to multiply those weights... You also have cases where you deal with (directed) graphs whose edges are labelled by linear operators, which you can compose.
We are motivated by work that people are already doing on 'regulatory networks' in biology (as @Adittya Chaudhuri explained) and also 'causal loop diagrams' in the modeling discipline known as System Dynamics. I explained the latter here:
and here is an example:
Causal loop diagram for factory productivity
In all that work, one only considers a monoid of labels. But if your monoid is a rig, you can try to sum labels over all paths from one vertex to another, where the labels for paths are obtained by multiplying labels of edges making up the path... and the summation is where we use a rig. So then we get a discrete path integral, similar to what you're describing.
There are difficulties with this idea as soon as our graph has directed cycles, because then the sum can become infinite. And the graphs usually do have directed cycles, as in the picture above. That's why they're called 'causal loop diagrams': people are interested in studying the 'feedback loops'.
There are ways around these difficulties... but anyway I should be working on the paper!
This sounds cool and I’m interested to see if there’s a connection to the matrix exponential. When the edge weights are real or complex, you can look at the matrix exponential to summarize all paths between a pair of nodes, discounting by length.
There is definitely a connection to the matrix exponential, and yes, the 'discounting by length' implicit in the formula for the exponential is exactly what solves the divergence issue that occurs in a naive 'sum over paths'. I've written a bit about this in a note to myself, and I should figure out a good place to put that writing. I'm not sure this paper is the place! We'll see. Anyway, thanks, you've revived a certain interesting line of thought.
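For anyone who wants to see the 'discounting by length' at work, here is a small numerical sketch (my own toy example, not from the paper): the $(i,j)$ entry of $\exp(A) = \sum_k A^k/k!$ adds up the products of edge labels over all length-$k$ paths, weighted by $1/k!$, so the sum converges even in the presence of directed cycles.
```python
# Toy example: a 3-cycle with one negative edge, so the graph has a
# negative feedback loop.  A[i, j] is the label of the edge j -> i.
import numpy as np
from scipy.linalg import expm

A = np.zeros((3, 3))
A[1, 0] = +1.0   # vertex 0 positively affects vertex 1
A[2, 1] = +1.0   # vertex 1 positively affects vertex 2
A[0, 2] = -1.0   # vertex 2 negatively affects vertex 0

# Entry (i, j) of expm(A): length-discounted sum of signed path weights j -> i.
print(expm(A))
```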
@Adittya Chaudhuri - I'm trying to prove and polish up Proposition 2.3. So people can understand what I'm talking about: we're studying the category of graphs with edges labeled by elements of some set , and we call this category because it's equivalent to the category of graphs over : the graph with one vertex and one edge for each element of .
I've removed the part of this proposition that's useful for structured cospans, and moved it to the section on structured cospans, where people will understand why we care about this.
So what's left is this:
Proposition. For any set , we have the following:
(a) The category is a presheaf topos.
(b) The forgetful functor is a discrete fibration.
(c) The presheaf associated to is representable by .
(d) The category is equivalent to the category of elements .
All of this should be a special case of completely general facts that show up whenever you have a presheaf category (here ) and an object in that category (here ). So I think our proof should say this, and briefly explain the completely general facts, with references to proofs, in a way that people who don't already know them have a chance of understanding.
Here are the general facts, if I understand correctly:
General facts. For any object in a presheaf category :
(a) The category is a presheaf topos.
(b) The forgetful functor is a discrete fibration.
(c) The presheaf associated to is representable by .
(d) The category is equivalent to the category of elements .
So, one question is: where are these general facts proved - preferably in a textbook that will help readers understand them?
I also think for (a) it's good to note that
where is our presheaf on . I think this is the usual proof that a slice category of a presheaf category is again a presheaf category. I'm confused about how this is related to fact (d).
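For what it's worth, the standard statement I know (in notation I'm supplying here, so treat the letters as assumptions) is that for a presheaf $X$ on a small category $\mathsf{C}$,
$$[\mathsf{C}^{\mathrm{op}}, \mathbf{Set}]/X \;\simeq\; [(\textstyle\int X)^{\mathrm{op}}, \mathbf{Set}],$$
where $\int X$ is the category of elements of $X$; this is the usual 'slice of a presheaf topos is a presheaf topos' fact, proved in standard topos theory references.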
I think (d) should follow from the fact that is the presheaf associated to (via Grothendieck correspondence). Hence, should be equivalent to by the fact that the category of Presheaves over is equivalent to the category of discrete fibrations over .
Or am I misunderstanding something here?
I don't follow the logic here:
Hence, should be equivalent to by the fact that the category of presheaves on is equivalent to the category of discrete fibrations over .
You're probably just skipping some steps, but the part " should be equivalent to " mentions and , but the reason "by the fact that the category of presheaves on is equivalent to the category of discrete fibrations over " doesn't mention either of those things.
Another question: where do you plan to use these facts:
(c) The presheaf associated to is representable by .
(d) The category is equivalent to the category of elements .
?
John Baez said:
I don't follow the logic here:
Hence, should be equivalent to by the fact that the category of presheaves on is equivalent to the category of discrete fibrations over .
You're probably just skipping some steps, but the part " should be equivalent to " mentions and , but the reason "by the fact that the category of presheaves on is equivalent to the category of discrete fibrations over " doesn't mention either of those things.
Let denote the category of presheaves over and denote the category of discrete fibrations over . Since is equivalent to , there is a pair of functors and such that and . However, these and arise from the Grothendieck correspondence, i.e. in our context and . Hence, translates to as fibrations. Now, from the definition of isomorphism of fibrations, we have .
John Baez said:
Another question: where do you plan to use these facts:
(c) The presheaf associated to is representable by .
(d) The category is equivalent to the category of elements .
?
The free-forgetful adjunction between and allows us to define a natural isomorphism between the presheaves and . Hence, by the Grothendieck correspondence, . However, is the same as , and thus . On the other hand, is the category of graphs with -valued polarities as defined in Definition 3.10. Thus the category of graphs with -valued polarities is the same as the category of -labeled graphs. I used it in the proof of Theorem 3.12.
Thanks.
@John Baez and I were discussing the "stratification" as in Section 5 of https://arxiv.org/pdf/2211.01290 and possible ways of incorporating it into the framework of graphs with polarities. I was thinking in the following way:
Say we assume "Students influence University in " ways". We see it as a graph with two vertices (Students and University) and -number of directed labelled edges from Students to University. Out of these influences, some are positive and some are negative. We call this graph . Now consider another graph with two vertex but only two directed edges from to labeled by and . We call this graph . Now, let us divide the Students into "PhD and Undergrads" and divide the University into "Public and Private". Then, if we construct the corresponding graph, we would obtain (by incorporating the appropriate edges coming from ) a graph which we call . Now, there are two functions and , which maps Students, PhD, Undergrads to and University, public, private to , and the positive edges to and negative edges to . Now, we construct the pullback in the category . We will call the pullback graph .
However, @John Baez said that "the notion of multiple stratifications", i.e. Students into PhDs and Undergrads and Universities into Public and Private, may lead to some unwanted errors. He said the standard technique is to do it in a "non-multiple" way, like classifying people based on "age difference" or "sex difference" (separately), as in Figure 14 of https://arxiv.org/pdf/2211.01290.
But I am thinking about this:
Is there a way to incorporate "multiple stratification" into the framework of labeled graphs that bypasses the errors @John Baez mentioned? I think like this because, for example, influence positively and, say, influence negatively. Now, say that based on certain characteristics, and can be classified into , , respectively. Then, shouldn't the concept of multiple stratifications arise naturally? Am I missing something?
From the mathematical point of view, we are trying to understand how the property "products distribute over colimits in a presheaf topos" translates into the language of "gluing and stratifying labeled graphs/causal loop diagrams".
In the paper A categorical framework for modeling with stock and flow models we explain how to do stratification of models using pullbacks.
If I wanted to do multiple stratifications, like taking a model with students and universities and 1) subdividing students into undergraduates and PhD students and 2) subdividing universities into public and private, I would do one at a time, not both at once. This means doing first one pullback of stock and flow models that involves students, and then another that involves universities.
Of course, taking one pullback and then another is still an example of a limit. So we are still just taking a limit of stock and flow models. But it's less confusing to do it in stages.
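In case a concrete computation helps, here is a rough toy implementation of one such pullback in plain Python (my own sketch with invented names - it is not Catlab and not code from the paper; a graph is a pair (vertices, edges) with named edges, and the sign labels are carried by the morphisms to the base graph):
```python
# Pullback of graph morphisms f1: G1 -> H and f2: G2 -> H.
# A graph is (vertices, edges) with edges = dict edge_name -> (source, target);
# a morphism is a pair of dicts (on vertices and on edges) preserving
# sources and targets.

def pullback(G1, f1, G2, f2):
    (V1, E1), (V2, E2) = G1, G2
    (fv1, fe1), (fv2, fe2) = f1, f2
    V = [(a, b) for a in V1 for b in V2 if fv1[a] == fv2[b]]
    E = {(e1, e2): ((E1[e1][0], E2[e2][0]), (E1[e1][1], E2[e2][1]))
         for e1 in E1 for e2 in E2 if fe1[e1] == fe2[e2]}
    return V, E

# Cut-down students/universities example: H has vertices S, U and one
# positive and one negative edge from S to U (edge names encode the sign).
H = (["S", "U"], {"+": ("S", "U"), "-": ("S", "U")})
# G1 refines S into PhD and UG students.
G1 = (["PhD", "UG", "U"], {"PhD+": ("PhD", "U"), "UG-": ("UG", "U")})
f1 = ({"PhD": "S", "UG": "S", "U": "U"}, {"PhD+": "+", "UG-": "-"})
# G2 refines U into public and private universities.
G2 = (["S", "Pub", "Priv"], {"S+Pub": ("S", "Pub"), "S-Priv": ("S", "Priv")})
f2 = ({"S": "S", "Pub": "U", "Priv": "U"}, {"S+Pub": "+", "S-Priv": "-"})

V, E = pullback(G1, f1, G2, f2)
print(V)   # vertex pairs agreeing in H, e.g. ('PhD', 'S'), ('U', 'Pub')
print(E)   # one edge for each pair of edges with the same sign in H
```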
Thanks. I got your point.
@John Baez I was thinking about an alternative (a bit relaxed) formalism of stratification for labeled graphs, which I am describing below:
For a set , consider a -labeled graph . I am defining a stratification of as a functor from the free category on to the category of relations https://ncatlab.org/nlab/show/Rel.
For every vertex of , let's call the elements of the set the stratification of . Now, I construct a -labeled graph defined by the following data:
, where .
, where .
, defined as .
Although to match our intuition (with the kind of stratification discussed in https://arxiv.org/pdf/2211.01290), we may add an extra condition, i.e. we may define a stratification of as a functor from the free category on to the category of relations such that for each morphism in , we have . However, I am illustrating the general case by an example below:
Let us consider a graph with two vertices and , and an edge labeled by the phrase , by considering it as a graph whose edges are labeled by elements of the free monoid on the set of English words. Say, represents medicines used in treating headache and represents people with headache. Now, define as follows:
" maps to the set of all medicines used in curing headaches, and maps to the set of collection of people with different kinds of allergies. In this case does not cause any allergic effects among people in . Finally each edge of is labeled by the phrase ''.
Maybe the whole discussion can be simplified/or maybe I am misunderstanding some perspectives.
Can you use this construction to obtain a new labeled graph? The idea of stratification has traditionally been that you start with a model of some sort (e.g. a labeled graph), and then build a new more complicated model that has an epimorphism .
I'm in touch with an ecologist who has been using causal loop diagrams (these are simply directed graphs with edges labeled by signs) to study ecosystems. They're called 'causal loop diagrams' because people who use them are especially interested in finding feedback loops. This is a standard technique in the field called 'systems ecology'. She wrote this paper about it:
In this paper she introduced what she called "Type II loop analysis":
Loop Analysis for aquatic ecosystems has been used in two ways: (1) construction of hypothetical–intuitive models buttressed by the general knowledge of the investigator and reports of species coexistence and interactions in the literature (Type I). This is not unlike how Systems Ecologists first conceptualize their quantitative models. To date, most applications of LA have been Type I, with 10–12 or fewer nodes, which ignores a great deal of biological realism. Often, a model or set of models is drawn by hand in a ‘back of the envelope’ manner, and the LA calculation is completed to obtain predictions of changes in each node once a parameter input or driving force is assumed to perturb the network at one or more locations. In Type II LA, time series data of species field densities and nutrient concentrations are used to identify nodes and links, and node number is not limited, making larger, more biologically reasonable models possible.
She writes:
Hi Dr. Baez,
I considered your preference in doing Category Theory with decorated cospans a serious challenge. I came up with a way to decorate the links in my Type II Loop Analysis models for marine plankton ecosystems. Usually, I translate the quantitative field-lab data into qualitative signs as directed changes (in abundances from one sampling period to the next) in graph nodes that are sets of functional species in an ecological network. This leaves the networks completely qualitative, and quantitative values are never used again. However, calculations in the LA methodology allow one to predict how the nodes will change qualitatively when the network is perturbed from the outside (parameter input=PI). I construct the models and automatically test for all positive and negative PIs to all nodes, then select the simplest model and input node with the best agreement with the data in the 90-100% range. In the new work, I took the best agreement PI for the 640 individual models (20 years of about 17.5 networks/year for 2 stations) and listed the paths and complements. A complement is a set of disjunct loops of nodes not on the path. Then, I counted how many times each link appeared in the paths, complements, and total (P + C) calculations, so I have individual 640 quantified sets of links x 3 with associated graphs. A total of 1.36 links were counted for the Halifax station. I view these values as frequencies of causality in the networks. I believe this causal structure is more important than just flows of carbon or Kcal of energy, which are the common ways of quantifying ecosystem links. No two networks repeat over hundreds of networks, but they are about 75% similar. All networks are ranged around a central lattice containing several hundred species individually arranged in the nodes. The link frequency values are quite disparate, with the lattice links generally the most common, which was expected. As an example, I have attached a summary table of the counts and percentages of link frequencies for the summary graph for about 374 individual models for the Halifax station. (On the attached graph, the asterisks mean the links were present in loop calculations but <1% of all involved links).
I want to determine if there is a deeper underlying, even emergent, causal structure with Category Theory than summarized by the current set of largely collective network properties usually calculated for signed digraphs. In addition, is there any way to illustrate the networks are CLEF (closed to efficient causality a la’ Aristotle and a main characteristic of self-organizing living systems). Robert Rosen demonstrated ‘CLEFness’ with his M-R systems for cells using Category Theory models, and Jan-Hendrik Hofmeyr of Stellenbosch University is currently expanding Rosen’s work. I think Bob was the first person to use Category Theory in biology in the 1950s in his PH.D. thesis at the University of Chicago and subsequent publications. He knew the two Category Founders (Professors Eilenberg and MacLane). I have identified these lattice structures at different locations over 1000 km of ocean. It appears the upper ocean is a teaming 3D lattice of interactions. The loop models are a main part of my work on ecosystem chimeras-nodes that trade functions mutualistically in networks that individual nodes cannot provide for themselves throughout evolution. I do not think anything like this (using Category Theory for loop links) has been attempted before at the ecosystem level.
Thus, my questions are:
- Are the cospans decorated sufficiently for Category Theory? Is there something better I could do?
- Is this a project you would be interested in? I would be very interested to work with you on it.
I have also attached two papers I wrote for Mathematics in December, one on Rosen’s approach to algorithms for living systems (he didn’t think they could capture life itself), and one on how I do Type II loop analysis for plankton communities, which is the most detailed description of the methodology to date. There is a YouTube video (https://www.youtube.com/watch?v=QFQNTv8lGFw) for a seminar about ecosystem chimeras I did for Michael Levin’s lab at Tufts last week using Loop Analysis. They make laboratory chimeras at the organism level.
She's a great example of someone who may benefit from wisely done research on graphs with polarities.
John Baez said:
Can you use this construction to obtain a new labeled graph? The idea of stratification has traditionally been that you start with a model of some sort (e.g. a labeled graph), and then build a new more complicated model that has an epimorphism .
I think yes. The correspondence you asked about is
and the epimorphism you asked about is defined as
and
.
Now, the fact that is a morphism of -labeled graphs follows from the construction of the labelling map , defined as .
Here is more complicated than .
John Baez said:
She's a great example of someone who may benefit from wisely done research on graphs with polarities.
Definitely!! I agree completely.
I am going through Patricia A. Lane's paper that you shared. I had always been super interested in Rosen's approach (especially, anticipatory systems https://link.springer.com/book/10.1007/978-1-4614-1269-4)
Section 3b of this paper also talks about Stratified Models with pullbacks
https://royalsocietypublishing.org/doi/10.1098/rsta.2021.0309
It’s presented there in an elementary way because the audience was modelers, not category theorists. So it might be a helpful presentation for communicating with collaborators.
In some code I wrote for the Regulatory Networks project I computed some pullbacks of signed graphs with Catlab and it did what I expected. You could make stratified models that way too. They would be stratified at the presentation level rather than at the free signed category level.
Yes, that paper seems to be the original "stratification using pullbacks" paper. I should have mentioned it!
@Adittya Chaudhuri - can you try writing something for the proof of Prop. 2.3? The statements are:
If we write for the category of presheaves on a category , and choose a presheaf , then:
(a) The category is a presheaf topos.
(b) The forgetful functor is a discrete fibration.
(c) The functor associated to is representable by .
(d) The category is equivalent to the category of elements .
We don't really want to prove them: we want to crisply explain them and give references to proofs. If you write something I can try to make it nicer.
Also: I want to make the math section of the paper more focused on topics that will actually help people who study causal loop diagrams. Here's one example:
When is a monoid, we already discuss the free -labeled category on an -labeled graph . One reason is obvious: this lets us assign labels not just to edges of but to edge paths (which are the morphisms in the free category on ), thus describing 'indirect effects'.
But here's another reason: if we can show -labeled categories are monadic over -labeled graphs, we can define a Kleisli category. This has -labeled graphs as objects, but maps that send an edge in one -labeled graph to a composite of edges in another.
This is a useful notion of morphism. For example we can use one of these morphisms to map
to
I think quite often we may want to make a model of a system more complicated in this way: by adding new vertices, and 'factoring' -labeled edges into edge paths!
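As a toy illustration of what such a morphism does at the level of labels (my own sketch with invented names, for the sign monoid $\{+1,-1\}$ under multiplication): an edge is sent to an edge path, and the product of the labels along the path should be the label of the original edge.
```python
from math import prod

def respects_label(edge_label, path_labels):
    """Does refining one labeled edge into this labeled edge path
    preserve the label?  Labels multiply along the path."""
    return prod(path_labels) == edge_label

# 'X inhibits Z' (-1) refined to 'X stimulates Y, Y inhibits Z' (+1, -1).
print(respects_label(-1, [+1, -1]))   # True
print(respects_label(-1, [-1, -1]))   # False
```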
So I have a question:
Question. It's widely known that reflexive graphs are monadic over sets, and categories are monadic over reflexive graphs. Are categories monadic over graphs? I.e. is monadic over where is the category of presheaves on the category
?
I think it is....
If true, I feel it must be true that -labeled categories are monadic over -labeled graphs when is a monoid. Then we can set up the Kleisli category I mentioned, and know it really is a Kleisli category.
I believe you can define a Kleisli-like category starting with any adjunction , . The objects are objects , the morphisms from to are morphisms , and we compose and by taking and following it with where is the counit.
Question. What good features does this category have when the adjunction is monadic, which it lacks otherwise?
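Spelling out that composite in the usual notation (my own rendering, under the assumption that the adjunction is $F \dashv U$ with counit $\varepsilon$): a morphism from $a$ to $b$ is $f : a \to U F b$, and the composite of $f : a \to U F b$ with $g : b \to U F c$ is
$$a \xrightarrow{\;f\;} U F b \xrightarrow{\;U F g\;} U F U F c \xrightarrow{\;U \varepsilon_{F c}\;} U F c .$$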
John Baez said:
I think it is....
It is (it's easy to check directly).
John Baez said:
I believe you can define a Kleisli-like category starting with any adjunction , . The objects are objects , the morphisms from to are morphisms , and we compose and by taking and following it with where is the counit.
Question. What good features does this category have when the adjunction is monadic, which it lacks otherwise?
is the monad induced by the adjunction, so this is exactly the Kleisli category of the monad. Monadicity is irrelevant.
John Baez said:
Adittya Chaudhuri - can you try writing something for the proof of Prop. 2.3? The statements are:
If we write for the category of presheaves on a category , and choose a presheaf , then:
(a) The category is a presheaf topos.
(b) The forgetful functor is a discrete fibration.
(c) The functor associated to is representable by .
(d) The category is equivalent to the category of elements . We don't really want to prove them: we want to crisply explain them and give references to proofs. If you write something I can try to make it nicer.
Thanks. I will add .
This is an aside, but in case you guys haven’t caught this vibe yet, the kind of Kleisli category you’re working on is a big inspiration for double Lawvere theories in the double categorical setting. The natural morphisms of models of double Lawvere theories are, in effect, Kleisli in this way.
John Baez said:
But here's another reason: if we can show -labeled categories are monadic over -labeled graphs, we can define a Kleisli category. This has -labeled graphs as objects, but maps that send an edge in one -labeled graph to a composite of edges in another.
This is a useful notion of morphism. For example we can use one of these morphisms to map
to
I am thinking of it as a zooming-in (refining) operation on a -labeled graph: we add new vertices to it and factor its -labeled edges into -labeled edge paths using the Kleisli morphisms of the Kleisli category associated to the monad arising from the free-forgetful adjunction between the category of -labeled graphs and the category of -labeled categories, as you explained. Below, I try to demonstrate why I am thinking of it as a zooming-in (refining) operation:
Considering your example
Now, if we ask how, then an answer may be given by a morphism (, and the unique edge maps to the unique edge path in ) in the associated Kleisli category (w.r.t. the free-forgetful monad ), where is as described by you.
However, one may ask how again!!
Then, one may define another morphism: (I think the definition is evident from the construction of below), where is as follows:
.
Question:
By construction, and are composable morphisms in the Kleisli category we are talking about. Now, can we think of as the double zooming-in operation on , which takes to ?
If the above statement makes sense, then we can of course extend it to a "-level zooming in operations on ", by composing number of morphisms in the Kleisli category.
Another side of the story could be the following (using the Kleisli category):
I think the same idea has been used to describe the occurrence of motifs (which I like to think of as important simple-shaped labeled graphs) like feedback loops and feedforward loops in regulatory networks. I think the same idea was used in Section 2.2 of https://arxiv.org/pdf/2301.01445 to find occurrences of motifs in the particular case of for regulatory networks.
I wonder the following: when we work with the monoid , the corresponding important shaped graphs are mostly various feedback loops and feedforward loops, etc. (for example, the ones mentioned in Uri Alon's paper https://www.nature.com/articles/nrg2102).
Question:
What are some other monoids (monoids other than ) that accommodate the notion of "interesting simple-shaped graphs", and why would they be considered interesting from the point of view of both applications and mathematics?
@John Baez In the overleaf file (Remark 3.2), I called a morphism in the Kleisli category an occurrence of an -shaped graph in . (Although my nomenclature may not be appropriate.)
James Fairbanks said:
In some code I wrote for the Regulatory Networks project I computed some pullbacks of signed graphs with Catlab and it did what I expected. You could make stratified models that way too. They would be stratified at the presentation level rather than at the free signed category level.
Thank you. Does it allow multiple stratifications, i.e. can different vertices be stratified in different ways? For example, if one vertex represents students, it may be stratified into UG and PhD students, and if another vertex denotes universities, it can be stratified into public and private?
Nathanael Arkor said:
John Baez said:
Question. What good features does this category have when the adjunction is monadic, which it lacks otherwise?
is the monad induced by the adjunction, so this is exactly the Kleisli category of the monad. Monadicity is irrelevant.
Oh, duh! Thanks!
Adittya Chaudhuri said:
Another side of the story could be the following (using the Kleisli category):
I think the same idea has been used to describe the occurrence of motifs (which I like to think of as important simple-shaped labeled graphs) like feedback loops and feedforward loops in regulatory networks. I think the same idea was used in Section 2.2 of https://arxiv.org/pdf/2301.01445 to find occurrences of motifs in the particular case of for regulatory networks.
Oh, very good point! Yes, a 'motif' is an example of a Kleisli morphism. I was talking to @Evan Patterson about this Kleisli idea recently, and he probably pointed that out.
Somehow looking for motifs in a complicated monoid-labeled graph feels a bit different than taking a simple model of a system, given by a monoid-labled graph, and 'zooming in', or 'adding detail', using a Kleisli morphism. But they're mathematically very similar.
Adittya Chaudhuri said:
Question: What are some other monoids (monoids other than ) that accommodate the notion of "interesting simple-shaped graphs", and why would they be considered interesting from the point of view of both applications and mathematics?
I listed a few in my blog articles and I plan to talk about these examples in our paper - otherwise it's useless to generalize from to an arbitrary monoid!
The simplest example is the multiplicative monoid of , where 1 and -1 mean 'positive effect' and 'negative effect', and 0 means 'no effect'. Notice that the additive group is a submonoid. There's a general construction of 'adding an absorptive element' or 0 to a monoid, and that's what we're doing to get the multiplicative monoid of from the additive abelian group .
I also mentioned an example where we have a monoid with an element that means 'undetermined effect' - an effect that could be positive or negative.
It's also perfectly fine to use or as our monoid - these should be very useful in applications. These monoids have homomorphisms onto some of the small monoids I just mentioned.
Adittya Chaudhuri said:
John Baez said:
If we write for the category of presheaves on a category , and choose a presheaf , then:
(a) The category is a presheaf topos.
(b) The forgetful functor is a discrete fibration.
(c) The functor associated to is representable by .
(d) The category is equivalent to the category of elements .
Thanks. I added the material in the overleaf file in the way you suggested. For (a), I added a reference. For (b), (c) and (d), I sketched the proofs.
Great! We should find references for all this stuff - I can't believe any of it is new. But proof sketches are also good, mainly for educational reasons. I already added a reference to Loregian-Riehl that's an introduction to the relation between functors and discrete fibrations . I'll add it to this proof too.
Adittya Chaudhuri said:
Thank you. Does it allow multiple stratifications, i.e. can different vertices be stratified in different ways? For example, if one vertex represents students, it may be stratified into UG and PhD students, and if another vertex denotes universities, it can be stratified into public and private?
You can repeatedly do pullbacks, but it gets complicated fast. The result of a pullback is actually not an object in the category, but a commutative diagram with an object that has the universal property and the span of projection maps. So in order to take a pullback of a pullback, you have to drop that extra structure. Then when you want to index into the resulting object from your nested pullbacks, you really wish you had those morphisms. In a nested pullback, it is hard to find the stuff you want algorithmically.
I found it easier to figure out what multiple stratification I wanted and set up the right limit in one shot. Then you have all the projections into the factors at one level of abstraction. You can build up that limit diagram one arrow at a time, computing limits and visualizing the results until you get all the factors in. But it is more convenient to build up the limit diagram iteratively and compute one big limit, rather than compute limits of limits.
That's interesting. I was telling Adittya it's easier (for me) to think about multiple stratifications one step at a time. But thinking is different than coding.
Hey @Adittya Chaudhuri - thanks for putting a proof for Prop. 2.3. I've polished it up a bit - check it out.
Could you supply a similar proof for Prop. 2.6? I've gotten up to that point now. Here's a question for the world:
(Yes, we're being evil in treating as a 1-category, but that's mainly because in our application we think we want to use the strict slice category, where morphisms are triangles that commute on the nose.)
I know and are complete, cocomplete and cartesian closed. So one answer to my question would be "complete, cocomplete and cartesian closed". But maybe there's more.
This question reminds me of the fundamental theorem of topos theory. I read on the nLab that is a "2-topos". So I wonder if there is a "fundamental theorem of 2-topos theory", and if that could be relevant here.
Right. The sad thing is that we seem to want the strict slice 2-category, where morphisms are triangles that commute on the nose. Anyone proving a "fundamental theorem of 2-topos theory" would not use that.
But if someone has proved a "fundamental theorem of 2-topos theory" I'd still like to know about it!
In our paper we use the easier "fundamental theorem of presheaf topos theory", which says the slice category of a presheaf topos is a presheaf topos... and there's probably a higher version of that too.
John Baez said:
Adittya Chaudhuri said:
Question: What are some other monoids (monoids other than ) that accommodate the notion of "interesting simple-shaped graphs", and why would they be considered interesting from the point of view of both applications and mathematics?
I listed a few in my blog articles and I plan to talk about these examples in our paper - otherwise it's useless to generalize from to an arbitrary monoid!
The simplest example is the multiplicative monoid of , where 1 and -1 mean 'positive effect' and 'negative effect', and 0 means 'no effect'. Notice that the additive group is a submonoid. There's a general construction of 'adding an absorptive element' or 0 to a monoid, and that's what we're doing to get the multiplicative monoid of from the additive abelian group .
Thank you, yes. But,
Question:
What could be some interesting patterns of subgraphs (motifs) in graphs labeled by elements of or ? By interesting, I mean an appropriate analogue of feedback and feedforward loops in the context of and .
John Baez said:
Hey Adittya Chaudhuri - thanks for putting a proof for Prop. 2.3. I've polished it up a bit - check it out.
Thank you so much for polishing the proof. The proof now looks much, much better than what I did. I am slowly understanding what you meant by "to add the right amount of details" and in "a crispy way".
John Baez said:
Could you supply a similar proof for Prop. 2.6?
Thanks. Yes, I added a basic proof in the overleaf file. By basic, I mean: just as you upgraded the version of graphs (Proposition 2.3) to a presheaf topos, there should be a similar upgrade for in Proposition 2.6. I think you are already hinting at this in . In other words, can we characterise -like categories?
James Fairbanks said:
Adittya Chaudhuri said:
Thank you. Does it allow multiple stratifications, i.e. can different vertices be stratified in different ways? For example, if one vertex represents students, it may be stratified into UG and PhD students, and if another vertex denotes universities, it can be stratified into public and private?
You can repeatedly do pullbacks, but it gets complicated fast. The result of a pullback is actually not an object in the category, but a commutative diagram with an object that has the universal property and the span of projection maps. So in order to take a pullback of a pullback, you have to drop that extra structure. Then when you want to index into the resulting object from your nested pullbacks, you really wish you had those morphisms. In a nested pullback, it is hard to find the stuff you want algorithmically.
I found it easier to figure out what multiple stratification I wanted and set up the right limit in one shot. Then you have all the projections into the factors at one level of abstraction. You can build up that limit diagram one arrow at a time, computing limits and visualizing the results until you get all the factors in. But it is more convenient to build up the limit diagram iteratively and compute one big limit, rather than compute limits of limits.
Thank you. I understand your point.
John Baez said:
That's interesting. I was telling Adittya it's easier (for me) to think about multiple stratifications one step at a time. But thinking is different than coding.
Thank you. Yes, now I understand why you were suggesting about multiple stratifications one step at a time.
John Baez said:
I know and are complete, cocomplete and cartesian closed. So one answer to my question would be "complete, cocomplete and cartesian closed". But maybe there's more.
It's not true that is cartesian closed in general.
Graham Manuell said:
John Baez said:
I know and are complete, cocomplete and cartesian closed. So one answer to my question would be "complete, cocomplete and cartesian closed". But maybe there's more.
It's not true that is cartesian closed in general.
I think Example 1.7 in https://sinhp.github.io/files/CT/notes_on_lcccs.pdf demonstrates it.
Regarding , let us consider the -graphs. Let . Then, a Kleisli morphism , where gives a factorisation of .
For the monoid , I think more generally, for any , if we consider , then every prime factorisation of can be described as a (monic) Kleisli morphism from to some graph , where is the monad associated to the free-forgetful adjunction between -graphs and -categories?
Of course, it may not be anything interesting.
John Baez said:
- What kind of 'nice category' is the 1-category , such that any slice category is also nice in this way?
Locally finitely presentable (which implies complete and cocomplete and monadic over presheaf category) and extensive is about all I've got.
Adittya Chaudhuri said:
Thank you so much for polishing the proof.
Sure!
The proof now looks much, much better than what I did.
It's based on what you did, but I want to explain things to readers who may not know all the concepts well enough. When writing it's always important to imagine who the audience is. I'm imagining an audience of people who want to apply category theory, but may not know very much category theory. So just saying some functor is a discrete fibration may not mean much to them, so I wanted to explain what it means.
Also I think it may someday be useful for people to have an explicit description of the category of -labelled graphs as a presheaf category, so I added that. Probably I should move it out of the proof and make it a separate paragraph.
I am slowly understanding what you meant by "to add the right amount of details" and in "a crispy way".
Great! By the way, I said "crisp", not "crispy". English is a weird language, almost impossible to learn: potato chips (which the British call "crisps") are crispy; fresh lettuce is not crispy - but it's crisp. "Crisp" also means something else:
A crisp way of speaking, writing, or behaving is quick, confident, and effective. For example:
a crisp reply
a crisp, efficient manner
Kevin Carlson said:
John Baez said:
- What kind of 'nice category' is the 1-category , such that any slice category is also nice in this way?
Locally finitely presentable (which implies complete and cocomplete and monadic over presheaf category) and extensive is about all I've got.
Thanks, that's super-helpful! If I'd thought I might have guessed they were locally finitely presentable, but I would never have thought about 'extensive' since I tend to think of that as applying to categories of 'spaces'.
John Baez said:
Adittya Chaudhuri said:
Thank you so much for polishing the proof.
Sure!
The proof now looks much, much better than what I did.
It's based on what you did, but it explains things to readers who may not know all the concepts well enough. Also I think it may someday be useful to explicitly describe the category of -labelled graphs as a presheaf category, so I added that.
I am slowly understanding what you meant by "to add the right amount of details" and in "a crispy way".
Great! By the way, I said "crisp", not "crispy". English is a weird language: potato chips are crispy, fresh lettuce is not crispy but it's crisp, but "crisp" also means something else:
A crisp way of speaking, writing, or behaving is quick, confident, and effective. For example:
a crisp reply
a crisp, efficient manner
Thank you. Yes, I meant `crisp', but somehow, I typed crispy.
John Baez said:
Thanks, that's super-helpful! If I'd thought I might have guessed they were locally finitely presentable, but I would never have thought about 'extensive' since I tend to think of that as applying to categories of 'spaces'.
Well, I think Grothendieck at least thought that categories are a kind of space!
If you think of categories as simplicial sets with an extra property you can think of them as a kind of space, but it takes a lot of nerve.
Yes, I meant 'crisp', but somehow, I typed crispy.
Okay. I started getting interested in the difference between crisp and crispy and got into an argument with my wife about it. I argued that good potato chips are clearly "crispy", while she says they are "crisp".
I don't know if I can continue to live with someone who has such fundamental disagreements with me.
On a more serious note:
-labeled graphs with are often called 'causal loop diagrams' because people use them to identify 'feedback loops'. I think it would be good to say a bit about this. Here's an idea:
We can define the first homology of a graph with coefficients in $\mathbb{Z}$ in two equivalent ways. We can either geometrically realize the graph as a space and take its first homology with $\mathbb{Z}$ coefficients, or - better - define an abelian group of 'cycles', called $Z_1$, and an abelian group of 'boundaries', called $B_1$, and form the first homology as a quotient:
$$H_1 = Z_1/B_1$$
At least when we have a finite graph, $H_1$ is a free abelian group. But beware: there's no canonical choice for the generators of this free group.
However:
1) is a directed graph, and for our applications the only possible feedback loops are directed loops: that is, paths of edges of the form
So, we're not really interested in the usual 1st homology, but only a kind of directed first homology. This should be easy to define, but maybe someone in directed algebraic topology has already defined the directed first homology of a directed graph - so we should find out.
2) In some of our applications is not an abelian group but merely a commutative monoid. So, the usual textbook theory of homology doesn't apply. However, the directed first homology of a graph with coefficients in a commutative monoid should still make sense.
When we understand this stuff, I hope there will be
and
so we can define the commutative monoid as modulo the congruence relation of being homologous.
(Note that with commutative monoids, we should mod out by congruence relations, not submonoids! For abelian groups you can turn congruence relations into (normal) subgroups, which lets you talk about modding out by a subgroup. But that uses subtraction!)
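For readers who haven't seen it, the definition being invoked is (in notation I'm choosing): a congruence on a commutative monoid $A$ is an equivalence relation $\sim$ such that $a \sim a'$ and $b \sim b'$ imply $a + b \sim a' + b'$; then the quotient set $A/\!\sim$ inherits a commutative monoid structure, which is what 'modding out' means here.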
I conjecture that, at least for a finite graph, will be a free commutative monoid on some set of generators. (If you draw me a directed graph, I believe I can find such generators.)
Then I hope we can do this:
Suppose is a commutative monoid and is an -labeled graph, i.e. a graph with a map sending edges to elements of .
Clearly for any directed loop in
we get an element of , namely
This describes the feedback around this directed loop. (It's analogous to a holonomy.)
But I also conjecture that we get a map of commutative monoids
sending cycles to elements of . I haven't defined cycles, but a cycle should be an -linear combination of edges of with some property saying its boundary is zero. Then, to define $\tilde{\ell}$ on a cycle, we sum up the labels over all the edges that appear in this cycle. (The same edge may show up several times in a cycle - so then we sum its label several times.)
This tells us the total feedback around any cycle!
I conjecture that if are homologous cycles, then we get the same element of this way:
If so, descends to a map of commutative monoids
Maybe I'll call this by the same name, .
I conjecture that this map contains all the information about how much feedback there is around any directed loop.
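Here is a small computational sketch of the 'feedback around each directed loop' (entirely my own toy example, not from the paper), for edge labels in the commutative monoid $\{+1,-1,0\}$ under multiplication and a graph with at most one edge between each ordered pair of vertices:
```python
import networkx as nx

G = nx.DiGraph()
G.add_edge("rabbits", "foxes", label=+1)   # more rabbits -> more foxes
G.add_edge("foxes", "rabbits", label=-1)   # more foxes  -> fewer rabbits
G.add_edge("foxes", "foxes", label=-1)     # crowding: foxes limit themselves

# Multiply labels around every simple directed cycle.
for cycle in nx.simple_cycles(G):
    feedback = 1
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        feedback *= G[a][b]["label"]
    print(cycle, feedback)   # both loops here give -1: negative feedback
```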
One nice thing about this little project is that @Adittya Chaudhuri has already studied holonomies of connections, and this is very much like that: made simpler because we are working on a graph, but more complicated because we are working with a (commutative) monoid instead of a group!
It's possible we should develop this for a not-necessarily-commutative monoid.
In the conventional first homology of a graph, there are no bounding cycles, so . And when you only look at directed cycles, two cycles can't add up to a cycle that doesn't self-intersect, so you actually get canonical generators.
I was also going to suggest matroid theory besides homology, but it actually doesn't look very promising for this.
John Baez said:
One nice thing about this little project is that Adittya Chaudhuri has already studied holonomies of connections, and this is very much like that: made simpler because we are working on a graph, but more complicated because we are working with a (commutative) monoid instead of a group!
Thanks. Are you hinting towards the following?
Given a connection 1-form on a smooth principal -bundle and a point ,
1) we can construct a holonomy map , where is the set of smooth loops on based at . On the other hand, one can recover the whole bundle with the connection data from the holonomy map.
2) when two smooth loops and are thin homotopic then the map descends to a map on the quotient . Here, is the equivalence relation on defined by thin homotopy.
I think (1) is analogous to .
I think (2) is analogous to the map .
Now, if we replace the set by containing lazy paths (as defined in https://arxiv.org/pdf/1003.4485), then we can turn into a group under concatenation as a binary operation. We can then extend the set map to a group homomorphism .
Then, just like the group homomorphism contains the information of the bundle with connection , the homomorphism of commutative monoids contains the information of how much feedback there is around any directed loop in the -labeled graph .
Or am I misunderstanding?
@John Baez The relation (that you described) between the directed first homology of a graph with coefficients in a (commutative) monoid and the feedback loops in -labeled graphs seems super interesting!! Now, I am trying to understand all these things (what you wrote) properly.
By directed algebraic topology, are you referring to https://ncatlab.org/nlab/show/Directed+Algebraic+Topology?
James Deikun said:
In the conventional first homology of a graph, there are no bounding cycles, so .
In Sunada's approach to undirected graphs (summarized in my paper Topological crystals, section 2) one counts an edge path like
as a 1-cycle that's homologous to zero. Maybe a more "efficient" approach wouldn't even allow such 1-cycles, but there's something nice about making every edge path that's a loop give a cycle.
But more importantly, you're right that in a directed graph, we're only interested in directed paths, so edges don't have inverses, and this issue goes away!
And when you only look at directed cycles, two cycles can't add up to a cycle that doesn't self-intersect, so you actually get canonical generators.
Another great point! I had not yet mentally adjusted from the case of undirected graphs.
This is very nice for the theory of causal loop diagrams, and it must already be known in the system dynamics community. A canonical set of 'generating' feedback loops is a very pleasant thing.
Adittya Chaudhuri said:
Then, just like the group homomorphism contains the information of the bundle with connection , the homomorphism of commutative monoids contains the information of how much feedback there is around any directed loop in the -labeled graph .
Or am I misunderstanding?
No, that's exactly the correct analogy. This analogy has been on my mind for many decades, because starting in the early 1990s, before working on higher gauge theory, I worked on loop quantum gravity. Thus, long before treating connections on a trivial -bundle over a manifold as smooth functors from the path groupoid of to , I wrote some papers about connections on graphs, which took a similar viewpoint. See for example
It's a bit primitive since it doesn't introduce the holonomy of a path, but you can figure out what that must be!
I'm reusing those ideas now, but a bunch of things change when we have a directed graph, where paths are only allowed to move in the direction of the edges.
Thanks!! Sounds super interesting!
By the way, you read all my messages too quickly, so please reread that.
I usually rewrite my posts a few times. Sorry!
Anyway, here is how it works for a directed graph - the kind of graph we're talking about in our paper. Suppose is a monoid. A connection on a trivial -bundle over should clearly be a functor
where is the 'fundamental category' of and is the one-object category corresponding to . We get a fundamental category instead of a fundamental groupoid because we're not allowed to walk backward along a directed edge. The fundamental category has
But look - this 'fundamental category' is exactly the same as the free category on the graph , which we call in our paper and study in detail!
Now, I got your point. And, sorry for reading the last message quickly.
In a way, -labeled category is a connection on a trivial -bundle over ?
It's okay to read my message quickly... but I never expect anyone to read my messages quickly, so I tend to keep rewriting them and improving them. I find it easier to write messages to you later in the day, when you're asleep and I can rewrite them dozens of times.
So, where was I?
A connection on the trivial -bundle over is a functor
But this is exactly a way of making into an -labeled category! This also corresponds precisely to a way of making into an -labeled graph.
In a way, -labeled category is a connection on a trivial -bundle over ?
Exactly.
John Baez said:
It's okay to read my message quickly... but I never expect anyone to read my messages quickly, so I tend to keep rewriting them and improving them. I find it easier to write messages to you later in the day, when you're asleep and I can rewrite them dozens of times.
I got your point.
So, we don't need to talk a lot about connections on graphs; they are just our -labeled graphs.
There's more to say when is a commutative monoid. When is noncommutative, the holonomy around a loop depends on the basepoint. (When is a group, the holonomy gets conjugated when we change the basepoint.) But when is a commutative monoid, the holonomy doesn't depend on the basepoint!
More generally, when is commutative we can talk about the holonomy around a collection of loops in : it's just the sum of the holonomies around each loop... and the sum is well-defined because is commutative.
Indeed, we can talk about the holonomy around any cycle in . Thus, when is a commutative monoid, for any -labeled graph we expect to get a homomorphism of commutative monoids
Note also that when a noncommutative monoid has a cyclic trace, you can use the cyclic trace to form a kind of basepoint-independent holonomy on loops specifically.
Good point. The funny thing is that in applications to "system dynamics", like epidemiology and systems biology, the monoid always seems to be commutative. So we'll probably focus on that case.
You pointed out something which @Adittya Chaudhuri and I should state and prove: since our graph is a directed graph, the commutative monoid I'm calling is free on a canonical set of generators, the non-self-intersecting cycles.
We should make up some nice term for them, like the 'basic feedback loops'. There is probably already a standard term for them in the field of system dynamics.
So, any -labeled graph gives a commutative monoid homomorphism
but this is determined by its value on the 'basic feedback loops', i.e. the canonical generators of the free commutative monoid .
By the way, I'm pretty sure any free commutative monoid has a canonical set of generators. Consider the finitely generated case: a free commutative monoid is isomorphic to . This is free on the generators
and it's not free on any other generators: each cannot be expressed as a sum of elements except itself and the zero vector.
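In explicit notation (the symbols above are elided in this archive, so the following is just a sketch of the standard statement):

```latex
% Sketch in standard notation; the original symbols are elided in the archive.
% The free commutative monoid on n generators is N^n, and it is free on
\[
  e_1 = (1,0,\dots,0), \quad e_2 = (0,1,0,\dots,0), \quad \dots, \quad e_n = (0,\dots,0,1),
\]
% and on no other set of generators: each e_i cannot be written as a sum of
% nonzero elements of N^n other than e_i itself.
```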
Adittya Chaudhuri said:
By directed algebraic topology, are you referring to https://ncatlab.org/nlab/show/Directed+Algebraic+Topology?
Yes. You'll see that there are lots of approaches to directed algebraic topology. As far as I know, none of them has triumphed yet.
The idea (at least according to me) is that just as a homotopy -type is essentially the same as an -groupoid, a directed -type should be essentially the same as an -category. But it should be some sort of space with extra structure that has a 'fundamental -category'. Thus, directed -types should give a more topological way of thinking about -categories, just as homotopy types give a more topological way of thinking about -groupoids.
But we don't need to think in detail about this complicated stuff: it's just a guiding philosophy. Directed graphs should be a nice way to think about directed 1-types, and we already know how to get the 'fundamental category' of a directed graph : it's what our paper calls .
We probably shouldn't talk much about directed algebraic topology in our paper; it's just good to keep in mind. The concept of the 'directed first homology monoid' of a (directed) graph seems somewhat useful, though.
Minor stuff:
John Baez said:
The idea (at least according to me) is that just as a homotopy -type is essentially the same as an -groupoid, a directed -type should be essentially the same as an -category. But it should be some sort of space with extra structure that has a 'fundamental -category'. Thus, directed -types should give a more topological way of thinking about -categories, just as homotopy types give a more topological way of thinking about -groupoids.
Interesting. In a way, are you talking about a directed version of the homotopy hypothesis https://ncatlab.org/nlab/show/homotopy+hypothesis, and thus a fundamental -category https://ncatlab.org/nlab/show/fundamental+%28infinity%2C1%29-category should capture the information of a directed topological space?
In section 2.3 (overleaf file), you mentioned a relation between `motifs in music' and motifs in -labeled graphs. I was not much aware of this relation till today, and I just checked https://en.wikipedia.org/wiki/Motif_(music) to learn more about it. It's very interesting, and after reading the Wikipedia article, I also feel it reflects the same meaning. "Some small signature tunes within a big musical piece somehow repeat many times in the big piece". Surprisingly, I think listeners (especially non-professional musicians) also sometimes try to identify the whole big piece by only remembering/humming the small signature tunes or motifs in the big piece.
Right! I believe the word 'motif' comes from music, made especially famous by Wagner's use of leitmotifs. Grothendieck used this word in mathematics - 'motif' is French for 'motive'.
Thanks. Interesting.
Adittya Chaudhuri said:
John Baez said:
The idea (at least according to me) is that just as a homotopy -type is essentially the same as an -groupoid, a directed -type should be essentially the same as an -category. But it should be some sort of space with extra structure that has a 'fundamental -category'. Thus, directed -types should give a more topological way of thinking about -categories, just as homotopy types give a more topological way of thinking about -groupoids.
Interesting. In a way, are you talking about a directed version of the homotopy hypothesis https://ncatlab.org/nlab/show/homotopy+hypothesis, and thus a fundamental -category https://ncatlab.org/nlab/show/fundamental+%28infinity%2C1%29-category should capture the information of a directed topological space?
I was indeed talking about a directed version of the homotopy hypothesis. But I was talking about it for -categories, or probably better for -categories. There should be different kinds of directed space. In some the paths can be directed, i.e. non-invertible, but the homotopies between paths, etc. are all invertible. This kind of directed space should have a fundamental -category.
But people also study directed spaces where the homotopies are also directed, so you can have a homotopy from one path to another path but not from back to . If all homotopies between homotopies, etc., are invertible then these spaces would have a fundamental -category. And so on.
Thanks. I understand your point.
By the way, if you want to link to the nLab here you don't have to include a URL like https://ncatlab.org/nlab/show/homotopy+hypothesis. There's a more elegant way: you can just type the page title in double square brackets, like this: [[homotopy hypothesis]]. Some smart person added this feature.
John Baez said:
By the way, if you want to link to the nLab here you don't have to include a URL like https://ncatlab.org/nlab/show/homotopy+hypothesis. There's a more elegant way: you can just type the page title in double square brackets, like this: [[homotopy hypothesis]].
Thank you. Yes. I wanted to do it. But I was not aware "how to do it" until now.
You can also include arbitrary URLs like this:
[Motivating motives](http://math.ucr.edu/home/baez/motives/)
gives
But for the nLab you just need to put the page title in double square brackets.
Thank you. I will do this.
Today, I was trying to understand the things from the general perspective discussed before. I am just trying to write down what I am thinking.
1) There is a well-known adjunction between the category of topological spaces and the category of simplicial sets, given by the singular simplicial set functor and the geometric realization functor, which connects the study of topological spaces via simplicial sets. Now, according to The Homology of Simplicial Sets, given a topological space , replacing by for each , we obtain a simplicial abelian group . One can convert the into a chain complex whose boundary maps are given by the alternating sums of the face operators of . Now, if we compute the homology of this chain complex ([[chain homology and cohomology]]), we get the singular homology of in each degree. In particular, in degree 1, we get the first singular homology group of .
2) On the other hand, there is also a well-known adjunction between and , given by the nerve functor and its left adjoint, the homotopy category functor, which connects the study of categories via simplicial sets. Now, given a topological space , the fundamental groupoid functor (as explained by Harry Gindi in my 5-year-old MO question What is the geometric realization of the nerve of a fundamental groupoid of a space?) is isomorphic to , and hence is the same as the fundamental groupoid of , i.e. .
Now, since @John Baez was talking about directed topological spaces, and was discussing homology monoids and fundamental categories of such spaces (both of which may make sense because paths in directed topological spaces may not be traversable in the reverse direction) in the context of directed graphs, I was wondering whether one can suitably "directify" the correspondence of (1) and (2) to obtain suitable notions of singular homology monoids and fundamental categories of directed topological spaces. Then, we may obtain the results for directed graphs as a special case.
@John Baez Another thing about which I was also thinking today:
Directed graphs can also be seen as simplicial sets: Directed Graphs as Simplicial Sets. Although the definition of a directed graph there coincides with our definition, the definition of morphisms is a little different. However, the fundamental category of such a directed graph is the free category on the graph, as explained in The Path Category of a Directed Graph.
The questions you raise are very interesting.
I was wondering whether one can suitably "directify" the correspondence of (1) and (2) to obtain suitable notions of singular homology monoids and fundamental categories of directed topological spaces. Then, we may obtain the results for directed graphs as a special case.
That's a fascinating and ambitious project and I hope someone tries it - or has already done it. Of course one needs to properly define 'directed topological space', and right now there seem to be multiple competing definitions.
I don't have the energy to tackle this project: I just want to finish our paper, which is much easier. But there may be people here who can help you with this project.
You could say our project is exploring the homotopy theory and homology theory of one-dimensional directed spaces, and applying it to biology and epidemiology. That's ambitious enough for me. :upside_down:
I understand your point.
John Baez said:
You could say our project is exploring the homotopy theory and homology theory of one-dimensional directed spaces, and applying it to biology and epidemiology.
This is also fascinating. It may give ideas on how to approach the general case, though I am not sure.
John Baez said:
The questions you raise are very interesting.
Thank you!!
1-dimensional topology is a lot simpler than higher-dimensional topology; that's why we teach kids about the fundamental group long before we teach them about , and why people understood gauge theory long before higher gauge theory. But I like doing simple things that are actually useful. Often the simplest ideas are the most useful.
I understand your point completely. I think from the point of view of applications in biology and systems dynamics, the 1-dimensional case is more suitable now. Then, if we feel that "we need to upgrade our framework for explaining certain realistic phenomena in biology or systems dynamics", the way "gauge theory was upgraded to higher gauge theory", then one may develop the general case with the correct motivation. I am not sure if what I am saying actually makes sense.
John Baez said:
Often the simplest ideas are the most useful.
I really like this line and I fully agree with it.
John Baez said:
That's a fascinating and ambitious project and I hope someone tries it - or has already done it. Of course one needs to properly define 'directed topological space', and right now there seem to be multiple competing definitions.
Thank you. Yes, I understand your point regarding "multiple competing definitions" from the nLab page [[directed topological space]].
John Baez said:
- What kind of 'nice category' is the 1-category , such that any slice category is also nice in this way?
It may not be completely related to our paper, but I just observed this:
1-category can be given a model structure ([[Thomason model structure]]) and hence, for any category , the slice category can be given a [[slice model structure]] in a nice way. One can also do something similar with the [[canonical model structure on Cat]].
Nice!
Here's how I'd like to define the first homology monoid of a graph in our paper, taking directedness into account. If there are flaws or mistakes, someone please let me know.
We start with a graph : a set of vertices, a set of edges, and two maps .
Note that we can generalize this to define the first homology of a graph with coefficients in any commutative monoid :
But I don't think we need now: it seems for applications what matters is (which is the same as ).
The reason is that we want to prove this:
Theorem. is a free commutative monoid. There is a unique set of generators such that is free on the set .
If we're doing applied mathematics, we can call these generators the basic feedback loops.
Lemma. Let be a commutative monoid. Then every map (called an -labeling) extends uniquely to a monoid homomorphism , which we also call .
Notice that is a sub-monoid of , unlike what we usually expect in homology. thus restricts to a map , which we again call . By the theorem this map is uniquely determined by its values on the basic feedback loops!
The moral: a labeling of a graph lets us assign a kind of 'feedback' valued in to any cycle in the graph, but these feedbacks are determined by the feedbacks on the basic feedback loops.
(By the way, while I defined 1st homology with coefficients in the commutative monoid , it looks like what we're secretly using here is 1st cohomology with coefficients in . This makes me realize that when , we can think of an -labeling of a graph as assigning a voltage to each edge. This sets up a connection between electrical circuits and cohomology of graphs, which I've explained here. That choice of is an abelian group, and right now I'm mostly excited about the generalization to commutative monoids. But still, our paper should mention electrical circuits as an example.)
Thanks!! I am trying to understand your construction. Just a very minor point: in the beginning, at point 2, I think there is a typo. I think you meant ? Also, in point 3, I think by you meant ?
Yes, and yes. You're right! I'll fix those mistakes.
I may be completely misunderstanding, but I am trying to write down what I am thinking:
According to your construction, given a graph , consider a cycle in . Hence, by the way you defined a cycle in point (3), we must have , which implies for all . This should then imply that each of the edges is a loop. Now, if I assume the correctness of the theorem you stated, then we can assume the edges are basic feedback loops for each . This also says every basic feedback loop is an edge (not a path).
Intuitively (according to me), it means any cycle is composed of basic feedback loops, and the coefficients attached to a basic feedback loop say the number of times is counted in . However, if I consider a cycle like this: , where are distinct, then I am not able to see how I can write in terms of basic feedback loops. (Also, I am not completely understanding the intuition behind the coefficients of the basic feedback loops.)
Maybe I am misunderstanding something very fundamental.
Now, if we slightly modify to the following:
Consider a graph as before, and now consider the category . Let us denote the object set and morphism set by respectively, and . Now, I am repeating the same construction as you did. More precisely,
Now, in the light of the above modified definition, consider a cycle in (each is a path, i.e. a sequence of edges). The cycle condition then forces each to be a graph theoretic cycle (a usual cycle in graph theory). Now, if I consider the cycle , then the whole itself is a basic feedback loop (not decomposable) and hence a generator of .
Intuition of the modified definition:
Intuitively (according to me), it means any cycle is composed of basic feedback loops, and the coefficients attached to a basic feedback loop say the number of times is counted in . I think in the light of the category , the word 'counted' also makes sense.
Now, I am trying to see, how the Lemma part gets modified:
Consider a map (called an -labeling) . Now, by the free-forgetful adjunction between graphs and categories, the set . Now, since , we should have a Lemma (modified) as follows:
Lemma (modified). Let be a commutative monoid. Then every map (called an -labeling) extends uniquely to a monoid homomorphism , which we also call .
But, it may not be a monoid homomorphism!!
Reason: Say, , where . Similarly, . But , which may not be equal to .
But the same reason also holds true on the non-modified version.
However, I think if we replace by and assume the commutative monoid is part of a rig , then by the distributive law of a rig, can be proved to be a homomorphism of monoids, or maybe rigs.
I apologise in advance if I am making very fundamental mistakes.
Adittya Chaudhuri said:
I may be completely misunderstanding, but I am trying to write down what I am thinking:
According to your construction, given a graph , consider a cycle in . Hence, by the way you defined a cycle in point (3), we must have , which implies for all . This should then imply that each of the edges is a loop.
If that's true my definition is bad. But let's see. Consider this triangle-shaped graph:
I expect that
is a cycle, since it goes around the triangle, even though we don't have as you claim. Let's see:
so yes, is a cycle.
I believe your mistake occurs here:
, which implies for all
There's no way to deduce from
Yeah. Sorry. It was a bad mistake. Somehow, I mixed things up!!
I suggest trying all my definitions, and my claimed Theorem (which is really just a conjecture), in some examples.
John Baez said:
I suggest trying all my definitions, and my claimed Theorem (which is really just a conjecture), in some examples.
Thanks. I will.
If we were in the same room we could do some examples on a blackboard, with pictures. But they're quite tiring to write in LaTeX.
John Baez said:
Lemma. Let be a commutative monoid. Then every map (called an -labeling) extends uniquely to a monoid homomorphism , which we also call .
Without assuming a compatibility between and , does it become a monoid homomorphism? For example,
Say, , . Similarly, . But , which may not be equal to .
However, I think if we replace by and assume the commutative monoid is part of a rig , then by the distributive law of a rig, can be proved to be a homomorphism of monoids, or maybe rigs.
John Baez said:
If we were in the same room we could do some examples on a blackboard, with pictures. But they're quite tiring to write in LaTeX.
Yes, I can completely understand.
Say, , . Similarly, . But .
How are you deriving those last two equations?
John Baez said:
Say, , . Similarly, . But .
How are you deriving that last equation?
By the same map :
if as a rig, we can do this by the distributive law
Remember, we're extending the labelling to a monoid homomorphism , using the fact that is the free commutative monoid on . So this extended map
is given by
where .
Thanks. But what does the term mean?
Is it just summing times?
Yes. Every element of the free commutative monoid on a set , for example, can be written as where .
We're writing the monoid operation as addition here, which I admit is somewhat confusing because we often write it as multiplication!
When doing co/homology, people write the operation in the coefficient group as addition, so all the formulas will look more familiar if we do that for too.
There's a theory of modules for rigs (some people call them 'semi-modules'). A commutative monoid is the same as a module of the rig . For any rig I like to denote the free -module on a set by : it's the set of finite -linear combinations of elements of . So the free commutative monoid on is . We can add elements of this, and multiply them by natural numbers.
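As a small illustration (a sketch only - the function names and the example monoid are made up here, not taken from the paper), elements of the free commutative monoid on a set can be represented as multisets, and a labeling by elements of a commutative monoid extends to a monoid homomorphism:

```python
from collections import Counter

# Sketch only: names and the example monoid are made up for illustration.
# An element of the free commutative monoid N[X] on a set X is a finite
# multiset of elements of X, which we represent as a Counter.

def extend_to_homomorphism(labeling, add, unit):
    """Extend a map X -> M (labeling) to the monoid homomorphism N[X] -> M."""
    def hom(chain):                      # chain: Counter over X
        total = unit
        for x, n in chain.items():
            for _ in range(n):           # n*x means x added to itself n times
                total = add(total, labeling[x])
        return total
    return hom

# Example with M = (N, +, 0):
h = extend_to_homomorphism({'a': 2, 'b': 5}, lambda x, y: x + y, 0)
print(h(Counter({'a': 3, 'b': 1})))      # 3*2 + 1*5 = 11
```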
Yes. Thanks I got the point.
@John Baez I am slowly realising (still a lot more to realise)!!! The definition you wrote about "homology monoids" is very beautiful. The fact that we are working with the free commutative monoid instead of the free abelian group says
is a directed graph and is seen as a directed space, via the "non-existence of inverses" in the commutative monoid . By not using , it clearly shows that we are not considering our directed graph as a usual non-directed topological space.
Thanks! You're exactly right. I noticed that the work we've already done on monoid-labeled directed graphs actually points to a generalization of 1st homology for directed graphs, where we use commutative monoids at every point where people usually use abelian groups.
This new 'directed first homology' is interesting: you can easily find two directed graphs homeomorphic to a circle, one whose directed first homology is , and one whose directed first homology is . It just depends on whether you can go around the circle following a directed path or not!
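For a concrete instance (an illustrative example; the vertex and edge names, and the notation, are made up here, not taken from the paper):

```latex
% Illustrative example; the vertex and edge names are made up here.
% Both graphs have two vertices a, b and two edges, and both are
% homeomorphic to a circle.
%
%   G_1:  e_1 : a -> b,  e_2 : b -> a      (consistently oriented circle)
%   G_2:  e_1 : a -> b,  e_2 : a -> b      (both edges point from a to b)
%
% In G_1 the 1-chain e_1 + e_2 satisfies s(e_1+e_2) = a + b = t(e_1+e_2),
% so it is a directed cycle and the directed first homology is
\[
  H_1(G_1;\mathbb{N}) \;\cong\; \mathbb{N}.
\]
% In G_2 every 1-chain m\,e_1 + n\,e_2 has source (m+n)\,a and target (m+n)\,b,
% which agree only when m = n = 0, so
\[
  H_1(G_2;\mathbb{N}) \;=\; 0.
\]
```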
John Baez said:
Thanks! You're exactly right. I noticed that the work we've already done on monoid-labeled directed graphs actually points to a generalization of 1st homology for directed graphs, where we use commutative monoids at every point where people usually use abelian groups.
Thank you. Yes, I agree.
John Baez said:
This new 'directed first homology' is interesting: you can easily find two directed graphs homeomorphic to a circle, one whose directed first homology is , and one whose directed first homology is . It just depends on whether you can go around the circle following a directed path or not!
Nice!! Yes, indeed!! I didn't observe this before. Thanks!!
John Baez said:
Theorem. is a free commutative monoid. There is a unique set of generators such that is free on the set .
I am making an attempt to find candidates for the generators:
We know is a cycle iff
(*)
Now, I will say is a minimal cycle if for any proper subset , the corresponding analogous equation of (*) does not hold. [I think this also matches my intuition if I think about your basic feedback loops].
Now, let the set is a minimal cycle .
I claim the following:
My minimal cycles are your basic feedback loops i.e .
I am trying to show why I think so:
Let be an arbitrary element of . Now, if is minimal then we are done. Otherwise, let us assume is not minimal. Then by definition there is a proper subset such that equation (*) holds for . Now, since we are working with "finite linear combinations", I think repeating this process will end in a minimal loop after finitely many steps.
I think from the above, one can deduce your theorem (of course, one needs to fill in a lot of details).
I may be wrong.. I just thought about it!!
I believe an approach like this should work, and I hope you can turn it into a proof.
Thank you. Yes, I think I can.
For a while I thought there might be a quick nonconstructive proof that goes like this:
is the free commutative monoid on the set of edges of the graph . is a submonoid of . Every submonoid of a free commutative monoid is a free commutative monoid. Thus is a free commutative monoid.
But this proof is wrong! Every subgroup of a free abelian group is a free abelian group - that's why I hoped this would work for monoids too. But then I remembered there's a whole big subject that studies submonoids of the free commutative monoid on one generator, . These are called numerical monoids, and most of them are not free. This one is free:
but we can throw in the number 8 and close under addition to get one that is not free:
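(The specific numbers above are elided in this archive; as a standard illustration - my own example, not necessarily the one meant above - the submonoid of the natural numbers generated by 3 and 8 is not free:)

```latex
% Illustrative example (not necessarily the numbers intended above):
% in the numerical monoid generated by 3 and 8, the element 24 factors
% in two different ways into the irreducible generators,
\[
  \underbrace{3+3+3+3+3+3+3+3}_{8 \text{ copies of } 3}
  \;=\; 24 \;=\;
  \underbrace{8+8+8}_{3 \text{ copies of } 8},
\]
% so this monoid is not free on any set of generators.
```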
So, this approach doesn't work; there seems to be something special about that makes it free. And it's very good to understand your 'minimal cycles' concretely, because this is a practical subject. So even if there were a quick nonconstructive proof that is free, we'd also want a more algorithmic proof, like the one you're trying to get.
John Baez said:
So, this approach doesn't work; there seems to be something special about that makes it free. And it's very good to understand your 'minimal cycles' concretely, because this is a practical subject. So even if there were a quick nonconstructive proof that is free, we'd also want a more algorithmic proof, like the one you're trying to get.
Thank you. I completely agree that "understanding of minimal cycles" would be important for practical purposes as then we can narrow down our focus to "these objects" to study motifs like feedback loops etc, and hence this in turn would make things a bit simpler. I really like your line "there seems to be something special about that makes it free". I think I need some more time to understand this line in a better way.
John Baez said:
But then I remembered there's a whole big subject that studies submonoids of the free commutative monoid on one generator, . These are called numerical monoids, and most of them are not free. This one is free:
but we can throw in the number 8 and close under addition to get one that is not free:
Interesting objects!! I was not aware of these objects until now. Thanks!
@John Baez I am sharing some feelings about two things that we have already developed in our paper:
1) By the Kleisli morphism approach, we can zoom in on (add details to) a motif like a feedback loop, and hence this often helps in finding a very complicated motif in a large network. [Complications arise by adding extra causal relationships to the framework.]
2) By the directed homology approach, we can express a complicated feedback loop in terms of simpler loops (basic feedback loops), and hence this often helps in finding a very complicated motif in a large network. [Complications arise by adding extra feedback loops to the framework.]
So, if I am understanding correctly, both (1) and (2) serve a similar purpose but in different ways (at least at the moment, it seems so!!). Another good thing is that (1) needs a monoid structure on the labeling set while (2) needs a commutative monoid structure on the labeling set (so that the map is a homomorphism of monoids). However, I think most of the monoids that we are interested in are commutative.
In this context, what are some non-commutative monoids that may be interesting for our purpose?
Those thoughts are nice, thanks.
Right now I don't know noncommutative monoids that are useful for applications of "graphs with polarities". So, right now the only reason to start by studying graphs labeled by elements of a general monoid and then turn to graphs labeled by elements of a commutative monoid is that we're mathematicians and we like to see how much we can do with the minimum amount of structure. Maybe later someone will invent some good applications of noncommutative monoids to this subject; then our paper will still be useful to them.
I think of (1) and (2) as serving related purposes, and we have barely begun to explore what we can do with them.
Here's one simple relation between them:
Suppose is a commutative monoid. Then for each there is an -labeled graph with one vertex and one edge labeled by . We can call this the 'walking loop with feedback equal to '.
When we have any -labeled graph , each cycle in this graph gets a holonomy valued in , which we can call its feedback. If the cycle looks like this
its feedback is defined to be
Now for the relation: a cycle in an -labeled graph has feedback if and only if there is a Kleisli map from the walking loop with feedback equal to to , that maps the loop to this cycle.
This is not very deep, but it suggests that we're starting to build a little tool kit of ideas.
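As a tiny sketch of the feedback computation (the function names and the example labels here are made up; the monoid operation is passed in explicitly):

```python
from functools import reduce

# Sketch only: names and labels are made up for illustration.
# The feedback (holonomy) of a directed cycle in an M-labeled graph is the
# monoid sum of the labels of its edges; when M is commutative this does not
# depend on where the cycle starts.

def feedback(cycle_edges, label, add, unit):
    """cycle_edges: edges of a directed cycle, in order;
    label: dict edge -> element of M; (add, unit): the commutative monoid M."""
    return reduce(add, (label[e] for e in cycle_edges), unit)

# Example with the sign monoid {+1, -1} under multiplication:
labels = {'e1': +1, 'e2': -1, 'e3': -1}
print(feedback(['e1', 'e2', 'e3'], labels, lambda x, y: x * y, 1))  # prints 1
```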
Here is a little idea that might help you with your proof, @Adittya Chaudhuri. I believe any commutative monoid has a [[preorder]] defined as follows:
iff there exists such that
This is not always a partial order: for example, in an abelian group we have for all and .
However, for the commutative monoid I can prove this is a partial order, i.e. and imply .
Then we can define a minimal cycle in to be a cycle that is minimal with respect to this partial order: more precisely, is minimal if for any cycle with we have .
Then we want to prove is free on the set of minimal cycles.
However, the proof of this probably won't be abstract nonsense about commutative monoids! As I mentioned before, has some special features.
If you get stuck you might like to look at the proof of Lemma 4 in my paper Topological crystals. This is about undirected finite graphs, so it's a bit different. It uses homology with coefficients in rather than . But some of the ideas and tricks may still be relevant!
This lemma says that any integer linear combination of edges of a graph that is a 1-cycle can be written as a finite sum of 'simple loops', meaning loops in which each vertex appears at most once.
This is quite similar to what we're thinking about now. In some ways it's more complicated, because using integer coefficients one can't use the ordering defined as above. Instead one needs to use a concept of the "support" of a cycle, which is defined at the bottom of page 10.
John Baez said:
Those thoughts are nice, thanks.
Right now I don't know noncommutative monoids that are useful for applications of "graphs with polarities". So, right now the only reason to start by studying graphs labeled by elements of a general monoid and then turn to graphs labeled by elements of a commutative monoid is that we're mathematicians and we like to see how much we can do with the minimum amount of structure. Maybe later someone will invent some good applications of noncommutative monoids to this subject; then our paper will still be useful to them.
Thank you!!
Yes, I fully agree with your point: we're mathematicians and we like to see how much we can do with the minimum amount of structure. Maybe later someone will invent some good applications of noncommutative monoids to this subject; then our paper will still be useful to them.
John Baez said:
I think of (1) and (2) as serving related purposes, and we have barely begun to explore what we can do with them.
Now for the relation: a cycle in an -labeled graph has feedback if and only if there is a Kleisli map from the walking loop with feedback equal to to , that maps the loop to this cycle.
As you said, it could be interesting to find various relations between (1) and (2) and interpret their meaning for applications!! But your result already demonstrates one relation in this direction!! Nice!! Thank you!!
@John Baez Thank you so much for the ideas on the proof of the theorem. I am now trying to understand your ideas and implement them to construct a proof of your theorem by combining my previous approach.
John Baez said:
This is not very deep, but it suggests that we're starting to build a little tool kit of ideas.
I fully agree!!
John Baez said:
Theorem. is a free commutative monoid. There is a unique set of generators such that is free on the set .
Below I am trying to write down a proof based on the ideas you provided:
Proof: Consider the preorder relation on the commutative monoid defined by iff there exists such that . Now, define a cycle to be a minimal cycle if it is a minimal element with respect to . Define the set is a minimal cycle . The fact that every element of is a finite linear combination of the elements of with coefficients in guarantees that the set is non-empty. It can be observed that if and , then and . Now, say and . Then, there exist such that and . Hence, we have . Because nonzero natural numbers cannot sum to zero, we have . Hence, by the observation above, and . Hence, . Thus the preorder "" on is actually a partial order. Thus, we have the liberty to say is a minimal cycle if it cannot be written as a sum of cycles or, more precisely, for any cycle , we have . Thus, the definition of a minimal cycle matches our intuition. Now, we claim . To see this, take an arbitrary cycle . If is minimal, we are done; otherwise, there exists such that . Now, if and are minimal we are done; otherwise, we repeat the process with respect to and/or . Since is a finite linear combination of the elements of with coefficients in , this process ends in a finite number of steps with a representation of the form , where and each is a minimal cycle. Hence, .
I'm reading your proof - thanks very much for writing it up here.
Thus, we have the liberty to say is a minimal cycle if it cannot be written as a sum of cycles or, more precisely, for any cycle , we have .
This needs a little correction, since is a cycle:
Thus, we have the liberty to say is a minimal cycle if it cannot be written as a sum of two cycles which are both nonzero, or, more precisely, for any nonzero cycle , we have .
(It's a lot like how a prime number is not a product of two natural numbers that are both .)
Thanks, yes, I understand your point.
What you've proved is that every cycle is a sum of minimal cycles for some . (The sum of minimal cycles is .)
You have not yet shown that every cycle can be uniquely expressed as a sum of minimal cycles. That's what we need to show that is the free commutative monoid on the set of minimal cycles, i.e.
In other words, you've shown that is generated by the set of minimal cycles, but not yet that it's freely generated.
To show that it's freely generated we'll need to use more special features of . I suspect we'll need to use tricks similar to those used in proving Lemma 4 in Topological crystals.
It's worth proving that is freely generated by the set of minimal cycles, because this gives a complete description of , i.e. an isomorphism theorem
Thanks. I understand your point. Yes, I will look at the Lemma 3 in your paper.
That's good, but separately we should start thinking about how we would prove that every cycle is uniquely a sum of minimal cycles. Suppose a cycle can be written in two different ways as a sum of minimal cycles. What do we do now, to get a contradiction? I guess we first subtract off all minimal cycles that appear in both sides of the equation.
When we're done we get
where and are minimal cycles and none of the equal any of the .
Yes.
I think at this point in the proof it may be useful to note that the free commutative monoid on n generators () has a natural injection into the free abelian group on n generators (), so our becomes a submonoid of the usual . This allows us to subtract, and it becomes sufficient to show your minimal cycles freely generate .
We were here:
When we're done we get
where and are minimal cycles and none of the equal any of the .
But now we can subtract, and we can see it's sufficient to show:
If are distinct minimal cycles and
for some integers , then all these integers are zero.
John Baez said:
When we're done we get
where and are minimal cycles and none of the equal any of the .
To show the above, isn't "minimality" enough? Do we really need to go to ? I mean, a minimal cycle cannot be written as a sum of cycles. Then, how can the coefficients (attached to the same minimal cycles) differ in the LHS and RHS?
Adittya Chaudhuri said:
Thanks. I understand your point. Yes, I will look at the Lemma 3 in your paper.
Sorry, I meant Lemma 4.
John Baez said:
But now we can subtract, and we can see it's sufficient to show:
If are distinct minimal cycles and
for some integers , then all these integers are zero.
I am assuming you meant the use of in this step but not in the previous one.
Since the homology monoid is injectively mapped to the homology group, an equation between elements of the homology monoid holds iff the corresponding equation holds in the homology group.
But yes, in this equation the coefficients are natural numbers:
When we're done we get
where and are minimal cycles and none of the equal any of the .
There's a lot more to say, but here's one thing. We've been talking about 'minimal cycles' defined in a purely algebraic way, i.e. cycles that aren't sums of other cycles in a nontrivial way. The paper Topological Crystals instead takes a more geometrical approach: it talks about 'simple loops', which are loops of edges which visit each vertex at most once.
Just by looking at examples, I believe these are the same thing. But we haven't proved this. We could try to prove it.
The advantage of the algebraic definition is that it becomes rather easy to prove every cycle is a sum of minimal cycles.
Lemma 4 in Topological Crystals works for undirected graphs, but in that context it shows that every cycle is a sum of (cycles coming from) simple loops. This takes some work! The argument is mainly due to Greg Egan. It's a good example of how we can take advantage of the graph structure and think geometrically, essentially giving an algorithm to find a simple loop hiding inside any cycle. We can then subtract off this simple loop, and we get a new cycle whose support is contained in that of the original cycle. Then we repeat, and this must eventually terminate.
Anyway, I believe we will need to really work and think in terms of graphs to prove this conjecture, which I foolishly called a theorem:
Conjecture. If is a finite directed graph, its first homology monoid is the free commutative monoid on the set of minimal cycles (or alternatively: simple loops).
That's really two conjectures, and proving either one would be fine for now, though it would be even nicer to prove that minimal cycles are the same as simple loops, since then we'd know the 'algebraic' and 'geometrical' approaches are the same!
Anyway, I think we really need to get our hands dirty and work here, more like graph theorists than category theorists.
(deleted)
John Baez said:
That's really two conjectures, and proving either one would be fine for now, though it would be even nicer to prove that minimal cycles are the same as simple loops, since then we'd know the 'algebraic' and 'geometrical' approaches are the same!
Anyway, I think we really need to get our hands dirty and work here, more like graph theorists than category theorists.
Thank you!! Yes, I also feel that to align our "graph theoretic intuition" with our "algebraic definitions" it is necessary to see how a cycle is related to a graph theoretic cycle. For that, first I want to define a graph theoretic cycle of a presheaf graph .
Definition:
A graph theoretic cycle of a graph is defined as a finite sequence of edges satisfying the following properties:
and .
Definition:
Given a graph theoretic cycle of a graph , we define a graphical cycle as .
Now, to see the relation between our intuition and your homology theory, I think the first step is to prove the following lemma:
Lemma
Given a graph and a graph theoretic cycle we have the following:
a) The graphical cycle is an element of .
b) For any element of , there exists a set of graph theoretic cycles such that each is an element of the set .
(a) follows from and .
Next we define a set is a graph theoretic cycle . Then, (a) allows us to define a function , . Since is a submonoid of the free commutative monoid , the function is injective.
However, it feels like, if your definition of homology is correct, then proving (b) is like "extracting/characterising the information of the space" from the "information of the homology groups of the space", and I am not sure how much information we can recover if we do not talk about "homotopy in our graphs". I am not sure!!
Next, maybe we can define a simple loop (in the way you defined it) in terms of our new language as follows:
Definition:
Given a graph , a simple loop is a graphical cycle induced from a graph theoretic cycle , such that for all except .
@John Baez Now, I think to show that you have proposed only one conjecture, not two, we may need to prove the following proposition (via the correspondence described above):
Proposition:
Given a graph , there is a bijection between the set of simple loops and minimal cycles.
Or, maybe a weaker version:
Given a graph , there is an injective function from the set of simple loops to the set of minimal cycles.
John Baez said:
Conjecture. If is a finite directed graph, its first homology monoid is the free commutative monoid on the set of minimal cycles (or alternatively: simple loops).
@Morgan Rogers (he/him) produced (possibly) a counter-example here
An observation:
Using the function , , I think we may talk about topological ordering in terms of your directed homology monoids as follows:
The above statement makes sense because every directed graph can be seen as a graph in our sense (a presheaf graph).
That sounds right. People like to study DAGs, or directed acyclic graphs, and these have trivial first homology monoid. A DAG is good for when you have a bunch of tasks that needs to be done, and means you have to do task before you do task .
The theorem you mention says that for a DAG there's some way you can order the tasks so that you can do them in that order.
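As a quick illustration (using Python's standard library; the task names are made up here), a DAG admits a topological ordering, while a graph with a directed cycle does not:

```python
from graphlib import TopologicalSorter, CycleError  # Python 3.9+

# Each key maps to the set of its predecessors (tasks that must come first).
dag = {'wake up': set(), 'make coffee': {'wake up'}, 'work': {'make coffee'}}
print(list(TopologicalSorter(dag).static_order()))
# ['wake up', 'make coffee', 'work']

loopy = {'a': {'b'}, 'b': {'a'}}   # a directed 2-cycle: a feedback loop
try:
    list(TopologicalSorter(loopy).static_order())
except CycleError:
    print("no topological ordering: the graph has a directed cycle")
```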
I'm not very interested in DAGs right now because I'm interested in 'causal loop diagrams' and their generalizations, where the focus is on feedback loops, i.e. directed cycles.
I think I see how to prove this:
Proposition:
Given a graph , there is a bijection between the set of simple loops and minimal cycles.
and I think it's still somewhat interesting even though my conjecture was false.
I'm not quite sure how the paper should go. Maybe we should define the first homology monoid, show that it's generated by minimal cycles (your result), and then prove these correspond to simple loops (I hope I can do this). We can include an example of a graph whose first homology monoid is not freely generated by the minimal cycles.
John Baez said:
I'm not very interested in DAGs right now because I'm interested in 'causal loop diagrams' and their generalizations, where the focus is on feedback loops, i.e. directed cycles.
Yes, I understand your point. Technically, as you explained, we are mostly interested in the case when is not trivial. However, topological ordering happens only when is trivial.
John Baez said:
I think I see how to prove this:
Proposition:
Given a graph , there is a bijection between the set of simple loops and minimal cycles.
and I think it's still somewhat interesting even though my conjecture was false.
Yes, I think then, we can relate the ideas of our paper with your paper on Topological crystals. Also, I think, as a result, it would be easier for us to imagine minimal cycles concretely.
John Baez said:
I'm not quite sure how the paper should go. Maybe we should define the first homology monoid, show that it's generated by minimal cycles (your result), and then prove these correspond to simple loops (I hope I can do this). We can include an example of a graph whose first homology monoid is not freely generated by the minimal cycles.
Sounds good.
Since you said earlier that our paper can also be seen as an exploration of the directed homology and homotopy of directed multigraphs in degree 1, I was wondering whether we can say something about the homotopy invariance of your directed 1st homology monoid (because this is true in the usual case). Since you said the definition of a [[directed topological space]] is still not completely standard, as there are multiple contenders, I was thinking about the following:
Step 1:
Defining a suitable (useful for practical purposes) notion of directed homotopy equivalence between presheaf graphs.
Step 2:
To show that your notion of directed homology monoid of a presheaf graph is invariant under directed homotopy equivalence between presheaf graphs.
Maybe I am not making much sense. But, today, I was thinking about these.
I'd like to focus on developing tools that will be useful for our applications. I think we agreed that the end of the paper will explain how our work is useful in:
- systems biology - regulatory networks
- 'systems dynamics' - epidemiology, business dynamics and industrial dynamics
In all of these we can find and explain good examples of
- commutative monoids used for describing polarities
- motifs
- the analysis of feedback loops
I think this will be quite nice. The homotopy theory of directed graphs sounds like it should be in a different paper - a paper published in a journal that's good for homotopy theory, not Compositionality. A good paper tells a clear story, so it shouldn't consist of "everything we happened to have thought about so far".
It would be cool to study the homotopy theory of directed graphs... but you're trying to get a job in math applied to systems biology, right? If so, it pays to be strategic about the papers you write.
John Baez said:
I'd like to focus on developing tools that will be useful for our applications. I think we agreed that the end of the paper will explain how our work is useful in:
- systems biology - regulatory networks
- 'systems dynamics' - epidemiology, business dynamics and industrial dynamics
In all of these we can find and explain good examples of
- commutative monoids used for describing polarities
- motifs
- the analysis of feedback loops
I think this will be quite nice.
Yes, I completely agree. It is also telling a nice and clear story.
John Baez said:
The homotopy theory of directed graphs sounds like it should be in a different paper - a paper published in a journal that's good for homotopy theory, not Compositionality.
Yes, I fully understand your point.
John Baez said:
It would be cool to study the homotopy theory of directed graphs.. but you're trying to get a job in math applied to systems biology, right? If so, it pays to be strategic about the papers you write.
Yes true!! I understand the point you made.
John Baez said:
I'd like to focus on developing tools that will be useful for our applications. I think we agreed that the end of the paper will explain how our work is useful in:
- systems biology - regulatory networks
- 'systems dynamics' - epidemiology, business dynamics and industrial dynamics
In all of these we can find and explain good examples of
- commutative monoids used for describing polarities
- motifs
- the analysis of feedback loops
I think this will be quite nice. The homotopy theory of directed graphs sounds like it should be in a different paper - a paper published in a journal that's good for homotopy theory, not Compositionality. A good paper tells a clear story, so it shouldn't consist of "everything we happened to have thought about so far".
It would be cool to study the homotopy theory of directed graphs.. but you're trying to get a job in math applied to systems biology, right? If so, it pays to be strategic about the papers you write.
Thank you so much!! I completely agree with all of your points. I am very grateful for your guidance!! I like this line a lot: "A good paper tells a clear story, so it shouldn't consist of 'everything we happened to have thought about so far'." It teaches me many things.
Great! One good thing about this philosophy is that it lets you build up a supply of ideas for papers which you can write later. That way, you'll eventually have a big list of papers you can write, and whenever you want to write a paper you can choose the best one - where 'best' might mean most fun, or best for getting your next job, or whatever you want.
John Baez said:
Great! One good thing about this philosophy is that it lets you build up a supply of ideas for papers which you can write later. That way, you'll eventually have a big list of papers you can write, and whenever you want to write a paper you can choose the best one - where 'best' might mean most fun, or best for getting your next job, or whatever you want.
Thank you so much!! Yes, it is a very nice as well as a very helpful point of view. I will follow this principle!!
John Baez said:
I'm not quite sure how the paper should go. Maybe we should define the first homology monoid, show that it's generated by minimal cycles (your result), and then prove these correspond to simple loops (I hope I can do this). We can include an example of a graph whose first homology monoid is not freely generated by the minimal cycles.
Now that we know that your homology monoid of a directed graph may not be free, I was wondering (from the point of view of applications) whether it is possible to systematically characterise the graphs whose homology monoids are free on the minimal cycles.
I am trying to explain below why I said "from the point of view of applications":
Say we encounter a very big causal loop diagram (provided by domain-specific scientists like biologists, policy makers, epidemiologists, etc.), and we want to study its feedback loops. Then, according to the theory we developed, we may first look at the minimal cycles. Now, if from the nature of the causal loop diagram (using the proposed characterisation mentioned above, or similar conditions) we know from the beginning that the homology monoid of the underlying directed graph of such a diagram is free on the set of minimal cycles, then we may be able to completely carry out our study of feedback loops in that diagram by focussing only on its minimal cycles.
I think the situation in the reduced counterexample is really the only "forbidden minor", insofar as that concept even applies to this kind of graph. A theta graph, for example, is fine: you have to have at least two points of intersection in two cycles, and there has to be separation on both sides. Coming up with a meaningful way to count or label the violations is harder. Every example I can think of with three cycles reduces to one with two cycles in at least one way, just as a double check. To prove it you might want to formulate some kind of exchange principle, or a way to break minimal cycles into "subatomic particles" that freely generate a minimal free monoid containing the homology monoid.
James Deikun said:
I think the situation in the reduced counterexample is really the only "forbidden minor", insofar as that concept even applies to this kind of graph. A theta graph, for example, is fine: you have to have at least two points of intersection in two cycles, and there has to be separation on both sides. Coming up with a meaningful way to count or label the violations is harder. Every example I can think of with three cycles reduces to one with two cycles in at least one way, just as a double check. To prove it you might want to formulate some kind of exchange principle, or a way to break minimal cycles into "subatomic particles" that freely generate a minimal free monoid containing the homology monoid.
Thank you. I am trying to understand your ideas.
Let me try to prove there's a bijective correspondence between simple loops and minimal cycles.
Recall the framework. A graph is a set of vertices, a set of edges and source and target maps . We're working with a homology monoid, so
The source and target maps extend to monoid homomorphisms which we give the same names:
We define a 1-cycle to be a 1-chain with
We denote the commutative monoid of 1-cycles by . We define the first homology to be (since in general the first homology consists of 1-cycles mod 1-boundaries, but here the only 1-boundary is zero).
Later I will want to define the zeroth homology, but this is getting long so let's skip it for now.
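(Since the symbols are elided in this archive, here is the setup again in one explicit choice of notation; the letters below are assumptions, not necessarily the ones used in the paper:)

```latex
% Assumed notation; the original symbols are elided in the archive.
% A graph consists of a set E of edges, a set V of vertices, and maps s,t : E -> V.
% 1-chains and 0-chains are elements of free commutative monoids:
\[
  C_1(G;\mathbb{N}) = \mathbb{N}[E], \qquad C_0(G;\mathbb{N}) = \mathbb{N}[V].
\]
% The source and target maps extend to monoid homomorphisms s,t : C_1 -> C_0,
% and a 1-cycle is a 1-chain with equal source and target:
\[
  Z_1(G;\mathbb{N}) = \{\, c \in C_1(G;\mathbb{N}) \;:\; s(c) = t(c) \,\}, \qquad
  H_1(G;\mathbb{N}) = Z_1(G;\mathbb{N}),
\]
% since here the only 1-boundary is zero.
```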
There's a preorder on any commutative monoid given by
iff for some
Puzzle. Prove that for this preorder is a partial order.
Puzzle. As a subset of , inherits a partial order. Prove this is the same as the preorder it gets by treating it as a commutative monoid in its own right. In other words, given with
for some
show that in fact we can take .
We say that a 1-cycle is minimal if the only 1-cycle is zero.
We define an edge path in to be a finite list of edges such that the target of each edge is the source of the next. In pictures:
where we allow the degenerate case . Any edge path gives a 1-chain called , namely the sum of its edges:
We say an edge path is a loop if .
Puzzle. Show that an edge path is a loop if and only if is a 1-cycle.
We say a loop is simple if it visits each vertex just once, except that it ends where it starts. More precisely,
is a simple loop if all the vertices are distinct.
Claim 1. If is a simple loop then is a minimal cycle.
Claim 2. For each minimal cycle there exists a simple loop such that .
Claim 3. If are two simple loops with , they differ only by where they start. That is, if is of the form
then must be of the form
So, we get a bijection between minimal cycles and equivalence classes of simple loops, where two simple loops are equivalent if they differ only by where they start!
Let's start with Claim 1, which I believe is the easiest.
Proof of Claim 1. Suppose is a simple loop, say
The corresponding cycle is
Any cycle must be a chain , and since is free on the set of edges any chain must be of the form
where . We have
and
If is a cycle we must have .
But since all the vertices are distinct (while ), the two sums above can only be equal if is all of , in which case , or is empty, in which case . (Note that since is free on the set of vertices, the two sums can only be equal if they are 'visibly' equal - there are no extra relations.) Thus is minimal. :black_large_square:
Now let me try Claim 2, which is where things get interesting.
Claim 2. For each minimal cycle there exists a simple loop such that .
Proof. Note that , the free commutative monoid on the set of edges of our graph, can also be thought of as the collection of [[multisets]] of elements of . This will be useful in what follows.
Let be a minimal cycle. Think of it as a multiset of edges as above. It's nonempty since , so choose an edge in this multiset and call it .
Now there are two cases:
If this path consisting of a single edge is a simple loop , so it gives a nonzero cycle , so by minimality of we must have and we are done.
If then is not a cycle so must be a sum of and one or more edges in the multiset , and at least one of these edges must have source , since otherwise it would be impossible to have . Choose one such edge and call it .
In the second case we get a path
Now there are three cases:
If then this path is a simple loop , so it gives a nonzero cycle , so by minimality of we must have and we are done.
If then the path is a simple loop , so it gives a nonzero cycle , so by minimality of we must have and we are done.
If are distinct then is not a cycle, so must be a sum of and one or more edges in the multiset , and at least one of these edges must have source , since otherwise it would be impossible to have . Choose one such edge and call it .
And so on. I could write this more formally, but I hope the pattern is clear.
Since is a finite multiset of edges this process must eventually terminate: i.e., eventually the vertex must equal one of the earlier vertices . If it equals , then
is a simple loop, so
is a nonzero cycle, and by minimality of this must equal .
Thus, we have found a simple loop with . :black_large_square:
Thank you!! I am now trying to understand your proof.
Feel free to ask questions. Claim 2 is the hard one. In our paper I would write up a more formal inductive proof; here I am outlining the first few steps of the induction and hoping the pattern becomes clear.
You'll notice that the proof of Claim 2 is closely related to the proof of Lemma 4 in Topological Crystals, but it's simpler.
John Baez said:
Feel free to ask questions. Claim 2 is the hard one. In our paper I would write up a more formal inductive proof; here I am outlining the first few steps of the induction and hoping the pattern becomes clear.
Thank you so much. I am reading your proof.
The proof is really an algorithm; it may be helpful to draw a graph and a minimal cycle in this graph, and carry out the algorithm.
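Here's a rough Python sketch of that algorithm (with made-up names, and assuming the input really is a nonzero cycle given as a multiset of edges): we keep following an unused edge out of the current vertex until some vertex repeats, then read off the simple loop.

```python
from collections import Counter

def extract_simple_loop(cycle, src, tgt):
    """Follow edges of a nonzero cycle (a multiset of edges) until some
    vertex repeats; return the simple loop found this way.
    For a *minimal* cycle this loop uses up the whole cycle."""
    remaining = Counter(cycle)
    first_edge = next(iter(remaining))      # any edge in the multiset
    path = [first_edge]
    remaining[first_edge] -= 1
    visited = [src[first_edge]]             # vertices visited, in order
    current = tgt[first_edge]
    while current not in visited:
        visited.append(current)
        # since the input is a cycle, some remaining edge leaves `current`
        edge = next(e for e, k in remaining.items() if k > 0 and src[e] == current)
        path.append(edge)
        remaining[edge] -= 1
        current = tgt[edge]
    # discard any edges before the first occurrence of `current`
    start = visited.index(current)
    return path[start:]

# Example: a directed triangle u -> v -> w -> u
src = {'e1': 'u', 'e2': 'v', 'e3': 'w'}
tgt = {'e1': 'v', 'e2': 'w', 'e3': 'u'}
print(extract_simple_loop(Counter({'e1': 1, 'e2': 1, 'e3': 1}), src, tgt))
```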
Thank you!! I think I understood the proof. Although the proof of claim 2 is more complex than the proof of claim 1, I find both proofs beautiful and interesting.
I feel the current definition of a minimal cycle that you wrote: "We say that a 1-cycle is minimal if the only 1-cycle is zero" is a more workable definition if we look at the proof structure of your claim 2.
Regarding claim 1:
In the proof of claim 1, I like how both the facts and are together needed for writing , where , which I think is crucial in the proof of claim 1.
Regarding claim 2:
I like how at every step it is using the minimality of to check whether to move to the next step or not, and using the cyclicity to ensure that we can indeed move to the next step in a legitimate way, and finally how seeing as a finite multiset ensures that this algorithm works, as it will end in a finite number of steps at our desired simple loop such that .
I think the proof of claim 3 should directly follow from the fact and the definition of when .
Now, I think using your claim 3, it is possible to define an equivalence relation on , the set of simple loops in , and then we can denote the quotient set by . Now, if we denote the set of all minimal cycles of by , then your claim 1, claim 2 and claim 3 ensure a bijection defined as , where is the equivalence class of .
Then, I think ensures an unambiguous way of imagining minimal cycles both from the point of view of graph theory and from the point of view of the homology theory of directed topological spaces.
Right! We get a bijection and thus a precise link between the graph-theoretic and the homological way of thinking about minimal cycles.
The ability to work with minimal cycles in both ways should be important. I think it makes precise something that "systems dynamics" researchers already intuit in a rough way (though most are blissfully ignorant of homology theory).
I will write up this stuff in our paper.
As we discussed in our private chat, I think our next big step is to figure out how cycles behave when we glue together open graphs, with the help of the Mayer-Vietoris exact sequence.
An open graph can be seen as a cospan of graphs
where and are graphs with no edges, just vertices.
(As you know, we can apply the theory of structured cospans whenever we have a left adjoint functor between cocomplete categories. Here this functor is the 'free graph on a set' functor to , sending each set to a graph with that set of vertices and no edges.)
To apply Mayer-Vietoris it seems easier to assume and are monic. The case where they are not monic can be very important, since sometimes we want to glue together two vertices of the same graph when composing two open graphs. But temporarily I'd like to avoid thinking about it, since then we can think of and as giving inclusions of subspaces: geometrically realizing our graphs to get spaces, we get a cospan of spaces
and these are 'nice' inclusions of the sort required for Mayer-Vietoris to apply.
(Recall: the simplest version of Mayer-Vietoris applies when we have an open subspace of a topological space, but a more general version applies whenever we have a closed subspace that is a [[neighborhood retract]]. Whenever we have a graph, viewed as a topological space, any set of vertices in that graph is a closed subspace that's a neighborhood retract.)
Now, you may wonder why I'm starting to talk about topology and a theorem about the singular homology groups of topological spaces, when we are really doing graph theory and studying the directed homology monoids of a graph!
The reason is purely efficiency: we may be able to more quickly guess what's going on if we use existing results from topology, rather than invent our own Mayer-Vietoris theorem for the directed homology monoids of a graph. In the longer run (like tomorrow) maybe we should invent our own Mayer-Vietoris theorem. But first let's see what we can do with the standard one.
As you know, when we compose open graphs
and
the key step is to take the pushout of the diagram
Here we are gluing together two graphs and along , which is a graph consisting of just vertices.
Let's assume for now that and are monic. Then Mayer-Vietoris becomes relevant! Taking geometric realizations we get
and we can think of and as two spaces whose intersection is the discrete set . The pushout of this diagram can thus be identified with .
My notation is getting unwieldy here so let's write
We then get the Mayer-Vietoris exact sequence:
where is the famous 'boundary' map, but since is a discrete space this becomes
My main interest is in 'emergent feedback loops': how new 1-cycles can appear when we glue together two graphs, which weren't present in either graph. So I'm interested in how the map
can fail to be an isomorphism! And we see this happens precisely when the map fails to equal zero.
Thanks!! These ideas look extremely interesting!! Although I may need more time to absorb them. I had always been excited about these "emergent feedback loops" that can appear after gluing causal loop diagrams. But I never imagined that homology theory could play such an interesting and powerful role in identifying them until today. It is really exciting!! Thanks a lot!!! I am learning many things!!
emergentfeedbackloop.PNG
For example, if we glue a branch graph (red) and an incoherent feedforward loop (blue) along the graph with no edges, then we get a positive feedback loop and a negative feedback loop in the pushout graph. In fact, the pushout graph itself has a special name: it is called an overlapping feedforward loop in the biochemical reaction network literature (see table 2 (number 20) of Functional Motifs in Biochemical Reaction Networks). In the attached file, both and are trivial but, interestingly, is not!! So, the boundary map must not be 0, as you explained at the end. However, I think, in this particular case, one may also say is not an isomorphism from the fact that and are trivial but is not.
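As a small sanity check, here is a tiny Python computation (on a made-up example of gluing two single-edge graphs, not the example in my picture) of the number of independent undirected cycles, |E| - |V| + #components, before and after gluing:

```python
def b1(vertices, edges):
    """First Betti number |E| - |V| + #components of the underlying
    undirected graph (edges given as (source, target) pairs)."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for s, t in edges:
        parent[find(s)] = find(t)
    components = len({find(v) for v in vertices})
    return len(edges) - len(vertices) + components

# A hypothetical example: A has one edge u -> v, B has one edge v -> u,
# and we glue them along the edgeless graph K = {u, v}.
A = ({'u', 'v'}, [('u', 'v')])
B = ({'u', 'v'}, [('v', 'u')])
X = ({'u', 'v'}, [('u', 'v'), ('v', 'u')])   # the pushout of A and B over K

print(b1(*A), b1(*B), b1(*X))   # 0 0 1: a cycle emerges only after gluing
```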
John Baez said:
Now, you may wonder why I'm starting to talk about topology and a theorem about the singular homology groups of topological spaces, when we are really doing graph theory and studying the directed homology monoids of a graph!
The reason is purely efficiency: we may be able to more quickly guess what's going on if we use existing results from topology, rather than invent our own Mayer-Vietoris theorem for the directed homology monoids of a graph. In the longer run (like tomorrow) maybe we should invent our own Mayer-Vietoris theorem. But first let's see what we can do with the standard one.
Thanks!! I fully understand and agree with your point. By using the standard Mayer-Vietoris on the "geometric realisation" version, it is now clear at least "what we need to achieve" in an appropriate way!!
Thanks again for these beautiful ideas!! I am now trying to see how we can use these ideas to construct its analogue for our case (homology monoids of directed graphs).
John Baez said:
Right! We get a bijection and thus a precise link between the graph-theoretic and the homological way of thinking about minimal cycles.
The ability to work with minimal cycles in both ways should be important. I think it makes precise something that "systems dynamics" researchers already intuit in a rough way (though most are blissfully ignorant of homology theory).
Thanks.. Yes, I fully agree!!
Adittya Chaudhuri said:
emergentfeedbackloop.PNG
For example, if we glue a branch graph (red) and an incoherent feedforward loop (blue) along the graph with no edges, then we get a positive feedback loop and a negative feedback loop in the pushout graph.
We don't get a feedback loop in this picture, because the pushout graph doesn't have a directed loop (which is the kind of loop my theorems are about): you can't walk around any loop while following the direction of the arrows.
But if you turn around the edge from to , making it into an edge from to , then the pushout graph does have a directed loop. In fact it has two.
Yes, of course.. Sorry!! I meant to .. I have drawn it wrongly!!
Okay, no problem. Of course the usual homology group doesn't care which way the edges point, so your example is already good if we are studying that.
By the way, I think in the paper I'll call our new homology monoids , and the old homology groups . We can define for any commutative monoid .
Yes!! Thanks. I understand your point. I was actually trying with some of the pictures I had drawn yesterday for example 2.10.
Another thing: I was thinking, "can we use the notation instead of "? In that way, we may emphasise that we consider our directed graphs as directed topological spaces rather than as the usual topological spaces.
In fact, you were initially using this notation before as here
I am trying to sketch an approach (based on the ideas you stated) for extending to the case of homology monoids of directed graphs:
1) For any morphism of graphs I think there is an induced morphism .
2) Now, we have structured cospans and . Now, we can compose them to get another structured cospan , where and are the natural projection maps to the pushout graph. Now, we apply (1) on these cospans to get induced maps on the homology monoids. Then, I think, we may be able to construct morphisms of commutative monoids which becomes .
3) Construction of boundary map .
Maybe I am misunderstanding something !! I will think about these more!!
Adittya Chaudhuri said:
Another thing: I was thinking, "can we use the notation instead of "? In that way, we may emphasise that we consider our directed graphs as directed topological spaces rather than as the usual topological spaces.
I'm thinking a bit differently now: for a graph we can define what you're calling for any commutative monoid , but when is an abelian group this is isomorphic to the usual undirected .
So right now I feel we don't need an extra arrow to indicate directedness: the fact that the coefficients are a commutative monoid requires directedness, but when the coefficients are an abelian group the homology becomes independent of which way the edges point.
Thanks. I understand your point.
I'm trying to understand Mayer-Vietoris for graphs, and in particular the all-important boundary map
in a really concrete way when and are subgraphs of a graph , and is just a set of vertices. I realize I don't understand this map as vividly as I'd like.
I first learned homology theory in a course using William Massey's book Singular Homology Theory, and section III.3 is called "Homology of finite graphs". But it doesn't help me that much since it's mostly about computing the homology of a graph.
John Baez said:
I'm trying to understand Mayer-Vietoris for graphs, and in particular the all-important boundary map
in a really concrete way when and are subgraphs of a graph , and is just a set of vertices. I realize I don't understand this map as vividly as I'd like.
I first learned homology theory in a course using William Massey's book Singular Homology Theory, and section III.3 is called "Homology of finite graphs". But it doesn't help me that much since it's mostly about computing the homology of a graph.
I realised I also do not understand it concretely. I am working on it (trying to understand it in the way you mentioned).
It may work like this. Suppose and are subgraphs of a graph and has no edges, only vertices.
Let be a cycle. Write as a linear combination of edges of , say , plus a linear combination of edges of . Let be the boundary of . This is a linear combination of vertices in , so it's an element of . Call this . This defines a map
I haven't proved that this works, this is just based on my intuitions about homology theory! It should be possible to check that this gives an exact sequence, the Mayer-Vietoris sequence.
More formally: given , write
where are edges of and . We can uniquely write as
where
Define
(I was confused for a while about why we choose to define to be instead of , but I've decided there's an arbitrary choice of sign in the definition of .)
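Here's a quick Python sketch of that recipe with integer coefficients (the names are invented just to make the bookkeeping explicit): split a cycle on the glued graph into its part supported on the first subgraph and its part on the second, and take the boundary of the first part, which lands on the shared vertices.

```python
from collections import Counter

def boundary(chain, src, tgt):
    """d(c) = t(c) - s(c) for a 1-chain with integer coefficients."""
    d = Counter()
    for e, k in chain.items():
        d[tgt[e]] += k
        d[src[e]] -= k
    return Counter({v: k for v, k in d.items() if k != 0})

def mv_boundary(z, edges_A, src, tgt):
    """Mayer-Vietoris boundary of a 1-cycle z on the glued graph:
    restrict z to the edges of the first subgraph and take its boundary,
    a 0-chain supported on the shared vertices."""
    z_A = Counter({e: k for e, k in z.items() if e in edges_A})
    return boundary(z_A, src, tgt)

# Hypothetical example: one subgraph has the edge a: u -> v, the other has
# the edge b: v -> u, and they are glued along the vertex set {u, v}.
src = {'a': 'u', 'b': 'v'}
tgt = {'a': 'v', 'b': 'u'}
z = Counter({'a': 1, 'b': 1})                 # the emergent cycle u -> v -> u
print(mv_boundary(z, {'a'}, src, tgt))        # Counter({'v': 1, 'u': -1}): nonzero!
```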
John Baez said:
It may work like this. Suppose and are subgraphs of a graph and has no edges, only vertices.
Let be a cycle. Write as a linear combination of edges of , say , plus a linear combination of edges of . Let be the boundary of . This is a linear combination of vertices in , so it's an element of . Call this . This defines a map
I haven't proved that this works, this is just based on my intuitions about homology theory! It should be possible to check that this gives an exact sequence, the Mayer-Vietoris sequence.
More formally: given , write
where are edges of and . We can uniquely write as
where
Define
(I was confused for a while about why we choose to define to be instead of , but I've decided there's an arbitrary choice of sign in the definition of .)
Thanks!! Yes, I think your prescription should work, as it is using the same principle of barycentric subdivision to represent a cycle in as a sum of a -1-chain and a -1-chain, and then using the definition of a cycle (as an element of the kernel space) and the fact that we can take negative coefficients to conclude that is indeed an element of .
From yesterday, I was trying to think a bit graph theoretically about "how to create an analogue for directed graphs/ when the coefficients are coming from ".
I am trying to write down what I am thinking (though some or even a lot of work might still be needed to make things concrete). [It is also very possible that I am making fundamental mistakes (ignoring technical obstructions) while thinking graph-theoretically.]
We need to construct/understand
Since we have already established
1) minimal loops generate the commutative monoid of 1-cycles in directed graphs and
2) a bijection between the set of minimal cycles and the set of simple loops in directed graphs
I think we may not lose much if we consider only cycles of the form , where , and is an element of and is a minimal cycle.
Now, I think there are only two cases: [I think this needs a proof]
a) is itself a minimal cycle/simple loop in or a minimal cycle/simple loop in
b) is made up of two edge paths (not cycles) and in and respectively.
I think in the case , when we cut the into and in and , we have distinguished vertices "starting vertex of , ending vertex of and starting vertex of , ending vertex of ".
Now, I want to define as follows:
For case (a):
For case (b): starting vertex of ending vertex of starting vertex of , ending vertex of . From the construction, I think we can show that starting vertex of ending vertex of starting vertex of , ending vertex of is an element of . Of course we need to define appropriately.
Also, I think from the fact that is a minimal cycle, we can simplify the definition in case (b).
But, I am feeling that the above definition aligns with what you said before that is how the "non-zero-ness" of the boundary map can be considered as a determining factor for the existence of emergent feedback loops when we glue graphs along vertices .
I was thinking in terms of the attached example
emergeentloops.PNG
I just found a counter-example to my previous claim:
"Now, I think there are only two cases: [I think this needs a proof]
a) is itself a minimal cycle/simple loop in or a minimal cycle/simple loop in
b) is made up of two edge paths (not cycles) and in and respectively. "
In the attachment I constructed the counterexample.
counterexample(b).PNG where the unique minimal cycle in cannot be decomposed into an edge path in and an edge path in .
Hence, my definition of is not correct.
However, I think the following may work:
Consider , where , and is an element of and is a minimal cycle.
Now, I claim the following:
Lemma:
Every edge path in the graph can either be decomposed uniquely into a finite collection of maximal edge paths in and a finite collection of maximal edge paths in , or else itself is a maximal edge path in or .
I define a maximal edge path in a graph as an edge path in such that there does not exist any edge in such that and .
Now, assuming that the above lemma is correct, I reformulate my previous claim (using the same notations as used previously) as follows:
Now, I think there are only two distinct cases:
a) is itself a minimal cycle/simple loop in or a minimal cycle/simple loop in
b) is made up (uniquely) of a finite collection of maximal edge paths in and a finite collection of maximal edge paths in .
Now, I am redefining as follows:
For case (a):
For case (b):
Since is a cycle, I think it will follow that by construction in the Lemma. Of course we need to define appropriately. Note that is well defined because of the "uniqueness" condition in the Lemma
Still, maybe there are some mistakes in this approach that I am yet to find out!!
I am restating (what I stated before) for completeness:
I feel the above definition of aligns with what you said before that is how the "non-zero-ness" of the boundary map can be considered as a determining factor for the existence of emergent feedback loops when we glue graphs along vertices .
Of course after this we need to make sure makes sense from the point of Mayer-Vietoris exact sequence.
Adittya Chaudhuri said:
Lemma:
Every edge path in the graph can either be decomposed uniquely into a finite collection of maximal edge paths in and a finite collection of maximal edge paths in , or else itself is a maximal edge path in or .
This lemma must be correct, and I'd like to state it more simply like this:
Lemma:
Every edge path in the graph can be decomposed uniquely into a finite collection of maximal edge paths in and a finite collection of maximal edge paths in .
Since an empty collection is still a finite collection, this includes the case where the edge path stays in (then the collection of maximal edge paths in will be empty) and the case where stays in (then the collection of maximal edge paths in will be empty).
However, I don't like this formula:
You seem to be using as a substitute for the formula we'd use in homology with coefficients, namely
I don't think replacing subtraction by addition is a good way to deal with the fact that doesn't have subtraction.
Adittya Chaudhuri said:
However, I think the following may work:
Conisder , where , and is an element of and is a minimal cycle.
Now, I claim the following:
Lemma:
Every edge path in the graph can either be decomposed uniquely into a finite collection of maximal edge paths in and a finite collection of maximal edge paths in , or else itself is a maximal edge path in or . This lemma must be correct, and I'd like to state it more simply like this:
Lemma:
Every edge path in the graph can be decomposed uniquely into a finite collection of maximal edge paths in and a finite collection of maximal edge paths in . Since an empty collection is still a finite collection, this includes the case where the edge path stays in (then the collection of maximal edge paths in will be empty) and the case where stays in (then the collection of maximal edge paths in will be empty).
Thank you. Yes, I understand your point.
John Baez said:
However, I don't like this formula:
You seem to be using as a substitute for the formula we'd use in homology with coefficients, namely
I don't think replacing subtraction by addition is a good way to deal with the fact that doesn't have subtraction.
Yes, I agree it is not the right way to deal with the "lack of inverse" in . In the usual set up (coefficients in ), cycles are the elements of the kernel space of a boundary map . From the "alternating sum definition" (target source for 1-chains) of the boundary map, one can conclude that the boundary of any cycle is . But, in our case (coefficients in ), we do not have the privilege of defining an alternating sum. You defined a cycle as an element such that . However, we are yet to define a cycle in terms of the kernel space of a boundary map. The definition of a boundary map was not necessary till now, as technically, a right definition of should make it trivial because a graph is not expected to have any non-degenerate 2-chains. Hence, we had no difficulty in defining .
However, I think that now, to deal with the "lack of inverse" situation in when constructing a right definition of , we need to see cycles as elements of the kernel space of an appropriate boundary map. I am not sure, but I think this may be a better motivation to define an appropriate boundary map for the case (coefficients in ). I am not fully sure, but I am feeling that "congruence relations on a monoid" should play a good role here (from the way you previously discussed here)?
When doing homology with coefficients in instead of , I believe that instead of defining the space of cycles as the kernel of some map , we should define it as the equalizer of two maps and .
When working with abelian groups the equalizer of two maps and is the kernel of , so all equalizers can be expressed as kernels. This is not true when working with commutative monoids, since we can't subtract. So, we need to rephrase exact sequences in a different way.
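Here's a tiny Python illustration of the difference, on the graph with two parallel edges from one vertex to another (just a brute-force check with made-up names): over the integers the kernel of "target minus source" is nontrivial, but over the natural numbers the equalizer of the source and target maps contains only zero.

```python
from itertools import product
from collections import Counter

# The graph with two parallel edges e1, e2 : u -> v.
src = {'e1': 'u', 'e2': 'u'}
tgt = {'e1': 'v', 'e2': 'v'}

def apply_map(m, chain):
    """Extend a map on edges linearly to a map on chains."""
    out = Counter()
    for e, k in chain.items():
        if k:
            out[m[e]] += k
    return out

# With natural number coefficients: the equalizer of s and t, i.e. chains c with s(c) = t(c).
n_cycles = [dict(c) for c in (Counter({'e1': a, 'e2': b})
                              for a, b in product(range(4), repeat=2))
            if apply_map(src, c) == apply_map(tgt, c)]
print(n_cycles)   # [{'e1': 0, 'e2': 0}]: only the zero chain, no directed cycles

# With integer coefficients: a*e1 + b*e2 lies in ker(t - s) iff its boundary vanishes.
def z_boundary(a, b):
    d = {'u': 0, 'v': 0}
    for e, k in (('e1', a), ('e2', b)):
        d[tgt[e]] += k
        d[src[e]] -= k
    return d

z_cycles = [(a, b) for a, b in product(range(-2, 3), repeat=2)
            if all(v == 0 for v in z_boundary(a, b).values())]
print(z_cycles)   # includes (1, -1): e1 - e2 is a cycle of the undirected graph
```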
Thanks!! Yes, I agree. It is interesting, and it really makes sense in the context of defining exact sequences (when coefficients are in ), to replace kernels with equalisers. Does such a kind of exact sequence already exist in the literature in other contexts?
I think it works like this... this will take several posts to explain!
Suppose and are subgraphs of a graph and the intersection has no edges, only vertices.
When working with coefficients the Mayer-Vietoris sequence implies that a sequence like this is exact:
But is the space of 1-cycles mod 1-boundaries and for a graph the only 1-boundary is zero, so we can rewrite this as
What the first map here does is sum a 1-cycle on the subgraph and a 1-cycle on the subgraph to get a 1-cycle on .
Digressing a bit: note that every 1-chain in is uniquely expressed as a linear combination of edges of plus a linear combination of edges of , so the summation map
is actually an isomorphism, which we can call
This map restricts to a map sending cycles to cycles,
but this restricted map is not an isomorphism: there are cycles on that aren't sums of a cycle on and a cycle on .
Given any cycle we write it uniquely as a sum
where and , but and will not always be cycles.
We have , and where
are uniquely defined by the property that they map any edge (thought of as a 1-chain) to its source and target (thought of as 0-chains).
Now let me remember the definition of that we need to make this sequence exact:
First, note that
since is just a discrete set of points: a graph with vertices but no edges. So we can switch to talking about the sequence
In these terms, I claim that
makes the sequence exact, where this formula is defined using the fact that given any cycle we write it uniquely as a sum
where and .
First, let's note that the asymmetrical looking formula (1) is equivalent to another formula involving , namely
since , because is a cycle, so .
Next, note that this being exact:
is equivalent to being the equalizer of and , where
are any maps with
Claim. is the equalizer of if we take .
I can prove this later, but note what's nice about this! First, as I just said, it means that if we let with this choice of and , we get exactness at this point of the Mayer-Vietoris sequence when we're working with coefficients:
But second, since and don't involve subtraction, we have a version of the Mayer-Vietoris sequence that works with coefficients, saying that is the equalizer of and !
In fact, I'll prove the claim working with coefficients, where a 1-cycle is defined as a 1-chain for which .
Proof of Claim. To prove this, first we show that , i.e.
for all . By definition of we have
and by definition of and we have
and these are equal since is a cycle. :check_mark:
Next, we show that if
for some , then .
By definition says
but since is a cycle we have , and thus , which together with the above equation implies
(Here I'm using cancellation in . It's a cancellative monoid! I'm not using subtraction.)
(a) and (b) say that and are 1-cycles! Thus, we have
where . So as desired! :black_large_square:
I feel this idea is similar to what you were saying, but it avoids the trouble you were running into with wanting to define but not being able to subtract, by switching from a kernel to an equalizer. It also avoids the need to explicitly break paths into a bunch of smaller paths, by using the fact that any 1-chain on can be uniquely broken into where is a 1-chain on and is a 1-chain on . I feel you had a good intuition for the situation and I'm polishing up what you were trying to say.
Adittya Chaudhuri said:
Thanks!! Yes, I agree. It is interesting, and it really makes sense in the context of defining exact sequences (when coefficients are in ), to replace kernels with equalisers. Does such a kind of exact sequence already exist in the literature in other contexts?
Hopefully this isn't too far off base since I haven't followed this thread in detail, but doing homological algebra with commutative monoids reminds me of Homological algebra in characteristic one by Connes and Consani. Although this is about homological algebra over the Boolean semifield (where ), I think that essentially the same approach should apply to commutative monoids, and perhaps Homology of systemic modules is a good reference for this? I just came across this, so not sure.
Here's my guess as to how this all works: let's define a category where objects are commutative monoids and a morphism is a pair of additive maps , to be thought of as a version of the formal difference which avoids actually forming the difference. Such pairs compose via the twisted composition . Furthermore, call such a morphism "null" if it is of the form .
This gives notions of kernel and image: the kernel of is the universal morphism into whose composite with is null. There's a dual notion of cokernel. Furthermore, we get an induced notion of image, defined as the kernel of the cokernel projection.
Now consider composable morphisms and that have a null composite, meaning that . The homology monoid can now be defined as the kernel of the second morphism modulo the image of the first, where "modulo" is again in the cokernel sense.
All of this is really based on Marco Grandis's elegant and very general approach to non-abelian homological algebra.
Does any of this look like it could be relevant to you?
What I'm not sure about is whether the resulting category is really going to satisfy Grandis's axioms for a "homological category". If not, then there are known ways around this at the expense of having to introduce further bookkeeping. I can say more about this if it turns out to be relevant.
John Baez said:
I feel this idea is similar to what you were saying, but it avoids the trouble you were running into with wanting to define but not being able to subtract, by switching from a kernel to an equalizer. It also avoids the need to explicitly break paths into a bunch of smaller paths, by using the fact that any 1-chain on can be uniquely broken into where is a 1-chain on and is a 1-chain on . I feel you had a good intuition for the situation and I'm polishing up what you were trying to say.
Thank you so much for explaining your ideas in such a detailed and vivid way. I find your ideas very interesting.
Yes, I agree my approach was to break a minimal cycle in uniquely into a "collection of paths in " and a "collection of paths in ", and then I was trying to define in such a way that it maps all the minimal cycles in or minimal cycles in to . However, my approach was failing because I was not able to define in a right way on the minimal cycles in which are not made up of minimal cycles in and minimal cycles in so that we get an exact sequence.
After your explanation, I realised that although I wanted to do the right thing, I was not using the correct language to express what I want in a consistent way. Now, I understand how the "Mayer-Vietoris sequence for a directed graph" is not actually an exact sequence, but very close to it (as we can recover the exact sequence if we work with coefficients in ). More precisely, your version of Mayer-Vietoris is a generalisation of the usual Mayer-Vietoris and can be framed precisely in the form of the following statement:
is the equalizer of if we take .
I find it very interesting how "a very realistic idea of finding the existence of emergent feedback loops while glueing directed graphs" naturally leads to a "generalisation" of the Mayer-Vietoris sequence in undirected graphs. More interestingly, this generalisation was necessary to find a right language to phrase the problem of finding emergent feedback loops.
Tobias Fritz said:
Does any of this look like it could be relevant to you?
Thank you so much!! These ideas are very interesting, and I think they are very much related to what we are doing and what John Baez was saying about equalisers. I would definitely be interested in knowing more about these objects. Can you please explain a bit more about the significance of "twisted composition" ?
I understood that if and are composable and is null, then we have .
Sure! I realize that I had gotten the order of composition mixed up above and fixed that now.
It's probably more intuitive to write the morphisms as pairs to underline the analogy with the formal difference . In this notation, the twisted composition is
corresponding to the two terms in .
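Here's a throwaway Python sketch of this bookkeeping (using matrices over the natural numbers as the additive maps, purely for illustration): composing pairs with the twisted rule, and checking that taking formal differences turns it into ordinary composition of the differences.

```python
import numpy as np

def twisted_compose(g, f):
    """Compose pairs (g_plus, g_minus) o (f_plus, f_minus) with the twisted rule,
    mimicking (g+ - g-) o (f+ - f-) without ever subtracting."""
    gp, gm = g
    fp, fm = f
    return (gp @ fp + gm @ fm, gp @ fm + gm @ fp)

# Additive maps between free commutative monoids, as matrices over the naturals.
f = (np.array([[1, 2], [0, 1]]), np.array([[0, 1], [1, 0]]))   # f = (f+, f-)
g = (np.array([[1, 0], [2, 1]]), np.array([[0, 2], [0, 0]]))   # g = (g+, g-)

hp, hm = twisted_compose(g, f)
# Sanity check: the formal difference of the composite equals the composite
# of the formal differences, computed with integer matrices.
lhs = hp - hm
rhs = (g[0] - g[1]) @ (f[0] - f[1])
print(np.array_equal(lhs, rhs))   # True
```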
Tobias Fritz said:
Sure! I realize that I had gotten the order of composition mixed up above and fixed that now.
Thanks!!
Tobias Fritz said:
It's probably more intuitive to write the morphisms as pairs to underline the analogy with the formal difference . In this notation, the twisted composition is
corresponding to the two terms in .
Thanks!! I think you are constructing a category where "the desired " can be written as a morphism in your category, so that we can avoid the issue of "existence of inverse elements". Am I understanding correctly?
Exactly, that's the idea! And furthermore, if you consistently work with pairs like that, then there is an established machinery for homological algebra that applies. You can just turn the crank, and you'll get well-behaved concepts of chain complex, homology objects, connecting maps and even long exact sequences, provided that Grandis's axioms are satisfied. (I'm not sure if they are; in case that they're not, then there's still a workaround that I can explain.)
Thanks!! Sounds very interesting!!
(And I wouldn't want to call it "my" category because those pairs and twisted composition are already considered in the papers I've linked to.)
@John Baez According to what I understand: The failure of the map to be an isomorphism is the criterion for the existence of an emergent feedback loop in the "composite/glued" directed graphs. However, technically, this is a mathematical formulation of the "idea of emergence in the framework of causal loop diagrams". I think the Mayer-Vietoris of directed graphs (in terms of equalisers) elegantly describes this very same thing: the equaliser of and is an isomorphism if and only if .
My questions are the following:
1) What are some nice criteria (other than , because to know , I think we precisely need to know whether there exists any cycle in which is not made up of a cycle in and a cycle in ) which would imply is not an isomorphism?
2) Can we give a measure of how much is far from being an isomorphism?
If I am not misunderstanding, then I feel each of these questions may be relevant from the point of view of realistic applications in systems dynamics or systems biology.
Thanks, Tobias, for all these potentially helpful ideas! It will take me a while to absorb them but they may give a more systematic way of developing an analogue of the Mayer-Vietoris exact sequence for the homology of a directed graph with coefficients in (or any other commutative monoid).
Sounds good! I think Grandis's Homological Algebra In Strongly Non-Abelian Settings, which this is based on, is really pretty, useful and quite underappreciated.
Adittya Chaudhuri said:
John Baez According to what I understand: The failure of the map to be an isomorphism is the criterion for the existence of an emergent feedback loop in the "composite/glued" directed graphs. However, technically, this is a mathematical formulation of the "idea of emergence in the framework of causal loop diagrams". I think the Mayer-Vietoris of directed graphs (in terms of equalisers) elegantly describes this very same thing: the equaliser of and is an isomorphism if and only if .
My questions are the following:
1) What are some nice criteria (other than , because to know , I think we precisely need to know whether there exists any cycle in which is not made up of a cycle in and a cycle in ) which would imply is not an isomorphism?
2) Can we give a measure of how much is far from being an isomorphism?
These are good questions. I don't know good answers.
The answers may involve relative homology. Suppose you have a graph where and are the walking edge, consists of two vertices, and has a nonzero cycle. Where does the cycle in come from? It seems to come from relative cycles and that "fit together to form a cycle":
so is a cycle.
Here I am using coefficients, which allows me to mention and . To use coefficients we should instead say
i.e. " starts where ends, and starts where ends".
That's all I have to say right now - maybe you can go further.
I still feel I haven't gotten the analogue of the Mayer-Vietoris sequence perfectly worked out, and I want to do that, perhaps using ideas from the papers Tobias pointed us to. I also want to write up the stuff we know is working well.
Here's one more thought on what I've been suggesting. The category that I've proposed should have a "forgetful" functor to abelian groups, which maps every commutative monoid to its enveloping abelian group, and maps every morphism to the actual difference . It seems to me that this functor should preserve the kernels and cokernels. If this is indeed the case, then this functor is "exact". In particular, it should commute with homology: the homology objects associated to a chain complex of commutative monoids will be such that their enveloping groups are precisely the homology groups of the chain complex of the enveloping groups.
So the enveloping groups of the homology monoids that I'd construct should be the usual homology groups over of the underlying undirected graph. Is this what you'd expect to happen?
No. What we want (and have) is that the usual 1st homology group over of a directed graph cannot be functorially constructed from its 1st homology monoid, nor vice versa. They convey different information because the homology group detects cycles that are not linear combinations of directed edge loops, while the homology monoid only detects cycles that are linear combinations of directed edge loops. We saw examples earlier here. You can find an example with two vertices and two edges.
So, we're not doing homology of ordinary spaces with novel coefficients; we're doing homology of a very simple class of directed spaces with coefficients in a commutative monoid (mainly ).
John Baez said:
No. What we want (and have) is that the usual 1st homology group over of a directed graph cannot be functorially constructed from its 1st homology monoid, because there are cycles in the usual sense that are not linear combinations of directed loops. We saw examples earlier here. You can find an example with two vertices and two edges.
But I'm trying to look at the homology over of the underlying undirected graph, which I think would be the construction that corresponds to forming enveloping groups of the chain complex and then looking at homology. Then it seems to me that the two parallel edges example is consistent with my expectation, no?
(I'm also not 100% sure that my approach really conforms with my expectation; I guess it would be best to work out some examples of what those kernels in the category that I've described actually are.)
Okay. I don't want to recover the homology of the underlying undirected graph, since I already know everything I need to know about that. But we can do this:
Start with a directed graph and first apply the free commutative monoid functor to this and get something I'll call . So far this is very interesting to me.
Then apply the enveloping group functor and get something I'll call . (I'm lazy so I keep calling the maps and .) Define and get a 2-step chain complex of abelian groups. The homology of this is the usual homology of the underlying undirected graph.
Then it seems to me that the two parallel edges example is consistent with my expectation, no?
Yes. We'd get a chain complex with .
I'm mainly interested in the information that gets lost when we apply the enveloping group functor!
Yep, that's what I'm getting at! Of course these enveloping groups are not really the thing that's of interest to you, but what I'm trying to see is whether my proposal would satisfy your desiderata, and that looks promising. (Although my head is arguably too cloudy now to think clearly and I should call it a day.)
I must be misunderstanding because I was trying to explain why your proposal doesn't satisfy my desiderata.
By the way, when you said
forming enveloping groups of the chain complex and then looking at homology.
I didn't understand that, because in the proposal I just sketched we don't have a chain complex until we take enveloping groups: we instead have a pair of parallel maps . Only after taking the enveloping groups can we subtract these and get the desired chain complex, with differential .
(Admittedly any morphism between anything can be seen as a 2-step chain complex, because the condition doesn't enter! But I don't want to think of or as a 2-step chain complex... unless you're telling me I should.)
John Baez said:
I didn't understand that, because in the proposal I just sketched we don't have a chain complex until we take enveloping groups: we instead have a pair of parallel maps . Only after taking the enveloping groups can we subtract these and get the desired chain complex, with differential .
Ah, what I was referring to is a chain complex in the category that I had described, where morphisms are precisely pairs of parallel additive maps between commutative monoids. A chain complex in this category is a composable sequence of such pairs , where the twisted composition of any two is null. This means that .
So it's a chain complex in a certain generalized sense, namely the one of Grandis's non-abelian homological algebra. I believe that this is exactly what you have.
The enveloping groups of the directed homology monoids should not be the same as the undirected homology groups because when you move from directed to undirected entirely new cycles appear that are not formal differences of any directed cycles. That's what the example of two parallel arrows is supposed to illustrate.
John Baez said:
I must be misunderstanding because I was trying to explain why your proposal doesn't satisfy my desiderata.
Sorry, you're right, and I can blame it on my cloudy head :sweat_smile: So either what I'm suggesting isn't what you want, or my expectation about homology commuting with group envelopes is wrong, or "my" homology monoid acts differently in a way that you perhaps would want if interpreted suitably. One difference is that there then would be two maps from cycles to chains, and this probably means that elements of my monoid of cycles should not be thought of as sums of directed loops. But they could still have a different meaningful interpretation in which directed loops are encoded.
I'll think about it, thanks!
John Baez said:
Adittya Chaudhuri said:
John Baez According to what I understand: The failure of the map to be an isomorphism is the criterion for the existence of an emergent feedback loop in the "composite/glued" directed graphs. However, technically, this is a mathematical formulation of the "idea of emergence in the framework of causal loop diagrams". I think the Mayer-Vietoris of directed graphs (in terms of equalisers) elegantly describes this very same thing: the equaliser of and is an isomorphism if and only if .
My questions are the following:
1) What are some nice criteria (other than , because to know , I think we precisely need to know whether there exists any cycle in which is not made up of a cycle in and a cycle in ) which would imply is not an isomorphism?
2) Can we give a measure of how much is far from being an isomorphism?
These are good questions. I don't know good answers.
The answers may involve relative homology. Suppose you have a graph where and are the walking edge, consists of two vertices, and has a nonzero cycle. Where does the cycle in come from? It seems to come from relative cycles and that "fit together to form a cycle":
so is a cycle.
Here I am using coefficients, which allows me to mention and . To use coefficients we should instead say
i.e. " starts where ends, and starts where ends".
That's all I have to say right now - maybe you can go further.
I still feel I haven't gotten the analogue of the Mayer-Vietoris sequence perfectly worked out, and I want to do that, perhaps using ideas from the papers Tobias pointed us to. I also want to write up the stuff we know is working well.
Thank you!! Interesting. Yes, I agree that relative homology is the correct language for characterising "which portion of a cycle in comes from " and "which portion comes from " by looking at how and are glued together. I am trying to understand it more clearly (concretely) so as to say something precise about my previous questions (1) and (2).
I remembered now that my former student @Jade Master did work that's quite relevant to this issue. Probably the best thing to read is Sections 5 and 6 of her thesis Composing behaviors of networks - these sections are called "Operational semantics of enriched graphs" and "Compositionality for the algebraic path problem".
I could try to summarize this work, but we actually want something different. Here's one idea we can extract from Jade's thesis. We are studying a graph for which the intersection is just a set of vertices, and trying to understand , which is isomorphic to the set of cycles in . As we know, every cycle in is a sum of minimal cycles, and every minimal cycle is an equivalence class of simple loops, where two simple loops are equivalent iff they differ only by their starting point - so let's focus on those.
Every simple loop either
or
or
or... etc. So the set of simple loops in is -graded!
So is the set of equivalence classes of simple loops (check that the grade doesn't depend on the representative) - or in other words, minimal cycles.
Since cycles are sums of minimal cycles, but not necessarily in a unique way, I don't know if this grading on the set of minimal cycles gives a grading on ... but at least it gives a [[filtration]]:
where the submonoid consists of all cycles that are sums of minimal cycles of grade .
Note that
and all the higher are 'corrections' to the simple but wrong guess that this image is all of .
Hmm! I think we can analyze the higher in terms of the map
since this map takes a minimal cycle and gives an element of that keeps track of where this cycle crosses between and .
Thank you. I am now trying to understand your ideas.
Although it may not be anything serious, I am sharing some realisations that I had today:
I think given a directed graph , the bijection between the set of minimal cycles in and the set of equivalence classes of simple loops in precisely tells us that our minimal cycles correspond to directed circles (according to the definition of a [[directed topological space]]). I am thinking like this because when we take the equivalence class of a simple loop, we just remember the direction of the simple loop but forget everything else, and thus, in a way, I think we end up with a topological space homeomorphic to a circle with a sense of direction. Now, since the minimal cycles of a graph generate all the cycles, we can focus only on the directed circles in a graph. Hence, in the context of emergent directed cycles, when we glue graphs and along (containing only vertices), we may think of a measure of emergence as the set , where , and are the sets of directed circles in , and , respectively. Now, if we are working with finite graphs, then we may say that the natural number representing the cardinality of , i.e. , gives a measure of the emergence here.
One final comment on the Grandis-Connes-Consani approach. I've just done some calculations, and it looks like the precise category that I've described probably isn't the "right" one, because it seems hard to determine what kernels and cokernels might be.
But following the related ideas from Section 5 of Homological algebra in characteristic one instead, I've arrived at the following notion of "kernel" for the pair of additive maps associated to a directed graph. This kernel is the monoid of all pairs for which . You should think of such a pair as representing the formal difference without actually taking the difference, and thereby the defining equation becomes the positive analogue of the usual cycle equation
This monoid of cycles has a canonical involution given by . The elements that are invariant under the involution should be thought of as "trivial", because they trivially satisfy the defining equation. The equalizer that you're considering is the submonoid of elements of the form . On the other hand, the usual group of cycles of the underlying undirected graph is what you get upon identifying all trivial elements with zero.
For example for the graph with two parallel edges and , writing and likewise for identifies the space of cycles with the monoid . Taking necessitates , recovering the fact that there are no directed cycles. Quotienting by trivial cycles produces a group isomorphic to the standard group of cycles . So it seems to me that this object contains all the information that one would hope for, even though it's not so easy to interpret the meaning of a general element.
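If it helps, here is a quick brute-force Python check of that example (with ad hoc names): I enumerate small pairs of chains satisfying the positive cycle equation, confirm that the only pair with vanishing second component is the zero pair (no directed cycles), and that the pair representing the formal difference of the two edges does appear.

```python
from itertools import product
from collections import Counter

# Two parallel edges e1, e2 : u -> v.
src = {'e1': 'u', 'e2': 'u'}
tgt = {'e1': 'v', 'e2': 'v'}

def vmap(m, chain):
    """Extend a map on edges linearly to a map on chains."""
    out = Counter()
    for e, k in chain.items():
        if k:
            out[m[e]] += k
    return out

def is_pair_cycle(a, b):
    """The 'positive' cycle equation s(a) + t(b) = t(a) + s(b),
    saying the formal difference a - b would be a cycle."""
    return vmap(src, a) + vmap(tgt, b) == vmap(tgt, a) + vmap(src, b)

pairs = []
for a1, a2, b1, b2 in product(range(3), repeat=4):
    a = Counter({'e1': a1, 'e2': a2})
    b = Counter({'e1': b1, 'e2': b2})
    if is_pair_cycle(a, b):
        pairs.append(((a1, a2), (b1, b2)))

# Pairs with second component zero correspond to directed cycles: only zero qualifies.
print([p for p in pairs if p[1] == (0, 0)])     # [((0, 0), (0, 0))]
# But (e1, e2) is a pair-cycle: it represents the undirected cycle e1 - e2.
print(((1, 0), (0, 1)) in pairs)                # True
```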
Thank you @Tobias Fritz. I will take some time to understand your ideas.
John Baez said:
Hmm! I think we can analyze the higher in terms of the map
since this map takes a minimal cycle and gives an element of that keeps track of where this cycle crosses between and .
But, did we actually define the map when the coefficients are from ? I thought you defined the Mayer-Vietoris for the case of directed graphs in the form of the following statement:
is the equalizer of if we take .
Am I misunderstanding?
Tobias Fritz said:
This kernel is the monoid of all pairs for which . You should think of such a pair as representing the formal difference without actually taking the difference, and thereby the defining equation becomes the positive analogue of the usual cycle equation
I've figured out a good way to think about it: the first component is a directed chain, while the second component plays the role of a directed chain with the opposite direction, and the equation then says that these two together must form a cycle. With this idea in mind, it's clear that this notion of kernel keeps track of both directed cycles and undirected cycles at the same time. On a related note, the involution keeps track of orientation reversal as a structure, while in the usual undirected setting with coefficients orientation reversal is more like a mere property (namely the existence of additive inverses).
This construction seems evocative of the Grothendieck group of the integers. Is there a connection?
Tobias Fritz said:
Tobias Fritz said:
This kernel is the monoid of all pairs for which . You should think of such a pair as representing the formal difference without actually taking the difference, and thereby the defining equation becomes the positive analogue of the usual cycle equation
Yes! For a commutative monoid , you can factor the construction of its enveloping group (Grothendieck group) into two steps: first, form , which is an involutive monoid with respect to the swap as involution, to be thought of as some sort of "pre-negation". Second, identify all elements of the form , which are precisely those that are invariant under the involution, with . The resulting monoid happens to be a group, and it's the Grothendieck group of .
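Here's a throwaway Python illustration of this two-step construction for the natural numbers (names are ad hoc): work with pairs, and identify two pairs when they differ by a swap-invariant element, which amounts to cancelling off the common part.

```python
def groth_class(pair):
    """Canonical representative of a pair (a, b) in N x N after identifying
    all swap-invariant pairs (m, m) with zero: cancel the common part."""
    a, b = pair
    m = min(a, b)
    return (a - m, b - m)

def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

# (a, b) is thought of as the formal difference a - b.
x = (2, 5)   # represents -3
y = (4, 1)   # represents +3
print(groth_class(add(x, y)))                       # (0, 0): mutually inverse
print(groth_class((7, 3)), groth_class((10, 6)))    # both (4, 0): the same class
# The classes form a group isomorphic to the integers: (a, b) has inverse (b, a).
```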
John Baez said:
I remembered now that my former student Jade Master did work that's quite relevant to this issue. Probably the best thing to read is Sections 5 and 6 of her thesis Composing behaviors of networks - these sections are called "Operational semantics of enriched graphs" and "Compositionality for the algebraic path problem".
Thank you! Yes, I will read these portions. I think I am slowly understanding the analogy here. I will try to write it down (from the perspective of our framework).
Adittya Chaudhuri said:
John Baez said:
Hmm! I think we can analyze the higher in terms of the map
since this map takes a minimal cycle and gives an element of that keeps track of where this cycle crosses between and .
But, did we actually define the map when the coefficients are from ?
No, not really - I made a mistake here. To fix that mistake I could be more clear about coefficients and define a map
as follows:
1) First, there's a natural map
since is (isomorphic to) the commutative monoid of -linear combinations of equivalence classes of simple loops, and any such linear combination gives a cycle in . Wow, that sounds complicated when I say it, but the picture in my mind is simple! Another way to say it is:
We can define the directed first homology of a directed graph with coefficients in any commutative monoid, and any homomorphism of commutative monoids induces a map
in a functorial way. To get we apply this to the inclusion .
2) Conjecture: The map is injective.
This doesn't really matter for what I'm about to say, but my mental image is that cycles with coefficients in are just cycles with coefficients in with a special property.
3) So, we can form the composite
and that's what I was unconsciously doing when I wrote "".
There is still more to say about how we might use this map , or some similar map, to understand emergent cycles that form when we glue together two graphs. But I should think about it some more first.
Adittya Chaudhuri said:
John Baez said:
I remembered now that my former student Jade Master did work that's quite relevant to this issue. Probably the best thing to read is Sections 5 and 6 of her thesis Composing behaviors of networks - these sections are called "Operational semantics of enriched graphs" and "Compositionality for the algebraic path problem".
Thank you! Yes, I will read these portions. I think I am slowly understanding the analogy here. I will try to write it down (from the perspective of our framework).
Here's one thing that may help. Jade considered, not labeled graphs, but what she called "-matrices" for a rig . An -matrix is simply a set together with a function . So, it's a square matrix with entries in , where the rows and columns are indexed by the set .
If we take to be the boolean rig , an -matrix is a directed graph with at most one edge from any vertex to any vertex : we say there's an edge from to iff .
Note this is not the same as our kind of graph (namely a [[quiver]]): you can think of it as a special case. Both -matrices and quivers are a special case of graphs enriched in a category . A graph enriched in is a set together with a function assigning to each ordered pair of elements an object of .
(A quiver is a graph enriched in . An -matrix is a graph enriched in the discrete category corresponding to the set .)
Also, Jade focused on the case where the rig is actually a [[quantale]]. A quantale is a monoid in the monoidal category of [[sup-semilattices]]. A sup-semilattice is a poset that has all colimits, called 'suprema' or 'sups'. Any quantale becomes a rig where the addition is given by the binary case of the sup and the multiplication is given by the monoidal structure.
But why did Jade restrict to quantales?
Here's why: so she could do infinite sums without worrying about them! You'll see a lot of 'matrix multiplication' in her work, where she multiplies -matrices using the usual formula
When you use a quantale, these sums make sense even when ranges over an infinite set!
However, if we restrict her ideas to make sure we're only doing finite sums, we can then generalize them to graphs enriched in a rig category like or .
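Here's a tiny Python illustration of that matrix multiplication over the boolean rig (with made-up data): 'addition' is 'or' and 'multiplication' is 'and', so powers of an adjacency matrix answer reachability questions.

```python
def bool_matmul(M, N):
    """Matrix multiplication over the boolean rig: + is 'or', x is 'and'."""
    n = len(M)
    return [[any(M[i][k] and N[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Adjacency matrix of a small directed graph on vertices 0, 1, 2,
# with edges 0 -> 1 and 1 -> 2.
M = [[False, True, False],
     [False, False, True],
     [False, False, False]]

M2 = bool_matmul(M, M)
print(M2[0][2])   # True: there is a path of length 2 from 0 to 2
print(M[0][2])    # False: but no single edge from 0 to 2
```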
Tobias Fritz said:
Tobias Fritz said:
This kernel is the monoid of all pairs for which . You should think of such a pair as representing the formal difference without actually taking the difference, and thereby the defining equation becomes the positive analogue of the usual cycle equation
I've figured out a good way to think about it: the first component is a directed chain, while the second component plays the role of a directed chain with the opposite direction, and the equation then says that these two together must form a cycle. With this idea in mind, it's clear that this notion of kernel keeps track of both directed cycles and undirected cycles at the same time. On a related note, the involution keeps track of orientation reversal as a structure, while in the usual undirected setting with coefficients orientation reversal is more like a mere property (namely the existence of additive inverses).
Great, now I understand what you're doing! This description is quite vivid.
@Adittya Chaudhuri - check out Section 2.5 of our paper; I've written up all our work on the homology of directed graphs except for our still developing thoughts about composition of open graphs and emergent cycles.
Among other things I wrote a proof of what we were calling Claim 3. I've strengthened it a bit:
Claim 3. Every loop that gives the same cycle as a simple loop
is of the form
where we treat the subscripts as elements of and do addition mod .
Proof. Suppose
is a simple loop and
gives the same cycle as , so that
Since is simple, all the vertices are distinct, so the edges are distinct. We must thus have , with the list of edges being some permutation of the list of edges . Since all the vertices are distinct, the only permutations that make into a loop are cyclic permutations. :black_large_square:
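Here's a small Python check of what this claim amounts to in practice (illustrative names only): two simple loops give the same 1-chain exactly when one is a cyclic rotation of the other, and a non-cyclic rearrangement of the same edges isn't even a loop.

```python
from collections import Counter

def same_cycle(loop1, loop2):
    """Do two edge lists give the same 1-chain (the same multiset of edges)?"""
    return Counter(loop1) == Counter(loop2)

def cyclic_rotation(loop1, loop2):
    """Is loop2 a cyclic rotation of loop1?"""
    if len(loop1) != len(loop2):
        return False
    return any(loop2 == loop1[i:] + loop1[:i] for i in range(len(loop1)))

triangle = ['e1', 'e2', 'e3']          # a simple loop u -> v -> w -> u
print(same_cycle(triangle, ['e2', 'e3', 'e1']),
      cyclic_rotation(triangle, ['e2', 'e3', 'e1']))   # True True
print(same_cycle(triangle, ['e1', 'e3', 'e2']),
      cyclic_rotation(triangle, ['e1', 'e3', 'e2']))   # True False: not even a loop
```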
John Baez said:
Adittya Chaudhuri said:
John Baez said:
Hmm! I think we can analyze the higher in terms of the map
since this map takes a minimal cycle and gives an element of that keeps track of where this cycle crosses between and .
But, did we actually define the map when the coefficients are from ?
No, not really - I made a mistake here. To fix that mistake I could be more clear about coefficients and define a map
as follows:
1) First, there's a natural map
since is (isomorphic to) the commutative monoid of -linear combinations of equivalence classes of simple loops, and any such linear combination gives a cycle in . Wow, that sounds complicated when I say it, but the picture in my mind is simple! Another way to say it is:
We can define the directed first homology of a directed graph with coefficients in any commutative monoid, and any homomorphism of commutative monoids induces a map
in a functorial way. To get we apply this to the inclusion .
2) Conjecture: The map is injective.
This doesn't really matter for what I'm about to say, but my mental image is that cycles with coefficients in are just cycles with coefficients in with a special property.
3) So, we can form the composite
and that's what I was unconsciously doing when I wrote "".
There is still more to say about how we might use this map , or some similar map, to understand emergent cycles that form when we glue together two graphs. But I should think about it some more first.
Thanks for explaining. I find it interesting that the homomorphism of commutative monoids induces a functor , as you explained. Somehow, it makes me recall that we already have a functor from the defined as or . Now, I think the latter case means we can functorially change the labels on the edges of our graphs (as when we change the labels, we keep the underlying directed graph structure as it is), while I think in the former case it means we can functorially forget/change the sense of direction on the edges of our graphs (as when we change the coefficients of our homology monoids, we keep the underlying "undirected graph structure" as it is). It makes me wonder what happens if we consider coefficients from more general commutative monoids; the "intuition of direction" is not making much sense to me there at the moment. Then again, probably I am thinking in the wrong direction.
John Baez said:
Adittya Chaudhuri said:
John Baez said:
I remembered now that my former student Jade Master did work that's quite relevant to this issue. Probably the best thing to read is Sections 5 and 6 of her thesis Composing behaviors of networks - these sections are called "Operational semantics of enriched graphs" and "Compositionality for the algebraic path problem".
Thank you! Yes, I will read these portions. I think I am slowly understanding the analogy here. I will try to write it down (from the perspective of our framework).
Here's one thing that may help. Jade considered, not labeled graphs, but what she called "-matrices" for a rig . An -matrix is simply a set together with a function . So, it's a square matrix with entries in , where the rows and columns are indexed by the set .
If we take to be the boolean rig , an -matrix is a directed graph with at most one edge from any vertex to any vertex : we say there's an edge from to iff .
Note this is not the same as our kind of graph (namely a [[quiver]]): you can think of it as a special case. Both -matrices and quivers are special cases of graphs enriched in a category . A graph enriched in is a set together with a function assigning an object of to each pair of elements of that set.
(A quiver is a graph enriched in . An -matrix is a graph enriched in the discrete category corresponding to the set .)
Also, Jade focused on the case where the rig is actually a [[quantale]]. A quantale is a monoid in the monoidal category of [[sup-semilattices]]. A sup-semilattice is a poset that has all colimits, called 'suprema' or 'sups'. Any quantale becomes a rig where the addition is given by the binary case of the sup and the multiplication is given by the monoidal structure.
But why did Jade restrict to quantales?
Here's why: so she could do infinite sums without worrying about them! You'll see a lot of 'matrix multiplication' in her work, where she multiplies -matrices using the usual formula
When you use a quantale, these sums make sense even when ranges over an infinite set!
However, if we restrict her ideas to make sure we're only doing finite sums, we can then generalize them to graphs enriched in a rig category like or .
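To make this concrete, here is a minimal Python sketch (my own illustration, not from Jade's thesis; the rigs and matrices below are made up) of finite 'matrix multiplication' over a rig. Over the boolean rig the formula answers reachability questions, and over the tropical rig (min, +) it computes shortest path lengths, which is the flavor of the algebraic path problem.

```python
# Each rig is packaged as (zero, one, add, mul); this tuple encoding is my own ad hoc choice.
bool_rig = (False, True, lambda a, b: a or b, lambda a, b: a and b)
tropical_rig = (float('inf'), 0.0, min, lambda a, b: a + b)

def matmul(rig, A, B):
    """Multiply square matrices with entries in a rig, using the usual formula:
    C[i][k] is the rig-sum over j of A[i][j] * B[j][k].  All sums here are finite."""
    zero, one, add, mul = rig
    n = len(A)
    C = [[zero] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            entry = zero
            for j in range(n):
                entry = add(entry, mul(A[i][j], B[j][k]))
            C[i][k] = entry
    return C

# Over the boolean rig a matrix is an adjacency matrix: A[i][j] is True iff there is
# an edge from vertex i to vertex j.  Its square records which vertices are joined by
# a path of length 2.
A = [[False, True, False],
     [False, False, True],
     [True, False, False]]
print(matmul(bool_rig, A, A))

# Over the tropical rig (min, +) the same formula computes shortest 2-step distances.
D = [[0.0, 2.0, float('inf')],
     [float('inf'), 0.0, 1.0],
     [5.0, float('inf'), 0.0]]
print(matmul(tropical_rig, D, D))
```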
Thank you very much. Graphs enriched in seem to be a very general notion of graphs. Interesting!! Yes, I agree that if we make sure to do only finite (well-defined) sums, we may be able to extend Jade's ideas to the case of categories enriched in or .
John Baez said:
Adittya Chaudhuri - check out Section 2.5 of our paper; I've written up all our work on the homology of directed graphs except for our still developing thoughts about composition of open graphs and emergent cycles.
Among other things I wrote a proof of what we were calling Claim 3. I've strengthened it a bit:
Claim 3. Every loop that gives the same cycle as a simple loop
is of the form
where we treat the subscripts as elements of and do addition mod .
Proof. Suppose
is a simple loop and
gives the same cycle as , so that
Since is simple, all the vertices are distinct, so the edges are distinct. We must thus have , with the list of edges being some permutation of the list of edges . Since all the vertices are distinct, the only permutations that make into a loop are cyclic permutations. :black_large_square:
Thank you. Yes, I will read Section 2.5 of our paper. Also, thanks for the proof of Claim 3 with the strengthening.
Sure, thanks!
By the way, here's a fun puzzle for everyone:
Puzzle. Show this strengthened version of Claim 3 is false: every loop that gives the same cycle as a loop
is of the form
where we treat the subscripts as elements of and do addition mod .
@Adittya Chaudhuri wrote:
Graphs enriched in seem to be a very general notion of graphs.
Yes, I like them. They're important because any category enriched in a monoidal category has an underlying graph enriched in the underlying category : we just forget the composition and identities.
I find it interesting that the homomorphism of commutative monoids induces a functor , as you explained. Somehow, it makes me recall that we already have a functor from the defined as or .
Yes, I'm a bit confused about the relation between these two concepts: graphs labeled by elements of a commutative monoid, and cycles on a graph with coefficients taken from some commutative monoid!
In general if is a commutative monoid an -labeling of a graph is exactly the same as a 1-cochain on with coefficients in . We should probably say this and exploit it a bit.
(I haven't defined 1-cochains with coefficients in a commutative monoid, so my claim above is more like a definition than a theorem, but if you extend the usual definition of 1-cochain from abelian groups to commutative monoids, that's what you get.)
I hadn't expected this paper to become so homological, but I think this direction is good: we get some interesting theorems, and we begin to set up a connection between fancy-sounding pure math and very elementary-sounding applied math.
Yes, I agree, and I find this connection very interesting!!
Should we do functorial semantics (like construction of a symmetric monoidal double functor/lax symmetric monoidal double functor) in this paper, in the section on open labeled graphs? And use Mayer-Vietoris in the semantics part?
Yes, I think we should.
John Baez said:
Adittya Chaudhuri wrote:
Graphs enriched in seem to be a very general notion of graphs.
Yes, I like them. They're important because any category enriched in a monoidal category has an underlying graph enriched in the underlying category : we just forget the composition and identities.
Nice!! Yes, it is a natural generalisation (enrichment) of the forgetful functor , because every small category is a locally small category and, hence, a category enriched in .
John Baez said:
Yes, I think we should.
Nice!!
Adittya Chaudhuri said:
John Baez said:
Adittya Chaudhuri wrote:
Graphs enriched in seem to be a very general notion of graphs.
Yes, I like them. They're important because any category enriched in a monoidal category has an underlying graph enriched in the underlying category : we just forget the composition and identities.
Nice!! Yes, it is a natural generalisation (enrichment) of the forgetful functor , because every small category is a locally small category and, hence, a category enriched in .
John Baez said:
Sure, thanks!
By the way, here's a fun puzzle for everyone:
Puzzle. Show this strengthened version of Claim 3 is false: every loop that gives the same cycle as a loop
is of the form
where we treat the subscripts as elements of and do addition mod .
I think I solved your puzzle. In the attached file, I think I managed to construct a counterexample.
counterexampleclaim3stregthened.PNG
John Baez said:
Adittya Chaudhuri - check out Section 2.5 of our paper; I've written up all our work on the homology of directed graphs except for our still developing thoughts about composition of open graphs and emergent cycles.
Thanks. I enjoyed reading through section 2.5, and I want to discuss the following points in section 2.5:
In Example 2.22 (Page 9), if I am not mistaken, I think according to the diagram, is not a cycle. I feel the following is the list of minimal cycles in the diagram: , , , , and the condition that tells is not free should be .
If I am not misunderstanding, I think the statement (Page 10, beginning) that claims is determined by its value on minimal cycles is not correct. I feel for this statement to be true we need the freeness condition, i.e. we need to have " freely generated on minimal cycles." In the attached file, I tried to construct a counterexample when the labelling monoid is .
Counterexample.PNG
I am not able to see how the proof of injectivity in Theorem 2.26 follows from Lemma 2.27 and Lemma 2.28. I think we need to use a weakened version of Proposition 2.29 to show the injectivity if we define the equivalence classes on the "set of simple loops" rather than "on the set of loops" as you mentioned in Definition 2.24.
Adittya Chaudhuri said:
- If I am not misunderstanding, I think the statement (Page 10, beginning) that claims is determined by its value on minimal cycles is not correct. I feel for this statement to be true we need the freeness condition, i.e. we need to have " freely generated on minimal cycles." In the attached file, I tried to construct a counterexample when the labelling monoid is .
Counterexample.PNG
Why is "" included in that first line? It doesn't seem like it should be.
Adittya Chaudhuri said:
- If I am not misunderstanding, I think the statement (Page 10, beginning) that claims is determined by its value on minimal cycles is not correct. I feel for this statement to be true we need the freeness condition, i.e. we need to have " freely generated on minimal cycles."
I guess the word "determined" is ambiguous, or else I'm using it incorrectly.
When I said is determined by its value on minimal cycles, I didn't mean you could arbitrarily choose its values on the minimal cycles. That would indeed require that be freely generated by the minimal cycles. I merely meant that if you know the value of on all minimal cycles, you know . I.e. if for all minimal cycles , then .
Suppose and are commutative monoids, is a subset, and is a function. Can we find a homomorphism extending ?
We are in situation 2, and that's what I was trying to say. But since you didn't understand me I must not have said it well.
I've rewritten this passage in the paper to make it clearer.
Adittya Chaudhuri said:
- In Example 2.22 (Page 9), if I am not mistaken, I think according to the diagram, is not a cycle.
Thanks, you're right! I had and switched in the diagram. I wanted the edges to be labeled as we go from top to bottom. I think it's okay now - please take a look.
James Deikun said:
Why is "" included in that first line? It doesn't seem like it should be.
Thanks!! Yes, it was a mistake. My thoughts and drawings were not running in parallel somehow.... Sorry!! You are right, the first line should be , and hence, in that case, there is no contradiction. Hence, my example is not at all a counterexample.
John Baez said:
We are in situation 2, and that's what I was trying to say. But since you didn't understand me I must not have said it well.
I've rewritten this passage in the paper to make it clearer.
Thanks!! I understand your point. I mixed things up somehow!! I feel the portion is very well written, but I misunderstood!! And, yes, as I mentioned in the previous message, my counterexample was incorrect. Maybe this afternoon my mind was not working properly!! Sorry!! I just checked that portion of the Overleaf file. Thanks!! But I also want to emphasise that it was very well written before too!!
John Baez said:
Thanks, you're right! I had and switched in the diagram. I wanted the edges to be labeled as we go from top to bottom. I think it's okay now - please take a look.
Yes, it is fine. Thanks. Also, the condition that I wrote to show is not free is incorrect, for the same reason that @James Deikun pointed out in my drawing.
Adittya Chaudhuri said:
- I am not able to see how the proof of injectivity in Theorem 2.26 follows from Lemma 2.27 and Lemma 2.28. I think we need to use a weakened version of Proposition 2.29 to show the injectivity if we define the equivalence classes on the "set of simple loops" rather than "on the set of loops" as you mentioned in Definition 2.24.
I think the above statement also does not make much sense, as by definition of , the function is injective.
Okay, good, I hadn't gotten around to commenting on that last statement, because it confused me.
Now I'm thinking about the section on rig-valued polarities. I notice you do something interesting there: given a path
in a graph , you define to be the set of edges in :
I would prefer to think about the multiset of edges involved in . This keeps track not only of which edges appear in but also how many times they appear.
Here's why this interests me.
The collection of multisets of edges of is the free commutative monoid on the set of edges of . In our paper we've been calling this free commutative monoid . We also call it , the set of 1-chains with coefficients in .
If we use this to modify your idea, we get the following nice things.
Any path has a multiset of edges; let me call this . When is a loop, is a 1-cycle, and we've already been using the notation in this case. But there are some advantages to generalizing this idea to arbitrary paths!
As we know, paths are morphisms in the free category on the graph , which we call . Composition of paths obeys this nice rule:
So, we get a functor
where is the 1-object category corresponding to the commutative monoid .
We have already noted in the paper that if is a way of labeling edges by elements of a commutative monoid , it extends uniquely to a monoid homomorphism
since is the free commutative monoid on . This doesn't work if is noncommutative. But if is a monoid that's not necessarily commutative, we still get a functor
that sends each edge to .
So, there's a sense in which is an 'abelianized' version of . But there's more to say about this.
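Here is a small Python sketch of this 'abelianization', using my own ad hoc names (none of this code is in the paper): a path is a list of edge names, `boundary` sends it to its multiset of edges, i.e. an element of the free commutative monoid on the edge set, and a labeling by a commutative monoid extends to 1-chains by combining labels. The assertion illustrates the composition rule: the boundary of a composite path is the sum of the boundaries.

```python
from collections import Counter

def boundary(path):
    """Send a path (a list of edge names) to its multiset of edges,
    i.e. an element of the free commutative monoid on the set of edges."""
    return Counter(path)

def extend_labeling(labels, chain, op, unit):
    """Extend an edge labeling (dict: edge -> monoid element) to 1-chains,
    assuming the labeling monoid is commutative, so the order of edges is irrelevant."""
    total = unit
    for edge, multiplicity in chain.items():
        for _ in range(multiplicity):
            total = op(total, labels[edge])
    return total

p = ['e1', 'e2']   # a path traversing e1 then e2
q = ['e3', 'e1']   # a path traversing e3 then e1 (composability of p and q is not checked here)
assert boundary(p + q) == boundary(p) + boundary(q)   # boundary of a composite = sum of boundaries

# Example: labels in the commutative monoid {+1, -1} under multiplication.
labels = {'e1': -1, 'e2': -1, 'e3': +1}
print(extend_labeling(labels, boundary(p + q), op=lambda a, b: a * b, unit=+1))  # prints -1
```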
Thanks!! The above ideas look very interesting!! I am trying to write down what I thought about it.
We have already established a bijection between simple loops and minimal cycles, which connects the graph-theoretic way of studying cycles and the topological way of studying cycles in a directed graph.
1st point
Now, I think this correspondence does not extend bijectively between "set of paths in " and the set of elements of the free commutative monoid(abelianised paths) . However, as you explained, we still get a functor , which I think restricts to the bijective correspondence between our simple loops in and minimal cycles in . In a way, the functor extends our bijection between simple loops in and minimal cycles in to the level of paths.
2nd point:
Some days back, you also mentioned that "it might be useful to have a correspondence" between the graph-theoretic picture and the topological picture. Now, as you explained, if is non-commutative, then we cannot extend to ; however, we still get a functor . Thus, we might say that the graph-theoretic way of seeing paths behaves better here than the topological way, when we are labeling the edges with a non-commutative monoid.
Now, when is commutative I think we get a commutative diagram in described by the functors and the functor and . I think this might be seen as what you explained " is an abelianised version of ".
Yes, I agree with all this!
When I said "there's more to say about this", here's what I was hinting at. is a category with one object for each vertex of , while has just a single object. There's also a category that's halfway between these two! This is the category where morphisms are "homology classes of paths".
This category has one object for each vertex of , and morphisms are equivalence classes of paths where two paths are equivalent if
using my notation above, i.e. and give the same 1-chain.
I claim this category is a kind of "abelianization" of , though it's a category rather than a commutative monoid like . For example, suppose and are two morphisms in this category such that and are both well-defined. Then .
This idea is also interesting in ordinary topology: for any space we can define a category with one object for each point of and morphisms being equivalence classes of paths , where two paths are equivalent iff the 1-chains in singular homology that they define are homologous, i.e. their difference, which is clearly a 1-cycle, is actually a 1-boundary.
Homotopic paths are homologous, but the converse is not necessarily true, even when our space is a graph (thought of as a topological space).
So, if we call this category , we get a functor from the fundamental groupoid of to this category:
and it's full but not faithful. Like , is a groupoid.
As you know, if is path connected and we pick any point , the automorphism group of in is the fundamental group . Similarly I believe the automorphism group of in is the first homology group .
I used this idea when is a graph (seen as a topological space) in my paper Topological Crystals.
Wow!! I find these ideas very beautiful and interesting:
In the context of directed graphs:
If we denote the homology category of a graph by , then we get a functor which is the identity on vertices and takes a path to its homology class. I am attaching an example where two loops are different but homologous ().
homologouspath.PNG
In other words, I think the following may be true
Lemma
If are two paths in , then is homologous to if and only if and differ by a permutation of edges.
However, I think more generally the "homologous path relation" defines an equivalence relation on the set of all paths in . Then this equivalence relation induces a congruence relation on , by which the quotient category is the homology category , the one you defined. Now, I think there is a faithful functor , which sends all the objects of to the unique object in , and sends the . Hence, is the functor you defined here. Now, I think the category should satisfy some universal property that would justify your claim that is an abelianisation of . (I think this should follow from the fact that .)
Now, given a labeling for a commutative monoid , we can extend it to a functor . Then, I think the functor induced from is the same as , which is a factorisation of through . However, such a factorisation of does not exist when is non-commutative. In other words, I think it proves your conjecture ("when is commutative, homologous paths have the same -valued holonomy").
In the context of Topological spaces/manifolds:
The results on -labeled directed graphs are now making me wonder whether similar results are true for -labeled paths in a (directed) topological space. I think for the case of smooth manifolds , the usual gauge theoretic holonomy is a particular case when is considered as a Lie group (structure group of a bundle (with connection data) over ) (Although I am not aware of the existence of the notion of an appropriate smooth homology category of a smooth manifold).
All these ideas are fascinating to me - thanks for explaining all this. Though I haven't proved it, I believe your lemma is true:
Lemma
If are two paths in , then is homologous to if and only if and differ by a permutation of edges.
because I proved the same sort of lemma for undirected graphs in my paper Topological Crystals. (Sorry for repeatedly giving this link, but I just noticed that the version on my website was an old version, and this is the new version.)
In Section 2, I define two paths in an undirected graph to be homotopic if they differ by a sequence of the following moves:
I say two paths are homologous if they differ by a sequence of the following moves:
and
("Makes sense" is a reminder that we can't compose paths unless the first path ends where the next one starts! So we need this restriction in Move A and Move B.)
Then in Lemma 3, I prove that two paths are homologous in the above sense iff they define the same 1-chain! The proof uses the fact that in is the abelianization of . We can't use that proof in the directed case.
Still, I believe your lemma is true in the directed case. In the directed case Move A doesn't make sense because edges (and paths) don't have inverses. So, the concept of "homologous" only involves Move B, as in your lemma.
(The "makes sense" restriction doesn't need to be mentioned in your lemma, but it's implicit there. Also, you consider arbitrary permutations rather than the limited sort allowed in Move B, which is more efficient.)
I don't know if we need this stuff in our paper, but maybe we could put it in an appendix. It's nice pure math, and it's relevant because when is a monoid any -labeling of a graph gives a functor
but when is a commutative monoid this factors through the 'homology 1-groupoid'
On a different note: I just added a list of example monoids, good for "polarities", to the start of Section 2.
John Baez said:
All these ideas are fascinating to me - thanks for explaining all this.
Thank you so much!!
Thanks for the ideas in the undirected case. I will read the proof of Lemma 3 in your paper, Topological crystals. Then, I will try to prove the Lemma in the directed case.
Thanks for telling me about the newer version of your Topological Crystals paper. Somehow, the paper link on my server is not working. It says "404 not found". I do not know how to fix this error.
John Baez said:
I don't know if we need this stuff in our paper, but maybe we could put it in an appendix. It's nice pure math, and it's relevant because when is a monoid any -labeling of a graph gives a functor
but when is a commutative monoid this factors through the 'homology 1-groupoid'
Thanks. I was thinking the following:
Our paper considers the possibility that the labelling monoid is non-commutative (in fact, many nice results, like "finding motifs via the Kleisli category" and "defining a symmetric monoidal double category of -labeled graphs via structured cospans", work for both commutative and non-commutative monoids). However, I think that to come up with "useful semantics of our labeled graphs (like analysing feedback loops) via directed homology theory" we had to restrict ourselves to commutative monoids. If I am thinking correctly, then the factorisation that you mentioned is a nice way to highlight the strength of using commutative monoids for labeling: basically, I think it precisely says that the holonomy of paths in directed graphs is invariant under directed homology. I am assuming that by the statement "I don't know if we need this stuff in our paper" you meant the proof of my Lemma. In that case, yes, I fully agree that it is much better to keep the statement in the paper's main body and write the proof in the appendix. However, I feel it is essential (it is very possible that I am misunderstanding) that we keep the factorisation in the body of the paper itself, to highlight the strength of using commutative monoids as labelling monoids.
John Baez said:
On a different note: I just added a list of example monoids, good for "polarities", to the start of Section 2.
Thank you so much. I will read your examples in Section 2.
Adittya Chaudhuri said:
Thanks for telling me about the newer version of your Topological Crystals paper. Somehow, the paper link on my server is not working. It says "404 not found". I do not know how to fix this error.
Whoops - I gave the wrong link. I fixed it. Here is the right link: Topological Crystals.
Thank you.
John Baez said:
On a different note: I just added a list of example monoids, good for "polarities", to the start of Section 2.
Examples are very nice!!
The reality of Example 2.7: "a causal loop diagram serving as a simple model of students doing homework" is very clear, and I think it is relatable to any student at any level in any place.
In Example 2.8 I like how "absence of an edge" represents no influence and a label represents unknown influence. Actually, this is very essential for SBGN-AF representations of biochemical reaction networks (as there is a specific symbol for denoting an unknown influence in their notation). From a general perspective it is also very realistic: many times we are only aware of the existence of an influence, and only come to know its type (positive or negative) after analysing the situation for a sufficient period of time. In a way, given a directed graph , I think may represent an initial (not fully understood) causal loop diagram, while represents an evolved (understood) version of the causal loop diagram .
Here, I preferred the word understood over fully understood because, over time the graph structure of itself may change. For example, one may add additional edges or remove some existing edges to discover a new interaction or remove an unnecessary interaction, respectively.
John Baez said:
Right now I don't know noncommutative monoids that are useful for applications of "graphs with polarities". So, right now the only reason to start by studying graphs labeled by elements of a general monoid and then turn to graphs labeled by elements of a commutative monoid is that we're mathematicians and we like to see how much we can do with the minimum amount of structure. Maybe later someone will invent some good applications of noncommutative monoids to this subject; then our paper will still be useful to them.
I was thinking about constructing an example of a useful non-commutative monoid. Below I am trying to write down what I am thinking:
Let be a non-commutative monoid and be a set. Then we can define an action of on the set as a function that satisfies compatibility conditions with the associativity and identity laws of the monoid . Now, if we consider the underlying graph of the action category , then I think we can get a -labeled graph defined by .
Now, from the point of applications (usefulness), I was thinking about the transition monoid of a semiautomaton. I do not know about these objects in detail. Below I am trying to describe my basic understanding:
Let be the [[free monoid]] generated on the set . Now, there is an action of the monoid on the set induced by the transition function . Now, for every , we get an endomorphism . Now, the set is a monoid (non-commutative) with the binary operation given by composition of functions. The monoid is called the transition monoid of the semiautomaton .
Now my guess is the following:
The -labeled underlying graph of the action category induced by contains all the information that the transition monoid of the semiautomaton possesses.
Advantages of representing semiautomata via labeled graphs may be the following:
1) finding motifs in semiautomata networks via Kleisli morphisms. I am imagining a directed cycle may represent a kind of regulatory mechanism in the associated labeled graph.
2) Building bigger semiautomata networks by gluing small automata networks using structured cospans.
Lastly, my knowledge about automata/semiautomata is very poor. So, I may be fundamentally misunderstanding certain things while thinking about semiautomata networks.
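To check my basic understanding, here is a small Python sketch (the two-state semiautomaton below is entirely made up for illustration) that computes the transition monoid by brute force: it closes the generating maps under composition.

```python
# A made-up semiautomaton: states Q, inputs 'a' and 'b', and the maps f_a, f_b : Q -> Q.
Q = ['q0', 'q1']
generators = {'a': {'q0': 'q1', 'q1': 'q1'},
              'b': {'q0': 'q0', 'q1': 'q0'}}

def compose(f, g):
    """The composite map 'first f, then g' on the state set."""
    return {q: g[f[q]] for q in Q}

# Close the generating maps under composition, starting from the identity map.
identity = {q: q for q in Q}
monoid = {tuple(sorted(identity.items()))}
frontier = [identity]
while frontier:
    f = frontier.pop()
    for g in generators.values():
        h = compose(f, g)
        key = tuple(sorted(h.items()))
        if key not in monoid:
            monoid.add(key)
            frontier.append(h)

print(len(monoid), "elements in the transition monoid")
# In general this monoid is noncommutative: composing f_a then f_b need not equal f_b then f_a.
```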
Adittya Chaudhuri said:
I will try to prove the Lemma in the directed case.
I am trying to write a proof of the above Lemma with a little strengthening:
Lemma
If are two paths in , then is homologous to if and only if and differ by a permutation of edges.
Proof: One direction is obvious: if and differ by a permutation of edges, then . Now, for the other direction, let and be two paths in such that
.
Now, let and be the multisets representing the L.H.S. and the R.H.S. of the above equation. Now, if , then , because . Similarly, if , then . Again, since , the number of times an element occurs in is the same as the number of times it occurs in . Hence, , where are distinct edges of . Hence, by definition, both and are permutations of the string
.
In the above equation, by the string I mean , i.e. although the elements are the same, they are considered different when they appear multiple times in the string. I mean something similar for the others too.
Hence, is a permutation of .
Great, that's nice. This statement doesn't immediately imply the following stronger statement:
Stronger Lemma
If then can be obtained from by a finite sequence of moves of type B:
Move B: replacing a path of the form by a path of the form .
(Note that in this move we require that is still a well-defined path, e.g. ends where starts, and so on.)
We may not need this stronger lemma. But I think it's true.
Let me say why I care about this stronger lemma.
Let's say a category is commutative when for any pair of morphisms we have whenever this could possibly make sense. That is, whenever and are both well-defined, both have the same source, and both have the same target.
You can check that for both and to be well-defined we need ,
and for them to both have the same source and the same target we need . So .
Thus, a category is commutative iff for all objects in that category the endomorphism monoid is commutative.
Any preorder is a commutative category, but there are many other commutative categories.
We can take any category and force it to be commutative by imposing a bunch of new equations between morphisms. To do this we say that
whenever are morphisms in the original category where both composites above are well-defined. Then we mod out all homsets by the equivalence relation .
Let's call this abelianizing the category - though the result is a commutative category, not an [[abelian category]], which is something completely different. I can't get myself to say 'commutativizing'.
If we take a topological space and let be its fundamental groupoid, then abelianizing gives the homology 1-groupoid , where:
Similarly I believe that if is a graph and is the free category on that graph, then abelianizing gives , the category where
This will follow from the Strong Lemma but not from the Lemma.
To prove the Strong Lemma, we need to show this:
Suppose we have an edge path
and an edge path
such that the list of edges is a permutation of the list of edges . Then this permutation can be accomplished by a finite sequence of moves of the form
where are (possibly empty) edge paths and the composites and are both well-defined edge paths from to .
Adittya Chaudhuri said:
I was thinking about constructing an example of a useful non-commutative monoid. Below I am trying to write down what I am thinking:
Let be a non-commutative monoid and be a set. Then we can define an action of on the set as a function that satisfies compatibility conditions with the associativity and identity laws of the monoid . Now, if we consider the underlying graph of the action category , then I think we can get a -labeled graph defined by .
Now, from the point of applications (usefulness), I was thinking about the transition monoid of a semiautomaton.
Thanks, this was very helpful. I read a bit of a book on automaton theory and added Example 2.14 in Section 2 as an example of graphs labeled by elements of noncommutative monoids. I think this should be enough about noncommutative monoids, since we should focus on our topic of polarities.
Here's what I wrote:
All the monoids above are commutative, and indeed commutative monoids are by far the most commonly used for polarities. Thus, we discuss special features of the commutative case in Sections 2.4 and 2.5. However, graphs with edges labeled by not-necessarily-commutative monoids do show up naturally in some contexts. For example, in computer science [Sec. 2.1, Ginzburg 1968], a semiautomaton consists of a set of states, a set of inputs, and a map that describes how each input acts on each state to give a new state. Let be the monoid of maps from to itself generated by all the maps for . Let be the graph where:
- The set of vertices is .
- The set of edges is .
The source map is given by
The target map is given by
Since the monoid of maps is generated by elements for , there is an -labeling of given by
In short, we obtain an -labeled graph where the vertices represent states, and for each input mapping a state to a state there is an edge labeled by the monoid element .
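Just to illustrate this construction, here is a quick Python sketch (not in the paper; the two states and two inputs below are made up) of the labeled graph it produces: vertices are states, edges are (state, input) pairs, and each edge is labeled by the map its input induces on the whole state set, standing in for the corresponding element of the monoid.

```python
# A made-up semiautomaton: states, inputs, and the transition map delta.
states = ['q0', 'q1']
inputs = ['a', 'b']
delta = {('q0', 'a'): 'q1', ('q1', 'a'): 'q1',
         ('q0', 'b'): 'q0', ('q1', 'b'): 'q0'}

# The labeled graph: one edge (q, a) for each state q and input a.
edges = [(q, a) for q in states for a in inputs]
source = {(q, a): q for (q, a) in edges}
target = {(q, a): delta[(q, a)] for (q, a) in edges}

def induced_map(a):
    """The map f_a : states -> states induced by input a, frozen so it can be compared;
    this stands in for the corresponding generator of the transition monoid."""
    return tuple(sorted((q, delta[(q, a)]) for q in states))

label = {(q, a): induced_map(a) for (q, a) in edges}

for e in edges:
    print(f"{source[e]} --{e[1]}--> {target[e]}   label = {label[e]}")
```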
By the way, one of the founders of category theory, Samuel Eilenberg, wrote at least two books on categories for automaton theory! I've heard that at the time most people were shocked, and wondered why he didn't stick to working on algebraic topology. So he may be one of the early examples of someone turning to applied category theory.
John Baez said:
Thanks, this was very helpful. I read a bit of a book on automaton theory and added Example 2.14 in Section 2 as an example of graphs labeled by elements of noncommutative monoids. I think this should be enough about noncommutative monoids, since we should focus on our topic of polarities.
Here's what I wrote:
Thank you so much!! Yes, this example looks great. I think I was just lucky!! I might not have thought in the direction of automata while looking for an example of a non-commutative monoid. My inspiration was knowledge graphs (which I had seen before), where edges are labeled by strings of English letters. Then I realised that the free monoid on the English alphabet is non-commutative. During my undergraduate studies, I had a basic course on automata theory. Then I searched and found the Wikipedia article about semiautomata. Looking at the definition, I realised it is a bit similar to the "action Lie groupoid construction in higher Lie theory" (that I studied in my PhD). Then I realised that the underlying graph of the action category has a natural labelling system induced from the action of the monoid on the set of vertices.
I agree entirely that since our focus is on polarities, this example should be sufficient for our paper when we talk about labelling edges with elements of non-commutative monoids.
John Baez said:
By the way, one of the founders of category theory, Samuel Eilenberg, wrote at least two books on categories for automaton theory! I've heard that at the time most people were shocked, and wondered why he didn't stick to working on algebraic topology. So he may be one of the early examples of someone turning to applied category theory.
Wow!! That's really interesting and very inspiring. I just downloaded the books (by Samuel Eilenberg ) that you suggested. I will try to read some parts of it in the coming days.
John Baez said:
Great, that's nice.
Thank you so much!!
John Baez said:
Thus, a category is commutative iff for all objects in that category the endomorphism monoid is commutative.
I find this definition of "commutative category" very interesting because I previously encountered this kind of definition in the context of higher gauge theory, which I briefly explain below:
For example, in the definition of transport functors (Definition 3.5 in "Parallel transport on principal bundles over stacks" by Collier, Lerman, and Wolbert), a functor from the thin fundamental groupoid of a manifold to the category of -torsors is defined to be smooth if it is smooth locally, that is, the restriction is smooth (in the sense of diffeology) for all .
I am now guessing that if we want to define a property (like here commutativity or smoothness) on the whole category, then we may define it locally on all the automorphism monoids of the category.
However, I think in many cases this may not be sufficient. For instance, given a Lie groupoid , every automorphism group is a Lie group for each . However, I think the converse does not hold: if is a category such that is a Lie group for each , then may still not be a Lie groupoid (though at the moment I am not able to find an example).
John Baez said:
Let's call this abelianizing the category - though the result is a commutative category, not an [[abelian category]], which is something completely different. I can't get myself to say 'commutativizing'.
If we take a topological space and let be its fundamental groupoid, then abelianizing gives the homology 1-groupoid , where:
Now, I understand what you meant by abelianizing!!
John Baez said:
This will follow from the Strong Lemma but not from the Lemma.
Interesting!! I agree.
John Baez said:
To prove the Strong Lemma, we need to show this:
Suppose we have an edge path
and an edge path
such that the list of edges is a permutation of the list of edges . Then this permutation can be accomplished by a finite sequence of moves of the form
where are (possibly empty) edge paths and the composites and are both well-defined edge paths from to .
Thanks. Yes, I agree. I will try to prove this.
Good comments! Returning for a moment to automaton theory, I think it's cool that you took a course on that subject. I've read a bit about it, but never quite enough. I've seen bits of information about the various classes of automata shown here, and how they correspond to various classes of languages recognized by these automata (the Chomsky hierarchy), but I've never studied the details.
Someone gave me a fascinating book called The Wild Book, also called Applications of Automata Theory and Algebra via the Mathematical Theory of Complexity to Biology, Physics, Psychology, Philosophy, and Games, by John Rhodes. Wikipedia describes it as an "underground classic". It almost sounds like the work of a crackpot, but it includes a serious theorem, the Krohn-Rhodes theorem, which is a kind of classification of finite semigroups, and also of finite-state machines! I wish I understood it!
Thank you!! These directions look very interesting!! Especially, I find your statement
"the Krohn-Rhodes theorem, which is a kind of classification of finite semigroups, and also of finite-state machines! "
very exciting!!
I do not know if there are any, but I really wish that applied category theory would produce theorems like this for domain-specific, already existing systems theories.
John Baez said:
Someone gave me a fascinating book called The Wild Book, also called Applications of Automata Theory and Algebra via the Mathematical Theory of Complexity to Biology, Physics, Psychology, Philosophy, and Games, by John Rhodes. Wikipedia describes it as an "underground classic". It almost sounds like the work of a crackpot, but it includes a serious theorem, the Krohn-Rhodes theorem, which is a kind of classification of finite semigroups, and also of finite-state machines! I wish I understood it!
Thank you so much for sharing about The Wild Book. I just read the section "Foreword to Rhodes’ Applications of Automata Theory and Algebra" by Morris W. Hirsch. It's super fascinating!!! It feels like "the reason for my interest in biological systems" aligns very much with what is written there in that section. Although I will have to read it many times, I think my understanding will improve over time.
I encountered a portion in that section which states:
In Evolution the application of semigroup theory is necessarily more speculative, but also more comprehensible. Here the objective is not precise computations of complexity or SNAGs, but rather general principles influencing Evolution. Highly evolved organisms, he suggests, are in “perfect harmony” with their environments— otherwise they would either die out or evolve further:
It reminds me of the Gaia hypothesis. I was not aware of it until you mentioned it in a discussion in the comments section of one of your Mastodon posts https://mathstodon.xyz/@johncarlosbaez/113946882428468110 about the mutation in Palmer amaranth which made it glyphosate-resistant, where you related this to the current crisis of our global socio-political-environmental ecosystem. At the end, you described it beautifully as "a crisis of nature not wanting to die".
I was thinking about rig-labelled graphs. Below, I am sharing my thoughts.
Let be a rig. A -labeled graph is a set labeled finite graph with a labeling . Now, let be another -labeled graph with a labeling . Then, we define a morphism from to as a morphism of graphs such that , where the addition is defined w.r.t the commutative monoid structure . To define polarities on -labeled graphs we will use the additional monoid structure of the rig .
We label a morphism of by , with respect to the monoid structure . Now, for every additive morphism , we get a functor . We label by , where for each , is defined additively w.r.t. the commutative monoid structure . Hence, although and are -labeled categories, is not a morphism of -labeled categories.
My question is the following:
What if we define rig-labeled graphs as above and do not define "rig-labeled categories" at all, the way we defined monoid-labeled categories? Then we will not face any "infinite/undefined sum issues", and we can still describe polarities. However, one drawback is that while defining polarities we cannot use the free-forgetful adjunction between graphs and categories. Are we losing anything qualitatively with these limitations?
More precisely:
Let be a rig. Define a rig-labeled graph as a functor . Now, we define a morphism from to as a functor induced from an additive morphism of -labeled graphs . Now, I think with these definitions, the collection of rig-labeled graphs will form a category. However, I think it is essential to see how nice this category is!!
Yes, that's important to study!
On a different note, my friend Marco Grandis put out a paper on directed topology; in the first page he explains two concepts of directed space, which he calls a d-space and (more generally) a c-space. Both these kinds of space have an obvious concept of "fundamental category". So, we could abelianize that and get a "directed first homology monoid". But I don't actually want to work on that now.
He wrote:
The fourth and last paper in my series on ’The topology of critical processes’ is published, in Cahiers (as the previous parts):
The topology of critical processes, IV (The homotopy structure)
M. Grandis
Abstract. Directed Algebraic Topology studies spaces equipped with a form of direction, to include models of non-reversible processes. In the present extension we also want to cover 'critical processes', indecomposable and un-stoppable - from the change of state in a memory cell to the action of a thermostat.
The previous parts of this series introduced controlled spaces, examining how they can model critical processes in various domains, and studied their fundamental category. Here we deal with their formal homotopy theory.
Cah. Topol. Géom. Différ. Catég. 66 (2025), no. 2, 46-93.
https://cahierstgdc.com/index.php/volume-lxvi-2025/
John Baez said:
Yes, that's important to study!
Thanks!!
John Baez said:
On a different note, my friend Marco Grandis put out a paper on directed topology; in the first page he explains two concepts of directed space, which he calls a d-space and (more generally) a c-space. Both these kinds of space have an obvious concept of "fundamental category". So, we could abelianize that and get a "directed first homology monoid". But I don't actually want to work on that now.
Thanks!! I will check the construction of the fundamental categories in Marco Grandis' paper.
Adittya Chaudhuri said:
Let be a rig. Define a rig-labeled graph as a functor . Now, we define a morphism from to as a functor induced from an additive morphism of -labeled graphs . Now, I think with these definitions, the collection of rig-labeled graphs will form a category. However, I think it is essential to see how nice this category is!!
I think I figured it out. I am trying to write it down.
Let be a rig. I am denoting the category of labeled graphs as , whose description is given above. Now, I claim there is an isomorphism of categories between and described by the following functor:
, defined
If I am not misunderstanding anything, then all the nice things that we can do with we can also do with .
Now, I think we can construct a symmetric lax monoidal pseudofunctor for labeled graphs, and thus it would allow us to have a compositional framework for -labeled graphs using decorated cospans.
If I have not made any fundamental mistakes while thinking about rig-labeled graphs above, I am trying to see it from a little general perspective:
Consider a rig . Now, define a category whose
Note that we could have also replaced by and the rest accordingly to obtain a corresponding category , but in each case I claim there is an isomorphism of categories between and described by the following functor:
, defined
But in this setting, I think, there is no natural way to use the property of in labeling the morphisms in . However, we have seen that we can actually use both and in labeling the morphisms in .
Adittya Chaudhuri said:
Let be a rig. Define a rig-labeled graph as a functor . Now, we define a morphism from to as a functor induced from an additive morphism of -labeled graphs .
Now, I think with these definitions, the collection of rig-labeled graphs will form a category.
I'm trying to understand the interplay of and here. Your definition of objects seems to use , while your definition of morphism uses . That's a bit strange, though interesting.
However, notice that morphisms
correspond bijectively to functions
from the set of edges of to the underlying set of . That's because a functor from to is determined by its value on edges of , and these values can be arbitrary since the edges of freely generate the category .
So, unless I'm confused, for any rig the category of rig-labeled graphs you just described is equivalent to the one where:
And this in turn is the category of -labeled graphs and additive morphisms!
Adittya Chaudhuri said:
I think I figured it out. I am trying to write it down.
Let be a rig. I am denoting the category of labeled graphs as , whose description is given above. Now, I claim there is an isomorphism of categories between and ...
Okay, good, we agree! I realized this while wondering whether your initial definition of really used the multiplicative structure. While it did superficially, we've seen that only the additive structure of the rig really matters.
Yes, I also feel the use of is superficial, which I am not feeling that good about. But I also feel that it serves the purpose of "polarities" in a way.
Given that the multiplication of plays no essential role in the definition of a rig-valued category, we can wonder what it's good for. Here we should turn to examples and think about what we want to do with them!
For starters consider the rig of polarities , which is actually a ring.
Okay, I need to think about this a while.
Somehow, after the semiautomaton example, I started thinking of polarities as a kind of "collection of actions of a monoid on a set " which can be expressed as a -labeled category. (Here the action law of the monoid on is translated into the composition law of the corresponding -labeled category.)
Here, I said "collection of actions" because our labeled graphs can contain multiple edges between a pair of vertices.
The above is just an intuitive feeling I developed. I may be completely misunderstanding.
I have something to say about that, but it's bed time so I'll just say that around Definition 2.25 I gave a better explanation of both the 0th and 1st homology monoid of a graph. I hadn't said anything about the 0th homology, but while less exciting it's still important, especially later when we talk about Mayer-Vietoris.
I've temporarily moved \end{document} forward because Overleaf has decided to reduce the amount of time it allows us to compile a document in the free version! We'll need a better solution in the long run.
John Baez said:
I have something to say about that, but it's bed time so I'll just say that around Definition 2.25 I gave a better explanation of both the 0th and 1st homology monoid of a graph. I hadn't said anything about the 0th homology, but while less exciting it's still important, especially later when we talk about Mayer-Vietoris.
Thank you. I will read the portion (you mentioned) in the Overleaf file.
John Baez said:
I've temporarily moved \end{document} forward because Overleaf has decided to reduce the amount of time it allows us to compile a document in the free version! We'll need a better solution in the long run.
Yes, I agree.
I think I realized certain things about rig-labeled graphs. I am trying to write down my thoughts below:
In short, every "type" of transformation of a graph canonically induces a "similar type" of transformation on the associated graph with polarity.
I am attaching a picture where I have pictured the situation.
polaritytransformation.png
John Baez said:
I have something to say about that, but it's bed time so I'll just say that around Definition 2.25 I gave a better explanation of both the 0th and 1st homology monoid of a graph. I hadn't said anything about the 0th homology, but while less exciting it's still important, especially later when we talk about Mayer-Vietoris.
I enjoyed reading the portion. Now, the definitions of the first and zeroth homology monoids look very natural (as generalisations from abelian groups to commutative monoids). Only a very minor point: if I am not misunderstanding, I think the exact sequence before Example 2.26 is written in the reverse order.
Adittya Chaudhuri said:
John Baez said:
I've temporarily moved \end{document} forward because Overleaf has decided to reduce the amount of time it allows us to compile a document in the free version! We'll need a better solution in the long run.
Yes, I agree.
Do you do Github? Or Dropbox?
I used to use Dropbox. For some years, I have not used it. But I can try Dropbox.
Adittya Chaudhuri said:
John Baez said:
I think of (1) and (2) as serving related purposes, and we have barely begun to explore what we can do with them.
Now for the relation: a cycle in an -labeled graph has feedback if and only if there is a Kleisli map from the walking loop with feedback equal to to , that maps the loop to this cycle.
Today, I was thinking about possible relationships between Kleisli morphisms and directed graph homology. I am trying to write down my thoughts below:
For a monoid , a monic Kleisli morphism determines an occurrence of an -shaped graph in the graph . Now, from the point of view of causal loop diagrams, we are mostly interested in a particular shape, i.e. when is a directed labeled path (graph-theoretic) in . Let us denote the set of occurrences of all directed labeled paths in by .
On the other hand, the monoid of 1-chains of gives us the information of all the 1-chains in , which in particular also includes the information of all directed paths (graph-theoretic) in . Now, (when is commutative), the labeling map defines a monoid homomorphism (holonomy) , which contains the information of all labeled 1-chains in and hence, in particular, the information of all labeled directed paths in .
Now I want to explore the relationship between the elements of and the holonomy map .
More precisely, let us consider a directed labeled path
.
Hence, by definition, the holonomy .
Now, consider a monic Kleisli morphism , where is considered as a directed subgraph of and the labeling map is the restriction of on the subgraph . Since is monic, the image looks like the following:
such that the holonomy remains invariant on each edge or, more precisely, the following holds for each :
.
Now, consider the set , the set of all monic Kleisli morphisms from to .
The following are my realisations regarding the relationship between homology and Kleisli morphisms:
- Given a directed labeled path in , each element gives an edge-wise holonomy-invariant decomposition of the labeled subgraph .
- I feel the holonomy of a labeled path/loop is a kind of summation/integration of the component holonomies on the edges present in , and on the other hand each Kleisli morphism applied to a labeled path/loop is a kind of decomposition/differentiation/factorisation of the edges present in such that the edgewise (and hence total) holonomy of remains invariant.
The above ideas can also be restricted to loops/cycles and then, we can get similar results w.r.t to loops or elements in .
The above description may not be anything interesting (or may contain mistakes). However, today I was thinking about these things.
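To fix notation for myself, here is a tiny Python sketch of the holonomy of a labeled path: the product, in the labeling monoid, of the labels along its edges. The graph and labels below are made up; the point is only that for a commutative monoid the answer depends just on the multiset of edges, i.e. on the 1-chain of the path.

```python
from functools import reduce

# A made-up labeling of edges by elements of the sign monoid {+1, -1} (commutative).
label = {'e1': +1, 'e2': -1, 'e3': -1}

def holonomy(path, label, op=lambda a, b: a * b, unit=+1):
    """Multiply the labels along a path (a list of edge names) in the labeling monoid."""
    return reduce(op, (label[e] for e in path), unit)

print(holonomy(['e1', 'e2', 'e3'], label))   # (+1)*(-1)*(-1) = +1
print(holonomy(['e3', 'e1', 'e2'], label))   # same multiset of edges, so the same holonomy
```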
I'm back in action, having spent one night in Edinburgh. I think your idea of relating Kleisli morphisms to holonomies of paths is interesting. We should think about it more and see if there's anything surprising we can do with it.
I also have a bunch of new thoughts on 'emergent cycles' formed when gluing together two directed graphs, which I will try to write down here.
So, let me start. I want to study how "the whole is greater than the sum of the parts" when we combine two systems to form a larger system. But I want to study this in an extremely simple context, where a system is a graph. The vertices of this graph represent entities of various kinds. The edges represent how one entity can directly affect another. Paths represent how one entity can indirectly affect another. Loops represent how one entity can indirectly affect itself, so we sometimes call them 'feedback loops'.
We've talked about a setup to quantify the feedback around a loop in a graph where the edges are labeled by elements of a commutative monoid. I won't recall it here. I'll just talk about loops, and simple loops (which are loops that don't cross themselves, so they can't be broken into smaller loops), and cycles (which are very roughly speaking linear combinations of simple loops - the real definition is given here).
I want to understand the cycles, and also the loops, and also the simple loops, in a graph formed by gluing together two graphs and along some set of vertices. Some cycles (resp. loops, simple loops) will come from cycles (resp. loops, simple loops) in , and some will come from cycles (resp. loops, simple loops) in . The rest I'll call emergent, since they appear only when we glue and together. I want to understand the emergent ones!
A graph is a set of edges , a set of vertices and source and target maps . A map of graphs consists of a map sending edges to edges and a map sending vertices to vertices, compatible with the source and target maps. The resulting category of graphs, , is a presheaf category.
Let's consider the pushout in of the following diagram
where is a graph with no edges, only vertices, and and are monic. I'll call this pushout and write .
So, intuitively speaking, we have a graph that's the union of two subgraphs and , and their intersection has no edges, just vertices.
Now, any loop in will either
1) stay in or stay in
or
2) go back and forth between and some number of times, say .
We can think of the first case as the case where .
I want to be more precise and mathematical, and I'd like to generalize this classification of loops to a classification of cycles. But now it's getting late, so I'll quit here.
John Baez said:
I'm back in action, having spent one night in Edinburgh. I think your idea of relating Kleisli morphisms to holonomies of paths is interesting. We should think about it more and see if there's anything surprising we can do with it.
Nice!! Thanks. Yes!!
John Baez said:
I'd like to generalize this classification of loops to a classification of cycles.
I find this idea very interesting!! I am thinking in terms of your previous discussion
Let me first start by digging into the classification of loops in a bit more. Consider any loop
It will have certain vertices that lie in the intersection (which remember is a graph consisting solely of vertices, no edges). These vertices are of four distinct types:
This takes a little thought to check.
Puzzle. Why is it impossible for or to be in ?
I don't really care about vertices of type 3 and 4, since we're trying to understand how the loop moves from to or to , not how it stays in or stays in .
Just to make sure you're paying attention:
Puzzle. Why is there an equal number of vertices of type 1 and of type 2?
As we go around the loop, keeping track only of which vertices are of type 1 and which are of type 2, we'll see that they alternate: after a vertex of type 1 there must be one of type 2, and vice versa.
There is more to say about this, but I'm not really interested in analyzing the behavior of one specific loop. I'm more interested in the set of all loops. So let me turn to that.
We've seen the set of all loops in is -graded:
where loops in have vertices of type 1 and of type 2: that is, they cross from into times, and cross back from into times.
The loops in with are the 'emergent' loops. The big question is whether we can say anything nontrivial about these. For example: can we find conditions that guarantee that emergent loops exist, or don't exist? Can we find an efficient algorithm to count or otherwise understand the emergent loops?
So far I can only think of an 'obvious' condition that rules out emergent loops.
If there's an emergent loop:
Also:
In fact we can classify loops in not only by how many vertices of type 1 and type 2 they have, but by how many times they go through each passage from into , and how many times they use each passage from to . So we get a much more refined grading of the set of loops.
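Just to make this grading concrete, here's a tiny Python sketch (purely illustrative, not anything from the paper). I record a loop simply as the cyclic list saying which of the two pieces each successive edge lies in; that is enough to compute the crossing counts, and a loop is emergent exactly when it crosses at least once.

```python
# Illustrative sketch only. A loop in the glued graph is recorded as the cyclic
# sequence of 1s and 2s saying whether each successive edge lies in the first
# or the second piece (possible since the overlap has no edges).

def crossings(loop_parts):
    """Return (i, j): i = crossings from piece 1 into piece 2,
    j = crossings from piece 2 back into piece 1, reading cyclically."""
    n = len(loop_parts)
    i = sum(1 for k in range(n)
            if loop_parts[k] == 1 and loop_parts[(k + 1) % n] == 2)
    j = sum(1 for k in range(n)
            if loop_parts[k] == 2 and loop_parts[(k + 1) % n] == 1)
    return i, j

def is_emergent(loop_parts):
    """A loop is emergent iff it uses edges of both pieces,
    i.e. its crossing counts are at least 1 (and they are always equal)."""
    i, j = crossings(loop_parts)
    return i >= 1

print(crossings([1, 1, 2, 1, 2, 2]))    # (2, 2): an emergent loop
print(crossings([1, 1, 1]))             # (0, 0): this loop stays in the first piece
print(is_emergent([1, 1, 2, 1, 2, 2]))  # True
```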
I've been following along by email, and I've realized that the problem you are working on is something I've actually thought about quite a bit before. Are you dealing with -enriched graphs? Or perhaps you want other enrichments. Regardless, you can understand the emergent loops in your graph pushout as follows:
Let this graph be
image.png
The main insight is that the paths of this graph grade the paths in your pushout:
For now just think of as a dependent set . What I like about this result is that it explains why the case of two graphs is not special. For three graphs, you could let have three vertices, self loops, and edges going between all pairs (and I think this should generalize to the pushout of graphs). Also, this result explains why the emergent paths are graded by : it's because the paths of are graded by their length. I know that you are interested in loops specifically and not just paths? Perhaps there is a variant of this result specialized to loops. Now I will do my best to explain what actually is: it is the loose morphism component of a double functor where
John Baez said:
Puzzle. Why is it impossible for or to be in ?
Thanks. I am not able to find where I am making a mistake in the attached example.
counterextotype1234.PNG
In the attached example, both and lie in . If I am understanding correctly, then it is an example of a graph where none of the vertices in the intersection are of type 1, 2, 3 or 4.
Jade Master said:
Are you dealing with -enriched graphs?
Yes, I think so.
John Baez said:
So far I can only think of an 'obvious' condition that rules out emergent loops.
If there's an emergent loop:
- there must be a vertex in that has an edge coming into it from a vertex in , and an edge going out of it into a vertex in . Let's call this a passage from to .
Also:
- there must be a vertex in that has an edge coming into it from a vertex in , and an edge going out of it into a vertex in . Let's call this a passage from to .
Yes, I agree.
John Baez said:
In fact we can classify loops in by how many times they go through each passage from into , and how many times they use each passage from to . So we get a much more refined grading of the set of loops.
I agree. If I am not misunderstanding, the grading on simple loops that you discussed before https://categorytheory.zulipchat.com/#narrow/channel/229156-theory.3A-applied-category-theory/topic/Graphs.20with.20polarities/near/513523364 precisely describes this idea?
Jade Master said:
I've been following along by email, and I've realized that the problem you are working on is something I've actually thought about quite a bit before.
Thank you!! I am trying to understand your ideas!!
Adittya Chaudhuri said:
John Baez said:
Puzzle. Why is it impossible for or to be in ?
Thanks. I am not able to find where I am making a mistake in the attached example.
counterextotype1234.PNG
In the attached example, both and lie in . If I am understanding correctly, then it is an example of a graph where none of the vertices in the intersection are of type 1, 2, 3 or 4.
Thanks! You're right, I was confused!
I think I have a better way of thinking about all this stuff now, which evolved today while I was eating eggs Benedict at Kilimanjaro Coffee, my favorite place for breakfast in Edinburgh.
John Baez said:
I think I have a better way of thinking about all this stuff now, which evolved today while I was eating eggs Benedict at Kilimanjaro Coffee, my favorite place for breakfast in Edinburgh.
Thanks!! The breakfast sounds very tasty!! Is Kilimanjaro a type of coffee ? Or a cafe in Edinburgh?
Kilimanjaro is a famous mountain near Kenya, and a lot of good coffee grows in Kenya, and Kilimanjaro Coffee is a cafe in Edinburgh.
I see.. Thanks!!
I just learned that Kilimanjaro is in Tanzania... but it's visible from a game park in Kenya:
Kilimanjaro viewed from Amboseli
Amazingly it has glaciers, though 80% of them have melted during the 20th century, and people expect them to disappear entirely during this century.
John Baez said:
Amazingly it has glaciers, though 80% of them have melted during the 20th century, and people expect them to disappear entirely during this century.
This is really sad... I really wish we could change it by changing our lifestyle, maybe?
John Baez said:
I just learned that Kilimanjaro is in Tanzania... but it's visible from a game park in Kenya:
Interesting!!
Jade Master said:
I've been following along by email, and I've realized that the problem you are working on is something I've actually thought about quite a bit before.
Yes! I mentioned your work a while ago in this mammoth thread. The idea of gluing two graphs together and paths that zig-zag back and forth between the two graphs is visible in our old paper Open Petri Nets, and you developed it enormously after that. I wanted to apply your ideas here. So I'm glad you're joining the conversation.
Are you dealing with -enriched graphs?
At first, yes. Then we're looking at -enriched graphs with edges labeled by elements of a set , and if I'm not confused, these are the same as graphs enriched in .
John Baez said:
Amazingly it has glaciers, though 80% of them have melted during the 20th century, and people expect them to disappear entirely during this century.
For many years, this article https://johncarlosbaez.wordpress.com/2015/03/27/spivak-part-1/ has motivated me in many ways. I really like this line
Can we expect or hope that our species as a whole will make decisions that are healthy, like keeping the temperature down, given the information we have available? Are we in the driver’s seat, or is our ship currently in the process of spiraling out of our control?
When you said that the glaciers on Kilimanjaro will melt entirely by the end of this century, I recalled this article.
Interesting! That article was written by David Spivak, but those questions are ones I wonder about often. Right now I feel that we are spiraling out of control - politically, economically and ecologically. I'm trying to accept the possibility that our civilization may crash. For some reason many of us feel that a history of unending progress is necessary for the universe to be okay: anything else would be tragic. But just as a finite lifetime for an individual is okay, so is a finite history for a species.
Anyway, this is a huge digression! I want to talk about Jade's ideas and how they interact with my thoughts at Kilimanajaro Coffee. But also I want to finish writing the paper.
John Baez said:
Interesting! That article was written by David Spivak, but those questions are ones I wonder about often. Right now I feel that we are spiraling out of control - politically, economically and ecologically. I'm trying to accept the possibility that our civilization may crash. For some reason many of us feel that a history of unending progress is necessary for the universe to be okay: anything else would be tragic. But just as a finite lifetime for an individual is okay, so is a finite history for a species.
Thanks for explaining!! I am also coming to terms with the almost inevitable times ahead that you described, although it is very hard to accept!!
People often don't accept big problems until it's too late, or almost too late. Pretending problems aren't real reduces pain in the short term.
John Baez said:
People often don't accept big problems until it's too late, or almost too late. Pretending problems aren't real reduces pain in the short term.
I agree!!
Jade Master said:
What I like about this result is that it explains why the case of two graphs is not special. For three graphs, you could let have three vertices, self loops, and edges going between all pairs (and I think this should generalize to the pushout of graphs).
Interesting. In a way, is it saying that the same argument can be extended to structured multicospans? (when we have )
Almost everything about structured cospans always generalizes to structured multicospans, but I think that's a bit of a distraction for the present paper. There's a limit to how complicated we want to make things!
Adittya Chaudhuri said:
Jade Master said:
What I like about this result is that it explains why the case of two graphs is not special. For three graphs, you could let have three vertices, self loops, and edges going between all pairs (and I think this should generalize to the pushout of graphs).
Interesting. In a way, is it saying that the same argument can be extended to structured multicospans? (when we have )
Well I don't know what a structured multicospan is exactly, but if you have a diagram in the category of graphs which is a bunch of connected cospans, then this argument will apply to that situation.
Jade Master said:
Well I don't know what a structured multicospan is exactly, but if you have a diagram in the category of graphs which is a bunch of connected cospans, then this argument will apply to that situation.
Thanks!! I understand your point. I also do not know exactly what a structured multicospan is!! I was thinking of it as a device , where is finitely cocomplete, is a finitely cocontinuous functor and , which has a number of interfaces through which it can glue with that many objects in at the same time. However, the argument you gave perfectly matches my intuition.
John Baez said:
Almost everything about structured cospans always generalizes to structured multicospans, but I think that's a bit of a distraction for the present paper. There's a limit to how complicated we want to make things!
Thanks. I understand and agree with your point.
Here's something Adittya and I were discussing. Say we have an -labeled graph where is some rig. Say and are vertices of this graph. What's the difference in meaning between
and
In the framework we're talking about, an edge from to labeled by means that (or the entity corresponding to ) has a direct effect on of type .
Thus, one can argue that no edge means that has no direct effect on .
But one can also make an argument that an edge labeled by means that has no direct effect on .
This leads to a somewhat annoying situation: we have two ways of saying the same thing. I wouldn't say this is a 'contradiction'; it's just a bit strange.
Adittya had a different theory: he says that perhaps no edge from to simply means that we don't know the effect of on .
This is certainly believable if we consider applications to biochemical regulatory networks. These graphs can have dozens or hundreds of vertices, one for each chemical under consideration, and we usually haven't checked all pairs of these chemicals to see if one directly affects another.
Let's use "u" to mean "we don't know", or the absence of an edge.
In an -labeled graph if we have labeled edges
the indirect effect of on is given by the product .
But if there's no edge from to what is the indirect effect?
We could try to throw the element into a rig, forming a new rig , and declare that
for all . This amounts to taking and giving it a new zero, namely .
I'm not sure this is a good idea, though. (There's a lot to say about this idea, but there are different directions to explore, and I'm not sure any of them are good.)
Instead of going further, I'll stop and let @Adittya Chaudhuri talk.
Thanks.
I was thinking that we usually start an investigation of "how and are related/influencing each other" when we at least "think" that there is a chance that and could be related in a causal way. To distinguish this situation from the situation where "we do not find any reason to investigate a causal relationship between and ", we are distinguishing from "no edge".
I was thinking of as a correlation... that is the "zeroth" stage of any causality investigation. My assumption was that "a causality investigation" should start "somewhere"... To me, we start the investigation when we see a "correlation". Then, while understanding the correlation, we may or may not come up with a causal relationship. From this perspective can we think of "0" as an unknown influence/correlation? On the other hand, when we have "absence of edge", it may mean that there is no correlation.
In a way, I want to think of an edge labeled by as one which asserts the "existence of a direct influence", whereas the absence of an edge does not assert that "existence".
I am somewhat confused by all those words of yours.
From this perspective can we think of "0" as an unknown influence/correlation?
Here you seem to be using an edge from to labeled by to mean "we don't know" if has a direct effect on .
But then someone creating a biochemical regulatory network needs to start by drawing a complete graph and labeling every edge with . I don't think they actually do this!
I was using no edge from to to mean "we don't know" if has a direct effect on .
Then, if we discover has no direct effect on we draw an edge from to labeled by .
Yes, I agree, I meant to say the existence of a direct influence is marked by , when we don't know the type of the said influence.
Now it seems you're drawing a distinction between
and
That's okay. But there must also be a third option:
Yes, I understand the dilemma now: How to distinguish between (1) and (3)?
I should emphasize that there are 2 issues here:
For example there are some particularly interesting choices of where contains one element meaning "an effect, but we don't know what kind it is" and another meaning "no effect".
However, right now I was trying to understand the general theory of -labeled graphs. For example, we might have or or . I want some general philosophy that applies to all of these cases.
I'll quit here - time for dinner. But we're definitely not done with this!
John Baez said:
But we're definitely not done with this!
Yes, I agree completely.
John Baez said:
Now it seems you're drawing a distinction between
- we don't know if has a direct effect on
and
- we know has a direct effect on but we don't know what it is
That's okay. But there must also be a third option:
- we know that has no direct effect on .
Maybe we have to come up with the right choice of a monoid which covers all these possibilities?
I don't know if there's a place for this as well: there may or may not be a direct effect of on , but regardless we are choosing to ignore any such effect.
This kind of choice can be practically helpful when modelling complex systems.
So there can be a difference between what we know and what we choose to model.
I don't know if this distinction shows up in the applications you are interested in. I could imagine a setting where one wants to model every direct effect one knows about.
Another question: if we do not know the type of an influence, then does it actually help in "not forgetting" these kinds of influences? What extra benefit do we get from information about the "existence of an unknown direct influence", compared with "ignorance of its existence"?
If we ignore the existence of an unknown direct influence, then we are left with only two choices:
However, I think this is again going back to what you discussed here
We can also think like this:
If there are no indirect influences (paths) from to , then we may assume there cannot exist any direct influence from to , and hence an "absence of edge". However, if there exists at least one indirect influence (path) from to , then we can draw an edge labeled to denote the possibility of a direct influence.
Maybe I am still misunderstanding!!
David Egolf said:
So there can be a difference between what we know and what we choose to model.
I don't know if this distinction shows up in the applications you are interested in.
Quite possibly nobody using graphs with polarities is sophisticated enough to have formalized this yet. Indeed, I bet nobody except us talks about an arbitrary monoid of polarities. But we're trying to increase the level of sophistication, so your suggestion is interesting. Maybe we can invent a nice monoid or rig that includes an element meaning "a nonzero effect, but we choose to ignore it". Or other subtle concepts.
I am wondering if choosing to not put an edge from to could be a way to indicate we're choosing to not model how impacts .
And then one could attempt to use some kind of labelled edge even in the case of "we know there is no effect" or "we don't know if there is an effect".
Anyways, I'll continue to read this thread with interest! :smile:
David Egolf said:
I am wondering if choosing to not put an edge from to could be a way to indicate we're choosing to not model how impacts .
More often, not putting an edge from to means you haven't had time to think about it! We start with no edges, and drawing each edge takes thought.
But this is problematic, because we might also deliberately not draw an edge to indicate the absence of a direct effect.
So, it's possible that current standard practice is suboptimal.
John Baez said:
Maybe we can invent a nice monoid or rig that includes an element meaning "a nonzero effect, but we choose to ignore it". Or other subtle concepts.
The closest thing that comes to my mind is to choose "absence of an edge" over an "edge labeled by an absorbing element". In a way, an absorbing element is not adding much information to the system? Non-zero absorbing element?
Btw, a monoid can have at most one absorbing element: if two elements are both absorbing, their product equals each of them, so they are equal. But you can always take a monoid and adjoin an absorbing element! If it already had one, the new one absorbs the old one, and the old one absorbs everything else!
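Here's a throwaway Python sketch of this trick (my own illustration, with a made-up function name): adjoining a fresh absorbing element to a finite monoid given by its multiplication table, for example turning the sign group into the three-element monoid with a zero.

```python
# Illustrative sketch: adjoin a fresh absorbing element to a finite monoid
# presented by a multiplication table. If the monoid already had an absorbing
# element, the new one absorbs it, and the old one still absorbs everything else.

def adjoin_absorbing(elements, mult, zero):
    """mult is a dict {(a, b): a*b}; returns the data of the enlarged monoid."""
    assert zero not in elements, "pick a fresh name for the new element"
    new_elements = list(elements) + [zero]
    new_mult = dict(mult)
    for a in new_elements:
        new_mult[(a, zero)] = zero
        new_mult[(zero, a)] = zero
    return new_elements, new_mult

# Example: the sign group {+1, -1} becomes the monoid {+1, -1, 0}.
signs = [1, -1]
sign_mult = {(a, b): a * b for a in signs for b in signs}
elems, mult = adjoin_absorbing(signs, sign_mult, zero=0)
print(elems)          # [1, -1, 0]
print(mult[(-1, 0)])  # 0
```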
Yes, I agree.
When you said "non-zero effect" in
, did you mean non-absorbing element then?I think now my mind is not properly working. I should probably sleep now . In a rig, I think is the only absorbing element. So, there can not be any non-zero absorbing element in a rig-labeled graph.
When you said "non-zero effect" did you mean non-absorbing element then?
I was talking about concepts in system modeling, not mathematics, so I meant "an effect that differs from no effect at all".
In a rig, I think is the only absorbing element.
Yes, since the in a rig is an absorbing element for its multiplicative monoid:
and since an absorbing element in a monoid is always unique (if it exists at all), the only absorbing element in a rig is .
Thus, the concept of 'absorbing element' only deserves a special name when discussing monoids, not the multiplicative monoids of rigs.
John Baez said:
I was talking about concepts in system modeling, not mathematics, so I meant "an effect that differs from no effect at all".
I see. Thanks!
John Baez said:
Thus, the concept of 'absorbing element' only deserves a special name when discussing monoids, not the multiplicative monoids of rigs.
Thanks. Yes, I fully agree.
I added another example of a monoid good for monoid-labeled graphs: the terminal monoid! This is a good excuse to describe some issues of interpreting monoid-labeled graphs. See what you think:
The terminal monoid is a monoid containing just one element. This is also known as the trivial group. In the applications at hand we write the group operation as multiplication and call the one element , so that . Thus, we call this monoid . Any graph becomes a -graph in a unique way, by labeling each edge with , and this gives an isomorphism of categories
We can use a graph to describe causality in at least two distinct ways:
We can use the presence of an edge from a vertex to a vertex to indicate a way in which (the entity named by) has a direct effect on (the entity named by) , and the absence of an edge to indicate that has no direct effect on . Note that even if there is no edge from to , may still have an indirect effect on if there is a path of edges from to .
We can use the presence of an edge from a vertex to a vertex to indicate a way in which has a direct effect on , and the absence of an edge to indicate that has no currently known direct effect on . This interpretation is useful for situations where we start with a graph having no edges, and add an edge each time we discover that one vertex has a direct effect on another.
We can also take at least three different attitudes to the presence of multiple edges from one vertex to another:
We can treat them as redundant, hence unnecessary, allowing us to simplify any graph so that it has at most one edge from one vertex to another.
We can treat them as indicating different ways in which one vertex directly affects another.
We can use the number of edges from to to indicate the amount by which affects .
All these subtleties of interpretation can also arise for -graphs where is any other monoid. We will not mention them each time, but in applications it can be important to clearly fix an interpretation.
Thanks!! I feel the example of is a very natural and fitting starting point for arguing why one should generalise from graphs to Set-labeled graphs, and then further to monoid-labeled graphs, as you already explained.
As you explained, if I am not misunderstanding, then a directed multigraph comes with the following structural types of basic directed influences among its vertices, viz:
[Of course, I am also assuming the two kinds of interpretations of causalities and three kinds of attitudes that you described above]
Now, I think since (as you have shown ), it is already equipped to describe the above causalities. However, since there is only one element that we can use to label edges, we cannot distinguish between the functional types of directed influences. To capture distinct functional types of directed influences along with the structural types of directed influences, I think it is now necessary to generalise from to , for a non-singleton set . However, since we want to capture indirect/composed influences, i.e. influences which are themselves products of various influences, we naturally need to put some algebraic structure on the labeling set . I think from the point of view of "polarity studies", the most natural generalisation is to make a monoid, although we can add other algebraic structures (in addition to the monoid structure) to capture the different ways we can add/compose/subtract etc. different types of directed influences.
However, I think tells us precisely that we have a valid reason to extend our framework from to for realistic purposes.
John Baez said:
All these subtleties of interpretation can also arise for -graphs where is any other monoid. We will not mention them each time, but in applications it can be important to clearly fix an interpretation.
Yes, I fully agree.
John Baez said:
I added another example of a monoid good for monoid-labeled graphs: the terminal monoid! This is a good excuse to describe some issues of interpreting monoid-labeled graphs. See what you think:
We can use a graph to describe causality in at least two distinct ways:
We can use the presence of an edge from a vertex to a vertex to indicate a way in which (the entity named by) has a direct effect on (the entity named by) , and the absence of an edge to indicate that has no direct effect on . Note that even if there is no edge from to , may still have an indirect effect on if there is a path of edges from to .
We can use the presence of an edge from a vertex to a vertex to indicate a way in which has a direct effect on , and the absence of an edge to indicate that has no currently known direct effect on . This interpretation is useful for situations where we start with a graph having no edges, and add an edge each time we discover that one vertex has a direct effect on another.
We can also take at least three different attitudes to the presence of multiple edges from one vertex to another:
We can treat them as redundant, hence unnecessary, allowing us to simplify any graph so that it has at most one edge from one vertex to another.
We can treat them as indicating different ways in which one vertex directly affects another.
We can use the number of edges from to to indicate the amount by which affects .
I really like these interpretations. I think it also says that we can actually express quite a lot of causalities by using only the structural aspects of non-labelled graphs and the free categories generated by them.
Great!
I added some examples of how we can use maps between monoids.
Proposition. The following assignments
define a functor
where is the category of commutative monoids and is the category of -labeled finite graphs.
The functors have many practical applications:
Example. Every commutative monoid has a unique homomorphism where is the terminal monoid. The resulting functor
takes any -labeled finite graph and discards the labeling, giving a finite graph. This can be used to discard information about how one vertex directly affects another and merely retain the fact that it does.
Example. There is a homomorphism from the multiplicative group of the reals to the group sending all positive numbers to and all negative numbers to . The resulting functor turns quantitative information about how much one vertex directly affects another into purely qualitative information.
Example. There is a homomorphism from the multiplicative monoid of the reals to the monoid sending all positive numbers to , all negative numbers to , and to . The resulting functor again turns quantitative information into qualitative information.
Example. The homomorphisms in the previous examples all have right inverses. For example, there is a homomorphism sending to , to and to , and this has
The functor can be used to convert qualitative information about how one vertex directly affects another into quantitative information in a simple, default manner. Of course this should be taken with a grain of salt: since , quantitative information that has been converted into qualitative information cannot be restored.
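If it helps to see the change-of-labels functor in action, here's a minimal Python sketch (mine, with made-up names, representing a labeled graph just as a list of labeled edges): applying the sign homomorphism edgewise turns a quantitatively labeled graph into a qualitatively labeled one, in the spirit of the examples above.

```python
# Minimal sketch: the functor induced by a monoid homomorphism f just relabels
# each edge by f, leaving the underlying graph untouched. A labeled graph is
# represented here as a list of (source, target, label) triples.

def relabel(edges, f):
    """Apply a monoid homomorphism f to every edge label."""
    return [(s, t, f(lbl)) for (s, t, lbl) in edges]

def sign(x):
    """Homomorphism from the multiplicative monoid of the reals to {+1, -1, 0}."""
    return (x > 0) - (x < 0)

edges = [("a", "b", 2.5), ("b", "c", -0.7), ("c", "a", 0.0)]
print(relabel(edges, sign))
# [('a', 'b', 1), ('b', 'c', -1), ('c', 'a', 0)]
```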
By the way, @Jade Master - I do plan to think and talk more about what you said yesterday! I'm just doing a lot of different stuff, and right now I'm writing up some old thoughts.
Thanks!! These examples are very interesting. I am writing my thoughts below:
I was thinking that analogous to the functor , we also have a functor , which takes a monoid to the category . If I am not misunderstanding, then I think all the examples that you constructed by can also be constructed via . But, of course, the setups would be different due to the presence/absence of additive morphisms. However, an interesting thing that may happen in the latter case is the liberty to use non-commutative monoids.
Now, I was thinking in terms of the semiautomaton example. The relevant non-commutative monoid in this case is generated by the set of endomorphisms , indexed by a set of inputs . Now, an edge in the associated semiautomaton graph is labeled by (the process through which the state changes to the state ). Now, if we think of the monoid describing a discrete time system as you described in the overleaf file, then a homomorphism induces , which in particular maps the semiautomaton graph to a graph describing how many units of time the automaton takes when we change various states. So, in a way the homomorphism turns qualitative information into quantitative information in a non-obvious way. However, I think we can consider this example only if we use the functor .
That sounds right. Perhaps we should include an analogue for categories of monoid-labeled graphs of Propositions 2.23 and 2.28. There's probably also an analogue for the category of monoid-labeled graphs and Kleisli morphisms! But I'm indecisive about this, because I don't want to stuff the paper with boringly similar propositions. Maybe the right thing to do is simply state, after Prop. 2.28, that 2.23 and 2.28 have analogues for other cases.
John Baez said:
That sounds right. Perhaps we should include an analogue for categories of monoid-labeled graphs of Propositions 2.23 and 2.28. There's probably also an analogue for the category of monoid-labeled graphs and Kleisli morphisms! But I'm indecisive about this, because I don't want to stuff the paper with boringly similar propositions. Maybe the right thing to do is simply state, after Prop. 2.28, that 2.23 and 2.28 have analogues for other cases.
Thanks. Yes, I understand your points and fully agree with your suggestions. I also realised that the content of Proposition 2.5 and Proposition 2.6 is similar in nature. As you suggested, can we club all these results together into a single proposition or a pair of propositions, with some remarks about other analogous cases after the proposition? Something with the theme "behaviour under a change of labelling system". The reason I am saying this is that I think the proof structure in each case is almost the same, although the consequences are a little different.
John Baez said:
There's probably also an analogue for the category of monoid-labeled graphs and Kleisli morphisms!
This sounds interesting to me!! Especially, in the context of "how identification of motifs in a monoid-labelled graph changes when we change only the labelling system, but keep the underlying graph structure as it is". For example, in the attached file,
Kleislilabel.PNG
I constructed a very simple example where we change the labeling from to by the unique homomorphism.
Maybe the attached example is not that interesting. But maybe in some other cases it could be interesting, although I am not sure.
Another example could be the following (which may or may not be interesting)
Say is the semiautomaton monoid generated by the set of endomorphisms , indexed by a set of inputs , and let be the associated -labeled graph. Now, let be the monoid describing a discrete time system, and let be a monoid homomorphism (expressing the time duration for a change of states), which induces . Now, consider the graph . Then, I think a monic Kleisli morphism from the walking feedback loop with holonomy, say , is the same as finding various sequences of changes of state induced by the semiautomaton such that it comes back to its original state after units of time.
I don't want to talk about semiautomata in this paper, except for the one remark we've already made. I believe automaton theory will tend to confuse readers who are interested in our primary applications, namely causal loop diagrams and regulatory networks.
However, you've convinced me that monoid-labeled graphs deserve a bit more respect. So:
Can you write up an analogue of Propositions 2.23 and 2.28, and proofs, for monoid-labeled graphs and Kleisli morphisms? Namely:
Right now I plan to write up a section on Mayer-Vietoris for the homology monoids of directed graphs. I think my earlier attempts to state this result were a bit suboptimal. I want to see if the ideas described here can be helpful:
Patchkoria has been developing a generalization of homological algebra that works for categories of modules of rigs - e.g. the category of commutative monoids, since commutative monoids are -modules.
(Patchkoria says "semimodule of a semiring" but I say "module of a rig", because I get sick of seeing the prefix "semi-", and I don't think the shorter terminology is confusing.)
Patchkoria defines a concept of chain complex that works for modules over a rig, where instead of maps with you have maps obeying some conditions.
But we only need a pathetically simple special case of this, since we're dealing with the chain complex associated to a graph, where only and are nonzero!
He shows that a short exact sequence of his generalized chain complexes gives a long exact sequence of homology monoids.
This should imply a Mayer-Vietoris theorem for the homology monoids of graphs. But since the chain complex associated to a graph is so simple - with just two nonzero terms - it should be just as easy to handle this special case "by hand", without explaining all of Patchkoria's machinery. (Of course we should still cite him.)
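Spelled out (at least as I currently understand it, with the homology monoids defined via equalizers and coequalizers, as discussed further below), the special case we need is just this:

```latex
\[
  \mathbb{N}[s],\ \mathbb{N}[t] \;\colon\; \mathbb{N}[E] \rightrightarrows \mathbb{N}[V],
\]
\[
  H_1(G;\mathbb{N}) \;=\; \mathrm{eq}\bigl(\mathbb{N}[s],\mathbb{N}[t]\bigr)
  \quad\text{(the 1-cycles, which are all of $H_1$ since a graph has no 2-cells)},
\]
\[
  H_0(G;\mathbb{N}) \;=\; \mathrm{coeq}\bigl(\mathbb{N}[s],\mathbb{N}[t]\bigr)
  \quad\text{(the free commutative monoid on the set of components)}.
\]
```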
Oh, actually Patchkoria's results don't directly apply: he is starting with a diagram of modules of a rig
where is monic, is epic and some sort of exactness condition holds (see definition 1.1), saying roughly that the image of is the kernel of . But we don't have that when we're trying to prove a Mayer-Vietoris theorem for the homology monoids of graphs!
John Baez said:
I don't want to talk about semiautomata in this paper, except for the one remark we've already made. I believe automaton theory will tend to confuse readers who are interested in our primary applications, namely causal loop diagrams and regulatory networks.
However, you've convinced me that monoid-labeled graphs deserve a bit more respect. So:
Can you write up an analogue of Propositions 2.23 and 2.28, and proofs, for monoid-labeled graphs and Kleisli morphisms? Namely:
- a proposition stating that there is a functor sending each monoid to the category of -labeled graphs and Kleisli morphisms between these. (Check that it's true!)
- a little proposition describing in simple terms.
Thanks. I understand and agree with your points. Yes, I will write up the Kleisli morphism analogues of Propositions 2.23 and 2.28, with proofs, as you suggested.
Great, thanks!
Now I'll say a bit about the homology of graphs with natural number coefficients, and the Mayer-Vietoris theorem for this sort of homology. As usual I'll start with the basics and repeat myself a lot... but since I keep understanding things slightly better, I think that's good to do.
If we were working with abelian groups, each graph would give a chain complex, and then we could take the homology of that. But since we're working with commutative monoids, we have to adapt our approach. It actually becomes simpler: instead of chain complexes we work with graph objects!
Here's how:
I'll call the "free commutative monoid on a set" functor , since it sends each set to the commutative monoid , which consists of -linear combinations of elements of .
By applying this functor to a graph
we get a graph object in commutative monoids, namely
I will call this graph object . Then:
Now, to study Mayer-Vietoris for the homology of graphs with coefficients, I want to consider the pushout in of a diagram like this:
where and are monic. I will often call this pushout and write as , to remind you of the usual Mayer-Vietoris theorem.
So, intuitively speaking, we have a graph that's the union of two subgraphs and , and is their intersection.
When I talked about this last I assumed the intersection has no edges, just vertices. But we may be able to drop this assumption!
I just explained how the "free commutative monoid on a set" functor
can be used to turn any graph into a graph object in . This process is functorial and I'll call it
Here is my stupid notation for the category of graph objects in .
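Since we're leaning on this functor a lot, here's a quick Python sketch (purely illustrative; multisets via Counter stand in for the natural-number linear combinations) of how it turns a graph into a graph object in commutative monoids, and of the equalizer-style condition picking out 1-cycles.

```python
# Illustrative sketch: N[X] is modeled as multisets of elements of X (Counter),
# and a map f: X -> Y extends "linearly" to N[f]: N[X] -> N[Y]. Applying this
# to the source and target maps of a graph gives its graph object in CommMon.

from collections import Counter

def N(f):
    """Extend f: X -> Y to multisets, i.e. to N[f]: N[X] -> N[Y]."""
    def Nf(chain):
        out = Counter()
        for x, multiplicity in chain.items():
            out[f(x)] += multiplicity
        return out
    return Nf

# A square graph a -> b -> c -> d -> a with edges e1, ..., e4.
source = {"e1": "a", "e2": "b", "e3": "c", "e4": "d"}
target = {"e1": "b", "e2": "c", "e3": "d", "e4": "a"}
Ns, Nt = N(source.get), N(target.get)

# The 1-chain e1 + e2 + e3 + e4 is a 1-cycle: its images under N[s] and N[t] agree.
c = Counter({"e1": 1, "e2": 1, "e3": 1, "e4": 1})
print(Ns(c) == Nt(c))   # True
```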
The free commutative monoid on a set functor
is a left adjoint, and I believe
is also a left adjoint. If so this should follow from some abstract nonsense, or else we can check it directly. For now let me assume it's true.
Left adjoints preserve pushouts! So, because the graph I'm calling is the pushout of
it follows that is the pushout of
We can say this a different way, since a pushout is a coproduct followed by a coequalizer. We have two inclusions of graphs
so we get
and the coequalizer of these two maps is .
This fact is the closest analogue to where we start in the usual Mayer-Vietoris theorem, where we have a topological space that's the union of open sets and , and we get a short exact sequence of chain complexes
Instead of a short exact sequence, we now have a coequalizer diagram! Let me try to argue that this is just as good.
A short exact sequence says 1) some map is monic, and 2) the image of is the kernel of some map , and 3) is epic. I think the analogues of these 3 facts are:
1) and are monic. This doesn't follow from being a left adjoint, since left adjoints don't always preserve monics, but if you stare at it I think you'll see it's true.
2) is the coequalizer of
3) The resulting map
is epic. This seems obvious directly, but it's also true because coequalizers give epics.
So, this is nice. Now, to get the usual Mayer-Vietoris theorem we take a short exact sequence of chain complexes
and get a long exact sequence of homology groups. We want to do something similar here. But we need to adjust our thinking, since we can't talk about kernels and cokernels: we need to talk about equalizers and coequalizers.
This is what I'm trying to figure out now.
Thanks !! These ideas look very interesting. I need a little more time to fully understand your ideas.
One short question: do you want to construct a boundary map (for getting the long exact sequence)? Or do you want to express it in the form of an equaliser? I am asking this because some days back you gave a prescription of such a boundary map by changing the coefficients from to .
That's a very good question.
When working with commutative monoids, it seems more "honest" to use equalizers and coequalizers of pairs of maps rather than kernels and cokernels of maps. I got a formula for a boundary map in the Mayer-Vietoris long exact sequence by switching coefficients from to , but this feels a bit like "cheating". So, right now I'm trying to get two different maps from to .
Here's one recent attempt. There's an obvious "projection" map
which takes any linear combination of edges of and kills all the edges that aren't in . This restricts to a map
since is actually a submonoid of . Thus, we can define two maps
and we can compose these with the quotient map
to get two maps I'll call
These feel like a pretty obvious substitute for the usual boundary map in the Mayer-Vietoris sequence. But I need to see if they have good properties!
Thanks!! To me, this construction looks much more natural than the earlier one
" is the equalizer of if we take "
Furthermore, the fact that "it allows to have edges in the graph " fits perfectly with the description.
John Baez said:
These feel like a pretty obvious substitute for the usual boundary map in the Mayer-Vietoris sequence.
I agree.
Thanks! I believe these new maps
are the same as my old maps
composed with the quotient map
I'm trying to make them look more elegant.
But I still need to check that they have properties that are somehow analogous to what you'd expect from the exactness of the Mayer-Vietoris exact sequence.
I've been thinking about your idea of a rig containing a "negligible" element, @David Egolf - that is, a nonzero element that we nonetheless decide to neglect.
I was trying to start with some commutative rig (think of it as the real numbers if you like), throw in a new element , make up an addition and multiplication for that express the idea that is negligibly small compared to all numbers except , and then check that this addition and multiplication make into a rig.
For example I want
I think this works, but I seem to have found another way to say it.
Instead of trying to put in a new negligible element, let's take and put in a new element with all the properties that zero must have in a rig! We have to check that this gives a rig, but I believe this is fairly easy.
Now here's the trick: let's call the old zero . This is now our negligible element! Note that it obeys
Note also that
I will check more carefully that all the commutative rig laws hold, but it's looking good.
This reminds me of various tricks for taking the real numbers and throwing in an infinitesimal, like the Levi-Civita field or the superreal and surreal numbers. But it's simpler because we're only trying to get a rig!
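Here's a rough Python sketch of the construction (mine, just to sanity-check the laws on a few elements): take a rig, adjoin a brand-new zero, and rename the old zero. Addition and multiplication inside the old rig are untouched, the new zero does everything a zero must do, and the old zero becomes the negligible element.

```python
# Illustrative sketch only. The new rig has the fresh symbol "0" (the new zero)
# together with all elements of the old rig R; the old zero of R is now the
# negligible element eps.

NEW_ZERO = "0"

def add(x, y, old_add):
    if x == NEW_ZERO:
        return y
    if y == NEW_ZERO:
        return x
    return old_add(x, y)       # addition within the old rig is unchanged

def mul(x, y, old_mul):
    if x == NEW_ZERO or y == NEW_ZERO:
        return NEW_ZERO        # the new zero is absorbing
    return old_mul(x, y)       # multiplication within the old rig is unchanged

# Take R to be the floats; eps is R's old zero.
eps = 0.0
print(mul(eps, 5.0, lambda a, b: a * b))       # 0.0: eps absorbs old elements
print(mul(eps, NEW_ZERO, lambda a, b: a * b))  # '0': but the new zero absorbs eps
print(add(eps, 3.0, lambda a, b: a + b))       # 3.0: eps still adds the way the old zero did
```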
The idea is interesting. It seems the old is now an almost absorbing element with respect to the multiplicative monoid of the new rig. I said `almost' because you wrote for all , but . However, I think there might be another condition: for all ?
To me, it seems similar to adding an "unknown influence " to an already existing rig of known influences.
But, I think what separates your construction from the "insertion" of an unknown influence is the condition for all . I think this justifies as a non-zero negligible element. Interesting!!
Somehow your construction is reminding me of the Extended reals
Adittya Chaudhuri said:
The idea is interesting. It seems the old is now an almost absorbing element with respect to the multiplicative monoid of the new rig. I said 'almost' because you wrote for all , but . However, I think there might be another condition: for all ?
Yes, I want that too. (I may have said 'rig' in a few places, but I wanted all rigs in this discussion to be commutative, mainly to keep things simple, so . The ideas may also work for noncommutative rigs, but even then I want for all in the original rig.)
And yes, is "almost absorbing": it's absorbing with respect to the original rig (since it was the zero of the original rig), but .
John Baez said:
This reminds me of various tricks for taking the real numbers and throwing in an infinitesimal, like the Levi-Civita field or the superreal and surreal numbers. But it's simpler because we're only trying to get rig!
Every field is by default a rig. So, I guess the Levi-Civita field is itself a concrete example (with many more properties than we probably need)?
Although, at the moment I am not able to guess the physical interpretation of Levi-Civita field elements.
If we denote the new rig by , then, I think there is a rig homomorphism , which is identity on and sends to .
I think we may see it as a process of forgetting the negligible influence?
Adittya Chaudhuri said:
John Baez said:
This reminds me of various tricks for taking the real numbers and throwing in an infinitesimal, like the Levi-Civita field or the superreal and surreal numbers. But it's simpler because we're only trying to get rig!
Every field is by default a rig. So, I guess the Levi-Civita field is itself a concrete example (with many more properties than we probably need)?
The Levi-Civita field and the superreals and the surreals are all examples of hyperreal number systems.
A hyperreal number system is a field that has all the same properties expressible in the first-order language of fields as , but also a nonzero number such that
etcetera. (This property, being an infinite conjunction, cannot be expressed in the first-order language of fields.)
Anyway, I liked David Egolf's idea of taking the real numbers and throwing in just one element that means 'negligible'. In any hyperreal number system we have to put in infinitely many new numbers. For example we have and also huge numbers like
Thanks. I understand the point now.
John Baez said:
Anyway, I liked David Egolf's idea of taking the real numbers and throwing in just one element that means 'negligible'.
Yes, I think it is interesting and also useful from the point of applications. If we really want to consider some influence as negligible, then we may not need to distinguish between two negligible influences unless it is very necessary.
Right! If something is negligibly small, its square is even smaller, but we are probably happy to call its square negligible and leave it at that! There's been a lot of work on hyperreal number systems, but David's idea feels new to me, and practical, so I'll put it in our paper.
Yes, it is interesting and a new way of treating negligible influences.
I was not aware of hyperreal numbers till today. They look very interesting. Thank you for explaining them!!
I like @David Egolf's idea of choosing some non-zero influence to neglect for a particular modeling purpose. Your construction precisely describes such a choice (the old ). I think it `reminds the modeler' that although he/she has created the model, he/she has not considered "that particular influence". As you constructed, we can do just that by treating "that particular influence" as the old in the new rig.
I'm glad you are finding this idea helpful! (I didn't really come up with this idea though. I said something related and then @John Baez came up with this idea!)
You said
I am wondering if choosing to not put an edge from to could be a way to indicate we're choosing to not model how impacts .
That's true! But I thought about it for so long I forgot what you said! I started thinking about using a special kind of labeled edge to indicate that we're choosing not to model how impacts . And somehow I interpreted this to mean that we're treating the impact as negligible (though I see now that's different).
My theory is that practitioners don't want to have to think very hard before not putting in an edge. So, the absence of an edge should have a very boring meaning.
Anyway, there's a multiplicity of interpretations of the same math. Here is what I added to the paper just now. I realized most of what I have to say works not just for rigs but for monoids. We introduce rigs later.
Example 2.9. Another important monoid of polarities is the multiplicative monoid of , which we can write as . This contains the 2-element monoid of the previous example as a submonoid, but it also contains a new element that is absorbing: for all . So, unlike , this monoid is not a group.
There are at least three distinct interpretations of -graphs:
Our choice of interpretation affects how we interpret the absence of an edge from one vertex to another. We discuss this in the following example.
Example 2.10. More generally, for any monoid we can form a new monoid where is a new absorbing element. Thus, in the product of elements of is defined as before, but for all , and . This new monoid contains as a submonoid. As before, there are at least three interpretations of the element :
1) there is no direct effect of on , or
2) there is no currently known direct effect of on .
Thanks!! The two examples and interpretations are very nice. I think they clarify the "absence of edge vs " situation really well.
Thanks! I've been worrying for a long time about the difference between 0 and the absence of an edge. It seems there are a couple of consistent interpretations. Maybe now I can relax.
I am attaching an example
necessary stimulation.png
where the symbol stands for necessary stimulation. In other words, the presence of is necessary for to perform its function, that is, positively affecting , negatively affecting and necessarily stimulating . These notions of necessary stimulation are common in SBGN-AF descriptions (they have a special arrow symbol for that). Can we construct a monoid which can accommodate these kinds of causalities?
Interesting. Maybe you can try to invent a nice monoid that contains and .
Thanks. I will try .
I am trying to write down a binary operation suitable to express the meaning of necessary stimulation.
In particular, by , I am interpreting the statement " is necessary for to perform" as a kind of positive stimulation of on , but different from the usual .
Claim: The required monoid is , where is defined as follows:
Hence, is the identity element in the monoid . Thus, is not a submonoid of . However, I think a more general statement is true:
Lemma
For any monoid and a singleton set , there is a unique monoid such that
For our case .
I see, you're forming a new monoid by adjoining a new identity element to . This beats the old identity , which still acts like an identity when you multiply it with any element except the new identity.
This is very similar to, but distinct from, our trick of adjoining an absorbing element to a monoid .
When we have a rig , these tricks get combined: we can adjoin a new element that's absorbing for and the identity for . We've talked about this recently. But now we've separated this idea into two parts. Very nice!
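And here's the matching sketch for the identity-adjoining trick (again just my illustration, with a made-up function name): the new element acts as the identity for everything, while the old identity keeps working inside the old monoid.

```python
# Illustrative sketch: adjoin a fresh identity to a finite monoid given by a
# multiplication table. The old identity still acts as an identity on the old
# elements, but the new element beats it.

def adjoin_identity(elements, mult, new_id="*"):
    assert new_id not in elements, "pick a fresh name for the new identity"
    new_elements = list(elements) + [new_id]
    new_mult = dict(mult)
    for a in new_elements:
        new_mult[(a, new_id)] = a
        new_mult[(new_id, a)] = a
    return new_elements, new_mult

# Example: adjoin such an element (say, 'necessary stimulation') to {+1, -1, 0}.
M = [1, -1, 0]
M_mult = {(a, b): a * b for a in M for b in M}
elems, mult = adjoin_identity(M, M_mult)
print(mult[("*", -1)])   # -1: the new element acts as the identity
print(mult[(1, "*")])    # 1
print(mult[(1, -1)])     # -1: the old multiplication is unchanged
```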
I will add this idea to the paper soon. I also need to get serious about constructing the Mayer-Vietoris exact sequence for homology monoids of graphs. I think my candidate for the connecting homomorphism is screwed up. I want to go back to the usual construction of the connecting homomorphism for homology groups, and see how to adapt it to commutative monoids.
It’s going to be very cool if you do get a Mayer-Vietoris sequence! In Grandis’ directed topology program, there are “flexible” directed spaces which have a nice van Kampen and Mayer Vietoris theory, ie their homotopy pushouts behave a lot like those of undirected spaces, but also “inflexible” ones for which you can’t prove such theorems. The key distinction is whether you’re allowed to stop partway through a directed path. Graphs are thus inflexible as directed spaces, because you have to get all the way from one vertex to another—there’s no such thing as stopping “in the middle of an edge.” So if there is a good Mayer-Vietoris theorem, it’ll probably be quite particular to the details of graphs, and thus nice and concrete and interesting!
John Baez said:
When we have a rig , these tricks get combined: we can adjoin a new element that's absorbing for and the identity for . We've talked about this recently. But now we've separated this idea into two parts. Very nice!
Thank you so much!! Yes, this looks interesting both from the application perspective and from the Math perspective.
Kevin Carlson said:
So if there is a good Mayer-Vietoris theorem, it’ll probably be quite particular to the details of graphs, and thus nice and concrete and interesting!
Thank you. Yes, I got your points. Interesting!!
John Baez said:
I will add this idea to the paper soon.
Thanks!
Although it may not add anything extra to help with the required construction of the boundary map, I am still just trying to rephrase the construction category-theoretically as follows:
If I am not misunderstanding, then our required boundary map (if we want to see it as a map) is a special map from the equalizer of to the coequalizer of . Although I am not able to guess any such map from "any universal property" involved.
Yes, that's what the source and target of should be. I'm trying to define using a diagram chase similar to the usual one, but with various kernels and cokernels replaced by the appropriate equalizers and coequalizers. I'll report on my progress in a while!
I feel I have a rough mental image of what looks like in examples. It's easier to describe if we're doing homology with coefficients. You take a 1-cycle on and chop it up as the sum of a part supported on and a part supported on . The parts and are 1-chains but not 1-cycles, and we let be the homology class of .
It's really helpful to draw a picture. Take a circle , chop it in half, and let be the two endpoints of one of the intervals, counted with appropriate signs.
There's not a unique way to chop into a part supported on and a part supported on , but different choices should give choices of that are homologous, i.e. differ by the boundary of something in .
In the case I used to focus on, where is a graph with no edges, there is a unique way to write as the sum of and , because
Thank you. I am now thinking along the ideas you explained.
I have a question (most probably, I am misunderstanding)
You already proved the following
is the equalizer of if we take . here
With new terminology, we have
is the equalizer of if we take .
Now, we have a map , given by . So, we have by the definition of and . Now, by the universal property of the equalizer, we have a unique map , .
Now, what would be the problem if we consider the following as our Mayer Vietoris?
, where the double arrows denote and .
Something like this might work. If this were a conventional exact sequence of abelian groups, you would be claiming that the image of is the kernel of . But these are commutative monoids. So what analogue of exactness are you claiming for and ?
Thank you. I got your point. At the moment I am not sure about the exactness condition of and . However, I feel if my construction is correct, then an exactness condition can be framed in terms of equalizer and coequalizer. I am thinking about this point.
It's possible that instead of working with the single map
we should focus a lot of attention on the two obvious inclusions
When working with commutative monoids instead of abelian groups, we often need to replace single maps by pairs of maps, kernels by equalizers and cokernels by coequalizers.
Of course followed by the two projections from onto its summands gives and .
Thanks. Yes, I got your point.
I've been trying to develop an analogue of 'exact sequence' for commutative monoids and I think maybe I have. It's good to start by thinking about exact sequences of abelian groups (or objects in any abelian category, but let's keep it concrete).
Suppose we have a strict -category internal to . This has an abelian group for each and source and target maps
for each , obeying the usual laws of a [[globular set]]:
Of course the -category also has composition maps, and identities, but I believe all of these can be reconstructed starting from just the source and target maps and addition in the abelian groups !
That may seem surprising, but it's well-known for ; see for example HDA6, where it's shown that a category internal to can be reconstructed from its underlying graph internal to , meaning the pair of linear maps
The argument works equally well for categories internal to .
But in fact we can go further: up to equivalence we can reconstruct a category internal to from a single map
defined by
This was also explained in HDA6.
Thank you!! I am trying to understand your ideas.
I do not want to distract from your explanation. I just wanted to share what I was thinking this morning after seeing your post about using double arrows. I am probably fully wrong. I just have a mental picture (which could be wrong), with which I will try to make a general statement, and then I will try to justify it in our framework.
Definition
Let be a category with equalizers and coequalizers. Then, I define an exact sequence in as the following diagram:
such that is the equalizer of , is the coequalizer of , is the equalizer of , is the coequalizer of , and so on!!
My claim is the following:
is the exact sequence (as per my definition). Here, are as you defined before, and takes to . (This is my proposed Mayer-Vietoris)
The above definition is motivated by the definition you constructed for and .
Let me justify why I think so!
, which satisfies my definition of an exact sequence in the category of commutative monoids.
Now, another claim:
For a category with nice properties like (the category of abelian groups), there is a one-to-one correspondence between
and usual exact sequences.
Our ideas may converge. My ideas lead to a somewhat different concept of exact sequence in , but we may be able to reconcile them. I'll continue and try to explain my idea.
HDA6 gives the (previously known) argument that up to equivalence we can reconstruct a category internal to from a single map
defined by
In fact more generally I believe this - I don't know if anyone has written a proof, so I'll call it a conjecture:
Conjecture. Strict -categories in an abelian category are equivalent to chain complexes in .
This conjecture is a bit vague because I haven't chosen a definition of 'equivalence' here. I believe there's a very concrete procedure for extracting the underlying chain complex from a strict -category in , and also a very concrete procedure for taking a chain complex and building a strict -category in from it. We can just do one and then the other, and see what happens. For example, if we start with a chain complex, I believe the round trip will get us a chain complex that's chain homotopy equivalent.
However, this conjecture won't apply to strict -categories in , because we used subtraction to reduce the source and target to a single map:
If we don't go so far as to replace and with , we have the important concept of a 'globular object'. A globular object of height in a category is a sequence of objects with maps
obeying
Every strict -category has an underlying globular object of height , where we keep the source and target maps but forget about composition and identities. I believe this:
Conjecture. Strict -categories in an abelian category are equivalent to globular objects of height in .
I'm afraid this too may use subtraction! Let's look at the case and take . A globular object of height in is just a graph object
How do we define composition of 1-cells starting with a graph object in ? Given with let's write . Let's define the arrow part of by
Notice that
Here we've taken the morphism and 'translated it back to the origin', as we often do when explaining vector addition in a basic course, where it's important to explain the difference between a vector starting from an arbitrary point in space and a vector starting from the origin. In fact that stuff is an example of what we're doing here!
Given and here's how we define .
Notice that
while
so indeed
as desired.
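For reference, here is a minimal sketch of the kind of composition formula at work here, assuming HDA6-style conventions with source, target and identity-assigning maps s, t : C_1 → C_0 and i : C_0 → C_1 (these letters are my own choice of notation):

```latex
% for f : x -> y and g : y -> z, linearity forces
g \circ f = f + g - i(y),
\qquad\text{so}\qquad
s(g \circ f) = s(f), \quad t(g \circ f) = t(g).
```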
I did all this just to remind myself why we need subtraction in our abelian groups to define composition of morphisms starting from a graph object or, more generally, a globular object in . This won't work in the case I'm actually interested in: a globular object in . :cry:
Nonetheless I'll try to explain my conjecture that exact sequences in are the same thing as contractible globular objects in .
And - here finally is the point of all this stuff I'm saying! - I'm hoping that the right generalization of exact sequences to may be contractible globular objects in .
(I could have just started with the definition of these, but I wanted to explain what I'm actually thinking. I hadn't expected this to take so long.)
Okay: say is a globular object in a category . We call the object of -cells. As before, let's write
whenever is an -cell with .
We say is contractible if
1) given any 0-cells there exists a 1-cell with
2) given any -cells for , there exists an -cell with
Intuitively, 1) means that is connected and 2) means that the th homotopy group of vanishes because we can fill in any -sphere.
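Spelled out in an assumed notation (so this is a hedged reconstruction, not a quote), with s, t denoting the source and target maps from (n+1)-cells to n-cells, the two conditions read:

```latex
% 1) connectedness at the bottom level
\forall\, x, y \in X_0 \;\exists\, f \in X_1 :\ s(f) = x,\ t(f) = y;
% 2) filling at the higher levels, n \ge 1
\forall\, f, g \in X_n \text{ with } s(f) = s(g),\ t(f) = t(g)
\;\exists\, h \in X_{n+1} :\ s(h) = f,\ t(h) = g.
```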
Conjecture. Suppose is a globular object in and is its underlying chain complex. Then is exact iff is contractible.
Note that this conjecture, unlike my previous ones, is quite precisely stated, at least if you remember that the procedure for getting a chain complex from a globular object in is to let and .
Why do I believe this conjecture? The idea is that the 'vanishing of homotopy groups' intuitively captured by the contractibility of is nothing other than the vanishing of homology groups for . But a chain complex has vanishing homology groups iff it's exact.
So now I'll make a guess, which we see is not completely supported by the evidence. What's the correct analogue for commutative monoids of an exact sequence of abelian groups?
Guess: it's a globular object of commutative monoids
that is contractible.
That was quite a lot of talk to motivate a concrete suggestion! So, I'm suggesting that our 'Mayer-Vietoris exact sequence of commutative monoids', or any exact sequence of commutative monoids, should actually be a list of commutative monoids together with maps
obeying the globular identities
together with contractibility:
if and then there exists with .
I plan to keep studying this, but now I'll try to understand what you did. You seem to be using a more ad hoc generalization of exact sequence, which may be what we actually see in this example. It's possible that in your diagram
when you have a single arrow between commutative monoids this arrow is both my and my .
I have already argued that perhaps your should be treated as two arrows, which I called and .
Anyway, I'm ready to stop thinking about grandiose abstract nonsense for a while (unless anyone has anything interesting to say about my conjectures!) and work on our specific problem, which really should be rather simple.
John Baez said:
That was quite a lot of talk to motivate a concrete suggestion!
Thank you!! I am trying to understand your motivation.
Most of it is closely connected to old ideas from HDA6; I don't know if you've read that paper. Actually most of those ideas were not original to HDA6; they can be found in the work of Grothendieck and others. The idea of contractible globular objects is also not new.
John Baez said:
Most of it is closely connected to old ideas from HDA6; I don't know if you've read that paper. Actually most of those ideas were not original to HDA6; they can be found in the work of Grothendieck and others. The idea of contractible globular objects is also not new.
Thanks. Yes, although I have not read your paper HDA6 in detail, I used your notion of 2-vector spaces to represent the strict Lie 2-algebra of a strict Lie 2-group, as well as fibres in VB-groupoids, in our papers https://arxiv.org/pdf/2107.13747 and https://arxiv.org/pdf/2309.05355.
John Baez said:
Conjecture. Strict -categories in an abelian category are equivalent to chain complexes in .
I just went through the first part of the proof of Lemma 6 in HDA6 (construction of the composition law from the and the underlying group structure of vector spaces). From the construction, it seems it's a version of the construction of a 1-category (internal to ) from the data of a globular 1-set [[globular set]] in . Now, as you explained in the post, and as proved in Lemma 6 of HDA6, the construction is actually possible from a single map , so your conjecture seems very much true and natural from my point of view. I feel what a globular 1-set is to a 1-category, a globular 2-set is to a strict 2-category (at least from the definition and laws of globular sets). Thus, your conjecture seems a very natural n-level generalisation of Lemma 6 in HDA6.
Let be a graph in . Then, if we apply the and functors, has all the data of , because we define the composition by concatenation of paths in .
However, if we consider an arbitrary category in , then from the underlying graph of it is not possible to reconstruct the category (for example, consider a category with a single object and only the identity arrow).
Now, let be a category in , or a 2-vector space as in HDA6. If I assume there is an internal free-forgetful version of the adjunction between (the category of 2-vector spaces) and (graphs internal to vector spaces), then my question is the following:
Is equivalent to ? Or do we recover some "extra morphisms" beyond what is already present in ? Maybe I am thinking in the wrong direction (about a relation between the construction of from the underlying globular 1-set in of (as in Lemma 6 in HDA6) and a possible construction of from via the functor ).
John Baez said:
That was quite a lot of talk to motivate a concrete suggestion! So, I'm suggesting that our 'Mayer-Vietoris exact sequence of commutative monoids', or any exact sequence of commutative monoids, should actually be a list of commutative monoids together with maps
obeying the globular identities
together with contractibility:
if and then there exists with .
Interesting!! I think I got your motivation now!! Thanks!! I am assuming the correctness of your 3rd conjecture (which seems very reasonable given your definition of contractibility) that "if is the underlying chain complex of a globular object in , then is exact if and only if is contractible". I feel this is a kind of restriction of the equivalence given by combining your first two conjectures, i.e. between chain complexes in an abelian category and globular objects in , which to me is a sort of globular version of the [[Dold-Kan correspondence]].
Now, although the "exactness condition" does not directly makes sense from the point of a suitable definition of a chain complex in the category of commutative monoids, interestingly(as you explained), from the point of vanishing of homotopy groups (your contractibility definition), a notion of contractible globular object in the category of commutative monoids still perfectly makes sense.
Thus, I feel your proposed definition of exact sequence of commutative monoids is very natural and reasonable. I find this point of view (changing the perspective from an exactness condition on chain complexes to a contractibility condition on globular objects, in order to generalise exact sequences of abelian groups to exact sequences of commutative monoids) very interesting!!!
Now, I am trying to make a connection between a possible relation between my definition of exact sequences of commutative monoids and your contractible globular objects in commutative monoids.
Consider the following diagram in a category .
.
Let denote the parallel arrows , and denote the single arrows.
Then, I am calling a chain complex in if, for all , we have
Now, consider a category with equalizers and coequalizers. Then, I am defining an exact sequence in as a chain complex in
such that is the equalizer of , is the coequalizer of , is the equalizer of , is the coequalizer of , and so on.
Now, by combining your first two conjectures, we get a one-one correspondence between chain complexes in an abelian category and globular objects in . Now, if I am not misunderstanding, I think you have not defined a notion of chain complex in the category of commutative monoids; instead you defined a (probably equivalent) notion in terms of globular objects in the category of commutative monoids to avoid the problem of "exactness".
However, I have two questions:
1) Is there a one-one correspondence between appropriate notion of chain complexes in the category of commutative monoids and globular objects in the category of commutative monoids?
2) What is the appropriate notion of chain complex in the category of commutative monoids?
I have an intuitive feeling that via questions (1) and (2) our ideas may converge.
In other words, I feel that my definition of chain complex may answer question 1) and question 2). Moreover, I feel that maybe my exactness condition on chain complexes will produce your contractibility conditions on globular objects and vice versa.
The above discussion is just a feeling. I know I have to make my statements more concrete. I am working on it.
Also, it is very possible that the above discussion leads nowhere.
Just as a kernel pair describes the failure of an arrow to be monic, and a cokernel pair its failure to be epic, one could have a pair that describes the failure of a pair to be jointly monic/epic. I wonder if this idea is of use in this context.
Adittya Chaudhuri said:
However, I have two questions:
1) Is there a one-one correspondence between appropriate notion of chain complexes in the category of commutative monoids and globular objects in the category of commutative monoids?
2) What is the appropriate notion of chain complex in the category of commutative monoids?
I was trying to argue that the appropriate notion of chain complex in the category of commutative monoids is precisely the notion of globular object in the category of commutative monoids.
I conjectured that globular objects in the category of abelian groups are equivalent to chain complexes in the category of abelian groups, where we define . (I think checking this conjecture is just a matter of generalizing some calculations in HDA6.)
But we can't subtract morphisms between commutative monoids, in general. So I believe we should avoid the temptation to work with chain complexes, and simply use globular objects - which are, I believe, more fundamental!
But here's another idea: simplicial objects in the category of abelian groups are also equivalent to chain complexes of abelian groups: that's the [[Dold-Kan theorem]].
Maybe simplicial objects in the category of commutative monoids are even better than globular objects!
However, what I want to do now is think about your ideas regarding Mayer-Vietoris for the homology monoid of a graph. A graph is such a simple thing that many of the fancy ideas I'm talking about become much simpler in this case. We don't need to figure out the general theory of how to generalize homological algebra to commutative monoids to finish our paper!
(A graph is a very simple sort of globular set, and a reflexive graph is a very simple sort of simplicial set.)
John Baez said:
(A graph is a very simple sort of globular set, and a reflexive graph is a very simple sort of simplicial set.)
Thanks! Yes, I agree!!
John Baez said:
Maybe simplicial objects in the category of commutative monoids are even better than globular objects!
Interesting!!
John Baez said:
(a reflexive graph is a very simple sort of simplicial set.)
Interestingly, it seems a graph in our sense can also be seen as a simplicial set of dimension as in Directed Graphs as Simplicial Sets; however, morphisms in our sense are different from morphisms in their sense. Hence, our categories and their categories may be different.
James Deikun said:
Just as a kernel pair describes the failure of an arrow to be monic, and a cokernel pair its failure to be epic, one could have a pair that describes the failure of a pair to be jointly monic/epic. I wonder if this idea is of use in this context.
Interesting! Thank you!
John Baez said:
A graph is such a simple thing that many of the fancy ideas I'm talking about become much simpler in this case. We don't need to figure out the general theory of how to generalize homological algebra to commutative monoids to finish our paper!
I agree!!
I think we are now thinking about two problems (probably independent) in the context of emergence of feedback loops in directed graphs while gluing graphs along vertices/ vertices and edges.
(1) Construction of an appropriate Mayer-Vietoris exact sequence of commutative monoids for our directed graphs.
(2) Understanding/characterising the elements of .
Now, I think, I found something in the direction of (2), although in a much simpler case (at the moment), that is when only contain vertices.
Since we have shown
Description of the idea:
Let , such that only contain vertices. Now, let and denote the set of paths in and the set of paths in , respectively. Let and denote the vertices of and the vertices of , respectively.
Now, let us define the following:
Now, let .
Claim:
There is a category whose
Proof: There are obvious source, target and unit maps. Composition is defined by concatenation, which can be seen to be associative. Hence, is a category.
Definition (Emergent paths)
Let such that contain only vertices. Then, for each , define as the set of emergent paths of degree in the graph .
Definition (Emergent loops)
Let such that contain only vertices. Then for each , define the automorphism monoid in the category as the set of emergent loops based at the vertex in the graph . Furthermore, define the set theoretic intersection as the set of emergent loops of degree based at in the graph .
Now, it may be interesting to see how holonomy comes into the picture, i.e. emergent holonomy . More precisely, for a commutative monoid , let and be two -labeled graphs such that contain only vertices. Now, we have induced holonomy maps
which I think naturally induces a functor , which I call the emergent holonomy functor with respect to the gluing of the graphs and along vertices.
Inspiration of the above idea:
I used a similar idea to introduce a notion of parallel transport functor along Haefliger paths on a principal Lie 2-group bundle over a Lie groupoid (a groupoid object in the category of principal bundles) in Sections 4 and 5 of my paper PARALLEL TRANSPORT ON A LIE 2-GROUP BUNDLE OVER A LIE GROUPOID ALONG HAEFLIGER PATHS. I think the idea of Haefliger paths goes back to André Haefliger (for example, see Section 4.1.3 of CLOSED GEODESICS ON ORBIFOLDS). For this reason, if my above construction is correct, I would like to call my category the Haefliger path category of emergence in .
This looks interesting. Let's see if I can explain the main idea simply using fewer symbols. I try to avoid formulas until the main idea has been explained in words. It's much easier to understand complex formulas if you already know what they say!
Here's what I guess you're saying:
Suppose we have a graph that's the union of two subgraphs and that have no edges in common, only vertices. Then there's a category where objects are vertices in and morphisms are paths between such vertices.
The set of morphisms is thus the disjoint union of sets , where consists of paths such that....
... here I'm confused. I would have guessed that should consist of paths that go back and forth between and times. Then:
But you seem to be saying something else. First, you say consists of vertices in , which are objects. So the morphisms start with , and you say
I don't see any need for those superscripts but that's a minor point. More importantly: what do all these symbols say?
You seem to be describing the set of paths that either start in and then go into , or start in and then go into . So this is exactly what I would have called . Okay, good!
So the only difference between what you're saying and what I would have guessed is that:
1) you're ignoring the set of paths that stay in or stay in , which I would call
2) you're instead using to mean the set of morphisms.
Is that correct?
Thanks!! Yes, we need another layer, as you said. Somehow I forgot to consider it.
Okay, great!
The objects of are , and the morphisms are "what I described", together with your description of "paths that lie entirely in or in ".
John Baez said:
Okay, great!
Thanks!
John Baez said:
Suppose we have a graph that's the union of two subgraphs and that have no edges in common, only vertices. Then there's a category where objects are vertices in and morphisms are paths between such vertices.
As objects, I meant not vertices in .
Now, it would be really great if the composite of a morphism in and a morphism in was a morphism in , since then we'd have a category enriched in -graded sets, but that's not true with this definition, since:
but
I think we can fix this by writing as the disjoint union of two subsets, say and . consists of paths in whose edges start in , while consists of paths in whose edges start in .
Then we have rules like: composing a path in and a path in , we get a path in . The rules are a bit complicated but we could probably express them in a nice way after some thought.
Adittya Chaudhuri said:
As objects, I meant not vertices in .
Okay, good.
John Baez said:
Then we have rules like: composing a path in and a path in , we get a path in . The rules are a bit complicated but we could probably express them in a nice way after some thought.
Thank you!! This idea looks very interesting!! Then we can actually do algebraic operations on the emergent paths.
Right! I hope we get a way of grading the homsets where the set of grades is some monoid , and composing a morphism in with one in gives a morphism in where .
But this monoid is more complicated than . As a set it's , but it has a funny addition.
Thanks.. yes I agree.
Anyway, I need to spend time continuing to think about Mayer-Vietoris. My progress has been very slow, and my excuse is that I'm getting started in my job at the University of Edinburgh, doing a lot of stuff like getting a library card, taking required online courses, meeting the category theory grad students, etc.
No no... It's completely fine. I will then try to think about grading the homsets in the way you suggested.
Let me recall the category as I defined before (along with adding the paths that lie entirely in or, in ).
Now, let .
Then, as I explained, we can define a category whose
Now, you said
I hope we get a way of grading the homsets where the set of grades is some monoid , and composing a morphism in with one in gives a morphism in where .
Claim:
Below I am trying to show that such a monoid cannot exist (when the grading is based on the number of times the paths in make the transition from to and from to as a result of gluing and along vertices).
Let us assume that such a monoid exists. In that case, note that by the way I defined composition in (i.e by concatenation), we need to have distinct grading elements for
Let's check the associativity!!
as (by the way we compose two elements in in the category .)
Now, consider as (by the way we compose an element in and an element in in the category ). Now, (by the same reasoning).
Hence, . However, from the way we compose two elements of in the category , we have . Thus, the binary operation is not associative, and hence cannot be a monoid. (Proved)
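Here is a minimal Python sketch of a closely related way to see the obstruction (the encoding of a path as a list of 'A'/'B' tags, one per edge, is my own): the transition count of a concatenation is not even determined by the transition counts of the two pieces, since it also depends on whether the junction itself is a switch, so no binary operation on the bare counts can be compatible with concatenation.

```python
# Count how many times a path switches between the two subgraphs.
def transitions(path):
    """path = list of 'A'/'B' tags, one per edge."""
    return sum(1 for x, y in zip(path, path[1:]) if x != y)

p1, q1 = ["A"], ["B"]   # both pieces have 0 transitions
p2, q2 = ["A"], ["A"]   # both pieces also have 0 transitions

print(transitions(p1 + q1))   # 1: the junction itself is a switch
print(transitions(p2 + q2))   # 0: same grades of the pieces, different composite grade
```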
Now, I will try to show that we can still get a nice grading monoid (well behaved with respect to concatenation of paths) based on the holonomy functor
Let me first recall the set up.
For a commutative monoid , let and be two -labeled graphs such that contain only vertices. Now, we have induced holonomy maps
which induces a functor .
Now, I am defining a -grading which works perfectly with the composition law in the category (as is a functor) .
For each morphism in , let , the set of morphisms in whose holonomy is . Note that , and thus defines a -grading.
Thus, using the category , we can now grade the paths in the graph in two different ways:
1) -grading representing the number of times the paths in have made the transition from to and from to as a result of gluing and along vertices.
2) For any monoid and labeled graphs and , we can have the -grading . Here, the paths in are graded by the holonomy that is induced on the directed graph by and due to the way and are glued along vertices.
As I have shown, 1 is not well behaved with respect to the operation of concatenation of paths, but 2 is well behaved.
However, I think this bad behaviour of 1 with respect to the composition law of the category precisely tells us that the quantity described by "the number of times the paths in make the transition from to and from to as a result of gluing and along vertices" is indeed an emergent phenomenon. By contrast, the good behaviour of 2 with respect to the composition law of the category tells us that the holonomy of paths is not an emergent phenomenon when we glue and along vertices.
I am writing down some things I noticed:
When contains only vertices, I think the category is the same as . However, interestingly, the description of defines a -grading on the -sets of , based on the number of times the paths in make the transition from to and from to as a result of gluing and along vertices.
Thus, from the above discussion, I think the functor is the same as the functor . Basically, I think it tells us that for a monoid , horizontal composition of open -labeled graphs goes to horizontal composition of open -labeled categories in a functorial way.
Hence, I think we can interpret every holonomy map as a -grading on the set (hence on the set of paths in ) induced by the holonomies of paths in and the holonomies of paths in via the monoid (which is well behaved with respect to concatenation of paths) when we glue and along vertices.
That sounds right! By the way, this stuff is closely connected to @Jade Master's work and we must not forget that.
Thanks. Yes, definitely!! I never thought about any relationship between grading and emergence before you shared @Jade Master's idea on it.
Jade Master said:
The theorem I want to claim without proof is that
where is a variant of Grothendieck construction, or more accurately the displayed category construction, and is the free category on the pushout of and .
@John Baez Although I have not gone through every detail of Jade's idea, it now feels like my Haefliger path category might be the same as , because I also claimed . Maybe Jade's idea is more general (I think she is also considering the case when contains edges?)
I'll have to review Jade's work! I'm less familiar with the Grothendieck construction aspect and more familiar with the aspect - the free category on a pushout of graphs.
Here are some preliminaries to get ready for that second aspect.
First, a little theorem: whenever we have a monoidal category with coproducts where the tensor product distributes over the coproducts, the free monoid on an object is
so it's automatically -graded.
For example in this says that the free monoid on a set is
This consists of 'words' in the 'alphabet' , where is the set of words of length . The product in this monoid comes from the obvious maps
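A tiny Python illustration of this grading (the alphabet is made up, purely for the example): the free monoid on a set is the set of all words in that alphabet, the grade of a word is its length, and grades add under concatenation.

```python
from itertools import product

X = ["a", "b"]                    # a two-letter alphabet, chosen arbitrarily

def words(n):
    """All words of length n in the alphabet X -- the grade-n piece."""
    return ["".join(w) for w in product(X, repeat=n)]

print(words(2))                   # the grade-2 piece: ['aa', 'ab', 'ba', 'bb']

u, v = "ab", "bba"                # words of grade 2 and grade 3
w = u + v                         # concatenation is the monoid product
assert len(w) == len(u) + len(v)  # grades add under the product
print(w, "has grade", len(w))
```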
I forget if this little theorem applies directly to the free category on a graph, but I'm thinking maybe it does, as follows:
There is a bicategory of sets, spans of sets, and maps of spans, where composition of spans is done via pullback. A graph is the same thing as an endospan: a span from a set to itself. What's a category in these terms?
Since a one-object bicategory is a monoidal category, the category of spans from a fixed set to itself is a monoidal category. So there's an interesting monoidal structure on the category of graphs with a fixed set of vertices.
And a monoid in this monoidal category is just a category with as its set of objects! This is fun to check.
I believe has coproducts and the tensor product (composition of endospans) distributes over coproducts. If so, the free category on a graph is
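As a sanity check, here is a small Python sketch of that formula on a made-up three-edge graph (so purely illustrative): the paths of length n are exactly the n-fold composite of the edge span with itself, and the free category's hom-sets are the disjoint union of these over all n.

```python
def compose_paths(P, Q):
    """Composite of two sets of paths: concatenate when the target of the
    first matches the source of the second (this is composition of spans)."""
    return [(s1, es1 + es2, t2)
            for (s1, es1, t1) in P
            for (s2, es2, t2) in Q
            if t1 == s2]

edges = [("x", "e", "y"), ("y", "f", "z"), ("z", "g", "x")]
paths = {0: [(v, (), v) for v in ("x", "y", "z")],   # identity paths
         1: [(s, (e,), t) for (s, e, t) in edges]}   # the edges themselves

for n in range(2, 4):
    paths[n] = compose_paths(paths[n - 1], paths[1])  # n-fold composite of the edge span

for n in sorted(paths):
    print(f"paths of length {n}:", paths[n])
```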
@Jade Master generalized these ideas and applied them to study emergence.
Thanks! These ideas sound very interesting, and much more general in nature than what I was thinking with respect to Haefliger -paths(inspired from the notion of paths in Lie groupoids The fundamental group(oid) of a Lie groupoid). I am trying to understand Jade's ideas in more detail.
Thanks @Adittya Chaudhuri and @John Baez I'll take a look at Haefliger G-paths, I promise to explain myself in more detail as well. I hope to convince you that the Grothendieck construction perspective is worth it if you put the time in to understand. More soon!
Thank you so much @Jade Master
John Baez said:
And a monoid in this monoidal category is just a category with as its set of objects! This is fun to check.
I believe has coproducts and the tensor product (composition of endospans) distributes over coproducts. If so, the free category on a graph is
Interesting!! I think the composition law in the category associated to the monoid in the monoidal category associated to the 1-object bicategory is the same as the monoidal operation in the monoid. I think this monoidal operation is the same as the horizontal composition law of the one-object bicategory. Then, I think "the category you mentioned" is precisely the underlying 1-category of the 1-object bicategory, whose object set is a singleton and whose morphisms are the horizontal morphisms of the 1-object bicategory.
Hence, as you explained, we get that the free category on a graph is the same as the free monoid on , defined as .
Although I have yet to check this rigorously, I am more or less convinced that is the same as my (maybe with some small changes).
The framework I outlined was designed for applications where you have a single graph with as its set of vertices. In our discussions and are usually graphs with different (but possibly nondisjoint) sets of vertices, and disjoint sets of edges.
However I believe we can handle that situation as follows. Suppose and have sets of vertices and , and disjoint sets of edges. Throw more vertices into each graph so that now they both have the same vertex set , but still disjoint sets of edges.
Then we have two graphs - I'll still call them and - both of which are objects in . The graph you're calling is, I believe, the coproduct in the category . Thus, the free category on this graph is
and we can expand out using the distributive law, but remembering that is not symmetric. E.g.:
From this we can easily see that
where the cross-terms come from the 'emergent' paths that are not paths in and not paths in .
All this is very much like material in Jade's thesis. I believe we see here that it's easier to study 'emergent paths' than to only study 'emergent loops'. It's similar to how the fundamental groupoid of a space is easier to understand (in some ways) than the fundamental group.
By the way, from the calculations above I think it's easy to see that the homsets in the category
are graded by the free monoid on two generators and . For example, all the morphisms in have grade .
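A quick Python sketch of this finer grading (the 'A'/'B' tags and the encoding of a path as its list of edge-tags are my assumptions): the grade of a path is the word of subgraph tags of its edges, and grades multiply by concatenation, so the grading really is by the free monoid on two generators.

```python
def grade(path):
    """path = list of 'A'/'B' tags, one per edge; the grade is the word of tags."""
    return "".join(path)

p = ["A", "A", "B"]   # two edges from the first subgraph, then one from the second
q = ["B", "A"]        # one edge from the second subgraph, then one from the first

assert grade(p + q) == grade(p) + grade(q)   # the grading respects composition
print(grade(p), "*", grade(q), "=", grade(p + q))
```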
John Baez said:
From this we can easily see that
where the cross-terms come from the 'emergent' paths that are not paths in and not paths in .
So if you squint at this formula you can see how the terms in the sum correspond to paths in the graph pictured below:
shapegraph.PNG
First define a function by
Are you with me so far? So then there is a bijection
where if is the composite then .
I'm with you! By the way, I suspect everything I just explained is just a slight variant of stuff in your thesis.
John Baez said:
I'm with you! By the way, I suspect everything I just explained is just a slight variant of stuff in your thesis.
Well nowadays I think about this stuff a little bit differently from what I wrote back then. I hope my new ways are better, but also I've probably just forgotten some of what I wrote there :laughing: it was a while ago.
John Baez said:
By the way, from the calculations above I think it's easy to see that the homsets in the category
are graded by the free monoid on two generators and . For example, all the morphisms in have grade .
To me, these ideas look very beautiful, elegant and interesting!! Thanks to both of you!! Regarding this grading, I have one question.
I agree with the grading you defined; however, would it be more interesting if we could somehow (by quotienting or something) identify with for all ? This quotiented grading is precisely what I had in . However, I think this quotiented grading does not behave well with concatenation of paths. I explained it in the claim portion here.
Now, I understand the grading by two generators and is much better than the quotiented grading that I had in , because your grading also gives information on "the lengths and sequence of the portions of the path in that lie in and the lengths and sequence of the portions of the path that lie in ". Interesting!!
Thanks @Jade Master . I am trying to understand your ideas.
Adittya Chaudhuri said:
John Baez said:
By the way, from the calculations above I think it's easy to see that the homsets in the category
are graded by the free monoid on two generators and . For example, all the morphisms in have grade .
To me, these ideas look very beautiful, elegant and interesting!! Thanks to both of you!! Regarding this grading, I have one question.
I agree with the grading you defined; however, would it be more interesting if we could somehow (by quotienting or something) identify with for all ? This quotiented grading is precisely what I had in .
With this quotiented grading, we are treating
as graded over the free monoid on two idempotents and .
This monoid is a quotient of the free monoid on two generators, where we impose the additional relations , .
However, I think this quotiented grading does not behave well with concatenation of paths.
I think it does in the situation I'm talking about where the set of edges of is disjoint from the set of edges of .
Let me check.
Suppose
is an edge path where is an edge in and is an edge in . So this edge path has grade .
Suppose
is an edge path where is an edge in and is an edge in . This edge path has grade .
Now let's compose them:
This edge path should have grade . But since we're now treating as idempotent, . So this edge path should have grade . And this corresponds to it starting out in , then going into , and then going back into .
It seems okay.
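Here is a minimal Python sketch of that coarser grading as I understand it (the reduction rule is inferred from the discussion above, so treat it as an assumption): grades are words in A and B with adjacent repeats collapsed, i.e. elements of the free monoid on two idempotents, and the composite considered above lands in grade ABA.

```python
def reduce_word(word):
    """Collapse adjacent repeated letters, imposing A^2 = A and B^2 = B."""
    out = []
    for letter in word:
        if not out or out[-1] != letter:
            out.append(letter)
    return "".join(out)

def idem_mul(u, v):
    """Product in the free monoid on two idempotents."""
    return reduce_word(u + v)

# the example above: an AB path composed with a BA path
print(idem_mul("AB", "BA"))           # -> "ABA": start in one part, cross over, come back
assert idem_mul("AB", "BA") == "ABA"
```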
Thanks!! I agree. I think I made a mistake while grading the elements of in , as I graded all the elements of by the same monoid element. But, in your case .
I see - yes, my more refined grading works with the composition of paths.
Thanks. Yes, I agree.
@John Baez and I were discussing in DM about the construction of a rig that captures the idea of necessary stimulation in a directed labeled graph where the labels can be both added (describing the cumulative influences) and multiplied (describing the indirect influences). In this post, I am trying to gather the relevant ideas that we already discussed in this direction.
First, let me recall the constructions of two different monoids obtained by adjoining a single element to an existing monoid:
Now, let's see what happens when we apply (1) to the multiplicative monoid of a rig .
Thus, let me adjoin an absorbing element to the monoid . Now, since in any rig the additive identity is the only absorbing element, becomes our new additive identity in the (hopefully new) rig . But, interestingly, this also means that we implicitly applied (2) to the additive monoid . Thus, we now obtain two new monoids and , which we think combine (via the distributive law) into a new rig .
However, I am yet to verify the distributive law.
For context, we were mostly discussing the commutative ring (which is by default a rig).
I just verified the distributive law. Hence, is a rig.
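For what it's worth, here's a brute-force Python check along the same lines (the choice of Z/3 as the base ring and the name 'bot' for the adjoined element are my assumptions, just to make the check concrete): adjoin an absorbing element to the multiplicative monoid, let it be the new additive identity, and verify the rig laws by enumeration.

```python
BOT = "bot"            # the adjoined element: absorbing for *, identity for +
R = [0, 1, 2]          # an assumed base ring, Z/3
elements = R + [BOT]

def add(x, y):
    if x == BOT:
        return y
    if y == BOT:
        return x
    return (x + y) % 3          # old addition on the base ring

def mul(x, y):
    if x == BOT or y == BOT:
        return BOT              # the adjoined element is absorbing
    return (x * y) % 3          # old multiplication on the base ring

for a in elements:
    for b in elements:
        assert add(a, b) == add(b, a)
        for c in elements:
            assert add(add(a, b), c) == add(a, add(b, c))
            assert mul(mul(a, b), c) == mul(a, mul(b, c))
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
            assert mul(add(a, b), c) == add(mul(a, c), mul(b, c))
print("all rig laws hold")
```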
Great! So now let's see if I can understand the interpretation of this rig when . I became confused thinking about it a while ago. Our new rig has four elements, and we are trying to use those as polarities meaning positive effect, zero effect, negative effect and 'necessary stimulus'. Which one is zero effect and which one is necessary stimulus?
I suppose you should tell me names for the 4 elements - they're a bit confusing since the new absorbing element could be called or .
Thanks!! I think there is an issue (in the context of necessary stimulus) here for the case , i.e. . Since , which in the sense of the application means "positive and negative add to indeterminate" (as you explained before). However, from the above construction , which is a bit strange if we think of as necessary stimulus.
I don't think we should be talking about "indeterminate" here at all - right? The 4-element rig containing "indeterminate" is a completely different rig, right?
Yes, I agree. But then, how do we interpret the addition of and in the rig capturing necessary stimulus?
But the general mathematical construction (constructing a new rig by adjoining an absorbing element to the multiplicative monoid of the old rig) seems correct.
Okay, so you're saying the problem is that with the current interpretation we get
no effect necessary stimulus = necessary stimulus
This is crazy so we cannot use this rig to handle "necessary stimulus".
Let's think about what's going on. Multiplication is more important than addition in our framework so let's see if there is a reasonable multiplicative monoid
{positive effect, negative effect, no effect, necessary stimulus}
We know how to multiply everything except "necessary stimulus", e.g. we know we want
positive effect negative effect = negative effect
and so on. So: do you have a theory about what these should be?
necessary stimulus positive effect =
necessary stimulus negative effect =
necessary stimulus no effect =
necessary stimulus necessary stimulus =
I have good guesses about some but not others!
John Baez said:
Okay, so you're saying the problem is that with the current interpretation we get
no effect necessary stimulus = necessary stimulus
This is crazy so we cannot use this rig to handle "necessary stimulus".
Yes, I completely agree.
According to the general construction,
necessary stimulus positive effect = necessary stimulus
necessary stimulus negative effect = necessary stimulus
necessary stimulus no effect = necessary stimulus
necessary stimulus necessary stimulus = necessary stimulus
As is absorptive w.r.t .
Which I think is crazy.
However, if we consider only monoid (not a rig) like here
, then I think the idea makes sense. I am now thinking: what if we adjoin to the additive monoid instead of the multiplicative monoid?
I think it is making sense because of the following:
necessary stimulus positive effect = necessary stimulus
necessary stimulus negative effect = necessary stimulus
necessary stimulus no effect = necessary stimulus
necessary stimulus necessary stimulus = necessary stimulus
which I think is not crazy by meaning of "necessary stimulus"
I mean, although is positively affecting , unless activates (necessary stimulus), there is no activity of at all (hence the positive effect does not count).
According to the general construction,
necessary stimulus positive effect = necessary stimulus
necessary stimulus negative effect = necessary stimulus
necessary stimulus no effect = necessary stimulus
necessary stimulus necessary stimulus = necessary stimulus
Which I think is crazy.
I'm not interested in that construction now, because it's giving crazy results. I'm asking about biochemical reality. What would biochemists say the answers to these questions should be?
necessary stimulus positive effect =
necessary stimulus negative effect =
necessary stimulus no effect =
necessary stimulus necessary stimulus =
That's all I care about now.
I am not completely sure what biochemists think about it, however, I would expect something like this
necessary stimulus positive effect = positive
necessary stimulus negative effect = negative
necessary stimulus no effect = no effect
necessary stimulus necessary stimulus = necessary stimulus.
and I expect this
necessary stimulus positive effect = necessary stimulus
necessary stimulus negative effect = necessary stimulus
necessary stimulus no effect = necessary stimulus
necessary stimulus necessary stimulus = necessary stimulus
Hence, necessary stimulus is the absorbing element of the additive monoid.
And necessary stimulus is almost an identity element of the multiplicative monoid.
Adittya Chaudhuri said:
I am not completely sure what biochemists think about it, however, I would expect something like this
necessary stimulus positive effect = positive
necessary stimulus negative effect = negative
necessary stimulus no effect = no effect
necessary stimulus necessary stimulus = necessary stimulus.
Okay. Two of those match what I expect, but two of them are more tricky and I think I can argue that
necessary stimulus positive effect = necessary stimulus
necessary stimulus negative effect = necessary stimulus
Briefly, it may be that in a chain of causation if one step is necessary for the next to occur, it's necessary for the whole chain to occur.
However, I'm afraid biochemists may think differently.
It sounds reasonable and interesting, but I am also not so sure what biochemists would say.
Personally, I am very much convinced (thinking about chains of causation) that
necessary stimulus positive effect = necessary stimulus
necessary stimulus negative effect = necessary stimulus
I think necessary stimulus is a particular "type of causation", applicable not only in biochemistry but anywhere.
One question about the concept of "necessary stimulus" is whether it's allowed to have two edges
both labeled "necessary stimulus". If we do, does this mean that both A and A' must be present for B to be formed?
Yes, it does. Basically, in SBGN diagrams they use an AND operator to join two biochemical entities like and .
Of course we don't have an AND operator in our formalism. I guess we're studying a simplified version of the full formalism.
Yes, I also think so.
Another question:
Regarding the chain of causation, how does the logic of "necessary stimulus" differ from an "unknown effect"? I am not able to see much difference in the multiplicative monoid.
except unknown · necessary = unknown. I think I understood.
Interesting. Of course it's possible for the same monoid to have many different meanings. We would see those only when we chose a semantics for our syntax. This paper is all about syntax - we talk in words about what our labeled graphs might mean, but we never discuss any functors that map labeled graphs to their meanings.
Thanks. Yes, I agree and realised this now (after your last post)
Although homology can be thought of as a semantics (if we want).
Or the graded emergent paths.
Also, holonomy
Yes, those are nice semantics for open graphs. But none of them really know about whether one chemical promotes another!
I agree.
There could be some ODE semantics, or Markov process semantics, etc.
But I don't want to think about those now... that's another paper.
Yes, I understand and agree to your point.
By the way, maybe someday you could write a paper on semantics for regulatory networks.
Thanks!! I would love to, but I would love much more if we can jointly write the paper on the semantics of graphs with polarities focussing on regulatory networks.
I think I just discovered an interesting way to describe your four-element rig (Example 7.3 in the file) with the additive and multiplicative operations `exactly as you defined'. By the word "interesting", I meant "a formal way to construct the rig from some known basic monoid/rig".
Below I am sketching my ideas:
Consider the multiplicative monoid of , i.e. according to our notation . Then there is a rig whose
Now, the fact that is indeed a rig follows from the fact that is a (multiplicative) monoid (by Example 1.10 of the book
Jonathan S. Golan, Semirings and their Applications, Springer Netherlands, 1999).
Now,
then I verified that the addition and multiplication tables of the underlying additive and multiplicative monoids, respectively, of the rig are precisely "what you described" as the addition and multiplication tables for your rig in Example 7.3 in the file.
Hence, I claim your rig is the same as the rig .
It also precisely matches our intuition:
means no effect, means indeterminate, means positive effect and means negative effect.
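Here's a small Python sketch of this power-set rig on the sign monoid (the encoding of the four subsets is mine, and I'm reading the identification above as: empty set = no effect, whole set = indeterminate, singletons = positive/negative effect): addition is union, multiplication is the elementwise product, and a brute-force loop confirms the distributive law.

```python
subsets = [frozenset(), frozenset({+1}), frozenset({-1}), frozenset({+1, -1})]

def add(A, B):
    return A | B                                       # union as addition

def mul(A, B):
    return frozenset(a * b for a in A for b in B)      # elementwise product of subsets

for A in subsets:
    for B in subsets:
        for C in subsets:
            assert mul(A, add(B, C)) == add(mul(A, B), mul(A, C))
print("distributivity holds in the power-set rig on {+1, -1}")
```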
Adittya Chaudhuri said:
Thanks!! I would love to, but I would love much more if we can jointly write the paper on the semantics of graphs with polarities focussing on regulatory networks.
Thanks, but I have too many different things I want or need to do. I have a few books I want to finish writing, I need to get back to work on software for epidemiology, etc. I think our current paper is a great first step and you could easily continue working on this sort of subject if you want to. The big question is: what will be the best thing for you to work on, to develop your career? It may be that more practical work in systems biology is better.
I think I just discovered an interesting way to describe your four-element rig (Example 7.3 in the file) with the additive and multiplicative operations 'exactly as you defined'.
That's GREAT! I really like this construction because it reminds me of the conjecture, made here, that there's a systematic way to turn any hyperring into a rig, by taking the subset of the power set of the hyperring generated by the singletons and the hyperring operations. It's a different construction but again it uses the power set.
One big difference is that Golan is using union as addition.
John Baez said:
Thanks, but I have too many different things I want or need to do. I have a few books I want to finish writing, I need to get back to work on software for epidemiology, etc. I think our current paper is a great first step and you could easily continue working on this sort of subject if you want to. The big question is: what will be the best thing for you to work on, to develop your career? It may be that more practical work in systems biology is better.
Yes, I completely understand the point you made about your ongoing/upcoming important academic engagements. For so many years and in so many ways, your work and your vision have inspired me: your papers on 2-group gauge theory during my PhD, your blog posts, your ACT papers during my postdoctoral phase, your vision of `green Mathematics’, to name only a few. I am very glad, happy and honored to have gotten an opportunity to work with you on building some interesting math with applications to the real world during the last few months.
Yes, I will try my best to write the paper on meaningful (from the point of view of systems biology) functorial semantics of regulatory networks. I will highly appreciate your feedback on my attempt. I really enjoyed the style of working in public on Zulip because of all the reasons you mentioned in your very first post in this thread; additionally, it gives me an opportunity to look back at the process of how an interesting idea emerges from initial preliminary ideas. I am looking forward to continuing my future work with this philosophy in mind.
As of now, my goal is to develop math (based on ACT) that is interesting to both mathematicians and systems biologists at the “same time”. To be more precise, from the point of view of mathematics, I want to focus on “mathematical characterisations of various failures of ACT-based compositionality”, and from the point of view of real-world applications, I want to explore how “the said study of emergence” can be useful in systems biology as well as some other important real-world areas. I think in our paper we did a similar thing for graphs with polarities. To be very precise, at the moment, I want to write more papers like the one we are writing now.
John Baez said:
That's GREAT! I really like this construction because it reminds me of the conjecture, made here, that there's a systematic way to turn any hyperring into a rig, by taking the subset of the power set of the hyperring generated by the singletons and the hyperring operations. It's a different construction but again it uses the power set.
Thank you so much!! Yes, I remember your conjecture on "doubly distributive hyperrings". Yes, I agree Golan's construction and your construction are similar in spirit, but there are some differences. I think I realised the following special case/variant of your conjecture via my construction above:
For every non-trivial monoid , the power set can be seen as a hyperring. Then, by Golan's construction, we can make this hyperring into a rig, whose multiplicative monoid coincides with .
That sounds right. (This is true even for the trivial monoid - it's bad to discriminate against trivial objects!)
I noticed that Golan's construction is closely related to the idea of a [[group ring]] or [[group algebra]] or monoid algebra, as follows:
For a monoid and commutative rig we can define the monoid rig to consist of finite linear combinations of elements of with coefficients in , made into a rig in the obvious way:
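The formulas meant by 'the obvious way' are presumably the usual monoid-rig ones; here is a hedged reconstruction:

```latex
\sum_{m} a_m\, m \;+\; \sum_{m} b_m\, m \;=\; \sum_{m} (a_m + b_m)\, m,
\qquad
\Big(\sum_{m} a_m\, m\Big)\Big(\sum_{n} b_n\, n\Big)
  \;=\; \sum_{m,n} (a_m b_n)\,(mn).
```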
When is finite and is the boolean rig, is isomorphic to the set of all subsets of , made into a rig using Golan's construction!
(For infinite we need to use the fact that is not just a rig but a quantale, which means that all infinite sums in converge. Then we can construct a rig of infinite linear combinations of elements of with coefficients in , and this matches Golan's construction.)
By the way, we can say this stuff more elegantly: for any commutative rig the 'free -module' functor
is symmetric monoidal, sending products in to tensor products in . Thus, it sends monoids in to monoids in , which deserve to be called -algebras. This is a nice way to avoid writing these formulas:
but the formulas are hiding in this abstract nonsense.
I'll add some of these ideas to our paper now, but I'll try to resist saying too much.
Thank you!! I am now trying to understand your ideas you just posted!!
Thanks!! These ideas look very beautiful, and I really find them a very clean way of generalising Golan's construction. I really like the Boolean rig and finite monoid construction. Indeed the description via the functor is very elegant. I feel (although I am not sure) that instead of Golan's construction, we could mention , the functor evaluated at the monoid , where is the Boolean rig and is the multiplicative monoid of for Example 7.3.
John Baez said:
That sounds right. (This is true even for the trivial monoid - it's bad to discriminate against trivial objects!)
Thanks. Yes, I agree to your point.
I was wondering whether there is a way to capture the notion of a Boolean network model of gene regulatory networks just by choosing a suitable labeling monoid.
More formally, I am talking about the following setup:
I think the above example provides a temporal description (over discrete time intervals ) of which influences (both direct and indirect) are "on" and which are "off". The general Boolean network modelling is more complicated than what I described above (there the states of the vertices are updated over time by a Boolean function which depends on the previous states of all the vertices of the graph). However, my above construction is inspired by the following intuition:
Consider the labeled graph in the attached file
dynamiclabeledgraph.PNG
Now, let we define
,
if is even, and
, if is odd.
Let and be constant functions defined by for all .
On the other hand, we define
,
if is even, and
, if is odd.
Then, the -labeled graph in the attached file gives a temporal description of the underlying -labeled graph which alternates between a positive feedback loop and a negative feedback loop over time.
An interpretation:
One may also define and in a suitable way so that it models the use and non-use of an inhibitor drug on a patient, affecting the regulatory network (described in the attached file) over time.
I don't understand this business about switching back and forth between positive and negative feedback depending on whether the time is even or odd. Is this just an assumption?
I'll admit I have trouble thinking about new issues, because I'd like to finish the paper before May 26th when other things start happening. I've added material to Section 8 explaining the symmetric monoidal double category of open graphs with edges labeled by elements of a set . This is an old result that I've proved twice before!
Now, we can say is a monoid and not change anything at all: we can still use the double category defined in exactly the same way.
But:
I guess 2 is best done by first doing 1, and 1 relies on
I don't want to drown in all the possible results we could state - there's too much here - but another good one might be this.
1 could be a warmup to studying emergent paths (or emergent loops), and 3 could be a warmup to studying emergent cycles and Mayer-Vietoris.
But I want to get to the interesting stuff we've discovered fairly efficiently, without too much boring formalism.
Regarding the future:
To be very precise, at the moment, I want to write more papers like the one we are writing now.
Great! But our paper is quite theoretical, and I suspect biologists won't appreciate it unless it is used to do something more practical. For this you might want to develop and/or use some software in AlgebraicJulia or CatColab. CatColab already has general software for finding motifs in monoid-labeled graphs. So far @Evan Patterson has illustrated for the monoids here, here, and here. However, I don't think it's actually been used to do research in biochemistry. You might talk to your colleagues about that. They may suggest projects I could never imagine, since I'm not a biologist.
FWIW, I'd love to get feedback from your biologist colleagues about what might be useful, even if it's quite far from this stuff!
Those aren’t quite the right monoids, by the way. In our delayed CLDs, the composite of two “slow” edges is just “slow”; there’s no notion of -fold slow. Jason Brown and Xiaoyan Li are working on a theory with -fold delays like you were thinking of right now. Less importantly, the indeterminate diagrams do have signs from the multiplicative monoid of , if that’s what you meant.
John Baez said:
I don't understand this business about switching back and forth between positive and negative feedback depending on whether the time is even or odd. Is this just an assumption?
I admit that "switching back and forth between positive and negative feedback depending on whether the time is even or odd" sounds crazy from the point of view of applications, and I think it happened due to my oversimplification. What I meant to say is that we may be able to define suitable functions which reflect the fact that over time a positive feedback loop can change into a negative feedback loop and vice versa; this may be interesting because over time some influences can "cease to exist" or "re-exist", which affects the overall causal structure of the causal loop diagram. But I admit that, in the context of our paper, this example may not be that important at the moment.
John Baez said:
I'll admit I have trouble thinking about new issues, because I'd like to finish the paper before May 26th when other things start happening.
Yes, I fully understand your point.
John Baez said:
I've added material to Section 8 explaining the symmetric monoidal double category of open graphs with edges labeled by elements of a set . This is an old result that I've proved twice before!
Now, we can say is a monoid and not change anything at all: we can still use the double category defined in exactly the same way.
Thank you. Yes, I agree.
John Baez said:
- There's a (symmetric monoidal) double category of open monoid-labeled categories.
Yes, it should follow from the fact that the category of -labeled categories is cocomplete and that we have an adjoint functor defined by the composition of two adjoints and , and I think we have already verified both of these.
John Baez said:
- I believe we want to prove there's a double functor sending open monoid-labeled graphs to open monoid-labeled categories, and
I think it should follow from the construction of the functor as the composition , and a theorem that you have already proved in your older papers on structured cospans.
John Baez said:
- We can if we want try to define a double category of open-monoid labeled graphs and Kleisli morphisms between them.
Yes, it sounds interesting. I am thinking about it.
John Baez said:
1 could be a warmup to studying emergent paths (or emergent loops), and 3 could be a warmup to studying emergent cycles and Mayer-Vietoris.
Yes, I agree.
John Baez said:
I don't want to drown in all the possible results we could state - there's too much here -
It reminds me of a result which we already discussed before, about "how the corresponding symmetric monoidal double categories behave under a change of labeling monoids".
I am attaching a diagram which I already shared with you in DM a couple of months back.
commutative diagram of double functors.png
Can we add this? (As we are working with several labeling monoids through out the paper.)
John Baez said:
- but another good one might be this.
- Since for any commutative monoid we've shown that preserves colimits, there should be a double functor sending open graphs to their (open) chain complexes.
Yes, it sounds interesting. Are you defining the chain complex of a graph as a globular object in the category of commutative monoids (as you explained before)?
John Baez said:
But I want to get to the interesting stuff we've discovered fairly efficiently, without too much boring formalism.
Yes, I fully agree to your point.
John Baez said:
Regarding the future:
To be very precise, at the moment, I want to write more papers like the one we are writing now.
Great! But our paper is quite theoretical, and I suspect biologists won't appreciate it unless it is used to do something more practical. For this you might want to develop and/or use some software in AlgebraicJulia or CatColab. CatColab already has general software for finding motifs in monoid-labeled graphs. So far Evan Patterson has illustrated for the monoids here, here, and here. However, I don't think it's actually been used to do research in biochemistry. You might talk to your colleagues about that. They may suggest projects I could never imagine, since I'm not a biologist.
Thank you!! Yes, I fully agree that my goal (writing papers interesting to both mathematicians and biologists) is a two-step process, i.e. writing interesting mathematical papers and then following them up with the development of user-friendly software. Yes, it could of course be a very interesting project if the "software based on AlgebraicJulia and CatColab" could incorporate all the causalities like necessary stimulus, logic operators, etc. used in SBGN diagrams. I am looking forward to such a project. Yes, I will talk to my biologist colleagues about the ideas for such a project.
Evan Patterson said:
FWIW, I'd love to get feedback from your biologist colleagues about what might be useful, even if it's quite far from this stuff!
Thanks. Yes, I will ask them.
Kevin Carlson said:
Those aren’t quite the right monoids, by the way. In our delayed CLDs, the composite of two “slow” edges is just “slow”; there’s no notion of -fold slow. Jason Browns be Xiaoyan Li are working on a theory with -fold delays like you were thinking of right now.
Thanks!! That's really interesting!!
Kevin Carlson said:
Less importantly, the indeterminate diagrams do have signs from the multiplicative monoid of , if that’s what you meant.
Yes, I admit has the privilege of accommodating an "indeterminate" effect, but I am also thinking about our recent construction of the power set rig over . Also, I would like to see if causalities like "necessary stimulus" or "logical operators" can be incorporated into causal loop diagrams.
Kevin Carlson said:
Those aren’t quite the right monoids, by the way. In our delayed CLDs, the composite of two “slow” edges is just “slow”; there’s no notion of -fold slow.
Oh, huh - I think Evan told me otherwise, but I could easily be confused.
Jason Brown and Xiaoyan Li are working on a theory with -fold delays like you were thinking of right now.
Great! I've sort of heard about that.
Less importantly, the indeterminate diagrams do have signs from the multiplicative monoid of , if that’s what you meant.
That's what I meant - clearly if you just say people are going to think of the additive group. In my paper with Adittya we sometimes call the multiplicative monoid .
Can we add this? (As we are working with several labeling monoids throughout the paper.)
We can. This is the sort of "obvious extension of the basic ideas" that I don't want to dominate the paper - they tend to make the paper a bit dry, like a paper written by category theorists who aren't mainly interested in how graphs with polarities are used in applications. However, it's a nice pushout of two ideas we've already mentioned.
Thanks. Yes, I agree with the point you made.
Yes, I fully agree that achieving my goal (writing papers interesting to both Mathematicians and Biologists) is a two-step process, i.e. writing interesting mathematical papers and then following them up with the development of user-friendly software.
Great! For me, at least, the second step is much harder than the first. I could write interesting mathematical papers endlessly while locked in a room with nothing but a laptop with a good internet connection - but creating software that's interesting to biologists would require talking to biologists a lot, and teaching them some category theory so they can understand me, and learning biology so I can understand them, and either programming myself or (better) finding people who are good at that, and who want to work on this project, and working with them.
If you're at all like me, then, you need to start the second step before you finish the first step, because it's much slower, and it may take several tries.
Thank you!! Yes, I think, for me too, the 1st step only requires a bare minimum like “a peaceful room, a laptop with a good internet connection, and pen and paper”, but yes, the 2nd step is way harder than this. I understand and agree with the point you made.
John Baez said:
- We can if we want try to define a double category of open-monoid labeled graphs and Kleisli morphisms between them.
If we are thinking about using structured cospan for the purpose, we need to show that the Kleisli category contains finite colimits. I came across a discussion in this direction https://mathoverflow.net/questions/37965/completeness-and-cocompleteness-of-the-kleisli-category. I am not completely sure how to conclude the cocompleteness of our Kleisli categories from the MO question.
According to Todd Trimble's comment in the MO question, it seems that if the monad is idempotent then cocompleteness is ensured. However, I guess our monad is not idempotent, as I think  is not a full functor. However, I admit "what I just said" adds nothing to my initial cocompleteness question.
Even when the labeling monoid is trivial, I'm almost certain the Kleisli category (which in this case is the category of free categories) doesn't have pushouts. Consider the pushout of the categories  and  along the two inclusions of the long arrow. In general categories, this would be a commutative square, which is not free; there's more to be said to prove there's no pushout at all (which is why Todd used the split idempotents in his example at your link, but that won't work here), but of the cocones I can list, including the projection onto  and the four projections onto the arrow, none is universal.
I see. Thank you.
I guess Adittya is wanting cocompleteness because there's a well-known theorem that there's a symmetric monoidal double category of 'structured cospans' whenever we have categories and with finite colimits and a functor that preserves finite colimits. The loose 1-cells in this structured cospan category look like
We compose them using pushouts in and tensor them using coproducts in both and , and the fact that preserves them.
But the hypotheses here are stronger than necessary. Kenny Courser's thesis pointed out that it's enough for to have finite coproducts, to have finite coproducts and pushouts, and to preserve finite coproducts (Theorem 3.2.3 here). The pushouts are only needed in .
And I believe even this is more than we need. To compose the loose 1-cells we don't need to have all pushouts, only pushouts of diagrams like
where the object in the middle is of the form for some .
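(A minimal LaTeX sketch in the usual structured-cospan notation, which may differ slightly from the paper's; the names L, a, b, c, x, y here are mine, not quoted from the paper.)
```latex
% Composing two loose 1-cells
%   L(a) --> x <-- L(b)   and   L(b) --> y <-- L(c)
% only requires the pushout of the span  x <-- L(b) --> y , giving
L(a) \longrightarrow x +_{L(b)} y \longleftarrow L(c).
% So the only pushouts needed in X are those of spans whose apex has the form L(b).
```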
If we take and (the category of categories that are free on some graph, and functors between these), where sends any set to the discrete category on that set, do all pushouts of this form exist?
I'm pretty sure that the pushout of
exists if the arrows are monic. This may be good enough for defining a symmetric monoidal double category of 'monic structured cospans', I believe. I still need to check that composing two cospans of the form
with monic arrows, gives another cospan of that form with monic arrows.
If we drop the monicity constraint, I'm afraid we run into problems.
Thanks, you are right, I wanted the cocompleteness of the Kleisli category to apply that "well known theorem" on structured cospans. Thanks for pointing me to the more general version. However, as you said
"we only need to have only pushouts of diagrams like
"
and
"I'm pretty sure that the pushout of
"
is actually "very interesting" because I think occurrences of motifs are characterised by monic Kleisli morphisms and not by general Kleisli morphisms. From the point of view of applications too, I think the subcategory of monic Kleisli morphisms is more relevant than the whole Kleisli category.
But here I'm just suggesting that it may help to require that the feet of an open free category are monically included in its set of objects.
Yes, I got your point. But I am saying, "from the perspective of motifs", if we just focus on the subcategory of monic Kleisli morphisms (and not the whole Kleisli category), are we losing much? I admit we may not need the "monic condition" on the other morphisms. But I was trying to think from the point of view of motifs. I may be misunderstanding.
In that case, the 2-morphisms in the double category (associated to monic morphisms) will characterise the occurrences of motifs.
I think the horizontal composition law of 2-morphisms should then characterise a compatibility condition between
I don't love the idea of requiring that motifs be monic Kleisli morphisms between labeled graphs. I understand that to match biologists' intuitive ideas about motifs this requirement may be helpful. Monics are always appealing because a monic  is like a "way of putting a picture of  inside , without squashing  down at all". But I would be more convinced that monicness is a useful requirement for motifs if I knew some theorem saying monic Kleisli morphisms are better in some way.
Anyway, I have a lot of writing to do and I want to get a lot done today, so I won't think about this much right now!
John Baez said:
But I would be more convinced that monicness is a useful requirement for motifs if I knew some theorem saying monic Kleisli morphisms are better in some way.
I understand your point. Although I may not be able to convince you mathematically "why monic" would be preferred, if we consider this paper, where I think the concept of defining motifs in terms of Kleisli morphisms first appeared, then Definition 2.8 and the discussion thereafter up to Proposition 2.9 assume the monic condition for describing occurrences of motifs.
Good point! They just say it "excludes degenerate instances" of motifs. Being a mathematician I say it's bad to discriminate against degenerate cases.... unless there's a very good reason. Excluding degenerate cases is psychologically appealing - the degenerate cases "look funny" - but it tends to cause trouble later on.
Now I got the point you are making. Thanks !! Yes, it fully makes sense.
Yes, I agree that pushouts of two free categories over a discrete free category exist. In fact, I don't see the need for these arrows to be monic. The pushout in the full category of categories is described by taking the pushout of object sets and generating morphisms by the images of the generating morphisms of  and of , subject to the images of the relations from  and from ; but there are no such relations in this case, so I believe the inclusion of free categories into categories creates any pushout of this form.
I was afraid that doing a pushout of that sort with a non-monic morphism might take a free category and identify two of its objects, making it non-free. But I hadn't actually sat down and checked.
It can do that, but my claim is that gluing objects of a free category doesn't make it unfree!
Yes, I was foolishly thinking that taking  and identifying two objects would make a bunch of new morphisms spring into existence, which it does, and that this would be bad... which it's not! Freeness can only be destroyed by identifying morphisms.
Let  be the discrete graph on the set , let  and ; then I think, associated to any diagram  in , we get a diagram  in  (irrespective of whether  is monic or not), as  is a discrete category. Now, the pushout of the diagram  exists in . Now, if we apply the left adjoint functor  to the diagram , we get "our pushout", i.e. , and hence the pushout exists and is free.
Am I missing something?
That's roughly a correct argument, but you've written  for the "same" morphisms in two different categories, which obscures the point: the argument works because  are in the image of , which is the key special property of discrete categories you're using here.
Thanks. Yes, I agree with your argument. Yes, I agree I should have used different notation to denote the corresponding  and  in .
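(For the record, here is a compressed LaTeX restatement of that argument, with assumed notation: F : Grph → Cat the free-category functor, a left adjoint, and D S the discrete graph on a set S.)
```latex
% Given a span of graphs  G_1 <-- D S --> G_2  with pushout P in Grph, applying F gives
F(G_1) \longleftarrow F(D S) \longrightarrow F(G_2)
\qquad\text{with pushout}\qquad F(P)\ \text{in}\ \mathbf{Cat},
% since left adjoints preserve all colimits, and F(P) is again a free category.
% The key point (as noted above) is that any functor out of the discrete category
% F(D S) comes from a graph morphism out of D S, so spans of this shape all arise
% from spans in Grph.
```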
Adittya Chaudhuri said:
Apart from simple combinatorial examples like "how many possible routes" etc., I am not able to find many real-life examples which model indirect influences with multiplication in .
Sterman discusses them in his book Business Dynamics when explaining causal loop diagrams. He says we draw
when
But we could be more quantitative and use -labeled graphs instead of -labeled graphs, and write
when
Then when we have
Sterman says this implies
so we're using the multiplicative monoid of .
All of this is being rather sloppy about which variables we're holding fixed when taking partial derivatives. But there should be some truth to it.
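(A hedged sketch of the quantitative labeling just described, in notation of my own choosing.)
```latex
% Label an edge from X to Y by the partial derivative of Y with respect to X:
X \xrightarrow{\;a\;} Y \quad\text{when}\quad a = \frac{\partial Y}{\partial X},
\qquad
X \xrightarrow{\;a\;} Y \xrightarrow{\;b\;} Z
\;\Longrightarrow\;
X \xrightarrow{\;ba\;} Z,
\quad\text{since}\quad
\frac{\partial Z}{\partial X} = \frac{\partial Z}{\partial Y}\,\frac{\partial Y}{\partial X}.
% So composite influences multiply, i.e. we are using the multiplicative monoid of the reals.
```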
Thanks very much for explaining!! Yes, now I am realising the practical relevance of using as a labeling monoid.
Hi! I think I observed another interpretation of a 2-element power set monoid. I am writing down my thoughts below:
Let  mean delay,  mean fast,  mean on time and  mean indeterminate. I am writing down the multiplication table below:
Consider the power set monoid generated on the two-element set  with respect to set-theoretic union as the binary operation. Then, if we denote the elements of the power set as
then, I think we get a monoid describing "delays and quickening" in a qualitative way.
I think the additive monoid  gives the quantitative version of the above qualitative monoid. However, unlike the multiplicative case, at the moment I am not able to see a nice way to turn quantitative into qualitative via a monoid homomorphism from  to .
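(A minimal sketch of the 4-element monoid just described, in my own encoding: the power set of {delay, fast} under union, read as on time / delayed / early / indeterminate.)
```python
from itertools import product

# elements of the power-set monoid on {"delay", "fast"}; the unit is the empty set
elements = [frozenset(), frozenset({"delay"}), frozenset({"fast"}),
            frozenset({"delay", "fast"})]

names = {frozenset(): "on time", frozenset({"delay"}): "delayed",
         frozenset({"fast"}): "early", frozenset({"delay", "fast"}): "indeterminate"}

def compose(a, b):
    # composing two edges accumulates whatever delay/quickening information they carry
    return a | b

# print the multiplication table
for a, b in product(elements, repeat=2):
    print(f"{names[a]:>13} * {names[b]:<13} = {names[compose(a, b)]}")
```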
That's interesting, thanks! I'm not used to thinking of time qualitatively rather than quantitatively, but I guess it does indeed work.
Thank you.
In preparing my talk for Monday, I started reading this very nice book:
It studies 7 small labeled graphs which correspond to fundamental problems that can happen in systems. We can think of these as 'motifs', but they call them 'archetypes'.
The edges of these graphs are labeled with elements of the monoid , where the second factor describes whether or not there's a 'delay'.
Since these 'archetypes' describe problems, I would like to study gene regulatory networks and see how often these archetypes occur in those networks.
My hope is that they are rare.
Thanks. These look interesting. I will read about "archetypes" from the book you shared.
John Baez said:
That's interesting, thanks! I'm not used to thinking of time qualitatively rather than quantitatively, but I guess it does indeed work.
I think the product monoid  may reasonably describe the "status of trains/flights" between various destinations connected by air routes/train routes. By status, I mean whether the flight/train is delayed or arriving ahead of time when we fix the source and target destinations. By "reasonably", I mean in an approximate way, i.e. qualitatively. However, for accurate information, we may need to use the following product monoid . More precisely, negative real numbers will tell the "delay", positive real numbers will tell the "quickening" and  will say "on time".
John Baez said:
We can think of these as 'motifs', but they call them 'archetypes'.
The edges of these graphs are labeled with elements of the monoid , where the second factor describes whether or not there's a 'delay'.
I was reading the book. These pathways are really interesting and very thought-provoking in general. I realised delays play such pivotal roles in human behaviour. I was wondering whether our immune system also behaves "like us" when it fights diseases. In its case, maybe it's not "delay" but some "other distracting factor" produced by the disease to deceive our immune system.
I'm wondering how much the troublesome nature of those 7 system archetypes arises from having two paths from one vertex to another, with opposite signs, one with a delay.
It's like a "delayed version" of an incoherent feedforward loop?
Yes!
How does biology use incoherent feedforward loops?
As far as I know, for "good" purposes only. But I will read up on it to say more precisely.
I feel "delay" is a distraction. In biology, it may be some biochemical serving the purpose of a delay, "fooling our immune system" so that it fails to fight back against a disease.
Biology uses incoherent feedforward for good, yes - but how is it good?
John Baez said:
For good, yes - but how is it good?
At the moment, I cannot answer properly. I will read up on it and tell you.
I need to check, but now I think maybe the 'bad' only happens when one has two loops based at one point, with opposite signs, one with a delay.
Yes, true, but I think "the bad" is happening because we (humans) often equate a delay with "non-existence". As a result we focus on those loops which do not have a delay. But it is just a thought. I will read on the role of incoherent feedforward loops in regulatory networks.
I agree that we humans tend to treat a delayed signal as nonexistent. But I'm hoping this is visible in a purely mathematical way from the fact that we have two feedback loops of opposite sign, one with a delay. So the system starts acting one way and then 'too late' starts acting the opposite way.
Another question is "why should the delayed loop always be the harmful one"? Unless we assume an external virus is causing such a delayed loop in our immune system response.
John Baez said:
I agree that we humans tend to treat a delayed signal as nonexistent. But I'm hoping this is visible in a purely mathematical way from the fact that we have two feedback loops of opposite sign, one with a delay. So the system starts acting one way and then 'too late' starts acting the opposite way.
Thanks. I think I got your point. We can express the "non-existence" by a "sufficient delay". I agree.
John Baez said:
Biology uses incoherent feedforward for good, yes - but how is it good?
I found some "good" applications of incoherent delayed feedforward loops in the section 3.7 of Uri Alon's book An introduction to Systems Biology. I am explaining one such application.
Let  and . Now, let's look at the concentration level of . At first the concentration of  is increased due to direct stimulation of . Now, at the same time the concentration of  also increases due to stimulation by . Now, once the concentration of  reaches a threshold value,  starts strongly repressing  and over time reduces the concentration of  to . As a result it produces pulse-like dynamics for the concentration of . (Here, the "delay in the inhibition of  by " is happening because it is necessary that the concentration of  reaches a threshold value before  can act as an inhibitor to .)
Uri Alon said that such a pulse can be seen in the system that signals mammalian cells to divide in response to the proliferation signal EGF.
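(A minimal numerical sketch of the pulse just described, with made-up rate constants and a crude threshold repression term, simulated by Euler integration; none of the numbers come from Alon's book.)
```python
def simulate(T=50.0, dt=0.01):
    """Incoherent feedforward loop: X (a constant input) stimulates Y and Z;
    once Z passes a threshold it strongly represses Y, so Y pulses."""
    y, z = 0.0, 0.0
    k_y, k_z = 1.0, 0.3      # production rates driven by X
    d_y, d_z = 0.2, 0.05     # degradation rates
    threshold = 2.0          # Z level above which repression of Y kicks in
    ys = []
    t = 0.0
    while t < T:
        repression = 0.0 if z < threshold else 5.0 * (z - threshold)
        y += (k_y - d_y * y - repression * y) * dt
        z += (k_z - d_z * z) * dt
        y = max(y, 0.0)
        ys.append(y)
        t += dt
    return ys

ys = simulate()
print("peak Y:", round(max(ys), 2), "final Y:", round(ys[-1], 3))  # Y rises, then falls back
```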
I am attaching a screenshot from Figure 1 of the paper Functional motifs in biochemical networks (describing various information-processing systems in a cell), where I marked a portion in red. I think that, like this marked portion, this figure may contain some interesting motifs.
motifsincell.PNG
I wonder how much of psychodynamic theory could be seen in this light, both individual and couples. Books like Schopenhauer's Porcupines are pointing to dysfunctional dynamics. The escalation archetype and the shifting the burden/addiction archetypes would seem the most obvious. E.g., for the former, feel the need for attachment, try to get close to someone, this raises the fear of abandonment, hence withdrawal, and repeat.
John Baez said:
Here's a different idea. Cancer and autoimmune diseases seem to be 'system failures' of some sort, so maybe they could be detected very abstractly by detecting more 'bad motifs' activated in the gene regulatory network.
You'd imagine there would be a need for a hierarchical approach to say when a system is behaving well or badly. From the perspective of the cancer, things are going well. From the perspective of the cell, in apoptosis things are going badly. I wonder if such a perspective is open to ACT treatment.
As a biologist thinking along these lines, Michael Levin,
Biological systems employ hierarchical regulatory networks to maintain structure and function at multiple scales, from molecules to whole organisms,
who is no stranger to ACT circles, as with his talk to the Topos Institute, has expressed such hopes.
David Corfield said:
You'd imagine there would be a need for a hierarchical approach to say when a system is behaving well or badly. From the perspective of the cancer, things are going well. From the perspective of the cell, in apoptosis things are going badly. I wonder if such a perspective is open to ACT treatment.
I find this perspective of "hierarchical systems" very interesting to think about. In particular, can we use ACT to talk about how inter-molecular interactions give rise to
I think that dealing with cancer involves both (1) and (2), and in an inter-related way.
I have been thinking in the direction of (1) in toy-setups:
For example:
In the attached file, let  represent an interaction between entities  and , and  represent an interaction between entities  and .
communityinteraction.PNG
Let's assume both  and  represent a "beneficial interaction" for the system
 and the system , respectively.
Let's assume that  and  are not much aware of the interconnection between  and  represented by the graph .
I was trying to see  and  as bigger units than , , or  individually. Now, a graph represents an inter-connection between its vertices, but a natural question arises: how do we represent an interconnection between various subgraphs of a graph (when we know that such subgraphs can individually be treated as entities) in a way such that this higher interaction does not forget the interaction between vertices? This question can be intuitively understood if we think of , ,  and  as individuals,  and  as two groups, and  as an interconnection between the group  and the group .
So, in the hierarchical order, the level-1 graphs are ,  and , but  represents a level-2 graph.
Now, consider the symmetric monoidal double category  whose loose 1-cells are open -labeled graphs. Now, I define the set  as a subset of , where  are a composable triple of loose 1-cells in .
Then, I was thinking about defining a tuple  (for a monoid ) as a -labeled graph, where level 1 is a -labeled graph, but there is another "causal-loop-diagram-like structure" (with a different labeling system ) which corresponds to influences among subgraphs or collective entities.
The most interesting part is a morphism " " which I want to define as a sequence of horizontally composable triples of 2-morphisms in  which is compatible with the new labeling systems  and .
I have been thinking about these things (in several directions) for some weeks.
David Corfield said:
As a biologist thinking along these lines, Michael Levin,
Biological systems employ hierarchical regulatory networks to maintain structure and function at multiple scales, from molecules to whole organisms,
who is no stranger to ACT circles, as with his talk to the Topos Institute, has expressed such hopes.
Thanks for sharing the talk. I watched the whole talk today. It is really very interesting!! I am very curious about a possible way to explore the concept of "self" (a way to incorporate properties in a system that can represent future states of the system, where, interestingly, every state transition depends on this set of representations) in a mathematical way.
David Corfield said:
You'd imagine there would be a need for a hierarchical approach to say when a system is behaving well or badly. From the perspective of the cancer, things are going well. From the perspective of the cell, in apoptosis things are going badly. I wonder if such a perspective is open to ACT treatment.
People do work with 'hierarchical' Petri nets and hierarchical stock-flow models. Their ideas should be clarified, developed more generally, and applied to other kinds of networks, like graphs with polarities. This should let us take a system and view it as a hierarchy, or view it as made of separate 'agents'.
But it fascinates me that in molecular biology the molecules don't seem to know which side they're on. So we could also take a gene regulatory network and try to figure out what it signifies, just by staring at it and thinking.
This could be harder. But right now I'm excited by the fact that the monoid-labeled graphs in System Archetype Basics are supposed to be intrinsic signs of trouble. Some are supposed to be signs of a system that's taking the easy way out and pursuing short-term solutions that cause trouble in the long run. One is supposed to be a sign of a system that's divided into two subsystems engaged in an 'arms race'.
It may well be overoptimistic to think these simple monoid-labeled graphs always have such clear and interesting meanings. But I think it would be good to go out on a limb and test hypotheses like this.
David Corfield said:
I wonder how much of psychodynamic theory could be seen in this light, both individual and couples. Books like Schopenhauer's Porcupines are pointing to dysfunctional dynamics. The escalation archetype and the shifting the burden/addiction archetypes would seem the most obvious. E.g., for the former, feel the need for attachment, try to get close to someone, this raises the fear of abandonment, hence withdrawal, and repeat.
I agree: someone should try to apply System Archetype Basics to psychodynamics, and in a way I'd be surprised if nobody has. Another relevant archetype is the one called "escalation", which describes a scenario where two agents keep upping their level of some quantity to win some sort of arms race:
Adittya Chaudhuri said:
I found some "good" applications of incoherent delayed feedforward loops in the section 3.7 of Uri Alon's book An introduction to Systems Biology. I am explaining one such application.
Let  and . Now, let's look at the concentration level of . At first the concentration of  is increased due to direct stimulation of . Now, at the same time the concentration of  also increases due to stimulation by . Now, once the concentration of  reaches a threshold value,  starts strongly repressing  and over time reduces the concentration of  to . As a result it produces pulse-like dynamics for the concentration of . (Here, the "delay in the inhibition of  by " is happening because it is necessary that the concentration of  reaches a threshold value before  can act as an inhibitor to .)
Nice! That makes sense.
John Baez said:
David Corfield said:
I wonder how much of psychodynamic theory could be seen in this light, both individual and couples. Books like Schopenhauer's Porcupines are pointing to dysfunctional dynamics. The escalation archetype and the shifting the burden/addiction archetypes would seem the most obvious. E.g., for the former, feel the need for attachment, try to get close to someone, this raises the fear of abandonment, hence withdrawal, and repeat.
I agree: someone should try to apply System Archetype Basics to psychodynamics, and in a way I'd be surprised if nobody has. Another relevant archetype is the one called "escalation", which describes a scenario where two agents keep upping their level of some quantity to win some sort of arms race:
I would caution that "System Archetype Basics" is just one part of a broad literature in System Dynamics that routinely refers to or builds on ideas of system archetypes contributed over decades. I recall prominent attention to system archetypes starting at least as far back as 1990 (indeed, they were a central and well-developed part of the theory articulated in Peter Senge's exceptionally popular book "The Fifth Discipline", which was published in 1990). Similarly, just off the top of my head, I recall a number of System Dynamics models focused on relationship dynamics and psychodynamics from that era and beyond. A broad and expanding set of system archetypes -- going well beyond those in "System Archetype Basics" -- is so tightly tied in with System Dynamics practice and so much part of the routine body of knowledge of that area that a very large literature weaves them into System Dynamics analyses of the dynamics of systems in diverse application areas. From my recollection of some teaching material in the 1990s, I'm very confident that this includes attention to psychodynamics -- although, as normal, such contributions would commonly be made without explicitly mentioning the phrase "system archetype" within the published work.
For those interested in exploring additional system archetypes visually, I'd suggest considering exploring the collection at the InsightMaker platform: https://insightmaker.com/tag/archetype?page=1. An important subset of these is contributed by my close colleague and longtime collaborator Dr. Geoff McDonnell MD, who over the years has been a prolific contributor of InsightMaker models involving Health Care (https://insightmaker.com/tag/health-care), and -- critically -- Health more broadly https://insightmaker.com/tag/health; many of these include system archetypes of relevance to my work in computational public health and health care. Geoff's Insights -- which are often accompanied by unfolding stories -- are accessible at https://insightmaker.com/user/3xqay3rAaMCoKDZTgE670H.
This site is a great testimonial to the storytelling power of causal loop diagrams and system structure diagrams.
Is there someplace one can find, neatly listed, a bunch of causal loop diagrams that people have considered "system archetypes"? That's what I'd really like to see! (Oh: while I was writing this you seem to have provided a location.)
I will mention Senge's book in my talk, but personally I found it quite verbose, full of somewhat gaseous wisdom like "You can have your cake and eat it too - but not all at once" - and I couldn't even locate the causal loop diagrams.
System Archetype Basics is nice because it's crisp, the way a mathematician might like. Kim's 31-page Systems archetypes I is even more distilled.
John Baez said:
Is there someplace one can find, neatly listed, a bunch of causal loop diagrams that people have considered "system archetypes"? That's what I'd really like to see! (Oh: while I was writing this you seem to have provided a location.)
I will mention Senge's book in my talk, but personally I found it quite verbose, full of somewhat gaseous advice like "You can have your cake and eat it too - but not all at once" - and I couldn't even locate the causal loop diagrams.
Because archetypes are contributed to the literature over time, I'm not aware of any collection that has aspired to -- and succeeded in! -- maintaining an up-to-date collection. A reasonable place to start seems to me to be browsing the collection at InsightMaker https://insightmaker.com/tag/archetype. That being said, that collection is cluttered with clones & elaborations of various insights, and I'd be surprised if it included more than 20-30% of those contributed in the literature, as part of teaching System Dynamics, etc.
Sadly, I agree with your assessment of Senge's work. Alas, based on my purely anecdotal experience, my perception is that the same shortcoming afflicts many books aimed at management/business audiences.
John Baez said:
People do work with 'hierarchical' Petri nets and hierarchical stock-flow models. Their ideas should be clarified, developed more generally, and applied to other kind of networks, like graphs with polarities. This should let us take a system and view it as a hierarchy, or view it as made of separate 'agents'.
This sounds very interesting. I will read the paper.
John Baez said:
But it fascinates me that in molecular biology the molecules don't seem to know which side they're on. So we could also take a gene regulatory network and try to figure out what it signifies, just by staring at it and thinking.
I also find it fascinating. I am trying to write down (from my basic understanding) how "uncontrolled proliferation may take place"
Step 1:
A ligand carrying a signal binds to the receptor molecule at the cell boundary.
Step 2:
The receptor changes its configuration to activate signal transduction pathways in the cell (a sequence of intra-cellular molecular events). Interestingly, the initial signal is often amplified or repressed during the process by an appropriate inhibitor or stimulator. Interestingly, there are also interconnections between pathways via crosstalk.
An interesting thing to note here is that in the signal transduction pathway, when a signal is passed from a node  to a node , it activates  (or switches on the function of ). For example, in the MAPK/ERK pathway, the "switch on" phenomenon happens via adding a phosphate ion. In general, phosphorylation is temporary. To flip proteins back into their non-phosphorylated state (switch-off state), cells have enzymes called phosphatases, which remove a phosphate group from their targets. However, "due to a mutation", some switch may not turn off and remains turned on, and hence some pathway remains active all through.
Step 3:
The signal transduction network ends in producing a function of the cell like "cell proliferation", etc. However, due to the mutation, some pathway remains active, proliferation continues and results in uncontrolled proliferation, until we can inhibit the pathway externally by chemotherapy, targeted therapy, etc.
What I think is that these mutations, coupled with crosstalk between signal transduction pathways, create certain "bad motifs" over time (as you conjectured), resulting in system failure.
What I just described is only for one cell. But for cancer we may have to deal with thousands and thousands of cells.
John Baez said:
It may well be overoptimistic to think these simple monoid-labeled graphs always have such clear and interesting meanings. But I think it would be good to go out on a limb and test hypotheses like this.
I agree.
John Baez said:
This could be harder. But right now I'm excited by the fact that the monoid-labeled graphs in System Archetype Basics are supposed to be intrinsic signs of trouble. Some are supposed to be signs of a system that's taking the easy way out and pursuing short-term solutions that cause trouble in the long run. One is supposed to be a sign of a system that's divided into two subsystems engaged in an 'arms race'.
I find this point of view very interesting. I think the systems dynamics literature is written from the point of view of how "human-made systems remain stable or unstable", but the systems biology literature is written from the point of view of "how natural biological systems remain stable". I think this is one of the many reasons why we find only "good motifs" in the literature. Maybe, unlike in systems dynamics, people have not put much effort into finding "bad motifs in biological systems", and explaining a system-failure story via the presence of such bad motifs in our signal transduction networks. I feel (as you said) maybe one has to create a list of Biological System Archetypes representing harmful motifs in biological systems, and then explain system failures like cancer via these Biological System Archetypes.
John Baez said:
Nice! That makes sense.
Thank you.
Nathaniel Osgood said:
This site is a great testimonial to the storytelling power of causal loop diagrams and system structure diagrams.
Thanks very much for sharing the site. It looks super interesting!!
John Baez said:
System Archetype Basics is nice because it's crisp, the way a mathematician might like. Kim's 31-page Systems archetypes I is even more distilled.
Nice. I will read.
Listening now to @Nathaniel Osgood's very interesting talk at TACT has me wonder whether there's anything to be said about how patterns, such as the system archetypes we've been talking about, translate across different logical frameworks.
Perhaps to consider things in the other direction, is it possible that archetypes emerge in the richer setting of stock and flow diagrams that we wouldn't see with mere causal loop diagrams?
David Corfield said:
Listening now to Nathaniel Osgood's very interesting talk at TACT has me wonder whether there's anything to be said about how patterns, such as the system archetypes we've been talking about, translate across different logical frameworks.
I am trying to understand your question in a very simple context i.e if we change the labeling system:
Let be a monoid homomorphism. Then, there is a functor from the category of -labeled graphs to the category of -labeled graphs. Now, due to the change in labeling systems, I think the logical structures of causal loop diagrams in these two categories should be distinct. Now, if is an archetype in a -labeled graph , then one may ask
"Is also an interesting archetype for logical structure represented by the -labeled graphs?"
In general, I do not know the answer. However, in a particular case, it may be interesting, as I discussed before here
in the context of semiautomata.
Right. The general phenomenon of entities transmuting across different settings is utterly widespread in mathematics, such as the splitting of a prime on change of ring. E.g., the prime  splits as the primes  and  in .
So @John Baez just now shows my original question and your response are close, where the shift from causal loop diagrams to causal loop diagrams with delay is just changing the monoid that labels the graph. That fairly clearly introduces new archetypes.
Back to psychotherapy meets systems archetypes, Success to the successful
image.png
is precisely what causes the Jungian mid-life need to rebalance. One has devoted oneself to developing one set of capacities to get on with the world, until the neglected capacities cry out for attention later.
David Corfield said:
So John Baez just now shows my original question and your response are close, where the shift from causal loop diagrams to causal loop diagrams with delay is just changing the monoid that labels the graph. That fairly clearly introduces new archetypes.
Thanks. Yes, precisely!! Such a "change" produced all those 7 harmful archetypes in that book, as 'delay' played a vital role in each of those archetypes.
David Corfield said:
John Baez said:
Here's a different idea. Cancer and autoimmune diseases seem to be 'system failures' of some sort, so maybe they could be detected very abstractly by detecting more 'bad motifs' activated in the gene regulatory network.
You'd imagine there would be a need for a hierarchical approach to say when a system is behaving well or badly. From the perspective of the cancer, things are going well. From the perspective of the cell, in apoptosis things are going badly. I wonder if such a perspective are open to ACT treatment.
I was wondering whether a generalised version of the archetype "tragedy of the commons"
tragedyofthecommon.png would be relevant here.
Regulatory networks describe how various biomolecules influence each other. However, from the causal loop diagram structure we usually do not study "when two or more regulatory networks are non-cooperating or cooperating". For example, in the attached file cumulative.PNG, the regulatory networks  and  are performing great from their respective perspectives. However, the way the network  is interconnected to the network  via the network  represents a non-cooperation between  and . In a way, I feel the network  represents an incompatibility between  and . We may represent it as . Alternatively, we may also think that  is affecting both  and  negatively, i.e. .
I was reading the paper Cancer across the tree of life: cooperation and cheating in multicellularity, where the authors very interestingly argued about the following:
Multicellularity is characterized by cooperation among cells for the development, maintenance and reproduction of the multicellular organism. Cancer can be viewed as cheating within this cooperative multicellular system.
So, in the context of my attached diagram, one may interpret the network  as representing a "cheating".
Another question: what are some archetypes other than "tragedy of the commons" where the success of individual subunits leads to failure of the whole unit?
Below, I am trying to understand and compare the effect of delay in causal loop diagrams in system dynamics and system biology.
From the point of systems dynamics:
According to my basic understanding of those 7 archetypes, it seems to me that human behaviour secretly prefers directed edges which are not labeled by the "delay" element in the monoid . I think this idea is misused by many organisations for their profits. In particular, delay may misguide us into thinking "delay" is the same as "non-existence". However, I think "temptations to obtain something very precious after a long period of time" often lead people to prefer delay (example: investment in insurance companies, etc.). I wonder whether there exists any archetype which shows that human behaviour overall prefers delay naturally.
From the point of systems biology:
I think many times delays are naturally preferred for producing desired outcomes like creating pulse-like dynamics, which acts as a signal for cell division (as I explained here
). Thus, here it seems delayed signals are an integral part of the natural preferences in biological systems. However, in faulty systems like cancer, autoimmune diseases, etc., these "delays" often can be interpreted as "ways to ruin" our natural healthy biological systems, which I think may be similar to the perspective of systems dynamics.
Adittya Chaudhuri said:
Alternatively, we may also think that is affecting negatively to both and , i.e. .
This puts me in mind of when I was bringing up my kids, and the effect of another child coming to visit on their interpersonal dynamics. Some seemed to be able to bring out the best in them and left them harmonious; others left them completely disgruntled and at odds with each other.
There must be countless examples of such structures in economics too.
Presumably, following John's talk yesterday, there's a [[Mayer-Vietoris]] homological story to tell about how the introduction of a new graph component can adversely affect existing components.
David Corfield said:
This puts me in mind of when I was bringing up my kids, and the effect of another child coming to visit on their interpersonal dynamics. Some seemed to be able to bring out the best in them and left them harmonious; others left them completely disgruntled and at odds with each other.
There must be countless examples of such structures in economics too.
Thanks. This sounds very exciting. It sounds like "we can have various cospan-like structures from  to ". More precisely, there can be a set of graphs which represent various influences (with types divided into cooperative and non-cooperative) between the graph  and the graph .
David Corfield said:
Presumably, following John's talk yesterday, there's a [[Mayer-Vietoris]] homological story to tell about how the introduction of a new graph component can adversely affect existing components.
Yes, there is indeed an interesting story surrounding Mayer-Vietoris. Gluing two presheaf graphs  and  along vertices gives us a commutative monoid version of the Mayer-Vietoris sequence, whose suitably modified boundary map would give us the information about "new emergent feedback loops" produced as "a consequence of such gluing".
However, I think in the diagram I have drawn (attached: cumulative.PNG), graph  and graph  do not intersect along vertices but "along another graph, namely ".
You can consider joining more than two spaces, such as discussed here.
Thanks very much!! These look very much relevant to the case I was talking about.
I was actually thinking about the following direction:
Step 1:
By experimental and other studies we have constructed the regulatory network  and the regulatory network . We realised everything is going fine with them.
Step 2:
After some time we realise something is wrong, as both  and  are mysteriously malfunctioning.
Step 3:
Again, by experimental and other studies we realise that both the regulatory networks are malfunctioning because of the presence of another regulatory network, namely . To actually understand this step we may use a "generalised Mayer-Vietoris" as you suggested, to discover emergent feedback loops.
Step 4: (This is where I am emphasizing)
We may create another causal-loop-diagram-like structure whose vertices are causal loop diagrams themselves. I am calling it a 2-level graph with polarities. A simple instance looks like this . Observe that we actually transformed the initial causal loop diagram into another causal loop diagram (by additional experimental and other studies). Now, I was wondering whether such a transformation would give us a meaningful functorial semantics which tells us (in a toy setup) how interactions between biomolecules give rise to interactions between regulatory networks, which may indicate a system failure in some way. For example, we may simply say  represents a system failure because the causal loop diagram structure of  is  (in a way it forgets the details of the tragedy-of-the-commons diagram, and only remembers the "basic cause" of the visible system failure).
Adittya Chaudhuri said:
Yes, precisely!! Such a "change" produced all those 7 harmful archetypes in that book, as 'delay' played a vital role in each of those archetypes.
Interestingly 'delay' does not appear in the archetype 'success to the successful'. (David kindly shared the picture of this one.) I believe that's the only one where it doesn't appear.
But David wrote:
One has devoted oneself to developing one set of capacities to get on with the world, until the neglected capacities cry out for attention later.
This suggests either some explicit 'delay', or (perhaps better) a bigger causal loop diagram where as the success of B diminishes, it eventually causes some other (bad) effect.
Yes. It's like the pay-off from continuing to allocate to A decreases (e.g., already sufficiently wealthy from investment banking), while the call to B increases (e.g., the neglected artistic talent), until a tipping point flips things.
Of course, in your case, you've wisely balanced your mathematical and musical talents!
John Baez said:
Interestingly 'delay' does not appear in the archetype 'success to the successful'. (David kindly shared the picture of this one.) I believe that's the only one where it doesn't appear.
But David wrote:
One has devoted oneself to developing one set of capacities to get on with the world, until the neglected capacities cry out for attention later.
This suggests either some explicit 'delay', or (perhaps better) a bigger causal loop diagram where as the success of B diminishes, it eventually causes some other (bad) effect.
Thanks. Yes, I agree.
John Baez said:
or (perhaps better) a bigger causal loop diagram where as the success of B diminishes, it eventually causes some other (bad) effect.
This is interesting!! Say both 's success and 's success are needed in a bigger causal loop diagram (as you said) for the system to perform well. However, due to the presence of a success-to-the-successful archetype, the whole system collapses after a delay (the time needed for 's success to fall below a threshold due to the archetype). Thus, in an indirect way, the delay feature is present in the bigger causal loop diagram in the guise of the success-to-the-successful archetype.
I've been distracted for a while, but I did a tiny bit of work on the paper today, as a warmup for more: I changed Example 6.7 so that it gives an example of a graph whose 1st homology monoid is not a free commutative monoid. We'd already seen that it was not free on its set of minimal elements, so the only extra thing to do is note that any free commutative monoid is free on its set of minimal elements.
That's completely fine!! Thank you. I just checked Example 6.7. Yes, it looks great.
Good! Someone at my talk in TACT said our homology monoid of a graph was an example of 'functor homology', and they pointed me to this paper:
But it actually doesn't seem to have our construction as a special case, even though it contains some of the same words.
Is there any intention to carry over your generalized sign monoids, these polarities, to the Petri nets with signed links of
In your TACT talk you brought up the electric circuit analysis of Kirchoff's laws. This is often done in terms of cohomology. Is it there a reason to be looking at homology?
The best analysis of Kirchoff's laws, going by to Weyl but later Smale and others, uses both homology and cohomology:
It's good to think of current as a 1-cycle and voltage as a 1-cocycle; the pairing of 1-cycles and 1-cocycles applied to current and voltage gives power, i.e. the rate at which energy is getting turned into heat.
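(A small sketch of that pairing in the usual notation; the symbols are mine.)
```latex
% For a circuit with edge set E, the current I is a 1-cycle and the voltage V a 1-cocycle;
% pairing them gives the total power dissipated:
P \;=\; \langle V, I \rangle \;=\; \sum_{e \in E} V_e \, I_e .
```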
David Corfield said:
Is there any intention to carry over your generalized sign monoids, these polarities, to the Petri nets with signed links of
I hadn't thought about that. Right now I'm just struggling to finish the paper on monoid-labeled graphs, which has grown larger than I'd like.
At TACT I talked to my former student @Jade Master, who had worked on open Petri nets, and is currently developing a formalism for agent-based models using colored Petri nets. Since I'm really wanting to figure out the best category-based framework for agent-based models, I hope she comes to Edinburgh while our ICMS group is working on agent-based models here at Maxwell's house!
(There may be no one best category-based framework for agent-based models (ABMs), since ABMs use such a diverse crowd of modeling techniques. Still, we can try to bring a little order to the wild west.)
Interesting. And you have a 14-part series on ABMs!
Right. The first 8 posts were preliminary floundering:
The second batch of stuff was all very nice (in my humble opinion), but alas not sufficiently general for the models we want to create.
Then @Kris Brown came up with a more general framework here:
and I explained this starting in Part 9 of my series.
At our meeting we are trying to combine ABMs as described using Kris' framework with dynamical systems. Our goal is to use this to create ABMs for gestational type II diabetes and shocks in the labor market. Kris has an approach described here:
and we have been trying to follow that. But our economist @Owen Haaga has already bumped his head against the limitations of this approach.
Great, thanks! I have an idea I'd like to see if I can make some such formalism model psychodynamics, along the lines of
Screenshot 2025-06-06 14.07.29.png
Cool! Depending on how complicated your model is, we might have the software available for you to program it in and run it already, or it might take some months. As usual I recommend starting with a ridiculously oversimplified toy model, and then gradually, judiciously adding bells and whistles.
John Baez said:
Good! Someone at my talk in TACT said our homology monoid of a graph was an example of 'functor homology', and they pointed me to this paper:
- Manohar Kaul, Dai Tamaki, Weighted quiver kernel using functor homology.
But it actually doesn't seem to have our construction as a special case, even though it contains some of the same words.
Although I have not read the paper in detail, if I am not mistaken, the main difference between them and us is that they focus only on directed acyclic graphs and our focus lies on directed graphs which contain cycles. I think Theorem 3.8 is their main theorem, where they computed their homology `groups' under the assumption of acyclic quivers. I have a feeling that their paper is more inclined towards an application to topological ordering, as a topological ordering is possible if and only if the graph has no directed cycles, that is, if it is a directed acyclic graph. On the contrary, our paper is inclined towards applications to causal loop diagrams and regulatory networks, where I think in all non-trivial cases we usually have cycles. So, it seems to me that the purposes of their homology groups and our homology monoids are very distinct.
David Corfield said:
Great, thanks! I have an idea I'd like to see if I can make some such formalism model psychodynamics, along the lines of
Very interesting!! I feel that there is an autonomy in each hierarchical layer (molecules ---> cells ---> tissues ---> organs ---> and so on...) which lets them retain their layerwise identities. From this perspective, I think a homeostatic system may possibly model such autonomy, and then such a homeostatic system becomes a collective whole. Now, for integration, maybe a controller is needed (as Mark Solms commented in your post). This reminds me of the notion of hyperstructures by Baas (please see page 3, point (d)), where I think Baas explained such a controller in the context of "elections in democracies". I think Baas termed such a controller a globalizer (point (IV) on page 2).
So, it seems to me that the purpose of their homology groups and our homology monoids are very distinct.
Good. Some young mathematician rather confidently told me that this paper was doing the same thing as us, or that our work was a subset of these ideas.
Thanks! I do not see "how the results in that paper justify their claim".
I think they were being a bit overconfident.
I also feel so.
Hi! I observed something which I am discussing below:
We have used the left adjoints  and  (which produce discrete graphs and discrete categories respectively) to glue open graphs and open categories along vertices and objects, respectively. Now, in the context of graphs, I think it is natural to ask whether we can glue graphs along edges and paths. I realised we may already have the necessary technical material in our paper to accommodate such gluing. In particular, we have the functor , which is left adjoint to , and both  and  are finitely cocomplete.
Thus, if  and  are graphs of the form  or , then a cospan of the form  in  (where we may assume  and  are monic) may possibly allow us to glue the graphs along edges and paths, and of course along vertices in the special case.
Yes, we can do such more general pushouts. The structured cospan philosophy is to restrict the allowed pushouts to get pushouts that are simpler and easier to understand; for example, we will show that when we push out two graphs along monomorphisms of graphs that have no edges, the homology behaves in a fairly simple way - but still complicated enough to write a section all about it.
Thank you. Yes, I got your point.
John Baez said:
Cool! Depending on how complicated your model is, we might have the software available for you to program it in and run it already, or it might take some months. As usual I recommend starting with a ridiculously oversimplified toy model, and then gradually, judiciously adding bells and whistles.
I could imagine requiring plenty of bells and whistles. For one thing, can DPO-rewriting cope with a certain amount of latitude in its pattern matching? An agent might not find an exact match for its component.
But this must also be needed in biology, where some protein binds to a site on a membrane if the site has more-or-less the right shape, no?
In the case of psychodynamics, there's also the powerful effect of projection, where one forces the external world into a pattern, e.g., sees something to get irate about when unnecessary. And then the further stage, projective identification, where one behaves in such a way as to generate the sought-for pattern in another.
But, as you say, start simple!
David Corfield said:
But this must also be needed in biology, where some protein binds to a site on a membrane if the site has more-or-less the right shape, no?
Yes, it is true. The relationship between ligand and binding partner is a function of charge, hydrophobicity and molecular structure as mentioned here.
David Corfield said:
But this must also be needed in biology, where some protein binds to a site on a membrane if the site has more-or-less the right shape, no?
From your description, I am now starting to imagine intracellular signal transduction as a string diagram in a nice monoidal category (which at the same time can be represented as a sort of regulatory network), where the wires outside on the left of the big box represent the signals entering the cell, and the wires on the right side of the `big box' represent the signals transmitted by the "cell" (modelled as the 'big box') to its environment. Maybe I am oversimplifying the setup. Although, I think equivalently, a cell can also be seen as a structured cospan/decorated cospan, if we can define an appropriate nice category whose every object models the whole intra-cellular signal transduction pathway inside a cell.
@David Corfield wrote:
I could imagine requiring plenty of bells and whistles. For one thing, can DPO-rewriting cope with a certain amount of latitude in its pattern matching?
In DPO rewriting a pattern either matches some part of the state of the world or it does not. In our system, if it matches, it has a certain probability of being applied, where this probability is an arbitrarily specified function of time.
But you can write a lot of rules for a lot of different patterns, specifying different probabilities for each of these rules to be applied. You can create the effect of "latitude" this way. And if you have general ideas about which patterns count as similar, you can write code that automates this process of creating lots of rules for similar patterns, and assigning probabilities to each of them.
Of course the code will run more slowly if it has to look through hundreds or thousands of rules.
In general people find that a fairly small and fairly simple collection of rules is enough to create surprising and thought-provoking effects. So it's always good to start with simple models before diving into something complicated.
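(A minimal sketch of the rule-firing idea above, with invented names and data structures; this is not the actual AlgebraicJulia or CatColab interface.)
```python
import random

def step(state, rules, t):
    """One sweep: each match of each rule fires with a time-dependent probability.
    rules: list of (find_matches, apply_rule, prob) triples."""
    for find_matches, apply_rule, prob in rules:
        for m in find_matches(state):
            if random.random() < prob(t):
                state = apply_rule(state, m)
    return state

# toy usage: susceptible agents become infected with a probability that grows over time
state = {"alice": "S", "bob": "I", "carol": "S"}
infect = (
    lambda s: [a for a, status in s.items() if status == "S"],  # find_matches
    lambda s, a: {**s, a: "I"},                                 # apply_rule
    lambda t: min(1.0, 0.1 * t),                                # firing probability at time t
)
for t in range(10):
    state = step(state, [infect], t)
print(state)
```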
Below I am discussing some thoughts on my perception of "good or bad motifs":
I think in the causal loop diagrams used in systems dynamics, the idea of "good and bad" comes from the "human meaning of the nodes" and from how that good or bad motif affects other nodes of a bigger causal loop diagram of which the motif is a sub-causal loop diagram.
However, I think in the regulatory networks of biological systems, the idea of a "good or bad motif" is mostly related to functions of higher-level structural organisations, like "functions of biological cells".
I think the idea of a good or bad motif in a regulatory network  is relative to cellular functions, and may be seen as a function , where  is the set of all motifs in  and  is the set of all cellular functions. If , then  is good with respect to ; if , then  is bad with respect to ; and if , the effect of  on  is unknown.
Where does the function come from - do we just have to figure it out ourselves? To me the interest of "good or bad motifs" is highest when we have an algorithm for spotting them in a causal loop diagram without adding extra information to that diagram.
The simplest example of what I mean would be this: we define the 7 causal loop diagrams in the book Systems Archetype Basics to be 'bad motifs'. In fact I'd like to do better and find common features of these causal loop diagrams which make them count as 'bad', and I think that should be possible. But at least we can write a program that takes any causal loop diagram  and seeks Kleisli morphisms from these 7 causal loop diagrams to : this does not require that we add additional information to  'by hand'.
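(A minimal sketch of such a motif search, with hypothetical data structures; it looks only for label-preserving maps sending edges to edges, so it is a crude stand-in for a monic Kleisli morphism, which would also allow edges to map to composites of edges.)
```python
from itertools import permutations

# a signed graph: vertices plus edges (source, target, sign)
motif = dict(vertices=["A", "B"],
             edges=[("A", "B", "+"), ("B", "A", "-")])   # a tiny negative feedback loop

big = dict(vertices=["x", "y", "z"],
           edges=[("x", "y", "+"), ("y", "x", "-"), ("y", "z", "+")])

def occurrences(small, large):
    """Enumerate injective vertex maps sending each edge of `small` to an
    identically-signed edge of `large`."""
    found = []
    edge_set = set(large["edges"])
    for image in permutations(large["vertices"], len(small["vertices"])):
        f = dict(zip(small["vertices"], image))
        if all((f[s], f[t], sign) in edge_set for s, t, sign in small["edges"]):
            found.append(f)
    return found

print(occurrences(motif, big))   # [{'A': 'x', 'B': 'y'}]
```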
Of course it's an open question whether this concept or any other concept of 'intrinsically bad motif' makes sense. But I find it to be a very exciting possibility, for various reasons, so I want to explore it.
One reason is that I know an ecologist who is trying to determine the health of ecosystems based on empirical data summarized as causal loop diagrams.
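As a sanity check on how such a motif-spotting program might work, here is a naive sketch in Python. It searches for injective maps of vertices that preserve signed edges - a much cruder notion than the Kleisli morphisms mentioned above, and all the names and example data are hypothetical - but it illustrates that no extra information about the target diagram needs to be added 'by hand'.

```python
from itertools import permutations

# Naive sketch (not the paper's Kleisli-morphism notion): look for injective
# maps of vertices that send every signed edge of the motif to an edge of the
# big diagram with the same sign.  Graphs are dicts: {(u, v): sign}.
def find_motif(motif_edges, motif_vertices, big_edges, big_vertices):
    occurrences = []
    for image in permutations(big_vertices, len(motif_vertices)):
        f = dict(zip(motif_vertices, image))
        if all(big_edges.get((f[u], f[v])) == sign
               for (u, v), sign in motif_edges.items()):
            occurrences.append(f)
    return occurrences

# Example: a 2-node positive feedback loop found inside a larger signed graph.
motif = {("a", "b"): "+", ("b", "a"): "+"}
big   = {("x", "y"): "+", ("y", "x"): "+", ("y", "z"): "-"}
print(find_motif(motif, ["a", "b"], big, ["x", "y", "z"]))
```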
John Baez said:
Where does the function come from - do we just have to figure it out ourselves? To me the interest of "good or bad motifs" is highest when we have an algorithm for spotting them in a causal loop diagram without adding extra information to that diagram.
Yes, in my definition of , the function (at the moment) has to be figured out by ourselves through experiments and other studies. However, I am also equally excited about the possibility of producing an algorithm that takes causal loop diagrams as inputs and gives out good, bad and indeterminate motifs as outputs with respect to a particular cellular function.
John Baez said:
The simplest example of what I mean would be this: we define the 7 causal loop diagrams in the book Systems Archetype Basics to be 'bad motifs'. In fact I'd like to do better and find common features of these causal loop diagrams which make them count as 'bad', and I think that should be possible. But at least we can write a program that takes any causal loop diagram and seeks Kleisli morphisms from these 7 causal loop diagrams to it: this does not require that we add additional information to it 'by hand'.
This sounds very interesting and like a concrete approach to the problem, and an exciting project towards characterising "bad motifs" in graphs with polarities. At least with respect to systems dynamics, we are aware that those 7 causal loop diagrams mostly do bad things. So, we could find such motifs in regulatory networks in biological systems by building the software you mentioned, and then communicate with biologists about possible interpretations of those motifs from the point of view of Systems Biology. I feel we may come up with some interesting conclusions in this direction if we manage to find even a single such causal loop diagram in regulatory networks or pathways.
So, at the moment, are you conjecturing something like this?
Any motif which the System Dynamics community considers 'bad' will also possibly be considered 'bad' by the Systems Biology community with respect to some cellular function like apoptosis, proliferation, etc. However, evolution acts as an external force to keep the occurrences of such motifs minimal in regulatory networks of biological systems.
John Baez said:
Of course it's an open question whether this concept or any other concept of 'intrinsically bad motif' makes sense. But I find it to be a very exciting possibility, for various reasons, so I want to explore it.
One reason is that I know an ecologist who is trying to determine the health of ecosystems based on empirical data summarized as causal loop diagrams.
This sounds super exciting!! It makes me ask whether you are expecting something like this:
For any `important system' (like a biological system, social system, ecological system, etc.) represented as a graph with polarities (talking in terms of general labeling monoids), there exists a collection of motifs whose occurrences indicate that something bad is going to happen to the system over time.
I don't think you'll really find something like "inherently bad motifs" and here's why: if "bad" elements of a system like diseases or cancer cells are afflicted by "bad" motifs, then that is good for the system as a whole. In particular, biological systems are probably riddled with "traps" that lie in wait for cell lines that start to escape the more basic restrictions on proliferation, starting with telomeres.
Please correct me if I am wrong:
Now, I am also starting to believe that many failures of large systems (including biological systems) may possibly be characterised by various patterns of interaction of their sub-systems. Similarly, many successes of large systems (including biological systems) may possibly be characterised by various patterns of interaction of their sub-systems.
So, I now feel that motifs and system archetypes are tools for Systems Biology and Systems Dynamics respectively, for characterising faulty systems, stable systems, etc.
John Baez said:
Of course it's an open question whether this concept or any other concept of 'intrinsically bad motif' makes sense. But I find it to be a very exciting possibility, for various reasons, so I want to explore it.
One reason is that I know an ecologist who is trying to determine the health of ecosystems based on empirical data summarized as causal loop diagrams.
I guess there's an obvious way for ecologists to consider healthy systems and how bad motifs may feature in, say, trophic cascades. But I wonder whether the patterns themselves are intrinsically 'bad'.
When a headteacher institutes school policies to encourage and reward kind and considerate behaviour over rudeness, bullying, etc., there might have been a dynamic mix of good and bad behaviour before, but then the system is steered to a 'monoculture' of considerateness, like the moderators have achieved on this Zulip chat. :slight_smile:
Couldn't we consider this in terms of the 'success to the successful' archetype?
The same point here as @James Deikun is making.
Do we look on any event or process in the history of life on Earth as intrinsically bad, at least until humans come into the mix, but then that's surely telling us something? But we surely don't lament, say, the spread of grasslands millions of years ago, even if it was bad news for many species.
David Corfield said:
Couldn't we consider this in terms of the 'success to the successful' archetype?
I've always been suspicious of this being on the list of 'bad' motifs because it's not like most of the others. Most of the others seem to involve 2 feedback loops with opposite sign, one with a delay. That seems to give a system that's in conflict with itself, or indecisive.
Although I am not sure, the 'success to the successful' archetype also reminds me of the way one trains neural networks (which is not bad, I think): reward/gain if it identifies correctly and punishment/loss if it identifies incorrectly.
Interesting. So here,
the circuit on the left has 3 +s, while the circuit on the right is -+-. So both form positive feedback loops.
John Baez said:
That seems to give a system that's in conflict with itself, or indecisive.
I also feel so. I agree that a human-made system may become indecisive because of human nature/character/etc.
My question is: why would a natural system like a regulatory network become indecisive? Can we think of the occurrence of some mutations, etc., as a system that's in conflict with itself, or indecisive?
Or rather, in general do "such bad things happen" only when things go in an unanticipated way? Human-made causal loop diagrams are fine (because we can think of ourselves as capable of anticipation). However, it also makes me wonder whether biological systems like cells are also anticipatory systems, and whether usual regulatory networks are a sort of manifestation of such anticipations. In this context, when certain mutations are not anticipated by a system of cells, we get uncontrolled proliferation leading to cancer, etc.
David Corfield said:
Interesting. So here,
the circuit on the left has 3 +s, while the circuit on the right is -+-. So both form positive feedback loops.
I am feeling the key factor is "the limited pool of resources to be shared among many, and thus one needs to remove some". I think the ideology of apoptosis is also like that (killing unwanted cells). Here, apoptosis is used for good.
I only just noticed the little 's's and 'o's! Continuing to
these are both negative loops. Surely this is common in many self-regulating systems.
Hmm, it seems like some of the descriptive phrases on these archetypes are loaded with positive/negative connotations to encourage us to interpret them in a certain way.
I am feeling that archetypes like Escalation describe a way to remove certain entities from systems with limited resources. Sometimes the consequence is good and sometimes it is bad, depending on the context.
We seem to be led to think of things like arms races or the outbreak of war, but isn't the same pattern there in the rivalry that makes us work or train harder, or the tech company that improves its product to keep ahead of the field? The term 'threat' is doing a lot of work.
David Corfield said:
We seem to be led to think of things like arms races or the outbreak of war, but isn't the same pattern there in the rivalry that makes us work or train harder, or the tech company that improves its product to keep ahead of the field? The term 'threat' is doing a lot of work.
I agree. It seems these "archetypes" have certain specific functional roles with respect to some context. If that specific functional role seems beneficial to the context, we may say it is a good archetype; otherwise it is bad. In biology, delays are also used to do something good, like creating pulse-like dynamics to signal for a cell division, but all the archetypes with delay I know in systems dynamics are mostly bad, I think.
However, in biology, such a delay was anticipated by the cell, but in the systems dynamics such a delay was not anticipated (please see
).
David Corfield said:
I only just noticed the little 's's and 'o's!
s means "same" which means +.
o means "opposite" which means -.
Apparently a lot of non-mathematician practitioners of system dynamics get confused by the symbols + and - and think that, for example, an edge labeled - means that the target variable always gets decreased. In fact it means something more like this: if the source gets bigger the target gets smaller, but if the source gets smaller the target gets bigger. So, to make life easier (?), some system dynamicists have switched to using s and o.
("Something more like", because this is just one possible semantics: the quantities involved do need to be real number: a more qualitative semantics is also good.)
TL;DR: s means +, o means -
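For what it's worth, here is a tiny sketch of that dictionary, and of computing a loop's polarity by multiplying its edge labels in the sign monoid {+1, -1}; this matches the observation above that two 'o's cancel. The function names are my own.

```python
# Minimal sketch: translate the s/o convention to signs and compute the
# polarity of a feedback loop by multiplying the labels along it, using the
# multiplicative monoid {+1, -1}.
SIGN = {"s": +1, "+": +1, "o": -1, "-": -1}

def loop_polarity(labels):
    """Product of edge signs along a loop: +1 = positive (reinforcing) loop,
    -1 = negative (balancing) loop."""
    sign = 1
    for label in labels:
        sign *= SIGN[label]
    return sign

print(loop_polarity(["s", "s", "s"]))  # +1: three 's' edges, reinforcing
print(loop_polarity(["o", "s", "o"]))  # +1: the two 'o's cancel, still reinforcing
```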
Adittya Chaudhuri said:
However, in biology, such a delay was anticipated by the cell, but in the systems dynamics such a delay was not anticipated
This is a very interesting topic, no doubt involving great complexity. In one sense, I take it that this anticipation has something to do with the idea of design, what something is designed to do and to cope with. So we might say of an engineered product when it fails that it was not anticipated that the product would be exposed to such conditions.
Biology increases the complexity with the unfolding of the organism's phenotype and "anticipated" developmental shifts in its functioning. Then add in all the compromises of "design" arising from its evolutionary history.
David Corfield said:
This is a very interesting topic, no doubt involving great complexity. In one sense, I take it that this anticipation has something to do with the idea of design, what something is designed to do and to cope with. So we might say of an engineered product when it fails that it was not anticipated that the product would be exposed to such conditions.
Thanks!! I find your idea of "relating anticipation with designing" a very interesting and natural way of looking at these things !!
I am not completely sure I understand the `evolution part'. Are you saying that "evolution" is a kind of mechanism to upgrade the existing biological design to fix the unanticipated problems it is currently showing?
I find it relatable to the way phone companies upgrade the software in phones, debugging unanticipated problems of the previous version in the current version. Thus those unanticipated problems of the previous version become anticipated problems of the current version. However, our experience shows there is never a "final version" and we always need to upgrade after a period of time. Interestingly, if we choose not to upgrade, then even normal phone features will eventually stop performing well, which I relate to species extinction, etc.
David Corfield said:
Biology increases the complexity with the unfolding of the organism's phenotype and "anticipated" developmental shifts in its functioning. Then add in all the compromises of "design" arising from its evolutionary history.
I think I understood your idea of relating the "increment of complexity with the unfolding of the organism's phenotype and 'anticipated' developmental shifts in its functioning" via evolution. Very interesting!! Thank you.
I was hinting at the following, but your response is interesting too.
I was thinking of the difference between a designed entity and an evolved one, where component parts and their functioning are far from optimised. Easier to think in the former case of harmonious functioning of parts according to a design and a clear notion of anticipation, whereas in the latter case there will be built-in competition of parts and competing anticipations.
But, now that I come to think of it, that feature of the latter, which one might expect would lead to a vulnerability when novel conditions arise, is seen as a strength in the Michael Levin article I mentioned above: The struggle of the parts: how competition among organs in the body contributes to morphogenetic robustness.
(It was that article that prompted me to wonder whether competition between Solms's 7 emotional drives could lead to a special mental robustness in humans here.)
I'm very sympathetic to all these speculations. I'm very eager to test them with data, and @Evan Patterson has created CatColab software for finding motifs in causal loop diagrams with delays. But I don't know databases of these - with delays, that is!
David Corfield said:
I was hinting at the following, but your response is interesting too.
I was thinking of the difference between a designed entity and an evolved one, where component parts and their functioning are far from optimised. Easier to think in the former case of harmonious functioning of parts according to a design and a clear notion of anticipation, whereas in the latter case there will be built-in competition of parts and competing anticipations.
But, now that I come to think of it, that feature of the latter, which one might expect would lead to a vulnerability when novel conditions arise, is seen as a strength in the Michael Levin article I mentioned above: The struggle of the parts: how competition among organs in the body contributes to morphogenetic robustness.
(It was that article that prompted me to wonder whether competition between Solms's 7 emotional drives could lead to a special mental robustness in humans here.)
Interesting point of view!! If we think of how a big company works, with thousands of workers performing various tasks in different hierarchies, which is ultimately beneficial for the whole company, then your point of view seems a natural way to think about it.
There's a lot of ACT work out there on cellular sheaves and graph Laplacians, which seems to get close to some ideas from this thread, such as first and second [edit: Zeroth] (co)homology of networks. E.g.,
But I guess these are exploring more how stable distributions can be achieved, rather than locating certain forms of loop. Even if the former considers directed paths, these are in quivers. Is it that in causal loop diagrams, there's no role for diffusion?
You can do diffusion on a graph, e.g. writing a version of the heat equation using a 'graph Laplacian'.
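Here is a minimal sketch of what that looks like numerically (a standard construction, nothing specific to the paper): the graph Laplacian L = D - A of a small undirected graph, and an explicit Euler integration of du/dt = -Lu, whose solution diffuses toward the uniform distribution.

```python
import numpy as np

# Minimal sketch of diffusion on a graph: the heat equation du/dt = -L u,
# where L = D - A is the graph Laplacian of an undirected graph.
# Solutions converge to a constant on each connected component.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # adjacency matrix of a 3-vertex path graph
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian

u = np.array([1.0, 0.0, 0.0])             # initial heat concentrated at one vertex
dt = 0.01
for _ in range(2000):
    u = u - dt * (L @ u)                  # explicit Euler step
print(u)                                  # approaches the uniform distribution
```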
Yes, but then does this mark a difference between the work I mentioned and yours, where they're after heat equations that converge to steady state solutions via positive semi-definite linear operators, and you're interested in unstable network patterns?
By the way, in our paper we're interested primarily in the syntax of causal loop diagrams, i.e. various (double) categories of (open) graphs with edges labeled by elements of some monoid, not the functorial semantics of these diagrams, i.e. (double) functors from these (double) categories to others. We discuss semantics informally by discussing in words what the labels might mean, but we don't go into the functorial semantics.
So, we don't at all study any sort of 'heat equation semantics'. But this would be quite interesting to do, so thanks for the idea!
If I did study this, I would not only study the steady state solutions of these heat equations, but the time-dependent solutions and the equations themselves. It seems to be a general rule that the equations describing open dynamical systems are nicely compositional when one describes them using decorated cospans - while the solutions themselves are not, except for steady-state solutions with given boundary conditions.
I realized this the hard way, by writing a couple of papers on open Markov processes with Brendan Fong, Blake Pollard and Kenny Courser.
But anyway: yeah, there could be some linear differential equations with more lively dynamics that one could define using a signed graph. There's something a bit depressing about studying the heat equation and Markov processes, where solutions tend to converge to an equilibrium as time passes. While the math is quite pretty, something in me objects to a world that becomes ever more boring with the passage of time. (This is presumably part of why people get so worked up about the 'heat death' of the universe, and possible ways out, like the Poincare recurrence time.)
You might imagine that Nature would exploit this livelier dynamics, like aeronautical engineers reducing stability for faster control on fighter jets.
Sure, heat equations and diffusion equations only describe the pathetically dull approach to stasis. Real-world physics and biology are vastly more peppy.
But then isn't it extremely likely that these so-called "bad motifs" will appear as necessary components of well-functioning systems?
Note that the System Dynamics archetypes also concern systems that are dynamic, not necessarily approaching equilibrium. If equilibrium were one's goal, one should never allow any positive feedback loop, since that causes exponential blowup unless it's counteracted by enough negative feedback. You'd expect a vertex with a + edge from itself to itself to be in that book.
But that book isn't telling us to avoid exponential blowup. It seems to be telling us to avoid "wishy-washy" behavior where on short time scales we have a feedback loop of one polarity, and with a delay we have a feedback loop of the opposite polarity.
But I really want to analyze some data and see if biology agrees or disagrees with that book. I won't be shocked if biosystems exploit some causal loop diagrams that the book regards as bad... but I'll be interested.
I think the archetypes in that book involve "some meanings" associated to the nodes of the causal loop diagrams. I feel these meanings are general in nature but special enough for us to identify similar situations in various contexts in our life by using our inbuilt causality.
However, nodes in regulatory networks are just biochemicals/genes. The only data associated to these nodes are their concentration levels or expression levels. Inhibitors/stimulators often reduce/stimulate the concentration/expression levels at the nodes, which often causes delays or "faster than usual" situations. But, naturally, I am not able to see a human causal structure in biological networks. So, there may be something else that says a biomolecule should stimulate/inhibit the expression of a gene. I think "that something else" is performing a cellular function like cell death, etc., in a controlled way. In a way, I think the cellular functions form the underlying causal structure of the regulatory networks, just like the human mind's causal structure lies underneath the causal loop diagrams in systems dynamics.
The above are just my thoughts. I may be completely wrong.
John Baez said:
But anyway: yeah, there could be some linear differential equations with more lively dynamics that one could define using a signed graph. There's something a bit depressing about studying the heat equation and Markov processes, where solutions tend to converge to an equilibrium as time passes. While the math is quite pretty, something in me objects to a world that becomes ever more boring with the passage of time. (This is presumably part of why people get so worked up about the 'heat death' of the universe, and possible ways out, like the Poincare recurrence time.)
We have linear ODE semantics for CLDs and stock-flow diagrams in CatColab right now! They’re based on the well known mass action kinetics for Petri nets. Maybe that’s interesting to y’all.
It's definitely interesting. I suppose we could try to run the linear ODE semantics for CLDs containing 'bad motifs', and those without, and compare the qualitative features of the dynamics, and see if there's anything to say.
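Just to illustrate the qualitative contrast (this is my own naive reading of a signed graph as a linear ODE, not CatColab's actual semantics): assign each signed edge a ±1 entry of a matrix A and integrate dx/dt = Ax; a positive feedback loop then produces exponential growth instead of relaxation to equilibrium.

```python
import numpy as np

# Hedged sketch (not CatColab's semantics): a naive linear ODE reading of a
# signed graph, dx/dt = A x, where A[j, i] = +1 or -1 for an edge from vertex
# i to vertex j with that sign.  A positive feedback loop then shows up as
# exponential growth rather than relaxation to equilibrium.
edges = {(0, 1): +1, (1, 0): +1, (1, 2): -1}   # a 2-cycle of +'s feeding a third node
n = 3
A = np.zeros((n, n))
for (i, j), sign in edges.items():
    A[j, i] = sign

x = np.array([1.0, 0.0, 0.0])
dt, T = 0.01, 5.0
for _ in range(int(T / dt)):
    x = x + dt * (A @ x)                       # explicit Euler
print(x)                                       # blows up along the positive loop
```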
Adittya Chaudhuri said:
I think the archetypes in that book involve "some meanings" associated to the nodes of the causal loop diagrams. I feel these meanings are general in nature but special enough for us to identify similar situations in various contexts in our life by using our inbuilt causality.
Yes, that's possible. If so, the names of the nodes in the causal loop diagrams are conveying crucial extra information that's too subtle for us to have fully formalized, but which we easily understand in an intuitive way.
The above are just my thoughts. I may be completely wrong.
My guess that the 'bad motifs' in System Archetype Basics are actually objectively bad in some way seems inherently unlikely - nothing involving the word 'bad' should ever be that simple - but at least there's a chance to test this hypothesis. I really just want to look at some regulatory networks and see what they have to say.
John Baez said:
Yes, that's possible. If so, the names of the nodes in the causal loop diagrams are conveying crucial extra information that's too subtle for us to have fully formalized, but which we easily understand in an intuitive way.
Thanks. I find your point of view very interesting.
John Baez said:
My guess that the 'bad motifs' in System Archetype Basics are actually objectively bad in some way seems inherently unlikely - nothing involving the word 'bad' should ever be that simple - but at least there's a chance to test this hypothesis. I really just want to look at some regulatory networks and see what they have to say.
True. I got your point.
I will actually start looking at some regulatory networks when I get a bit of free time. Right now I want to finish up our paper, and I have to run this ICMS workshop.
Thanks. That would be really nice. I am thinking of looking at the pathways in KEGG from the point of view of crosstalk, i.e. trying to find a feedback loop by combining different pathways. Since pathways are a bit "acyclic" in nature, it may be hard to find a loop unless we go for crosstalk. Although I am not sure.
John Baez said:
Right now I want to finish up our paper, and I have to run this ICMS workshop.
Yes, I got your point.
John Baez said:
Yes, that's possible. If so, the names of the nodes in the causal loop diagrams are conveying crucial extra information that's too subtle for us to have fully formalized, but which we easily understand in an intuitive way.
I think "the point of crucial extra information" is actually true in regulatory networks in biology. I think "the labeling of every edge" in regulatory network has "an experimental study" or "an analytical study" in backdrop, which is telling us about whether the influence is indeed positive or negative, etc. I feel in biology often such experimental studies (crucial extra information) are non-trivial.
I think this can be explained in a nice way as follows:
To assess the validity of the statement "Effort → quality of work", we just use our common sense/experience, which manifests as our intuition. However, if I say "chemical A → chemical B", my intuition here is likely to be insufficient, and we may need an extra experimental/analytical study to verify this.
I was thinking about the "extra information on nodes" about which you said:
In Systems Dynamics, a general prototype statement may look like this:
Symptomatic solutions → Problem symptoms. Then, this statement is general enough for many modelers to put their own customised problems in the nodes and model their problems using the same causal loop diagram: Symptomatic solutions → Problem symptoms.
Now, in Regulatory Networks:
These prototypes may be constructed using various classes of biochemicals which are in some way similar. Then, a biologist may use any biochemical belonging to that class as a node (suited to the purpose of investigation). However, it may happen that a biochemical is present in more than one class.
In our context, if your hypothesis is true, then instead of human biologists, I think it is "evolution/Nature" which treats certain classes of biochemicals as similar and builds archetypes accordingly.
Somehow, it gives me a feeling that our usual regulatory networks are a kind of "stratified version" of some other set of regulatory networks that are yet to be discovered, and that to find the appropriate archetypes for regulatory networks in biological systems, which may then actually be considered good or bad in a reasonable way (as you conjectured), we may need to first "unstratify" the existing regulatory networks in a suitable sense. By unstratification, I mean at the level of nodes.
@John Baez and I were discussing the construction of Mayer-Vietoris for directed graphs with coefficients in a commutative monoid. We realised there were certain issues in the construction of the boundary in our earlier proposed method for the Mayer-Vietoris construction. @John Baez pointed out that although one way to resolve the issue is by passing to the Grothendieck group construction on the commutative monoids, doing so is not favourable, and should be kept as our last option if we cannot manage to construct the Mayer-Vietoris otherwise.
In this context, below I am proposing an alternative way to construct the Mayer-Vietoris sequence in certain special cases. My construction comes from the point of view of seeing "the minimal elements of our first homology monoids with coefficients in " as homology classes of simple loops, about which we already discussed thoroughly here.
Let us consider two graphs and which
(2) is freely generated by its minimal elements.
Let .
Let denote the set of homology classes of simple loops in .
Let be a function defined as
, where is a vertex in such that the edges and lie in respectively and or, and , but not in respectively, and or, and . If such does not exist, then . [In the definition of , I have used Axiom of choice]
Now, we know , where denotes the set of minimal elements of .
Using (2), it is clear that any map
can be uniquely extended to a morphism of commutative monoids :
.
Hence, in particular, the map defines a unique morphism of commutative monoids .
Then, I think the following is true:
Claim:
if and only if
If my claim is true, then I propose a commutative monoid analogue of the Mayer-Vietoris sequence in the special cases when
as the following:
,
where is given by , induced from the inclusions and .
Thanks for raising this issue and proposing a potential solution, @Adittya Chaudhuri! I keep getting this stuff wrong. Let me try another approach. I want this approach to work whenever we use coefficients from a cancellative commutative monoid, i.e. one where $a + c = b + c$ implies $a = b$.
Of course this includes the case , which is the most important case for us.
First some review of notation:
For any graph we write for the commutative monoid of -linear combinations of edges of the graph , and for the commutative monoid of -linear combinations of vertices. We call the commutative monoid of i-chains. We have source and target homomorphisms
We call a 1-chain with
a 1-cycle and we write for the commutative monoid of 1-cycles (which are the same as 1st cohomology classes in this context).
In other words, is the equalizer of . Similarly, we define to be their coequalizer.
Any map of graphs induces homomorphisms on 0-chains and 1-chains, and these homomorphisms commute with and , so they also induce maps on 1-cycles.
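As a concrete check of the definitions above, here is a small sketch with coefficients in the natural numbers: a 1-chain is a formal combination of edges, and it is a 1-cycle exactly when its pushforwards along the source and target maps give the same 0-chain. The function names are mine, chosen just for this illustration.

```python
from collections import Counter

# Minimal sketch with coefficients in the natural numbers: a 1-chain is a
# formal N-linear combination of edges; its two boundary 0-chains are obtained
# by pushing it forward along the source and target maps, and it is a 1-cycle
# exactly when those two 0-chains agree.
def pushforward(chain, endpoint):
    """chain: {edge: coefficient}; endpoint: edge -> vertex (source or target)."""
    out = Counter()
    for edge, coeff in chain.items():
        if coeff:
            out[endpoint(edge)] += coeff
    return out

def is_cycle(chain, source, target):
    return pushforward(chain, source) == pushforward(chain, target)

# A triangle a -> b -> c -> a, traversed once, is a 1-cycle.
edges = {("a", "b"): 1, ("b", "c"): 1, ("c", "a"): 1}
print(is_cycle(edges, source=lambda e: e[0], target=lambda e: e[1]))  # True
```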
Now consider two graphs and and monomorphisms from a discrete graph into and . Let me make up some intuitive notation:
is the [[biproduct]] of the commutative monoids and , so we have inclusions
and projections
obeying the usual biproduct axioms. All 4 of these maps are compatible with the source and target maps, e.g. . So, they induce maps on which we call by the same names:
and projections
These too obey the usual biproduct axioms
Let be the Grothendieck group of the cancellative monoid . I want to describe a map
whose kernel is the map
induced by the map of graphs
(the canonical map from the coproduct to the pushout). This will be a baby version of the Mayer-Vietoris theorem, restricted to the case of graphs, but generalized to commutative monoid coefficients.
After the warmup here's how the construction goes (if I'm correct). First note that
since by our assumptions every edge of is either an edge of or an edge of , but not both. I will identify with using this isomorphism.
We thus have maps
I now define preliminary versions of the maps and . We can take
and restrict these to to get maps I'll call
Claim 1. The equalizer of and is
(Here we need to be cancellative.)
Claim 2. The range of
is contained in
where is the free abelian group on the commutative monoid , sometimes called its Grothendieck group, and the inclusion is the obvious one.
I'll try to prove these later. But given these, notice:
By claim 2, gives a map I'll abusively call
But since is a discrete graph, the quotient map
is an isomorphism. So, composing with this isomorphism we get a map
with the same coequalizer as .
So, we're done... if we can prove the two claims!
I think I'll quit here for now: I hadn't realized how long the warmup would be, and my own calculations are very intuitive and informal so it will take even longer to formalize them. I'll try that later.
Well, let me do a bit more stage-setting. We have
so we get maps
I hope it's clear what they do: takes any linear combination of edges in and kills off all the edges in while leaving those in alone, and similarly kills off all the edges in . Given any
define
Claim 3. is in the image of
if and only if
Temporarily assuming this, we get
Claim 4. is in the image of
if and only if
Proof of Claim 4 from Claim 3. Since we have
or in other words
Now, if
then by cancellativity in , which follows from cancellativity in , we also get
so by Claim 3 we see is in the image of .
Conversely, if is in the image of , Claim 3 says
and
I made a bunch of mistakes which I've tried to fix. I should probably work this out on paper and then write it up somewhere else. The idea is supposed to be simple, but it's not looking simple.
Thanks very much!! I am now trying to understand your ideas.
John Baez said:
the map
induced by the map of graphs
(the canonical map from the coproduct to the pushout).
A small doubt here:
By definition, is the equalizer of the maps
.
Thus, by the universal property of the equalizer, there is a unique map
such that the necessary diagram commutes.
Are you saying ?
I think so. In general, we expect should be a functor from graphs to commutative monoids, so any map of graphs induces a map on first homology , and it sounds like you're describing how that functor works. It comes from the universal property of the equalizer, along with functoriality of and .
Btw, if you look at my outline, you'll see I wound up introducing the Grothendieck group, or group of differences, , of the cancellative commutative monoid , so that I could form the difference of maps . I didn't do that in the first draft of my posts; the first draft had a serious mistake, so you may need to reread the posts to see what I mean.
I don't really love this, and I think I see a way to avoid it, but as long as we need the hypothesis that is cancellative we might as well use and the difference .
I didn't actually explain the ideas in my argument, so let me try doing that now!
Claim 1 says that and are equal iff the 1-cycle comes from an element of . The basic idea is that 1-cycles coming from don't "cross over from to " - they're the sum of a 1-cycle that lives in and a 1-cycle that lives in . So, in this case, when we take the part of that's in by forming , we still get a 1-cycle.
We can then use cancellativity to show that is also a 1-cycle, since .
Claim 2 says that the maps
produce 0-chains that differ only on the subgraph . That is, for any
, the 0-chains and are linear combinations of vertices of , but the coefficients can only be different for vertices in .
Maybe you can draw an example of how this works for a choice of that comes from , and for one that doesn't. Or maybe I'll do it! There's no way I could invent these arguments without having a picture in my mind.
John Baez said:
I think so. In general, we expect should be a functor from graphs to commutative monoids, so any map of graphs induces a map on first homology , and it sounds like you're describing how that functor works. It comes from the universal property of the equalizer, along with functoriality of and .
Thank you. Yes, I understand your argument!!
Thanks very much for the explanation of your ideas. Overall, I understood your approach. I need a little more time to realise and understand the details of your ideas.
John Baez said:
Maybe you can draw an example of how this works for a choice of that comes from , and for one that doesn't. Or maybe I'll do it! There's no way I could invent these arguments without having a picture in my mind.
Thanks. Yes, I will draw some examples!
John Baez said:
I didn't do that in the first draft of my posts; the first draft had a serious mistake, so you may need to reread the posts to see what I mean.
Thanks!! Yes, I will read/reread the whole thing in detail.
Okay, I've finally proved something. As is often the case with homological algebra, most of the work is just setting up the framework in the right way. Since we're dealing with commutative monoids rather than abelian groups we need to be a bit careful.
As before we have two graphs and that are subgraphs of the graph , whose intersection is a discrete graph - i.e. a graph with no edges. And as before we have a cancellative commutative monoid .
I figured out how to lessen the amount of boring notation a bit. Unfortunately, the process of lessening the amount of boring notation is itself boring. But it's worthwhile! Here goes:
We'll treat and as submonoids of . Since every edge of is either an edge of or of , but not both, we have
where we write an equals sign because this is an 'internal direct sum': every element can be uniquely written as a sum of elements and .
Let's define monoid homomorphisms
by
It is also convenient to treat and as submonoids of .
For all our graphs we have source and target maps and sending edges to vertices, and let's abbreviate their actions on -linear combinations of edges
simply as and . We can also use these notations for the maps
and
without confusion, since these are restrictions of the maps and defined on all of .
Okay, now for the actual work! "You can wake up now", as an incredibly rude friend of mine once announced to the audience before giving his talk at a conference.
The natural map from the disjoint union to the union (really pushout) induces a map on homology
This map sends any pair to the sum , but it is not an isomorphism since there may be 'emergent cycles'. The following Mayer--Vietoris-like lemma clarifies the situation:
Lemma. If is a cancellative commutative monoid and are subgraphs of a graph whose intersection is a discrete graph, then the following is an equalizer diagram in the category of commutative monoids:
where the two arrows I'm unable to label are and
Proof. First we show that on the image of . Any element in the image of is of the form with and Since and , we have
Next we show that any with is in the image of . This equation says that . Since is a 1-cycle we have and thus
Since is cancellative so is , so we can subtract the equation from the above equation and conclude
Thus , so , and is in the image of .
I would really like to see a counterexample when is not cancellative - or even better, a proof that there's no counterexample! A counterexample would amount to this: a cycle and a chain that is not a cycle, such that is a cycle.
John Baez said:
Lemma. If is a cancellative commutative monoid and are subgraphs of a graph whose intersection is a discrete graph, then the following is an equalizer diagram in the category of commutative monoids:
Thanks!! I find the proof very nice!! I am trying to construct a counterexample, or to prove that there is none!!
John Baez said:
Okay, now for the actual work! "You can wake up now", as an incredibly rude friend of mine once announced to the audience before giving his talk at a conference.
Interesting!! :)
John Baez said:
Claim 2. The range of
is contained in
where is the free abelian group on the commutative monoid , sometimes called its Grothendieck group, and the inclusion is the obvious one.
I find this claim very interesting!! I was trying to prove it. Although I am yet to prove the general statement, I find it true in all the examples I worked out today. I like the idea that whenever there is something like this in the graph , we have of the form in the Grothendieck group. I feel that all the vertices which do not lie in the intersection need to have this property when we start with an element in .
When is itself a cycle in , then, trivially it becomes , and hence lie in .
I'm trying to prove this Claim 2 now, and also state it in a way that avoids subtraction.
Ok. Then I am trying to find a counterexample, or to prove that there is none. [Lemma]
Good, that's the main mystery. Start with a simple non-cancellative monoid like with " or " as the monoid operation, or maybe with .
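Here is a brute-force sketch of the kind of search suggested above, over the Boolean monoid ({0, 1}, "or"). The particular graph is my own guess at a small test case (two vertices, two subgraphs meeting only in those vertices), not necessarily the configuration in anyone's picture: the search looks for a 1-cycle on one subgraph and a non-cycle 1-chain on the other whose sum (which, for Boolean coefficients, is just the union) is a 1-cycle on the whole graph.

```python
from itertools import product

# Hedged sketch: a tiny search for a counterexample over the Boolean monoid
# ({0, 1}, "or").  The graph below is a guess at a small test case: X has two
# vertices u, v; the subgraph G has edges g1: u -> v and g2: v -> u; the
# subgraph H has the single edge h1: u -> v; G and H meet only in u and v.
G_edges = {"g1": ("u", "v"), "g2": ("v", "u")}
H_edges = {"h1": ("u", "v")}
X_edges = {**G_edges, **H_edges}
vertices = ["u", "v"]

def is_cycle(chain, edges):
    """Boolean coefficients: a chain is a set of edges; it is a 1-cycle iff at
    every vertex the 'or' of outgoing coefficients equals the 'or' of incoming ones."""
    for w in vertices:
        out_w = any(edges[e][0] == w for e in chain)
        in_w = any(edges[e][1] == w for e in chain)
        if out_w != in_w:
            return False
    return True

for z_bits, c_bits in product(product([0, 1], repeat=len(G_edges)),
                              product([0, 1], repeat=len(H_edges))):
    z = {e for e, b in zip(G_edges, z_bits) if b}
    c = {e for e, b in zip(H_edges, c_bits) if b}
    if is_cycle(z, G_edges) and not is_cycle(c, H_edges) and is_cycle(z | c, X_edges):
        print("1-cycle on G:", z, "+ non-cycle 1-chain on H:", c, "= 1-cycle on X")
```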
Thanks!! Yes, I am trying.
John Baez said:
All 4 of these maps are compatible with the source and target maps, e.g. . So, they induce maps on which we call by the same names:
Somehow, I am not able to see why the property is true. However, I did not find its use in inducing the map to . I am not sure whether you have used this property anywhere in your construction. I tried to construct a counterexample.
IMG_0350.PNG
I think I might have wrongly interpreted!! It would be actually ?
I will look at your counterexample in a little while. First:
I worried about . In the text you quoted I was thinking of both of these as maps
Beware: I'm using the same notation in a different way in the paper now.
Since and are monoid homomorphisms it suffices to check they're equal on an element of that's either of the form
or
All elements are sums of these two kinds.
Consider the first case:
On the other hand
Next consider the second case:
On the other hand
So I think this is fine. But I don't think I'm using it in the paper now.
Thanks!! I got your argument. Somehow, I got confused. My counterexample does not make much sense!!
I think in your counterexample you are not treating as a map
as I was when I made that claim . You seem to be treating it as a map
Thanks!! Yes!!
If you look at our paper now, or the place where I actually proved something, you'll see I defined differently than I did earlier. So this is potentially confusing. But the new approach is more useful.
Thanks!! I understand your point. I am checking the paper.
I'm hoping you can find a non-cancellative commutative monoid and a 1-cycle on and a 1-chain on that's not a 1-cycle, whose sum is a 1-cycle on . The -shaped graph you just drew may be useful here.
Thanks!! I am trying!
I think I have a counter example with the Boolean monoid (attached).
counterexample.PNG
WOW, THAT'S EXCELLENT!
I mean, it's sad that certain results depend on taking a cancellative commutative monoid, but it's great that you've figured out what can go wrong otherwise.
Thanks very much!!! Your suggestion of the Boolean monoid works here. I would not have tried the Boolean monoid if you had not suggested it!! So, thanks a lot to you!!
I've made some more progress. First, recall that I proved this:
Lemma. If is a cancellative commutative monoid and are subgraphs of a graph whose intersection is a discrete graph, then the following is an equalizer diagram in the category of commutative monoids:
where the two arrows I'm unable to label are and
The annoying thing about this is that it uses where the usual Mayer-Vietoris sequence would use . So that's what I will now fix.
First, note that there is a monoid homomorphism
given by
That is, kills off all vertices of that are not also in .
Now, a priori is a quotient of , but has no edges so the quotient map is an isomorphism . So let's use this isomorphism to identify these two monoids, and treat as a monoid homomorphism
This allows us to state the Mayer--Vietoris lemma in a nicer way:
Theorem. If is a cancellative commutative monoid and are subgraphs of a graph whose intersection is a discrete graph, then the following is an equalizer diagram in the category of commutative monoids:
where now the two arrows I'm unable to label here are and .
Proof. By the Lemma it suffices to show that a cycle has if and only if . One direction of the implication is obvious, so we suppose and aim to show that .
We let
Since is a cycle we have
There are three mutually exclusive choices for a vertex in : it is either
1) in but not ,
2) in or
3) but not in .
In case 1) we say the vertex is in and in case 3) we say the vertex is in , merely by way of abbreviation. The above equation thus implies three equations:
Since an edge of whose source is in must be an edge of , the first equation is equivalent to this:
Since we also know that
Adding the last two equations we get
This says , as desired!
Please carefully check my logic here!
Thanks!! I am trying to understand your ideas.
I just fixed some typos in those 3 huge sums, which would have made those equations completely false.
It's funny how my purely intuitive understanding of what's going on, based on mental pictures, became rather complicated looking when I finally wrote it up precisely (after a month of mistakes.) I hope it's finally correct.
As of now the proof looks great !! I am reading !!
I just read the proof. I found it really great!!! It looks correct to me. I really love the idea of dividing the vertex set in mutually exclusive classes. I find the ending argument really crisp and very beautiful!! I also very much like the idea of using the previous lemma to boil down the complicated statement of the theorem to "a simple statement".
I found only one point a little odd: if I understand the proof correctly, then I could not find any use of the 2nd and 3rd equations, i.e. for the case of and . Although I agree that the case of is addressed when you applied .
Great, I'm glad you like this argument. In the actual paper I left out the 2nd and 3rd equations, though I still mention that 3 equations exist.
I could shorten the argument even more by not mentioning all 3 cases.
John Baez said:
Great, I'm glad you like this argument. In the actual paper I left out the 2nd and 3rd equations, though I still mention that 3 equations exist.
Thanks!! I see! I will read the portion from the paper.
John Baez said:
I could shorten the argument even more by not mentioning all 3 cases.
I just read the relevant portion from the paper. I find it great!! (You mentioned there that 1 is the important one). I feel mentioning the 3 cases may be a better idea (from the perspective of readers) as it is clarifying the situation in a more vivid way. So, I think "the current version" in the paper is great!!
Thanks!
Now I just need to remember that this:
$${{\displaystyle\xrightarrow{f}} \atop {\displaystyle \xrightarrow[g]{}}} $$
gives this:
Luckily I can 'star' this message and save it for later.
@Adittya Chaudhuri - I added your counterexample to the paper as Example 9.3. Maybe you can see if it gives a counterexample to Lemma 9.3 or Theorem 9.4.
I want to finish a draft of this paper very soon and show it to the world, like tomorrow or the next day. After some feedback we can put it on the arXiv. Later, after some more feedback, we can submit it for publication.
I'm starting to work on the Conclusions. So far I've listed a couple of math problems I hope someone solves.
John Baez said:
Adittya Chaudhuri - I added your counterexample to the paper as Example 9.3. Maybe you can see if it gives a counterexample to Lemma 9.3 or Theorem 9.4.
Thank you. Yes, I will check "whether Example 9.3 gives a counterexample to Lemma 9.3 or Theorem 9.4".
John Baez said:
I want to finish a draft of this paper very soon and show it to the world, like tomorrow or the next day. After some feedback we can put it on the arXiv. Later, after some more feedback, we can submit it for publication.
That sounds great!!
John Baez said:
I'm starting to work on the Conclusions. So far I've listed a couple of math problems I hope someone solves.
Thanks!! I will read the conclusion portion.
John Baez said:
Adittya Chaudhuri - I added your counterexample to the paper as Example 9.3. Maybe you can see if it gives a counterexample to Lemma 9.3 or Theorem 9.4.
I think the counterexample in Example 9.3 should also work for Theorem 9.4, because implies . The rest remains the same.
I'm not awake enough to see why the counterexample in Example 9.3 should also give a counterexample to Theorem 9.4, but I'll try to figure that out.
In the attached file I tried to explain my argument
CounterexampleforTheorem 9.4.PNG
Note that the only additional change in the construction is the presence of the map . This minor change works because and intersect at both and , and there are only two vertices in the whole diagram, namely again and .
Feedback loops are important in both regulatory networks in Systems Biology and causal loop diagrams in Systems dynamics. We explored how feedback loops can be understood as elements of our 1st homology monoids.
Now, a question came to my mind:
In regulatory networks, systems biologists not only find feedback loops interesting, they also find feed-forward loops interesting. From their definition, we cannot think of them as elements of our 1st homology monoids. However, they are very important from the point of view of their roles in biology. So, my question:
What would be a suitable mathematical theory "in the spirit of our homology monoids" which can capture feedforward loops in directed graphs?
This is a great question, but I think you should study it in your next paper. Right now I'm solely focused on finishing this paper.
Please don't forget this question.
I expanded Lemma 9.2 to include a new version that doesn't require cancellativity, while keeping the old version. The new version says
is an equalizer even if isn't cancellative.
John Baez said:
I expanded Lemma 9.2 to include a new version that doesn't require cancellativity, while keeping the old version. The new version says
is an equalizer even if isn't cancellative.
I read through the portion you suggested. New Lemma 9.2 looks interesting!!
If I understand correctly, the first equalizer (when the coefficients come from a not necessarily cancellative commutative monoid ) says that if a cycle in the pushout graph is made up of a cycle in the graph and a cycle in the graph , then it must lie in the image of , and vice versa, which is true both mathematically (as you showed) and also intuitively.
While the second equalizer (when is assumed to be cancellative) says that if a cycle in the pushout graph is made up of a cycle in the graph and a chain in the graph , then must be a cycle in . (A similar statement holds if we consider the graph instead of the graph .) This is also intuitive, because I think when we draw an example of a directed graph, we secretly assume that the coefficients are from , which is a cancellative monoid.
However, as we see in Example 9.3, our intuition for the second equalizer is not correct: we saw a counterexample for the non-cancellative monoid , which says that (when the coefficients are from a non-cancellative monoid) there can exist a cycle in the pushout graph which is made up of a cycle in the graph and a chain (which is not a cycle) in .
John Baez said:
This is a great question, but I think you should study it in your next paper. Right now I'm solely focused on finishing this paper.
Please don't forget this question.
Thanks very much!! I am very glad that you find my question interesting!! I will definitely work on this idea.
Adittya Chaudhuri said:
If I understand correctly, the first equalizer (when the coefficients are coming from not necessarily a commutative monoid ) says that if a cycle in the pushout graph is made up of a cycle in the graph and a cycle in the graph , then must lie in the image of and vice versa, which is true both mathematically (as you showed) and also intuitively.
Right. This result is very simple so I skipped it at first. But then I decided it was bad to skip the only result I know that holds for non-cancellative commutative monoids. In a sense working with cancellative commutative monoids is "cheating" because they are almost like abelian groups: they embed in their Grothendieck group.
While the second equalizer (when is assumed to be cancellative) says that if a cycle in the pushout graph is made up of a cycle in the graph and a chain in the graph , then must be a cycle in . (A similar statement holds if we consider the graph instead of the graph .) This is also intuitive, because I think when we draw an example of a directed graph, we secretly assume that the coefficients are from , which is a cancellative monoid.
However, as we see in Example 9.3, our intuition for the second equalizer is not correct: we saw a counterexample for the non-cancellative monoid , which says that (when the coefficients are from a non-cancellative monoid) there can exist a cycle in the pushout graph which is made up of a cycle in the graph and a chain (which is not a cycle) in .
Right. I'd say our intuition is correct if we're working with a cancellative monoid, but not otherwise.
John Baez said:
Right. This result is very simple so I skipped it at first. But then I decided it was bad to skip the only result I know that holds for non-cancellative commutative monoids. In a sense working with cancellative commutative monoids is "cheating" because they are almost like abelian groups: they embed in their Grothendieck group.
Thanks!! Yes, I agree!!
John Baez said:
Right. I'd say our intuition is correct if we're working with a cancellative monoid, but not otherwise.
Yes, I agree. Maybe when we draw something, our brain has a default coefficient system, which is a kind of cancellative commutative monoid. I know what I just said may not make any sense from the point of view of biology.
Our paper is done - we look forward to comments and corrections!
Abstract. In fields ranging from business to systems biology, directed graphs with edges labeled by signs are used to model systems in a simple way: the nodes represent entities of some sort, and an edge indicates that one entity directly affects another either positively or negatively. Multiplying the signs along a directed path of edges lets us determine indirect positive or negative effects, and if the path is a loop we call this a positive or negative feedback loop. Here we generalize this to graphs with edges labeled by a monoid, whose elements represent 'polarities' possibly more general than simply 'positive' or 'negative'. We study three notions of morphism between graphs with labeled edges, each with its own distinctive application: to refine a simple graph into a complicated one, to transform a complicated graph into a simple one, and to find recurring patterns called 'motifs'. We construct three corresponding symmetric monoidal double categories of 'open' graphs. We study feedback loops using a generalization of the homology of a graph to homology with coefficients in a commutative monoid. In particular, we describe the emergence of new feedback loops when we compose open graphs using a variant of the Mayer-Vietoris exact sequence for homology with coefficients in a commutative monoid.
I've started the article. A few typos:
nonnegataive (p. 9); Note however that it also give (p. 9); affects the vertex (p. 10, should be )
Also on p. 11, that should be , or the other way.
positve (p. 12); seein (p. 27)
Might there be something approximating the universal coefficient theorem in the case of commutative monoids? I guess what you're doing through pp. 33-35 comes closest.
Above, John said about System Archetypes theory,
It seems to be telling us to avoid "wishy-washy" behavior where on short time scales we have a feedback loop of one polarity, and with a delay we have a feedback loop of the opposite polarity.
I'm still dubious as to whether this structure is intrinsically bad, but leaving that aside, using the resources of the new paper, how do we pick out such wishy-washiness?
Presumably, we are working with some monoid of delays, say . Then find all feedback cycles through each point using homology calculations, and then see if any pair of cycles has the appropriate conflicting labels, e.g., and ?
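In the same spirit, here is a hedged sketch of how one might flag such "wishy-washy" vertices automatically, labelling edges in the product monoid {+1, -1} × {no delay, delay}; the brute-force loop enumeration and the particular flagging rule are my own placeholders, not anything from the paper or from CatColab.

```python
# Hedged sketch: label edges in the product monoid {+1, -1} x {0 = no delay,
# 1 = some delay}, enumerate simple feedback loops through a vertex, and flag
# the vertex as "wishy-washy" when it lies on a fast loop of one polarity and
# a delayed loop of the opposite polarity.
def loops_through(v, edges, max_len=6):
    """edges: {(u, w): (sign, delay)}.  Yields the composite label of each
    simple directed loop through v, multiplying signs and or-ing delays."""
    def walk(current, sign, delay, length, visited):
        for (a, b), (s, d) in edges.items():
            if a != current:
                continue
            new_sign, new_delay = sign * s, delay or d
            if b == v:
                yield (new_sign, new_delay)
            elif b not in visited and length < max_len:
                yield from walk(b, new_sign, new_delay, length + 1, visited | {b})
    yield from walk(v, 1, 0, 1, {v})

def wishy_washy(v, edges):
    labels = set(loops_through(v, edges))
    return ((+1, 0) in labels and (-1, 1) in labels) or \
           ((-1, 0) in labels and (+1, 1) in labels)

# A fast reinforcing loop and a delayed balancing loop through vertex "x".
edges = {("x", "y"): (+1, 0), ("y", "x"): (+1, 0),
         ("x", "z"): (+1, 0), ("z", "x"): (-1, 1)}
print(wishy_washy("x", edges))  # True
```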
David Corfield said:
I've started the article. A few typos:
nonnegataive (p. 9); Note however that it also give (p. 9); affects the vertex (p. 10, should be )
Also on p. 11, that should be , or the other way.
positve (p. 12); seein (p. 27)
Thanks very much!! Yes we need to fix these typos.
David Corfield said:
Then find all feedback cycles through each point using homology calculations, and then see if any pair of cycles has the appropriate conflicting labels,
Thanks!! I find this perspective super interesting!! I am thinking about it !!
David Corfield said:
I'm still dubious as to whether this structure is intrinsically bad
I also feel the same. That is why I want to broaden the question a bit:
What kind of patterns (other than the ones we already know) in a causal loop diagram should be worth studying? In this regard, I find your view of considering feedback cycles locally, i.e. "Then find all feedback cycles through each point using homology calculations", very relevant. I will think about this perspective.
Adittya Chaudhuri said:
What kind of patterns (other than the ones we already know) in a causal loop diagram should be worth studying?
This is an important question. It seems that the systems archetypes approach is to take instances of known dysfunctional organisations and then to extract a common set of problematic patterns. I'm dubious that taking the latter in some uninterpreted, structural way and locating them inside the causal loop diagram representation of some other kind of organisation will guarantee locating dysfunction in the latter. Still worth exploring though.
I'm reminded of debates as to the value of different kinds of evidence in medical decision-making. On the face of it, the Evidence-Based Medicine (EBM) movement was looking to replace anecdote and common experience of the sense of how things go in patients' bodies, often informed by a story-like account of biological mechanisms, by pristine gold standard meta-analyses of well-run double-blinded randomized controlled trials. E.g., here
Of course, all kinds of mechanistic understanding seep into any reasonable approach to working with such trials, but what I want to draw attention to is one of the arguments given by EBM-ers for distrust of mechanistic knowledge. The concern is that almost always one will have only a partial account of the total mechanism, and that this often brings about misleading expectations of the results of interventions. Hence the need for empirical tests on the whole system.
It's possible then that something looking locally dysfunctional may play a useful role in the larger system.
David Corfield said:
This is an important question. It seems that the systems archetypes approach is to take instances of known dysfunctional organisations and then to extract a common set of problematic patterns. I'm dubious that taking the latter in some uninterpreted, structural way and locating them inside the causal loop diagram representation of some other kind of organisation will guarantee locating dysfunction in the latter. Still worth exploring though.
Thank you!! Yes, I agree to your point of view!!
David Corfield said:
but what I want to draw attention to is one of the arguments given by EBM-ers for distrust of mechanistic knowledge. The concern is that almost always one will have only a partial account of the total mechanism, and that this often brings about misleading expectations of the results of interventions. Hence the need for empirical tests on the whole system.
Thanks!! Yes!! I got your point!!
David Corfield said:
It's possible then that something looking locally dysfunctional may play a useful role in the larger system.
I find this perspective interesting not only from the point of view of the intuition I developed after seeing those 7 archetypes from that book, but also from the point of view of mathematics. I am trying to write down my thoughts. I may be misunderstanding many, many things, so please correct me if I am making any mistakes!!
I will be using a notation style similar to the one in our paper.
For a commutative monoid , let be a -labeled graph. Now, I am defining the following:
(2) I think the same relation can be extended to the set of paths, which will allow us to define a congruence relation on the category , and thus we will get a quotient category , whose objects are vertices of and morphisms are homology classes of paths.
(3) Now, for every , I will consider the automorphism monoid , which contains the homology classes of loops passing through .
Now, I want to see your idea
"It's possible then that something looking locally dysfunctional may play a useful role in the larger system."
in terms of for each .
Next question: if we know for each , what can we tell about ? Is it fully determined, or is it determined up to something?
Now, note that the homology classes of simple loops passing through are contained in . Thus, by our already established bijection between homology classes of simple loops and minimal elements of , we have a map defined by the restriction of to the set of minimal elements of which corresponds to the homology classes of simple loops passing through . Since minimal elements generate , we can recover the whole .
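A small computational illustration of this generation argument (my own sketch, not the paper's formalism, with the same sign-labelled-graph and networkx assumptions as above): represent the homology class of a loop crudely by the multiset of edges it traverses; then the classes of the simple loops through a vertex are the generators, and the class of any composite loop through that vertex is a sum of them.

```python
from collections import Counter
import networkx as nx

def simple_loop_generators(G, v):
    """For each simple cycle through v: its edge-traversal multiset and its sign."""
    gens = []
    for cycle in nx.simple_cycles(G):
        if v not in cycle:
            continue
        edges = Counter(zip(cycle, cycle[1:] + cycle[:1]))
        sign = 1
        for (u, w), k in edges.items():
            sign *= G[u][w]["sign"] ** k
        gens.append((edges, sign))
    return gens

# Same toy graph as above: at y the generators are the classes of the two simple
# loops x->y->x (sign +1) and y->z->y (sign -1); the composite loop x->y->z->y->x
# is, up to the congruence, the sum of these two generators.
G = nx.DiGraph()
G.add_edge("x", "y", sign=+1); G.add_edge("y", "x", sign=+1)
G.add_edge("y", "z", sign=-1); G.add_edge("z", "y", sign=+1)
for edges, sign in simple_loop_generators(G, "y"):
    print(dict(edges), sign)
```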
Hence, if I am not making any mistakes, then, yes, @David Corfield I think your claim
"It's possible then that something looking locally dysfunctional may play a useful role in the larger system"
makes sense (from the point of view of mathematics too), and thus it will probably be sufficient for us to focus our attention locally to understand the global behaviour.
So, from the above arguments it seems (if I am not making any mistakes) that I have an answer to this question:
Next question: if we know for each , what can we tell about ? Is it fully determined, or is it determined up to something?
Ans: Yes, it is fully determined locally.
Interesting.
Adittya Chaudhuri said:
Ans: Yes, it is fully determined locally.
So maybe a better way to frame the EBM concern I mentioned
David Corfield said:
The concern is that almost always one will have only a partial account of the total mechanism, and that this often brings about misleading expectations of the results of interventions. Hence the need for empirical tests on the whole system
is that we often don't have full local information at a vertex. If we only have a sub-causal loop diagram of the "true" diagram, we can be misled.
But then in this form, it hardly seems surprising.
I guess then it could be the case that at a vertex we have all of its connecting edges from the "true" diagram but not all of its loops. We could lack knowledge of a distant edge that partakes in a crucial loop at .
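One way to make this concrete, again only a sketch under the same sign-labelled-graph and networkx assumptions as above: for a chosen vertex, list the edges that do not touch it but nevertheless lie on some simple cycle through it. Knowing every edge at the vertex still leaves you blind to these distant edges.

```python
import networkx as nx

def distant_edges_on_loops_at(G, v):
    """Edges not touching v that lie on some simple cycle through v."""
    distant = set()
    for cycle in nx.simple_cycles(G):
        if v not in cycle:
            continue
        for u, w in zip(cycle, cycle[1:] + cycle[:1]):
            if v not in (u, w):
                distant.add((u, w))
    return distant

# Toy example: the only loop through x is x -> y -> z -> x, so the edge (y, z)
# is a "distant" edge crucial to a feedback loop at x even though it does not
# touch x; if it were missing from our sub-diagram, the loop would vanish.
G = nx.DiGraph()
G.add_edge("x", "y", sign=+1); G.add_edge("y", "z", sign=-1); G.add_edge("z", "x", sign=+1)
print(distant_edges_on_loops_at(G, "x"))  # {('y', 'z')}
```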
David Corfield said:
Interesting.
Thank you!!
David Corfield said:
I guess then it could be the case that at a vertex we have all of its connecting edges from the "true" diagram but not all of its loops. We could lack knowledge of a distant edge that partakes in a crucial loop at .
Thanks!! This is interesting!! I tried to imagine what you said. Please see the attached diagram.
missinglink.png
Right. And the missing link could be a long way away from . I wonder whether people have looked into the stability of networks under rewiring at the periphery.
I'm reminded of work I read years ago done at the Santa Fé Institute by Stuart Kauffman. There was a rationale for why gene networks shouldn't be too inter-connected.
David Corfield said:
Right. And the missing link could be a long way away from . I wonder whether people have looked into the stability of networks under rewiring at the periphery.
I'm reminded of work I read years ago done at the Santa Fé Institute by Stuart Kauffman. There was a rationale for why gene networks shouldn't be too inter-connected.
Thanks!! This is very interesting!! Do you have a reference for the Kauffman work you mentioned?
David Corfield said:
And the missing link could be a long way away from . I wonder whether people have looked into the stability of networks under rewiring at the periphery.
I find this situation and the question very important and natural.
From your point of view, I am now imagining the situation as a mathematical question:
For a commutative monoid , let and be two -labeled graphs such that and differ only by the existence/non-existence of an edge .
Question: how does differ from ?
I know that the above question is too general.
So, I want to know: "what type of edge can we remove from so that and are in some way similar?"
Adittya Chaudhuri said:
Do you have a reference for the Kauffman work you mentioned?
It was from a seriously long time ago, maybe 30 years, when the "edge of chaos" was in vogue. Let's see. In this article from 2005 they're considering Kauffman networks, and on the first page they mention the phase between frozen and chaotic phases. It requires a certain degree of connectivity.
But there must have been so much more done since then.
Thanks a lot!! I will read the paper.
David Corfield said:
But there must have been so much more done since then.
I see. Thanks!! I will search for the recent works in this direction.
Adittya Chaudhuri said:
So, I want to know: "what type of edge can we remove from so that and are in some way similar?"
Presumably this rests on how many minimal loops of contain .
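That suggests a crude edge-importance count along the following lines (again my own sketch, not from the paper, under the same assumptions as the earlier snippets): for each edge, count the simple cycles of the graph containing it. An edge lying on no cycle can be removed without changing any loop monoid, while an edge lying on many cycles is likely to change the quotient substantially.

```python
from collections import Counter
import networkx as nx

def loops_through_each_edge(G):
    """Number of simple cycles of G containing each edge."""
    counts = Counter()
    for cycle in nx.simple_cycles(G):
        for e in zip(cycle, cycle[1:] + cycle[:1]):
            counts[e] += 1
    return counts

# Toy example: the edge (x, y) lies on two simple cycles, while the dangling edge
# (z, w) lies on none, so removing (z, w) leaves every loop monoid untouched
# whereas removing (x, y) destroys both feedback loops.
G = nx.DiGraph()
G.add_edge("x", "y", sign=+1); G.add_edge("y", "x", sign=-1)
G.add_edge("y", "z", sign=+1); G.add_edge("z", "x", sign=+1)
G.add_edge("z", "w", sign=-1)
counts = loops_through_each_edge(G)
print(counts[("x", "y")], counts[("z", "w")])  # 2 0
```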