Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: theory: applied category theory

Topic: Networks in Biology


view this post on Zulip Adittya Chaudhuri (Jul 06 2025 at 13:57):

Let's talk about networks in biology in this thread!!

In our recent paper Graphs with polarities, @John Baez and I studied how monoid-labeled directed graphs can model regulatory networks in biology when the monoid is suitably chosen. We discussed the work related to this paper here #theory: applied category theory > Graphs with polarities . Now I am trying to understand the content of this interesting paper Cell Fate Reprogramming by Control of Intracellular Network Dynamics by Jorge G. T. Zañudo and Réka Albert (suggested by @John Baez ). Let me start by spelling out my very initial understanding of the paper. I apologise in advance if my understanding sounds naive and immature at the moment; I expect it to improve in the coming days. In this thread, I am assuming the notation used in our paper Graphs with polarities.

How does the framework in the paper Cell Fate Reprogramming by Control of Intracellular Network Dynamics relate to our framework in Graphs with polarities?

For a monoid $M$, let $(G, \ell \colon E \to M)$ be a monoid-labeled graph. Although in Graphs with polarities we modelled a regulatory network as an $M$-labeled graph for suitable choices of the monoid $M$, we have not studied the dynamics of $M$-labeled graphs. I feel biologists are usually interested in three kinds of dynamics:

For a detailed discussion of how these three classes of dynamical modelling approaches differ and what advantages each has, please see the paper Quantitative and logic modelling of gene and molecular networks by Nicolas Le Novère.

I feel the paper Cell Fate Reprogramming by Control of Intracellular Network Dynamics discusses certain interesting as well as useful aspects of the logic-based dynamics of our $\mathbb{Z}/2$-labeled graphs from Graphs with polarities, phrased in terms of Boolean networks. In particular, the authors studied how stabilizing the expression or activity of a few select nodes of a regulatory network can drive the cell towards a desired fate or away from an undesired fate. I may be wrong, but next I will try to argue why I think a Boolean network describes a logic-based dynamics of our $M$-labeled graphs!!

view this post on Zulip Adittya Chaudhuri (Jul 06 2025 at 19:27):

Below, I am trying to write my understanding of a Boolean model of the discrete dynamics of a regulatory network, as explained in An effective network reduction approach to find the dynamical repertoire of discrete dynamic networks, adapted to the framework of monoid-labeled graphs.

Consider an $M$-labeled graph $(G=(E,V,s,t), \ell \colon E \to M)$. Let $V = \lbrace v_1, v_2, \ldots, v_n \rbrace$. Now, for each $i = 1, 2, \ldots, n$, consider a $\lbrace 0,1 \rbrace$-valued function $f_i \colon \mathbb{N} \to \lbrace 0, 1 \rbrace$, called the state function of the vertex $v_i$. We define the state of the vertex $v_i$ at time $t$ as $f_i(t)$.

If $f_i(t)=0$, we will say the vertex $v_i$ is off.

If $f_i(t)=1$, we will say the vertex $v_i$ is on.

We define the tuple $F(t)=(f_1(t), f_2(t), \ldots, f_n(t))$ as the state of $(G, \ell)$ at time $t$.

Now, for each $i$, we equip the vertex $v_i$ with an updating rule

$f_i(t+1) = \phi_i \bigg( \prod_{t(e_j)=v_i,\ s(e_j)=v_j} \big( f_j(t), \ell(e_j) \big) \bigg)$, where

$\phi_i \colon (\lbrace 0,1 \rbrace \times M)^{|G(-, v_i)|} \to \lbrace 0,1 \rbrace$, where $G(-, v_i) = \lbrace e \colon t(e)=v_i \rbrace$.

Thus, we can collect these into an updating family of functions

$\phi = (\phi_1, \phi_2, \ldots, \phi_n)$, which updates the state of $(G, \ell)$ from $F(t)$ to $F(t+1)$.

Thus, I am defining a Boolean model of the discrete dynamics of $(G, \ell)$ as the tuple $((G, \ell), \phi)$, together with a given initial state $F(0)=(f_1(0), f_2(0), \ldots, f_n(0))$ at $t=0$.

The above is an example of a synchronous updating scheme, where the state of every vertex is updated simultaneously. However, in many cases the updating scheme is asynchronous, in which case we have to define the updating functions accordingly. (I have to think about how to formalise this.)
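The synchronous scheme above can be sketched in code. Below is a minimal toy example in Python, with the graph, labels, and the particular choice of updating rule `phi` all made up for illustration (the rule "on iff some stimulator is on and no inhibitor is on" is just one possible $\phi_i$, not the paper's):

```python
# A minimal sketch of the synchronous updating scheme for a Z/2-labeled
# graph: every vertex reads the time-t states and edge labels of its
# in-neighbours, and all vertices switch to their time-(t+1) states
# simultaneously. The example graph and rule are illustrative only.

edges = [  # (source, target, label): "+" = stimulation, "-" = inhibition
    ("A", "B", "+"),
    ("B", "C", "-"),
    ("C", "A", "+"),
]
nodes = ["A", "B", "C"]

def phi(inputs):
    """One simple choice of updating rule phi_i: the vertex turns on
    iff some stimulating input is on and no inhibiting input is on."""
    stim = any(state == 1 for state, label in inputs if label == "+")
    inhib = any(state == 1 for state, label in inputs if label == "-")
    return 1 if stim and not inhib else 0

def step(state):
    """Synchronous update F(t) -> F(t+1): all vertices switch at once."""
    new_state = {}
    for v in nodes:
        inputs = [(state[src], lab) for (src, tgt, lab) in edges if tgt == v]
        new_state[v] = phi(inputs) if inputs else state[v]
    return new_state
```

Starting from $F(0) = (1, 0, 0)$ and iterating `step` gives the whole trajectory of states; an asynchronous scheme would instead update one chosen vertex per step.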

view this post on Zulip Adittya Chaudhuri (Jul 06 2025 at 19:45):

Next I want to understand the concept of an attractor: a set of states in which the above dynamics eventually settles down. According to the authors of Cell Fate Reprogramming by Control of Intracellular Network Dynamics,

"The attractors of intracellular networks have been found to be identifiable with different cell fates, cell behaviors, and stable patterns of cell activity"
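Since the state space of a synchronous Boolean network is finite, every trajectory must eventually revisit a state, and the repeating part is the attractor. A brute-force sketch (a tiny made-up two-node network, not any network from the paper) makes this concrete:

```python
from itertools import product

# A brute-force sketch of finding the attractors of a synchronous
# Boolean network: the state space is finite (2^N states), so every
# trajectory eventually repeats a state, and the repeating part of the
# trajectory is the attractor. Feasible only for small N.

# Example network: f_A = NOT B, f_B = A (a negative feedback loop).
update = {
    "A": lambda s: 1 - s["B"],
    "B": lambda s: s["A"],
}

def step(s):
    return {v: f(s) for v, f in update.items()}

def attractor_from(s):
    """Iterate until a state repeats; return the periodic part."""
    seen = []
    while s not in seen:
        seen.append(s)
        s = step(s)
    return seen[seen.index(s):]

# Collect the attractors reachable from all 2^2 initial states.
attractors = set()
for bits in product([0, 1], repeat=2):
    cycle = attractor_from({"A": bits[0], "B": bits[1]})
    attractors.add(frozenset(tuple(sorted(x.items())) for x in cycle))
```

For this negative feedback loop every initial state falls into one oscillating attractor of period 4, illustrating the exponential search problem the paper's reduction method is designed to avoid.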

view this post on Zulip John Baez (Jul 06 2025 at 19:55):

It might be good to start by focusing on the synchronous updating scheme for boolean networks, even though it's oversimplified, and look at various biologically interesting behaviors that can naturally arise from this scheme. The paper you're writing about claims to make progress in this:

As it was pointed out in section II A, the state space of a Boolean network with N nodes contains $2^N$ states. This exponential dependence of the state space's size on the number of nodes makes the problem of finding the attractors of a Boolean network computationally intractable, which means a full search of the state space can be performed, in practice, only for small networks.

To overcome this challenge several types of methods have been proposed to simplify the search space. The most prominent of these approaches, the so-called network reduction methods [35-38], shrink the effective network size by removing frozen nodes (nodes that reach the same steady state regardless of initial conditions) [38-40] and dynamically irrelevant nodes (such as simple mediator nodes or nodes with no outputs), and by simplifying the Boolean functions (for example, due to the presence of a node with a fixed state), thus reducing the effective state space.

Although the reduction methods developed so far have been successfully applied to several biological networks [35-37, 41], they have some limitations [....]

The method that we propose is based on the idea that certain components of the network can only stabilize in one or a small number of attractors. This idea is itself not new and is closely related to R. Thomas' rules linking feedback loops with the appearance of complex dynamical behavior in biological networks [19]. For example, an approach that connects the dynamics of certain network motifs to construct the attractors of the full Boolean network was recently proposed by Siebert [28]. The novelty of our method is the efficiency of identifying every motif that stabilizes in an asynchronous attractor despite being coupled to the rest of the network...

This makes me want to figure out what their method is. I couldn't succeed in a few minutes of work. I don't even know what it means to say a motif stabilizes in an "asynchronous attractor", given that they're studying the synchronous updating scheme. But it would definitely be worth spending an hour or two trying to figure out their method! If it's too hard, maybe some of the papers they refer to are easier.

By the way, here is a free version of the paper you linked to, on the arXiv:

The version you pointed to doesn't seem to be open-access.

view this post on Zulip Adittya Chaudhuri (Jul 07 2025 at 05:19):

John Baez said:

It might be good to start by focusing on the synchronous updating scheme for boolean networks, even though it's oversimplified, and look at various biologically interesting behaviors that can naturally arise from this scheme.

Thank you!! Yes, I agree.

view this post on Zulip Kevin Carlson (Jul 07 2025 at 05:24):

I bounced off the complicated notation for your updating function at first glance. Maybe you can just say what you mean in words too? There's a graph floating around here and yet nobody said anything like "the state depends on the state of all the neighbors"!

view this post on Zulip Adittya Chaudhuri (Jul 07 2025 at 05:29):

Kevin Carlson said:

I bounced off the complicated notation for your updating function at first glance. Maybe you can just say what you mean in words too? There's a graph floating around here and yet nobody said anything like "the state depends on the state of all the neighbors"!

Yes, I meant to say that the transition of a vertex $v$ from its state at time $t$ to its state at time $t+1$ depends on two things:

1) The states of the neighbours of $v$ at time $t$ (the neighbours of $v$ include $v$ itself).

2) The regulatory nature (stimulation, inhibition, etc.) of the influence of the neighbours of $v$ on $v$.

view this post on Zulip Adittya Chaudhuri (Jul 07 2025 at 05:35):

John Baez said:

This makes me want to figure out what their method is. I couldn't succeed in a few minutes of work. I don't even know what it means to say a motif stabilizes in an "asynchronous attractor", given that they're studying the synchronous updating scheme. But it would definitely be worth spending an hour or two trying to figure out their method! If it's too hard, maybe some of the papers they refer to are easier.

Thanks!! Yes, I completely agree. I am trying to understand what they meant by "a motif stabilizes in an 'asynchronous attractor', given that they're studying the synchronous updating scheme".

view this post on Zulip Adittya Chaudhuri (Jul 07 2025 at 05:37):

John Baez said:

By the way, here is a free version of the paper you linked to, on the arXiv:

The version you pointed to doesn't seem to be open-source.

Thanks very much for this version. Somehow, I could not find it yesterday and I downloaded the previous version from my institutional account.

view this post on Zulip Adittya Chaudhuri (Jul 07 2025 at 05:40):

John Baez said:

The method that we propose is based on the idea that certain components of the network can only stabilize in one or a small number of attractors. This idea is itself not new and is closely related to R. Thomas' rules linking feedback loops with the appearance of complex dynamical behavior in biological networks [19].

I am trying to understand what they meant by the above paragraph.

view this post on Zulip Adittya Chaudhuri (Jul 07 2025 at 05:46):

John Baez said:

Although the reduction methods developed so far have been successfully applied to several biological networks [35-37, 41], they have some limitations [....]

I need to understand what they mean by their reduction method and what its limitations are.

view this post on Zulip John Baez (Jul 07 2025 at 05:52):

Kevin Carlson said:

I bounced off the complicated notation for your updating function at first glance. Maybe you can just say what you mean in words too? There's a graph floating around here and yet nobody said anything like "the state depends on the state of all the neighbors"!

Adittya mentioned a graph, but I don't see why. Maybe the paper talks about a graph? I prefer to think about a boolean network: a set of vertices and, for each vertex $v$, a set $S_v$ of vertices together with a boolean function $\mathbb{B}^{S_v} \to \mathbb{B}$ telling us the state of that vertex as a function of the states of the vertices in $S_v$. If we start by assigning each vertex a boolean at time $t$, we can update the state at time $t+1$ using these functions in the hopefully obvious way.

At least this is my impression of how boolean networks work. Wikipedia puts it this way:

A Boolean network consists of a discrete set of Boolean variables each of which has a Boolean function (possibly different for each variable) assigned to it which takes inputs from a subset of those variables and whose output determines the state of the variable it is assigned to. This set of functions in effect determines a topology (connectivity) on the set of variables, which then become nodes in a network. Usually, the dynamics of the system is taken as a discrete time series where the state of the entire network at time t+1 is determined by evaluating each variable's function on the state of the network at time t.
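The definition above (a set $S_v$ and a function $\mathbb{B}^{S_v} \to \mathbb{B}$ for each vertex, with the graph read off afterwards) can be sketched directly. A toy example in Python, with the sets and functions made up for illustration:

```python
# A sketch of a boolean network in the sense described above: for each
# vertex v, a set S_v of input vertices and a boolean function
# B^{S_v} -> B. The connectivity ("topology") is induced by the S_v,
# not given in advance. Example functions are made up for illustration.

S = {"A": ["B", "C"], "B": ["A"], "C": ["C"]}

functions = {
    "A": lambda b, c: b or not c,   # f_A : B^{S_A} -> B
    "B": lambda a: not a,           # f_B : B^{S_B} -> B
    "C": lambda c: c,               # f_C : B^{S_C} -> B
}

def step(state):
    """Synchronous update: evaluate each f_v on the time-t states of S_v."""
    return {v: bool(functions[v](*(state[w] for w in S[v]))) for v in S}

# The induced edges -- the "topology" the Wikipedia passage mentions:
edges = [(w, v) for v in S for w in S[v]]
```

Note that the graph (`edges`) is derived from the data $(S_v, f_v)$, which is exactly the point made later in this thread: the graph structure is induced from the Boolean functions, not imposed beforehand.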

view this post on Zulip Adittya Chaudhuri (Jul 07 2025 at 06:04):

John Baez said:

Kevin Carlson said:

I bounced off the complicated notation for your updating function at first glance. Maybe you can just say what you mean in words too? There's a graph floating around here and yet nobody said anything like "the state depends on the state of all the neighbors"!

Adittya mentioned a graph, but I don't see why. Maybe the paper talks about a graph? I prefer to think about a boolean network: a set of vertices and, for each vertex $v$, a set $S_v$ of vertices together with a boolean function $\mathbb{B}^{S_v} \to \mathbb{B}$ telling us the state of that vertex as a function of the states of the vertices in $S_v$. If we start by assigning each vertex a boolean at time $t$, we can update the state at time $t+1$ using these functions in the hopefully obvious way.

My inspiration for thinking of a Boolean network as "some extra structure" on a monoid-labeled graph $(G, \ell)$ (which gives us a discrete dynamics of $(G, \ell)$) comes from page 2 of An effective network reduction approach to find the dynamical repertoire of discrete dynamic networks (please see the attached screenshot for the precise portion of the page).
boolnetwork.png

view this post on Zulip John Baez (Jul 07 2025 at 06:06):

What's the point of having the graph, if "experience shows us" that this is not enough and we need a boolean network as I defined it above? I would throw out the graph. Anyway, I would not worry about these details much until I figured out the "method" which is supposedly the main point of this paper. These are biologists writing, not mathematicians, so I don't expect them to be very precise, but they may (or may not) have an interesting "method".

view this post on Zulip Adittya Chaudhuri (Jul 07 2025 at 06:09):

I understand your point, but it said "not enough", though perhaps still necessary. That is why I thought of keeping both the graph structure and the "Boolean network" structure in my updating function here #theory: applied category theory > Networks in Biology @ 💬 . I may be misunderstanding something!!

view this post on Zulip Adittya Chaudhuri (Jul 07 2025 at 06:10):

John Baez said:

Anyway, I would not worry about these details much until I figured out the "method" which is supposedly the main point of this paper. These are biologists writing, not mathematicians, so I don't expect them to be very precise, but they may (or may not) have an interesting "method".

Thanks!! Yes, I agree with your point.

view this post on Zulip Adittya Chaudhuri (Jul 07 2025 at 16:33):

I am trying to write down my current understanding of stable motifs in a Boolean network as discussed in An effective network reduction approach to find the dynamical repertoire of discrete dynamic networks.

view this post on Zulip Adittya Chaudhuri (Jul 08 2025 at 05:47):

Sorry, I realised I was fundamentally misunderstanding Boolean networks. In particular, I realised that the graph structure of a Boolean network is induced from the Boolean functions on the nodes. Before, I was misunderstanding by thinking that "we start with a graph structure and then add the Boolean functions to it". I now realise what @John Baez explained here #theory: applied category theory > Networks in Biology @ 💬 . So, I decided to delete last night's post and redefine the notions according to my current understanding.

Boolean network:
A Boolean network $B$ consists of a finite set of nodes $V = \lbrace v_1, v_2, \ldots, v_n \rbrace$ along with, for each node $v_i$, a subset $S_{v_i} \subseteq V$ and a Boolean function $f_i \colon \mathbb{B}^{S_{v_i}} \to \mathbb{B}$.

Graph associated to a Boolean network:
Let $B=(v_i, S_{v_i}, f_i)_{i=1,2,\ldots,n}$ be a Boolean network. We define a graph $G_B=(E,V,s,t)$ whose vertices are the nodes of $B$ and which has an edge $w \to v_i$ for each $w \in S_{v_i}$.

Remark:
Note that $G_B$ does not permit multiple edges between an ordered pair of vertices.

Discrete dynamics of a Boolean network:
Let $B=(v_i, S_{v_i}, f_i)_{i=1,2,\ldots,n}$ be a Boolean network. Consider a family of functions $\lbrace \phi_i \colon \mathbb{N} \to \mathbb{B}^{S_{v_i}} \rbrace_{i=1,2,\ldots,n}$, whose members are called the updating functions of the nodes $v_i$. We define the discrete dynamics of $B$ as the $n$-tuple $F_B = \big( f_1 \circ \phi_1, f_2 \circ \phi_2, \ldots, f_n \circ \phi_n \big)$.

State of the Boolean dynamics:

Let $(B, F_B)$ be the discrete dynamics of the Boolean network $B=(v_i, S_{v_i}, f_i)_{i=1,2,\ldots,n}$. We define the state of $(B, F_B)$ at time $t \in \mathbb{N}$ as the $n$-tuple

$F(t) := \big( f_1 \circ \phi_1(t), f_2 \circ \phi_2(t), \ldots, f_n \circ \phi_n(t) \big) \in \mathbb{B}^n$,

and we define the state of the vertex $v_i$ at time $t \in \mathbb{N}$ as the projection of $F(t)$ to its $i$th coordinate.

Based on the above formalism, I will define the notion of stable motifs later today.

view this post on Zulip Nathaniel Virgo (Jul 08 2025 at 09:02):

There is something slightly strange about that definition: it could be that one of the functions $f_i$ doesn't depend on one of its arguments, say $j \in S_{v_i}$. Then the graph will have an edge from $j$ to $i$ even though there is no influence of $j$ on $i$, if that makes sense. I'm not sure whether this will matter for anything or not.

Maybe a better way to express that is: what's the right notion of morphism for Boolean networks? It seems like maybe you'd want such a network to be isomorphic to one where $j \notin S_{v_i}$.
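This kind of "fictitious input" can at least be detected mechanically: an argument of a Boolean function is irrelevant exactly when flipping it never changes the output. A small sketch (the example function is made up):

```python
from itertools import product

# A sketch of detecting the point above: an argument j of a boolean
# function f : B^n -> B is irrelevant iff flipping bit j never changes
# the output. Such a j could then be dropped from S_{v_i} to obtain an
# "isomorphic" network without the spurious edge. Example f is made up.

def depends_on(f, n, j):
    """Does the n-argument boolean function f depend on argument j?"""
    for bits in product([False, True], repeat=n):
        flipped = list(bits)
        flipped[j] = not flipped[j]
        if f(*bits) != f(*flipped):
            return True
    return False

# f(a, b, c) = a AND (b OR NOT b): both b and c are fictitious inputs,
# since b OR NOT b is constantly true and c is never used.
f = lambda a, b, c: a and (b or not b)
```

This check is exponential in the number of inputs, which is fine here since each $S_{v_i}$ is typically small.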

view this post on Zulip John Baez (Jul 08 2025 at 09:24):

I have thought about Nathaniel's point when I was defining "stock flow diagrams", which have "links" pointing from "stocks" to "flows" together with a function saying how a flow depends on those stocks that have links pointing to it. The function could be independent of one of its arguments! Then one link would be unnecessary. But instead of trying to forbid this, I decided it was better to allow it.

To disallow it is a bit like trying to say a constant function is not really a function. "What? You're telling me 3 is a function of x? That's absurd!" It feels convincing in a certain gut-level way, but I think it's mathematically very inconvenient.

view this post on Zulip Adittya Chaudhuri (Jul 08 2025 at 11:09):

Thanks!!

Below, I am trying to write down what I thought about the issue:

According to pages 3-4 (Expanded network) of the attached published version of An effective network reduction approach to find the dynamical repertoire of discrete dynamic networks (a screenshot of the relevant portion is attached),
An effective network reduction approach to find the dynamical repertoire of discrete dynamic networks.pdf
labeledBoolean.png

I think biologists often prefer to work with the following labeled version of the graph associated to a Boolean network:

Labeled Boolean graph:
Let $G_B$ be the graph associated to a Boolean network $B=(v_i, S_{v_i}, f_i)_{i=1,2,\ldots,n}$. For a monoid $M$, we define an $M$-labeled Boolean graph of $B$ as an $M$-labeled graph $(G_B, \ell \colon E \to M)$ (as defined in our paper Graphs with polarities).

Remark:
Note that in the attached diagram, the authors considered a $\mathbb{Z}/2$-labeled Boolean graph of some Boolean network $B$.

Now, I propose a solution to the issue raised by @Nathaniel Virgo as follows:

Let us work with a $\mathbb{Z}/3$-labeled Boolean graph, one of whose elements, say $0$, represents "no influence" [we discussed this in Example 3.4 of our paper Graphs with polarities].

Then, could I label the unnecessary edges with the element $0$?

view this post on Zulip John Baez (Jul 08 2025 at 11:46):

By the way, I think this issue is distracting from the more urgent goal of figuring out what "reduction approach" is being explained in An effective network reduction approach to find the dynamical repertoire of discrete dynamic networks.

view this post on Zulip John Baez (Jul 08 2025 at 11:47):

Our earlier paper was setting up a formalism, and for that it's nice to pay careful attention to details. But if one is trying to understand a biology paper one needs to start by trying to understand the main thing they're trying to say, and later worry about details.

view this post on Zulip Nathaniel Virgo (Jul 08 2025 at 11:48):

(sorry, I didn't mean to distract, the detail just popped out at me when I read the post!)

view this post on Zulip John Baez (Jul 08 2025 at 11:49):

No problem - I'm just trying to act like Adittya's advisor, since he doesn't seem to mind that, and I'm trying to push him into doing actual biology, since he seems to want that.

view this post on Zulip Adittya Chaudhuri (Jul 08 2025 at 13:35):

John Baez said:

Our earlier paper was setting up a formalism, and for that it's nice to pay careful attention to details. But if one is trying to understand a biology paper one needs to start by trying to understand the main thing they're trying to say, and later worry about details.

Thanks!! Yes, I got your point and fully agree with it!! I will focus on figuring out the "reduction approach" in that paper.

view this post on Zulip Adittya Chaudhuri (Jul 08 2025 at 13:35):

John Baez said:

I'm just trying to act like Adittya's advisor, since he doesn't seem to mind that, and I'm trying to push him into doing actual biology, since he seems to want that.

Thanks very much!! I am very glad and grateful for this!!

view this post on Zulip David Corfield (Jul 08 2025 at 14:06):

It would be interesting to look at commonalities and differences of approach in this field. In a paper along the lines of that Symmetries of Living Systems book I mentioned elsewhere,

the authors explain the steps of their work to understand something as complex as E. coli

image.png

The use of fibrations should make people here feel very at home.

view this post on Zulip John Baez (Jul 08 2025 at 15:23):

What's a 'TRN'? I'm assuming a 'GRN' is a gene regulatory network.

view this post on Zulip John Baez (Jul 08 2025 at 15:32):

Hmm, looking at the paper I don't see them defining 'TRN', but nor do I see them using that term except in the one section header that David gave us a screenshot of! So never mind.

There is a lot of interesting stuff in this paper. I see what seems like the bold claim that most of the gene regulatory network of E. coli (for example) is unnecessary to understand the behavior of this bacterium, or at least the most important aspects.

They start with a gene regulatory network (GRN) with 4692 genes and then trim it down to a 'minimal' GRN with just 42 genes. I don't yet understand the algorithm they use, and I also don't know what properties are supposed to be preserved by this massive simplification. But it sounds remarkable. I guess how remarkable it is depends on how much evolution penalizes inefficiency.

It could be that some seemingly unimportant aspects of the bacteria's GRN become important under some special conditions.

view this post on Zulip John Baez (Jul 08 2025 at 15:46):

I'm glad to see the key idea of 'fibration' for these GRNs seems to be taken from my friend Eugene Lerman and a collaborator: I remember Eugene working on it and it's nice to see someone has picked up on it!

Here's an interesting passage:

We have shown before [4, 10, 11, 24] that the use of fibrations allows for the breakdown of the network into its fundamental synchronized building blocks, the fiber building blocks. Each fiber belongs to a fiber building block, defined as the induced subgraph formed from the nodes in the fiber, and the nodes that send inputs to the fiber (the regulators).

[....]

We find that these fiber building blocks can be precisely characterized by just two numbers n (or φ) and ℓ, defining the |n, ℓ⟩ classes [10].

view this post on Zulip John Baez (Jul 08 2025 at 15:48):

Paper [10] is

view this post on Zulip John Baez (Jul 08 2025 at 15:50):

This passage in the paper makes my crackpot alarm ring:

The building blocks are classified into topological classes of input trees characterized by integer branching ratios and fractal golden ratios of Fibonacci sequences representing cycles of information.

Anyone who mentions 'fractal', 'golden ratio' and 'Fibonacci' in the same sentence makes me afraid.

view this post on Zulip Adittya Chaudhuri (Jul 08 2025 at 16:02):

On a different note (but it might be related), for the last few days (after @David Corfield shared the paper on symmetries of living systems), I have been trying to understand the math of graph fibrations from the paper Fibrations of graphs by Boldi and Vigna.

The week before last, one of my colleagues at my current institute (ISI Kolkata, Stat-Math Dept.) gave a talk on discrete $A$-homotopy of undirected simple graphs, in which he also discussed how "local isomorphism" properties imply cartesian lifting of morphisms in graph fibrations. Although his definition of graph is a bit more restrictive than ours, I find the notion of local isomorphism in the paper on graph fibrations (see e.g. page 25 of Fibrations of graphs) relatable.

view this post on Zulip Adittya Chaudhuri (Jul 08 2025 at 16:41):

Now again, I am continuing from here #theory: applied category theory > Networks in Biology @ 💬 towards network reduction methods discussed in An effective network reduction approach to find the dynamical repertoire of discrete dynamic networks

view this post on Zulip Adittya Chaudhuri (Jul 08 2025 at 17:51):

Expanded network of a Boolean network:

Let $B=(v_i, S_{v_i}, f_{v_i})_{i=1,2,\ldots,n}$ be a Boolean network. Then the expanded network of $B$ is the graph $G_{Exp(B)}$ of a Boolean network $Exp(B)$ obtained from $G_B$ via the following two operations:

1) For each node $v_i$, a complementary node $\bar{v_i}$ representing the negation of $v_i$ is added, and every regulation by the negation of $v_i$ is redrawn as an edge out of $\bar{v_i}$.

2) Each update function is written in disjunctive normal form $f_i = (s_1 \wedge s_2 \wedge \cdots \wedge s_k) \vee (s_{k+1} \wedge \cdots \wedge s_l) \vee \cdots$, where the $s_j$'s are either the states of elements of $S_{v_i}$ or negations of such states. In the same way, one can represent the negation $\bar{f_i}$ of the update function in a similar form. Then composite nodes $(v_1 v_2 \cdots v_k), (v_{k+1} v_{k+2} \cdots v_l), \ldots$ are added for every set of nodes involved in a conjunctive clause (AND-dependent relationship), and for every constituent node of a composite node an edge is drawn from the constituent node to the composite node. A simple example of this operation is attached in the screenshot.

examplesyn.png

Stable motifs are certain strongly connected components of $G_{Exp(B)}$, which I am going to define next!!

view this post on Zulip Adittya Chaudhuri (Jul 08 2025 at 17:55):

Before defining stable motifs, I will say a few things about what $Exp(B)$ gives us!!

view this post on Zulip Adittya Chaudhuri (Jul 08 2025 at 18:13):

According to the authors of that paper, $Exp(B)$ buys us the following:

Consequence of (1):

Consequence of (2):

view this post on Zulip David Corfield (Jul 08 2025 at 18:59):

John Baez said:

What's a 'TRN'? I'm assuming a 'GRN' is a gene regulatory network.

"A gene regulatory network (GRN) is often referred to as a transcriptional regulatory network (TRN), although the former is more general than the latter." (Stewart et al. book, p. 139)

view this post on Zulip David Corfield (Jul 08 2025 at 19:06):

John Baez said:

Anyone who mentions 'fractal', 'golden ratio' and 'Fibonacci' in the same sentence makes me afraid.

I don't think it's too concerning. From the book with Ian Stewart:

image.png

view this post on Zulip John Baez (Jul 08 2025 at 19:11):

Yes, I saw this chart in the paper I was reading. Maybe this is a case of people who haven't noticed that crackpots love to drop references to fractals, Fibonacci numbers and the golden ratio much more than ordinary scientists, and thus haven't learned that a sentence like the one they wrote is alarming rather than intriguing to many of us.

You may have heard of the physicist Paul Davies. He got in trouble talking about black holes and the golden ratio.

view this post on Zulip Adittya Chaudhuri (Jul 08 2025 at 19:11):

I realised I misunderstood here #theory: applied category theory > Networks in Biology @ 💬 while discussing the $\mathbb{Z}/2$-labeled Boolean graph. I feel the following may be the correct interpretation of labeledBoolean.png

The definition of $f_i \colon \mathbb{B}^{S_{v_i}} \to \mathbb{B}$ naturally gives a $\mathbb{Z}/2$-labeled graph when logical operators like NOT are used in the definition of $f_i$. Previously, I misinterpreted this by thinking that "we are incorporating an additional labeling on the graph associated with the Boolean network".

view this post on Zulip John Baez (Jul 08 2025 at 19:18):

What graph do you draw for $f_A = B$ OR $C$? Or is that boolean expression not allowed here?

view this post on Zulip John Baez (Jul 08 2025 at 19:19):

By the way, I have a lot of questions about what you've been explaining - I don't understand it.

view this post on Zulip Adittya Chaudhuri (Jul 08 2025 at 19:20):

For $f_A = B$, I would draw $B \xrightarrow{+} A$

view this post on Zulip John Baez (Jul 08 2025 at 19:20):

Okay. But that's not what I asked about.

view this post on Zulip Adittya Chaudhuri (Jul 08 2025 at 19:24):

John Baez said:

Okay. But that's not what I asked about.

I understood the problem. I am not able to distinguish between $f_A = B$ AND $C$ and $f_A = B$ OR $C$ by drawing a $\mathbb{Z}/2$-labeled graph. However, I am able to distinguish them in the expanded graph by introducing a composite node.
IMG_0377.PNG

view this post on Zulip Adittya Chaudhuri (Jul 08 2025 at 19:27):

BC is the composite node here.

view this post on Zulip Adittya Chaudhuri (Jul 08 2025 at 19:28):

John Baez said:

By the way, I have a lot of questions about what you've been explaining - I don't understand it.

I will try my best to answer. Also, sorry for not explaining well. I am trying to improve my explanations.

view this post on Zulip Adittya Chaudhuri (Jul 08 2025 at 19:35):

I think $f_A =$ NOT $C$ in $B$ transforms into $f_A = \bar{C}$ in $Exp(B)$, and hence $C \xrightarrow{-} A$ in $G_B$ becomes $\bar{C} \xrightarrow{+} A$ in $G_{Exp(B)}$. Here $B$ denotes the Boolean network and $Exp(B)$ denotes the expanded network of $B$.

view this post on Zulip John Baez (Jul 08 2025 at 19:38):

I'll probably ask my questions tomorrow, after rereading what you wrote.

view this post on Zulip Adittya Chaudhuri (Jul 08 2025 at 19:39):

Okay, I am also learning and writing at the same time, so maybe tomorrow my understanding will be better than it is today.

view this post on Zulip Adittya Chaudhuri (Jul 09 2025 at 06:09):

I think the dynamics of a Boolean network $B$ (i.e. a Boolean network along with its updating functions on nodes) naturally gives us a $\mathbb{Z}/2$-labeled graph $G_B$, whose signed edges represent the influences of the time-$t$ state of a neighbour $w \in S_{v_i}$ of a vertex $v_i$ on the time-$(t+1)$ state of $v_i$. I think the graph structure represents a synergistic (cumulative) influence of the time-$t$ states of all the neighbours of $v_i$ on the time-$(t+1)$ state of $v_i$ via "logical combinations" (especially AND) of the states of the neighbouring vertices $v \in S_{v_i}$ at time $t$, which determine the state of $v_i$ at time $t+1$.

In the attached file, I tried to explain it through examples.

Booleannetworkgraphs.PNG

I feel the logical operator "NOT" is the reason for using the \mathbb{Z}/2-labeled graph structure on the graph associated to the Boolean network.

view this post on Zulip Adittya Chaudhuri (Jul 09 2025 at 06:44):

The last post is my current understanding of what I think a Boolean network and its associated graph mean. It seems that according to this understanding I need to reformulate some of the definitions accordingly here #theory: applied category theory > Networks in Biology @ 💬 . However, before formalising fully, as @John Baez suggested before, I am first trying to figure out the authors' reduction methods discussed in that paper based on the understanding in my last post. Although I am not fully sure, it seems my last post makes more sense than what I was saying till yesterday night.

view this post on Zulip David Corfield (Jul 09 2025 at 10:35):

I guess if you were allowed two time steps, you could represent something acting as A OR B by negative edges from A and from B to a third node, and a negative edge from that to a fourth node.
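
David's construction works because of De Morgan's law: A OR B = NOT((NOT A) AND (NOT B)), so two layers of negative edges suffice. A quick illustrative truth-table check (a sketch of mine in Python, not from the thread):

```python
from itertools import product

# David Corfield's two-time-step construction: negative edges from A and B
# make the third node compute NOT(A) AND NOT(B); a negative edge from the
# third node makes the fourth node compute its negation, i.e. A OR B.

def third(a, b):
    # receives negative (inhibitory) edges from A and from B
    return (not a) and (not b)

def fourth(a, b):
    # receives a negative edge from the third node
    return not third(a, b)

for a, b in product([False, True], repeat=2):
    assert fourth(a, b) == (a or b)  # De Morgan: the fourth node is A OR B
```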

view this post on Zulip John Baez (Jul 09 2025 at 12:23):

Hmm. This seems messy, and I don't feel confident that the authors are doing things optimally, so instead of trying to understand this I'll ask some questions about the "big picture" - basically, why are they doing all this stuff.

Let me state my understanding so far. I'm going to completely avoid using math symbols or formulas, because those are often distracting when you're trying to understand basic ideas.

They start with a boolean network, which is a simple and clear concept, at least in my definition back here.

Then they try to build a graph from that boolean network - I guess a graph with edges labeled by signs. As far as I can tell, this graph is supposed to say whether one node affects another node "positively" or "negatively" (or not at all, if there's no edge). But I don't see how to determine this for an arbitrary boolean network: e.g. I can say

If it's Monday I'll eat a sandwich when I see it's 2 pm; if it's Tuesday I won't eat a sandwich when I see it's 2 pm

Does it being 2 pm affect my sandwich eating positively or negatively? This question has no answer; it's not a good question.

So, I'm already confused about what they're trying to do.

Then they try to build another graph, starting from the first one, called the "expanded network" of the Boolean network. Adittya says it's obtained from the first graph using two operations.

1) First, for each node we introduce a "complementary node" which is a kind of negation of the original one. E.g. if we had a node that means "I'm eating a sandwich", we'd put in a new node meaning "I'm not eating a sandwich". This is simple enough.

2) Then, we introduce a bunch of nodes called "composite nodes", using a procedure that's too complicated for me to understand - since I don't know the goal of this procedure.

Adittya did kindly try to explain the goal of doing 1) and 2):

Consequence of (1):

I still feel very confused. It almost seems that the authors don't like boolean networks, so they are trying to build a graph that expresses most of the information in the original boolean network. But they don't seem to be claiming their expanded graph captures all the information in the original boolean network, do they?

Is there any way to say what information they're trying to capture, and what information they're throwing out? Presumably the information they're trying to capture is supposed to be "more important" in some way.

view this post on Zulip David Corfield (Jul 09 2025 at 15:14):

John Baez said:

I'm glad to see the key idea of 'fibration' for these GRNs seems to be taken from my friend Eugene Lerman and a collaborator: I remember Eugene working on it and it's nice to see someone has picked up on it!

Looking around for other applications of fibrations, I see there's the FibLang work of @Fabrizio Romano Genovese, @fosco and @Caterina Puca (here and here). These look to understand natural language and its acquisition via fibrations. They rely on the result that any functor factors through a fibration. There's also work on locating obstructions to a functor being a fibration by Fabrizio and others.

On the other hand, I see that those using graph fibrations in biology are also interested in slight departures, which get called "pseudosymmetries", "quasifibrations", and "pseudo-balanced colorings" (sections 13.5.1-13.5.3 of the book).

I wonder if there's some useful cross-over. Grammatical language would resemble the latter's reduced gene networks, the logic of cognition and of the cell.

view this post on Zulip John Baez (Jul 09 2025 at 16:53):

David Corfield said:

It would be interesting to look at commonalities and differences of approach in this field. In a paper along the lines of that Symmetries of Living Systems book I mentioned elsewhere,

I'm enjoying this paper, not because it uses fibrations, but because it makes sense to me both biologically and mathematically.

view this post on Zulip David Corfield (Jul 09 2025 at 18:35):

Yes, it's nicely organised. The book has Makse also as one of its authors and covers similar ground, but at much greater length especially on the pure mathematics, as well as several further topics.

view this post on Zulip Adittya Chaudhuri (Jul 09 2025 at 19:28):

John Baez said:

Then they try to build a graph from that boolean network - I guess a graph with edges labeled by signs. As far as I can tell, this graph is suppose to say whether one node affects another node "positively" or "negatively" (or not at all, if there's no edge). But I don't see how to determine this for an arbitrary boolean network: e.g. I can say

Sorry, I was a bit distracted from the afternoon till night. I was attending a post-marriage ceremony of one of my closest friends (who is coincidentally also a mathematician, and works in algebraic geometry). I just returned from the ceremony venue.

For me, given a Boolean network \mathsf{B}, the signed edges of the graph G_{\mathsf{B}} may not say whether one node affects another node "positively" or "negatively". I may be misunderstanding something fundamental, but my interpretation of them is the following:

f_{B} := (A) \,\,\text{AND}\,\, (A_1) \,\,\text{AND}\,\, (\text{NOT}\,\, A_2) \,\,\text{AND}\,\, (A_3). In other words, the cumulative effects of the t-th states of the elements of G(-, B) (via logical operations) determine the (t+1)-th state of B.

In a way, logical operations on the t-th states of the source vertices determine the (t+1)-th state of the target vertex.

view this post on Zulip Adittya Chaudhuri (Jul 09 2025 at 19:33):

John Baez said:

Hmm. This seems messy, and I don't feel confident that the authors are doing things optimally,

I am also feeling the same!!

view this post on Zulip Adittya Chaudhuri (Jul 09 2025 at 20:16):

John Baez said:

I still feel very confused. It almost seems that the authors don't like boolean networks, so they are trying to build a graph that expresses most of the information in the original boolean network.

I feel the following are some features of the expanded network:

I am still trying to understand how they use their definition of stable motifs in their network reduction methods. In their network reduction method, they start by keeping the states of the nodes of the stable motifs fixed. However, I am not able to see why they suddenly define the notion of stable motifs in their expanded network and then fix their states, thus presumably justifying the name "stable".

It is also not clear to me

I feel they think their definition of a stable motif is good enough to assume that the states of the nodes of the stable motif will eventually be constant. However, at the moment, I am not able to see this from their definition of stable motif.

I am thinking about these things!!

view this post on Zulip Adittya Chaudhuri (Jul 09 2025 at 20:32):

John Baez said:

But they don't seem to be claiming their expanded graph captures all the information in the original boolean network, do they?

Is there any way to say what information they're trying to capture, and what information they're throwing out? Presumably the information they're trying to capture is supposed to be "more important" in some way.

I feel they are trying to define the notion of stable motifs in their expanded network, which by definition is not possible in the original Boolean network. (However, I doubt whether we really need to define stable motifs in their way in the expanded network; it may suffice to define certain usual motifs in the signed graph associated to the Boolean network.) According to the authors, their definition of a stable motif is good enough to assume that the states of the nodes of the stable motif will eventually be constant. However, at the moment, I am not able to see this from their definition of stable motif. They then use this fact to fix the states of the stable motifs as the starting point of their network reduction method.

view this post on Zulip Adittya Chaudhuri (Jul 10 2025 at 09:30):

Hi!! I think I found a formal way to express the definition of a Boolean network and its associated signed graph, which expresses the information that I informally discussed here #theory: applied category theory > Networks in Biology @ 💬 .

Below, I am writing down what I thought about it:

Boolean network:
A Boolean network B\mathsf{B} consists of the following:

Let T_{1} \colon \mathbb{N} \to \mathbb{Z}, n \mapsto (n-1).

We define the update function of the vertex v_i as the function f_{i} = \phi_i \circ \psi_{i} \circ T_1 \colon \mathbb{N} \to \mathbb{B}.

Thus, by definition, we have f_{i}(t+1) = \phi_i \circ \psi_i \circ T_1(t).

I interpret the above as follows:

The logical operations on the t-th states of the neighbouring nodes of v_i determine the (t+1)-th state of the node v_i. Here, I would expect that the definition of the function \phi_{i} \colon \mathbb{B}^{S_{v_i}} \to \mathbb{B} will involve the logical operations "NOT" and "AND". The composition \psi_{i} \circ T_1 gives us the t-th states of the neighbouring nodes of v_i.

Next, I will construct a \mathbb{Z}/2-labeled graph G_{\mathsf{B}} from \mathsf{B} as follows:

Thus, I feel the \mathbb{Z}/2-labeled graph G_{\mathsf{B}} behaves like a graphical model of the dynamics of the Boolean network \mathsf{B} (i.e. how the Boolean network \mathsf{B} changes from the t-th state to the (t+1)-th state).

view this post on Zulip John Baez (Jul 10 2025 at 10:09):

I'm struggling to understand what you wrote. What's the role of the functions \psi_i? Say it in words without symbols, please. Why does a boolean network need to involve functions, one for each vertex, from the integers to boolean functions on the set of vertices that affect that vertex? What do the integers represent here?

My definition of 'boolean network' didn't involve integers at all.
I just have enough information to update the boolean at each vertex, as a function of the booleans at the vertices that affect it. I don't understand why your definition is so much more complicated, and the introduction of integers is the most mysterious part.

view this post on Zulip Adittya Chaudhuri (Jul 10 2025 at 12:04):

John Baez said:

What's the role of the functions \psi_i? Say it in words without symbols, please.

The functions \psi_{i} tell us the states of the neighbouring vertices of v_{i} at a given time.

view this post on Zulip Adittya Chaudhuri (Jul 10 2025 at 12:18):

John Baez said:

Why does a boolean network need to involve functions, one for each vertex, from the integers to boolean functions on the set of vertices that affect that vertex? What do the integers represent here?

I wanted to express the fact that the function f_i tells us that the (t+1)-th state of the vertex v_i is determined by the t-th states of the neighbours of v_i. For this, I needed a function that takes in the time t and gives out the time t-1. Since I cannot subtract in \mathbb{N}, I took \mathbb{Z}. I formalized this by taking the composition f_i := \mathbb{N} \xrightarrow{T_1} \mathbb{Z} \xrightarrow{\psi_i} \mathbb{B}^{S_{v_i}} \xrightarrow{\phi_i} \mathbb{B}, where T_1 \colon \mathbb{N} \to \mathbb{Z}, n \mapsto (n-1).

view this post on Zulip Adittya Chaudhuri (Jul 10 2025 at 12:29):

John Baez said:

My definition of 'boolean network' didn't involve integers at all.
I just have enough information to update the boolean at each vertex, as a function of the booleans at the vertices that affect it. I don't understand why your definition is so much more complicated, and the introduction of integers is the most mysterious part.

I am sorry!! Maybe there is a lot of redundant information in my definition that I am yet to see. I am trying to make my definition simpler.

I feel my \phi_{i} is your Boolean function at the vertex v_i.

To incorporate the "updating information" formally in the setting, as I explained in my previous post, I wrote the other functions, which I guess is making my definition complicated. However, I agree that \phi_i is the most important information for updating the Boolean at each vertex.

view this post on Zulip John Baez (Jul 10 2025 at 13:44):

Adittya Chaudhuri said:

John Baez said:

Why does a boolean network need to involve functions, one for each vertex, from the integers to boolean functions on the set of vertices that affect that vertex? What do the integers represent here?

I wanted to express the fact that the function f_i tells us that the (t+1)-th state of the vertex v_i is determined by the t-th states of the neighbours of v_i. For this, I needed a function that takes in the time t and gives out the time t-1. Since I cannot subtract in \mathbb{N}, I took \mathbb{Z}. I formalized this by taking the composition f_i := \mathbb{N} \xrightarrow{T_1} \mathbb{Z} \xrightarrow{\psi_i} \mathbb{B}^{S_{v_i}} \xrightarrow{\phi_i} \mathbb{B}, where T_1 \colon \mathbb{N} \to \mathbb{Z}, n \mapsto (n-1).

Okay, so \mathbb{Z} is time. But I don't think "time" should be part of the definition of a boolean network. Time, and in particular integer-valued time, is part of one possible semantics for boolean networks.

view this post on Zulip John Baez (Jul 10 2025 at 13:46):

I wanted to express the fact that the function f_i tells us that the (t+1)-th state of the vertex v_i is determined by the t-th states of the neighbours of v_i. For this, I needed a function that takes in the time t and gives out the time t-1. Since I cannot subtract in \mathbb{N}, I took \mathbb{Z}.

If you're trying to compute the next state as a function of the current state, you just need to describe a function that maps states to states. You don't need to mention time at all. (And you definitely don't need to subtract 1, since subtracting one takes us one time step into the past, not the future!)

view this post on Zulip John Baez (Jul 10 2025 at 13:52):

As I've said before, I think a boolean network is

1) a finite set V of nodes,
2) a set S_v of nodes for each node v, and
3) a function \phi_v : \mathbb{B}^{S_v} \to \mathbb{B} for each node v.

That's all.

A state of a boolean network is a function

s: V \to \mathbb{B}

From a boolean network it's easy to define a time evolution function

F: \mathbb{B}^V \to \mathbb{B}^V

that maps states to states. (I'll leave this as an exercise: here we use the functions \phi_v.)

Thus, if we have a state s, we get a sequence of states F(s), F(F(s)), \dots which describe how the state evolves in time, and we can define the state at time n \in \mathbb{N} to be F^n(s).
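
The exercise has a short computational sketch. Here is a minimal Python encoding of the definition above (the names S, phi, evolve are my own choices, not from the thread): S[v] lists the nodes affecting v, phi[v] is the update function \phi_v, and evolve is one application of F.

```python
def evolve(S, phi, s):
    """One step of the time evolution F : B^V -> B^V.
    S maps each node v to the tuple of nodes affecting it,
    phi maps each node v to its update function phi_v : B^{S_v} -> B,
    and a state s is a dict from nodes to booleans."""
    return {v: phi[v](*(s[w] for w in S[v])) for v in S}

# Toy example: two nodes, each updating to the negation of the other.
S = {"A": ("B",), "B": ("A",)}
phi = {"A": lambda b: not b, "B": lambda a: not a}

s = {"A": True, "B": True}
s1 = evolve(S, phi, s)    # F(s)   = {"A": False, "B": False}
s2 = evolve(S, phi, s1)   # F^2(s) = {"A": True,  "B": True}: a 2-cycle
```

Iterating evolve n times gives F^n(s), the state at time n.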

view this post on Zulip John Baez (Jul 10 2025 at 13:55):

This seems much simpler than bringing in the mysterious functions \psi_i, and all this stuff about integers, including the function T_1 : \mathbb{N} \to \mathbb{Z}.

Also - this is a smaller stylistic point - there's no need to index the vertices by numbers i = 1, \dots, n. Indexing by numbers is often a mistake. You can use the vertices to name themselves! For example when you write "the vertex v_i" you can just write "the vertex v", and when you write S_{v_i} you can just write S_v, and when you write \phi_i you can write \phi_v. That simplifies some expressions, and it avoids bringing in these irrelevant numbers i = 1, \dots, n, which just distract us from the actual ideas.

view this post on Zulip Adittya Chaudhuri (Jul 10 2025 at 20:19):

John Baez said:

As I've said before, I think a boolean network is

1) a finite set V of nodes,
2) a set S_v of nodes for each node v, and
3) a function \phi_v : \mathbb{B}^{S_v} \to \mathbb{B} for each node v.

That's all.

A state of a boolean network is a function

s: V \to \mathbb{B}

From a boolean network it's easy to define a time evolution function

F: \mathbb{B}^V \to \mathbb{B}^V

that maps states to states. (I'll leave this as an exercise: here we use the functions \phi_v.)

Thus, if we have a state s, we get a sequence of states F(s), F(F(s)), \dots which describe how the state evolves in time, and we can define the state at time n \in \mathbb{N} to be F^n(s).

Thanks very much!! I find your ideas very interesting and much more elegant and natural than my approach of incorporating the dynamics of a Boolean network by creating an unnecessarily complicated definition of a Boolean network.

In particular, I find your idea of generating the time evolution function F \colon \mathbb{B}^{V} \to \mathbb{B}^{V} of a Boolean network from the underlying structure of your Boolean network very beautiful. In other words (as you mentioned as part of the exercise you proposed), every state s \colon V \to \mathbb{B} induces by restriction a family of maps \lbrace s_{v} \colon S_{v} \to \mathbb{B} \rbrace_{v \in V}, which gives us a map \bar{s} \colon V \to \mathbb{B} that takes v to \phi_{v}(s_{v}) \in \mathbb{B}.

Thus, F(s) := \bar{s}. I also liked very much the idea of using the "n-times self-composition of F" as the evolution of the Boolean network at time t = n; it is much more elegant than my approach of using the map \mathbb{N} \to \mathbb{Z} \to \mathbb{B}^{S_{v}} to talk about time evolution.

view this post on Zulip Adittya Chaudhuri (Jul 10 2025 at 20:21):

John Baez said:

This seems much simpler than bringing in the mysterious functions \psi_i, and all this stuff about integers, including the function T_1 : \mathbb{N} \to \mathbb{Z}.

I fully agree. Thanks very much!!

view this post on Zulip Adittya Chaudhuri (Jul 10 2025 at 20:25):

John Baez said:

Also - this is a smaller stylistic point - there's no need to index the vertices by numbers i = 1, \dots, n. Indexing by numbers is often a mistake. You can use the vertices to name themselves! For example when you write "the vertex v_i" you can just write "the vertex v", and when you write S_{v_i} you can just write S_v, and when you write \phi_i you can write \phi_v. That simplifies some expressions, and it avoids bringing in these irrelevant numbers i = 1, \dots, n, which just distract us from the actual ideas.

Thanks!! Yes, I agree with your point. I will use these notational conventions from now on.

view this post on Zulip Adittya Chaudhuri (Jul 10 2025 at 20:30):

John Baez said:

As I've said before, I think a boolean network is

1) a finite set V of nodes,
2) a set S_v of nodes for each node v, and
3) a function \phi_v : \mathbb{B}^{S_v} \to \mathbb{B} for each node v.

That's all.

Thanks!! Now, the above definition makes full sense to me!!

view this post on Zulip Adittya Chaudhuri (Jul 10 2025 at 20:43):

I think then associated to any Boolean network B (i.e. your definition), we can still construct a \mathbb{Z}/2-labeled graph G_{B} as

view this post on Zulip Adittya Chaudhuri (Jul 10 2025 at 20:51):

Although I do not directly feel the need of constructing this \mathbb{Z}/2-labeled graph at all. Except, yes, since this is a graph in a graph-theoretic sense, we may understand the Boolean network from the point of view of graph theory or the homology theory of monoids, or by identifying the motifs in this \mathbb{Z}/2-labeled graph via Kleisli morphisms.

Also, maybe another interesting aspect of representing a Boolean network as a \mathbb{Z}/2-labeled graph is the availability of "ACT compositional theorems based on structured cospans" for creating a compositional framework for your definition of Boolean networks.

view this post on Zulip Adittya Chaudhuri (Jul 10 2025 at 20:55):

However, I think it could be interesting if we define a double category of Boolean networks and then define a double functor from this double category to the double category of \mathbb{Z}/2-labeled graphs (as we defined in our paper) as a functorial semantics of Boolean networks (your definition)?

view this post on Zulip Adittya Chaudhuri (Jul 10 2025 at 21:03):

So, first I want to check the following:

Does the association \mathsf{B} \mapsto G_{\mathsf{B}} give a functor from the appropriate category of Boolean networks to the category \mathsf{Gph}/G_{\mathbb{Z}/2}?

view this post on Zulip John Baez (Jul 10 2025 at 21:08):

I'm glad my ideas seem useful.

Adittya Chaudhuri said:

I think then associated to any Boolean network B (i.e. your definition), we can still construct a \mathbb{Z}/2-labeled graph G_{B} as

I'm not sure this rule for associating edges with + or - is well-defined. Suppose \phi_v is a function of two boolean variables which I will call A and B, given by

\phi_v(A, B) = (A and B) or (not(A) and not(B))

Do you label the edges with + or - in this case?
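
This can be checked mechanically: an edge into v deserves a + label only when \phi_v is monotone increasing in that input, and a - only when it is monotone decreasing. A small illustrative sketch in Python (my own, not from the paper) confirms that the function above is neither in either input, so no single sign is well-defined:

```python
from itertools import product

def influence_signs(phi, i, n):
    """Signs of the effect of input i on an n-input boolean function phi:
    '+' if raising input i can raise the output in some context,
    '-' if raising input i can lower the output in some context."""
    signs = set()
    for bits in map(list, product([False, True], repeat=n - 1)):
        lo = phi(*bits[:i], False, *bits[i:])   # input i held low
        hi = phi(*bits[:i], True, *bits[i:])    # input i held high
        if hi and not lo:
            signs.add("+")
        if lo and not hi:
            signs.add("-")
    return signs

phi_v = lambda a, b: (a and b) or ((not a) and (not b))
influence_signs(phi_v, 0, 2)                  # {'+', '-'}: sign not well-defined
influence_signs(lambda a, b: a and b, 0, 2)   # {'+'}: AND is unambiguous
```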

view this post on Zulip Adittya Chaudhuri (Jul 10 2025 at 21:10):

I want to use only the "AND" and "NOT" operators. For the time being I am excluding the OR operator. I know I may sound very unnatural, but I agree that I am confused about how to create the signed graph structure induced from the OR operator.

view this post on Zulip John Baez (Jul 10 2025 at 21:19):

If the paper we're discussing develops an interesting "network reduction approach", it makes sense to study these questions. If it doesn't, these questions may not be worth studying.

view this post on Zulip John Baez (Jul 10 2025 at 21:20):

I still do not understand their "network reduction approach". Do you?

view this post on Zulip Adittya Chaudhuri (Jul 10 2025 at 21:21):

John Baez said:

I still do not understand their "network reduction approach". Do you?

Not yet clear. I am trying to improve my understanding of their method considerably by tomorrow. At least, now we have a definition of what I think a Boolean network is.

view this post on Zulip Adittya Chaudhuri (Jul 10 2025 at 21:22):

John Baez said:

If the paper we're discussing develops an interesting "network reduction approach", it makes sense to study these questions. If it doesn't, these questions may not be worth studying.

I agree.

view this post on Zulip Adittya Chaudhuri (Jul 10 2025 at 21:24):

They define certain "minimal strongly connected components" of their Expanded networks as Stable motifs. Then, they assumed that the states of the vertices in this component will be fixed eventually. This is the starting point of their network reduction method.
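
The purely graph-theoretic ingredient in that starting point is computing strongly connected components. A hedged sketch (Kosaraju's algorithm in Python on a toy graph of my own; the authors' actual stable-motif procedure additionally imposes consistency conditions on the node states in the expanded network):

```python
def sccs(adj):
    """Strongly connected components of a directed graph given as
    {node: list of successor nodes}, via Kosaraju's algorithm."""
    order, seen = [], set()

    def dfs(graph, v, out):
        seen.add(v)
        for w in graph.get(v, ()):
            if w not in seen:
                dfs(graph, w, out)
        out.append(v)  # record v once all its successors are finished

    for v in adj:               # first pass: finishing order on adj
        if v not in seen:
            dfs(adj, v, order)

    rev = {v: [] for v in adj}  # reverse every edge
    for v, ws in adj.items():
        for w in ws:
            rev[w].append(v)

    seen.clear()
    components = []
    for v in reversed(order):   # second pass: DFS on the reversed graph
        if v not in seen:
            comp = []
            dfs(rev, v, comp)
            components.append(frozenset(comp))
    return components

# Toy example: a 3-cycle A -> B -> C -> A with a tail edge C -> D.
adj = {"A": ["B"], "B": ["C"], "C": ["A", "D"], "D": []}
components = sccs(adj)  # the cycle {A, B, C} is one component; {D} is another
```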

view this post on Zulip Adittya Chaudhuri (Jul 10 2025 at 21:38):

On a different note, I just found this paper Decomposition of Boolean networks: An approach to modularity of biological systems by Claus Kadelka, Reinhard Laubenbacher, David Murrugarra, Alan Veliz-Cuba, and Matthew Wheeler, which studies a decomposition theory of Boolean networks in a non-category-theoretic way. I will read what they have done.

view this post on Zulip John Baez (Jul 10 2025 at 22:08):

They define certain "minimal strongly connected components" of their Expanded networks as Stable motifs. Then, they assumed that the states of the vertices in this component will be fixed eventually.

Do they assume it or do they prove it? Is it true or not? You can't make something true just by assuming it.

I'm getting more and more suspicious of this paper.

If you decide this paper is junk, maybe you should switch to reading some better ones, like the paper you just mentioned (if it's better), or this one that David pointed out:

which seems pretty good. In fact I think this paper, and related papers by these authors, and the book

provide an easy way to start doing interesting mathematics connected to gene regulatory networks. I thank @David Corfield for pointing us toward these papers!

view this post on Zulip Adittya Chaudhuri (Jul 11 2025 at 05:52):

John Baez said:

They define certain "minimal strongly connected components" of their Expanded networks as Stable motifs. Then, they assumed that the states of the vertices in this component will be fixed eventually.

Do they assume it or do they prove it? Is it true or not? You can't make something true just by assuming it.

I am not able to see the proof in the main body of the paper; in the main body I think they assumed it. Previously, I was mostly focussed on the main body of the paper itself and somehow did not look at the Appendix. Now, when I look at the Appendix, it seems they might have given the proof. I think I need to read and understand their proofs (page 15 onwards in An effective network reduction approach to find the dynamical repertoire of discrete dynamic networks) before I can conclude whether they have assumed or proved that the states of the vertices in this component will eventually be fixed.

view this post on Zulip Adittya Chaudhuri (Jul 11 2025 at 06:10):

John Baez said:

If you decide this paper is junk, maybe you should switch to reading some better ones, like the paper you just mentioned (if it's better), or this one that David pointed out:

which seems pretty good. In fact I think this paper, and related papers by these authors, and the book

provide an easy way to start doing interesting mathematics connected to gene regulatory networks. I thank David Corfield for pointing us toward these papers!

Yes, I also thank @David Corfield very much for pointing us to these papers. I went through the abstract of the paper Complexity reduction by symmetry: uncovering the minimal regulatory network for logical computation in bacteria. It looks very interesting to me too. Since I was already reading about the math of graph fibrations from the paper Fibrations of graphs by Boldi and Vigna, I think it would be interesting (both from the point of view of usefulness in biology and of developing interesting mathematics inspired by biology) to understand how these concepts can be used to do useful things for studying networks in biology, and GRNs in particular.

I also read the abstract of the paper on decomposition of Boolean networks that I shared before. At the moment I find David's suggested paper and the book more interesting, so I would prefer to read those before I start reading the decomposition paper.

Also, along with this, I would like to understand the proofs present in the Appendix of the paper An effective network reduction approach to find the dynamical repertoire of discrete dynamic networks to verify whether this paper's claim and methodology are compatible with each other.

view this post on Zulip David Corfield (Jul 11 2025 at 08:53):

I'd be very interested in thinking about this graph fibration approach. Here's a question:

We know that a graph homomorphism is a fibration if and only if the functor induced between the respective free categories is a discrete fibration. We also know that any functor between categories can be factorized through a fibration. What then if we find a graph homomorphism that is close to being a graph fibration (as in 13.5 of the book)? We might measure this by inducing a functor on the respective free categories, and then using techniques for measuring departure from being a discrete fibration, as here. Would the factorization of this functor through a fibration correspond to factoring the graph homomorphism through a graph fibration?

Perhaps this is an area to investigate later, but there's bound to be a great range of ideas from CT to bring over to the biology.

view this post on Zulip Adittya Chaudhuri (Jul 11 2025 at 11:42):

David Corfield said:

I'd be very interested in thinking about this graph fibration approach.

Thanks!! I am very glad to know that.

view this post on Zulip Adittya Chaudhuri (Jul 11 2025 at 12:00):

David Corfield said:

We also know that any functor between categories can be factorized through a fibration.

By "fibration" do you mean a [[Grothendieck fibration]], or fibrations in the [[canonical model structure on Cat]], or in some other model structure on \mathsf{Cat}? However, from the reference you suggested (which measures departure from being a discrete fibration), I think you meant Grothendieck fibrations?

If I assume Grothendieck fibration, then I am not able to see how this statement is true when we are not working in the category of groupoids (which is certainly the case for us, because we are working in the framework of directed graphs, and hence we would get the free category on a graph rather than the free groupoid on a graph). More precisely, the fibrations of the [[canonical model structure on groupoids]] are precisely the [[isofibration]]s, which are the same as Grothendieck fibrations (Example 4.2 in [[isofibration]]). However, I think this statement is not true when we are working in \mathsf{Cat} (for example, please see here).

view this post on Zulip Adittya Chaudhuri (Jul 11 2025 at 12:05):

David Corfield said:

there's bound to be a great range of ideas from CT to bring over to the biology.

I agree !!

view this post on Zulip Adittya Chaudhuri (Jul 11 2025 at 12:11):

David Corfield said:

We might measure this by inducing a functor on the respective free categories, and then using techniques for measuring departure from being a discrete fibration, as here.

Thanks very much!! I find this idea quite interesting!! I will read the paper you shared.

view this post on Zulip David Corfield (Jul 11 2025 at 12:51):

Adittya Chaudhuri said:

By "fibration" do you mean...

It's a reference to the result recorded at [[comprehensive factorization system]], factorizing a functor into a final functor and a discrete fibration.

view this post on Zulip Adittya Chaudhuri (Jul 11 2025 at 13:31):

Thanks for clarifying!! Now, I understand your question:

"Would the factorization of this functor through a fibration correspond to factoring the graph homomorphism through a graph fibration?"

Interesting!! I am thinking on this question.

view this post on Zulip Adittya Chaudhuri (Jul 11 2025 at 14:36):

At the moment, I am not able to see any "answer to your question". I will work with some small examples first.

view this post on Zulip Adittya Chaudhuri (Jul 11 2025 at 14:58):

David Corfield said:

It's a reference to the result recorded at [[comprehensive factorization system]], factorizing a functor into a final functor and a discrete fibration.

I was not aware of this particular factorisation till today. So, thanks very much again for mentioning this factorisation. I would be curious to know what would be a suitable analogue of a final functor "for graphs".

view this post on Zulip David Corfield (Jul 11 2025 at 15:38):

We might see for ϕ:GH\phi: G \to H, a homomorphism of graphs, whether the induced functor between free categories, F(ϕ):F(G)F(H)F(\phi): F(G) \to F(H), factorizes in such a way that the intermediate category is free on some graph.

view this post on Zulip Kevin Carlson (Jul 11 2025 at 18:51):

I’m pretty sure that any discrete fibration π:CD\pi: C\to D over a free category has free domain. A morphism ff in CC lies over a composite of indecomposables in DD, which gives a unique factorization of ff into morphisms lying over indecomposables, repeatedly applying the discrete fibration property; and a morphism over an indecomposable is actually indecomposable because π\pi reflects identities.

view this post on Zulip Adittya Chaudhuri (Jul 11 2025 at 20:48):

Thanks @Kevin Carlson for the explanation !!

@David Corfield , then, it seems that for every homomorphism of graphs ϕ ⁣:GH\phi \colon G \to H, the induced functor between free categories F(ϕ) ⁣:F(G)F(H)F(\phi) \colon F(G) \to F(H) factorises (via the comprehensive factorization system) in such a way that the intermediate category is free on some graph.

In a way, I think this also means that the comprehensive factorization system on Cat\mathsf{Cat} induces a comprehensive factorization system on the Kleisli category of the monad associated to the free-forgetful adjunction between Gph\mathsf{Gph} and Cat\mathsf{Cat}.

view this post on Zulip Adittya Chaudhuri (Jul 11 2025 at 22:34):

@John Baez here #theory: applied category theory > Graphs with polarities @ 💬 you mentioned that a Boolean network is a "sort of labeled hypergraph". Inspired by your description, I was trying to think about how the underlying graph structure of a Boolean network influences the states of a particular vertex in the Boolean network. In particular, I was trying to see whether the Boolean "rig" can be used to capture indirect and cumulative influences.

To understand this, I produced a mental image of "such a graph". I am attaching a simple example of such a mental image.
compositeindirectBoolean.PNG

It is also possible that I am misunderstanding something while creating the mental image.

view this post on Zulip David Corfield (Jul 12 2025 at 06:35):

You might be interested in Makse et al.'s ways of representing hypergraphs in the chapter where they extend fibrations to hypergraphs:

image.png

view this post on Zulip David Corfield (Jul 12 2025 at 06:58):

Given the "noise" of biological systems perturbing perfect fibrational symmetry, leading to the search for what Makse et al. call "quasifibrations", there might be a useful framing where given a graph GG one looks to find a graph HH and homomorphism ϕ:GH\phi: G \to H, which in some sense minimizes the size of HH while maximizing the degree to which ϕ\phi is a fibration.

view this post on Zulip John Baez (Jul 12 2025 at 09:58):

By the way, I'm not sold on calling fibrations 'symmetry'. I see that those authors want to push for a generalized concept of symmetry more suited to biology than physics: they make a big case for it at the beginning of their book. I agree that it's very interesting to study systems that admit a kind of 'projection', e.g. a fibration, down to some simpler system! But I don't yet want to call this 'symmetry'.

Earlier Stewart and Golubitsky had made a case for groupoid symmetries in biology:

I'm completely sold on calling groupoid actions 'symmetries': there's enough similarity in the technical machinery between groups and groupoids that we can generalize lots of methods from the former to the latter.

If every fibration p:XYp: X \to Y gives rise to a groupoid, such that pp can be seen as 'modding out by the groupoid action', then I'll be very happy and I'll see how to understand fibrations as 'symmetry'. But I don't know any result like this.

Mind you, discussing semantic boundaries, like what precisely counts as 'symmetry', is not very interesting to me. Far more interesting is: what can you do with fibrations of graphs in biology? I could say a bit about this sometime.

David wrote:

We know that a graph homomorphism is a fibration if and only if the functor induced between the respective free categories is a discrete fibration.

I believe it, but why do "we know" it? Is this some sort of well-known result?

view this post on Zulip David Corfield (Jul 12 2025 at 11:37):

Just a snatched moment.

That's one way of defining graph fibration as inducing a fibration on free categories.

view this post on Zulip John Baez (Jul 12 2025 at 11:46):

Okay, but who does that? The big book gives a more nuts-and-bolts definition of graph fibration (Definition 6.7) as a graph homomorphism such that blah-di-blah-di-blah. Do they give the more high-level definition somewhere?

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 11:54):

John Baez said:

If every fibration p:XYp: X \to Y gives rise to a groupoid, such that pp can be seen as 'modding out by the groupoid action', then I'll be very happy and I'll see how to understand fibrations as 'symmetry'. But I don't know any result like this.

The only partial result that I can think of:

Let π ⁣:[E1E0][X1X0] \pi \colon [E_1 \rightrightarrows E_0 ] \to [X_1 \rightrightarrows X_0 ] be a fibration of categories equipped with a splitting [[cleavage]] C ⁣:E0×π0,X0,tX1Cart(E1) C \colon E_0 \times_{\pi_0, X_0, t} X_1 \to Cart ({E_1}), where Cart(E1)Cart ( {E_1}) is the set of cartesian arrows in E1E_1 with respect to the functor π\pi. Now, consider the category Fib(π)\mathsf{Fib}(\pi).

Then, the map CC defines an action of the category [X1X0][X_1 \rightrightarrows X_0] on the set E0E_0 in the following sense:

The last two conditions (above) should follow from the fact that CC is a splitting cleavage.

In other words, it's a rewording of the fact that "contravariant functors correspond to split fibrations". So, the above description defines a contravariant functor

C ⁣:[X1X0]Fib(π)C^{*} \colon [X_1 \rightrightarrows X_0] \to \mathsf{Fib}(\pi)

view this post on Zulip John Baez (Jul 12 2025 at 12:12):

The big book seems to be arguing that a fibration resembles a map p:XX/Gp: X \to X/G where GG is a group acting on a set, in that the quotient X/GX/G is a kind of 'simplified version' of XX where any two points related by a symmetry are identified. So it might be interesting to see if anything along those lines can be turned into a theorem.

But I don't really care too much about whether graph fibrations are 'like symmetries'. I think the biological applications are more interesting.

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 12:19):

I think given an action of a group GG on a set XX, we can define an action groupoid [X×GX][X \times G \rightrightarrows X]. Then, I think the natural projection map π ⁣:[X×GX][G]\pi \colon [X \times G \rightrightarrows X] \to [G \rightrightarrows *] is a fibration of groupoids.
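To spell out why this projection is even a discrete fibration, here is a sketch in LaTeX-style notation (my own notation, not from the thread: I write X//G for the action groupoid, BG for [G ⇉ *], and use the convention that (x, g) is a morphism x → x·g):

```latex
% The projection from the action groupoid to BG = [G ⇉ *]:
%   on objects:    x            ↦  *
%   on morphisms:  (x, g) : x → x·g   ↦  g
% Given g in BG and any object x of X//G (every object lies over the unique
% object *), the arrow (x·g⁻¹, g) : x·g⁻¹ → x is the unique lift of g with
% target x, since (y, g) has target x iff y·g = x iff y = x·g⁻¹.
\pi \colon X /\!/ G \longrightarrow \mathbf{B}G,
\qquad \pi(x) = \ast, \qquad \pi(x, g) = g .
```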

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 12:22):

John Baez said:

The big book seems to be arguing that a fibration resembles a map p:XX/Gp: X \to X/G where GG is a group acting on a set, in that the quotient X/GX/G is a kind of 'simplified version' of XX where any two points related by a symmetry are identified. So it might interesting to see if anything along those lines can be turned into a theorem.

I see. I got your point!!

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 12:22):

John Baez said:

But I don't really care too much about whether graph fibrations are 'like symmetries'. I think the biological applications are more interesting.

I agree.

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 12:31):

Now, let's see what happens if we assume David's definition of fibration of graphs, and apply the discussion in #theory: applied category theory > Networks in Biology @ 💬 :

I am assuming the following definition:

A homomorphism of graphs ϕ ⁣:GH\phi \colon G \to H is called a graph fibration if Free(ϕ) ⁣:Free(G)Free(H)\mathsf{Free}(\phi) \colon \mathsf{Free}(G) \to \mathsf{Free}(H) is a discrete fibration.

Since we are in the framework of discrete fibrations, we do not need to worry about a cleavage, as the choice of cartesian lift is uniquely determined by the definition of discrete fibration. Also, note that discrete fibrations are split fibrations by default. Thus we can legitimately apply the above discussion to get a functor τ ⁣:Free(H)Fib(Free(ϕ))\tau \colon \mathsf{Free}(H) \to \mathsf{Fib}(\mathsf{Free}(\phi)).

view this post on Zulip John Baez (Jul 12 2025 at 12:33):

Adittya Chaudhuri said:

I think given an action of a group GG on a set XX, we can define an action groupoid [X×GX][X \times G \rightrightarrows X]. Then, I think the natural projection map π ⁣:[X×GX][G]\pi \colon [X \times G \rightrightarrows X] \to [G \rightrightarrows *] is a fibration of groupoids .

Right, good point! But notice that this projection is doing something completely different from what the authors of that book are discussing: simplifying XX by identifying points in the same orbit. This is simplifying the action groupoid down to the group itself!

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 12:35):

Thanks. I see. I got your point. I think I need to read the definition of fibration in that book in more detail.

view this post on Zulip John Baez (Jul 12 2025 at 12:37):

I think it's just a map of graphs s.t. the resulting map of free categories is a discrete fibration - but they say it a lot more slowly, since they're not using category theory.

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 12:38):

Thanks. I see. But in that case, we may think of the following #theory: applied category theory > Networks in Biology @ 💬

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 12:39):

However, I agree, at the moment my description is not "expressing" what you said here #theory: applied category theory > Networks in Biology @ 💬

view this post on Zulip John Baez (Jul 12 2025 at 12:40):

What I really want to think about is not this abstract nonsense but the five-step method of simplifying graphs explained in this paper:

Luis A. Álvarez-García, Wolfram Liebermeister, Ian Leifer and Hernán A. Makse, Complexity reduction by symmetry: uncovering the minimal regulatory network for logical computation in bacteria.

They seem to be using graphs (not even signed graphs???) as a way of describing a certain class of discrete possibilistic dynamical systems, but they don't spell this out clearly enough for me.

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 12:42):

I got your point. I will read and try to understand their "5-step graph simplification method".

view this post on Zulip John Baez (Jul 12 2025 at 12:42):

By a discrete possibilistic dynamical system I mean a set SS together with a map TT from SS to its power set P(S)P(S), which describes how each state (= element of SS) can evolve to any state in the subset T({s})T(\{s\}).

Of course 'discrete possibilistic dynamical system' is just a silly name for a relation from a set to itself. But there's a nice analogy between probability theory and possibility theory, which this term is supposed to evoke. In probability theory we assign a number between 0 and 1 to each event, which tells us how probable that event is. In possibility theory we just assign either 0 or 1 to each event, which tells us whether that event is possible.
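A minimal sketch of this in Python (my own illustration, not from the discussion): the state set S is the node set of a directed graph, T sends a state to its set of out-neighbours, and one time step sends a set of possible states to everything reachable along one edge.

```python
def make_transition(edges):
    """Build T : S -> P(S) from a list of directed edges (v, w)."""
    T = {}
    for v, w in edges:
        T.setdefault(v, set()).add(w)
        T.setdefault(w, set())  # ensure every node appears, even sinks
    return T

def step(T, states):
    """One time step: the set of states possible after any current state."""
    return set().union(*(T[s] for s in states)) if states else set()

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "a")]
T = make_transition(edges)
print(step(T, {"a"}))            # states possible one step after a
print(step(T, step(T, {"a"})))   # states possible two steps after a
```

Iterating `step` is exactly the "possibilistic" analogue of pushing a probability distribution forward through a stochastic matrix, with sets in place of distributions.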

view this post on Zulip John Baez (Jul 12 2025 at 12:44):

If we replace the power set monad by the Giry monad, we get a discrete probabilistic dynamical system, where each state evolves to a probability distribution on states.

view this post on Zulip John Baez (Jul 12 2025 at 12:45):

I guess people usually say 'stochastic' instead of 'probabilistic' here, and similarly they say 'nondeterministic' instead of 'possibilistic'.

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 12:47):

Thanks!! I see. I was not aware of these things. I will read on them. But I got the overall idea of them from the definitions that you wrote here.

view this post on Zulip John Baez (Jul 12 2025 at 12:51):

Anyway, the idea is simple: if we have a (directed) graph, we say it's possible to go from node v to node w if there's an edge from v to w.

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 12:52):

You said something called "possibility theory". Is it a separate branch of math, like probability theory?

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 12:54):

Or did you mean just "directed graphs" ?

view this post on Zulip John Baez (Jul 12 2025 at 12:54):

When I said directed graphs, I meant directed graphs. When I said possibility theory, I was referring to possibility theory:

... there's a nice analogy between probability theory and possibility theory, which this term is supposed to evoke. In probability theory we assign a number between 0 and 1 to each event, which tells us how probable that event is. In possibility theory we just assign either 0 or 1 to each event, which tells us whether that event is possible.

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 12:55):

Sorry, I somehow missed that portion!! Thanks, I got the point now.

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 12:58):

John Baez said:

Anyway, the idea is simple: if we have (directed) graph, we say it's possible to go from node v to node w if there's an edge from v to w.

Thanks. Now, I realised what you meant here.

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 13:03):

John Baez said:

Anyway, the idea is simple: if we have (directed) graph, we say it's possible to go from node v to node w if there's an edge from v to w.

Why are we not considering "directed path" instead of a "directed edge" for this reachability condition?

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 13:06):

David Corfield said:

You might be interested in Makse et al.'s ways of representing hypergraphs in the chapter where they extend fibrations to hypergraphs:

image.png

Yes. Thanks very much!!

view this post on Zulip John Baez (Jul 12 2025 at 13:08):

Adittya Chaudhuri said:

John Baez said:

Anyway, the idea is simple: if we have (directed) graph, we say it's possible to go from node v to node w if there's an edge from v to w.

Why are we not considering "directed path" instead of a "directed edge" for this reachability condition?

I was talking about a discrete possibilistic dynamical system: we can go from vv at time tt to ww at time t+1t+1 if there's an edge from vv to ww. This seems to be what that book is talking about. Their 5 methods of simplifying graphs are supposed to be motivated by simplifying their dynamical systems.

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 13:09):

I see. Thanks.

view this post on Zulip David Corfield (Jul 12 2025 at 16:41):

John Baez said:

Okay, but who does that?

The big book has on p. 200:

image.png

view this post on Zulip John Baez (Jul 12 2025 at 16:47):

Okay, thanks! I'm glad they ultimately come clean and use category theory to define graph fibrations. Their presumably 'easier' approach earlier in the book is quite a slog for those of us blessed with knowledge of Grothendieck fibrations.

view this post on Zulip John Baez (Jul 12 2025 at 16:55):

It's interesting that they only consider graph morphisms ϕ:GH\phi : G \to H, not the more general Kleisli morphisms that Adittya and I studied (which are simply arbitrary functors F:GHF : G^\ast \to H^\ast between the free categories on these graphs). A Kleisli morphism can send an edge of GG to a formal composite of edges of HH. Kleisli morphisms are no good if one is considering each edge as taking one time step to traverse, as they sometimes seem to be thinking (e.g. when they're thinking of gene regulatory networks as synchronized processes). But they're fine if one is not worried about time steps. Then you could say a Kleisli morphism between graphs is a fibration iff F:GHF : G^\ast \to H^\ast is a fibration - i.e., it's a fibration iff it's a fibration.

view this post on Zulip David Corfield (Jul 12 2025 at 17:02):

Regarding the issue of symmetry, I recall making considerable use of that Alan Weinstein paper Groupoids: unifying internal and external symmetry for my paper on groupoids, later Ch. 9 of my 2003 book. He considered there "local symmetries" of a tiled floor, so that there are several different kinds of point: outside corners, outside edge points, inside edge points, internal intersections, etc.

I think things are similar here in that one looks for local symmetries between nodes, though perhaps one further step is taken in that we're looking to relate nodes only in terms of isomorphic input trees. The activity of a node only depends on messages sent to it, not on what it sends.

Hence they define a network groupoid as follows:

image.png

This enables them to speak in a third way about the situation already described twice via fibrations and via balanced colorings. More on the translation is in

view this post on Zulip John Baez (Jul 12 2025 at 17:07):

Oh, thanks yet again! Do you know if they prove a theorem relating graph fibrations to these 'network groupoids' of graphs? In what I've read so far, they seem to be hinting at a relation, but I haven't seen them come out and say anything.

view this post on Zulip David Corfield (Jul 12 2025 at 17:12):

No doubt all is revealed in that Stewart 2024 article I just mentioned. Unfortunately for me it's behind a paywall.

view this post on Zulip John Baez (Jul 12 2025 at 17:13):

Okay, I'll bring my powers to bear on it.

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 20:18):

Hi!! Below I am trying to clean up the relation (according to my understanding) between what they consider as fibrations and what I think of as fibrations, and why they are relevant in their context. (By "them", I mean the authors of that "big book".)

Proposition:
The following are equivalent:

I feel (2) and (3) will be equivalent for fairly obvious reasons. However, from the context of applications in biology and the context of that big book, I feel (1) is the most important. Below, I am trying to sketch the proof of the equivalence between (1) and (2) and demonstrate what (1) buys us.

Equivalence between (1) and (2):

(1) \Rightarrow (2)

Let ee be an edge in HH and pϕ1(t(e))p \in \phi^{-1}(t(e)). Then by (1), ϕ(G(,p))H(,t(e))\phi \big(G(-,p) \big) \cong H(-, t(e)). Hence, there exists a unique edge eˉ\bar{e} in GG such that ϕ(eˉ)=e\phi(\bar{e})=e and t(eˉ)=pt(\bar{e})=p.

(2) \Rightarrow (1)

Let vv be a vertex in GG. Then, the morphism of graphs ϕ ⁣:GH\phi \colon G \to H induces by restriction a function ϕG(,v) ⁣:G(,v)H(,ϕ(v))\phi|_{G(-,v)} \colon G(-,v) \to H(-,\phi(v)). Then, by (2), the map ϕG(,v)\phi|_{G(-,v)} is a bijection.

What does (1) give us?

The local isomorphism formulation (1) says that if pp and qq are in the same fibre of ϕ ⁣:GH\phi \colon G \to H, then G(,p)G(,q)G(-,p) \cong G(-,q), i.e. their input sets are isomorphic.

However, I am not able to see the following:

"if G(,p)G(,q)G(-,p) \cong G(-,q), then pp and qq are in the same fibre".
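As a concrete sanity check of condition (2), the unique-lifting property can be tested mechanically. Here is a small sketch in Python (my own illustration; the encoding of graphs as edge lists and of ϕ as a pair of dicts on vertices and edges is an assumption, not anything from the book):

```python
def is_graph_fibration(G_edges, H_edges, phi_v, phi_e):
    """Check: for every edge e of H and every vertex p over t(e),
    there is exactly one edge of G over e with target p."""
    for ei, (hs, ht) in enumerate(H_edges):
        for p, hv in phi_v.items():
            if hv != ht:
                continue
            lifts = [gi for gi, (gs, gt) in enumerate(G_edges)
                     if phi_e[gi] == ei and gt == p]
            if len(lifts) != 1:
                return False
    return True

# Example: G is a 2-fold cover of a single loop; the covering map is a fibration.
G_edges = [("x", "y"), ("y", "x")]   # x -> y -> x
H_edges = [("z", "z")]               # one loop at z
phi_v = {"x": "z", "y": "z"}
phi_e = {0: 0, 1: 0}
print(is_graph_fibration(G_edges, H_edges, phi_v, phi_e))  # True
```

Dropping the edge y → x from G makes the check fail, since the loop at z then has no lift with target x.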

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 21:00):

David Corfield said:

Hence they define a network groupoid as follows:

image.png

I am trying to write down my understanding of a network groupoid:

Let GG be a graph. Then, the network groupoid NGN_{G} is a groupoid

Note that the vertices in a connected component of NGN_{G} have isomorphic input sets and thus can be identified "according to the context of that paper/big book".
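For a plain directed graph (no edge labels), the object-level part of this can be sketched computationally: two input sets are isomorphic exactly when they have the same cardinality, so the connected components of NGN_{G} are the in-degree classes. A sketch of this (my own illustration; with edge types one would compare multisets of labels instead):

```python
from collections import defaultdict

def network_groupoid_components(edges, vertices):
    """Group vertices by in-degree: for a plain directed graph these are
    the connected components of the network groupoid N_G."""
    indeg = {v: 0 for v in vertices}
    for _, w in edges:
        indeg[w] += 1
    components = defaultdict(set)
    for v in vertices:
        components[indeg[v]].add(v)
    return list(components.values())

edges = [("a", "c"), ("b", "c"), ("a", "d"), ("b", "d"), ("c", "e")]
print(network_groupoid_components(edges, ["a", "b", "c", "d", "e"]))
# a, b have in-degree 0; c, d have in-degree 2; e has in-degree 1
```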

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 21:07):

John Baez said:

Oh, thanks yet again! Do you know if they prove a theorem relating graph fibrations to these 'network groupoids' of graphs? In what I've read so far, they seem to be hinting at a relation, but I haven't seen them come out and say anything.

I think the following may be true:

Claim:
For any fibration ϕ ⁣:GH\phi \colon G \to H, there is a morphism of groupoids ϕN ⁣:NGNH\phi_{N} \colon N_{G} \to N_{H}.

I think in the construction of morphism level map, we need to use (1) of #theory: applied category theory > Networks in Biology @ 💬

view this post on Zulip John Baez (Jul 12 2025 at 21:14):

Are you also claiming that your claim is false for graph homomorphisms ϕ:GH\phi: G \to H that are not fibrations?

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 21:23):

Say,

I am saying the following:

At the moment, I am not able to see how to construct ϕN(f) ⁣:H(,ϕ(v))H(,ϕ(w))\phi_{N}(f) \colon H(-, \phi(v)) \to H(-, \phi(w)) without using the bijection G(,v)H(,ϕ(v))G(-,v) \cong H(-, \phi(v)). Also, I think to prove that ϕN(f)\phi_{N}(f) is a bijection, we may need to use the bijection G(,w)H(,ϕ(w))G(-,w) \cong H(-, \phi(w)).

view this post on Zulip John Baez (Jul 12 2025 at 21:40):

I asked a yes or no question, and it would be nice to get a yes or no answer, or else "I don't know". I have trouble understanding what you're trying to say with all these symbols. The great thing about a yes or no question is that it can often be answered with just one word, "yes" or "no", transmitting a lot of information in a single word.

view this post on Zulip Adittya Chaudhuri (Jul 12 2025 at 21:46):

Sorry!! I got your point. Yes, I think we need "fibration".

view this post on Zulip John Baez (Jul 12 2025 at 21:56):

Okay, good. The result would not be so interesting otherwise!

view this post on Zulip Adittya Chaudhuri (Jul 13 2025 at 05:47):

Thank you. Yes, I got your point.

view this post on Zulip David Corfield (Jul 13 2025 at 11:36):

The big book's preference is to convey this "symmetry" they're interested in via graph fibrations, which we've discussed, and via balanced colorings. A network is colored by coloring its nodes. The coloring is balanced if any two nodes of the same color receive inputs from neighbors with the same colors.

And this is generalised to typed networks, e.g., repressors and activators,

Nodes of the same color have inputs that match each other, as regards both the type of edge and the color of the source of that edge. That is, nodes of the same colors ‘receive’ the same colors from in-neighbors. (p. 17)
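A minimal sketch of the (untyped) balanced condition in Python, assuming nodes and colors are encoded as strings (my own illustration, not the book's code):

```python
from collections import Counter, defaultdict

def is_balanced(edges, color):
    """A coloring is balanced if any two nodes of the same color receive
    the same multiset of colors from their in-neighbours."""
    in_colors = defaultdict(Counter)
    for v, w in edges:
        in_colors[w][color[v]] += 1
    profile_of = {}
    for v, c in color.items():
        if c in profile_of and profile_of[c] != in_colors[v]:
            return False
        profile_of.setdefault(c, in_colors[v])
    return True

# Two duplicated target genes c1, c2, each driven by a same-colored regulator:
edges = [("r1", "c1"), ("r2", "c2")]
color = {"r1": "red", "r2": "red", "c1": "blue", "c2": "blue"}
print(is_balanced(edges, color))  # True: both blue nodes get one red input
```

Removing the edge into c2 breaks the balance: the two blue nodes then receive different input multisets.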

Balanced colorings and fibrations are equivalent -- they're recording isomorphisms between the whole input tree, not just the one-step input set. So more work needs to be done in the case of the network groupoid, since this is defined solely in terms of the input sets. In addition, one looks for coloring subgroupoids

Many results about balanced colorings can be interpreted using coloring subgroupoids,
but the theory is technical. See (Stewart, 2024). (p. 122)

They seem to be suggesting that there's little to gain from the groupoid approach as it requires conditions on subgroupoids that are imitating the definition of a balanced coloring:

image.png

view this post on Zulip David Corfield (Jul 13 2025 at 15:30):

It's intriguing, the explanation for the relevance of fibrations to biology: a common evolutionary mechanism is gene duplication. This corresponds (in many cases) to a preservation of the dynamics, as maintained by the same fibrational symmetry, but then allows later symmetry-breaking as different regulators input the two copies. They show how this process leads to "computational" structures, such as memory and clocks.

view this post on Zulip Adittya Chaudhuri (Jul 13 2025 at 18:07):

David Corfield said:

Balanced colorings and fibrations are equivalent -- they're recording isomorphisms between the whole input tree, not just the one-step input set.

I agree. I find the way the authors are "recording isomorphisms between the whole input tree, not just the one-step input set" really interesting!! Thanks.

For any fibration, if we color vertices of the same fibre by the same color, then by the definition of fibration of graphs (combining (1) and (2) in #theory: applied category theory > Networks in Biology @ 💬 ), "this colouring" is a balanced coloring.

view this post on Zulip Adittya Chaudhuri (Jul 14 2025 at 05:09):

David Corfield said:

So more work needs to be done in the case of the network groupoid, since this is defined solely in terms of the input sets. In addition, one looks for coloring subgroupoids

Hmm.. I agree (at least from the definition of a network groupoid); however, I have not read through the technical portion related to coloring subgroupoids.

view this post on Zulip Adittya Chaudhuri (Jul 14 2025 at 05:27):

David Corfield said:

They seem to be suggesting that there's little to gain from the groupoid approach as it requires conditions on subgroupoids that are imitating the definition of a balanced coloring:

image.png

If my claim is true here #theory: applied category theory > Networks in Biology @ 💬 , then I may expect that the association of the network groupoid to a graph extends to a functor from the category Gphfib\mathsf{Gph}_{fib} to the category of groupoids (where Gphfib\mathsf{Gph}_{fib} is the category whose objects are graphs and whose morphisms are graph fibrations).

However, at the moment I am not sure about the above result's interpretation or implications in terms of "balanced coloring". I will think on this.

view this post on Zulip Adittya Chaudhuri (Jul 14 2025 at 07:38):

David Corfield said:

Intriguing the explanation for the relevance of fibrations for biology in that a common evolutionary mechanism is gene duplication. This corresponds (in many cases) to a preservation of the dynamics as maintained by the same fibrational symmetry, but then allowing later symmetry-breaking as different regulators input the two copies. They show how this process leads to "computational" structures, such as memory and clocks.

In that "big book", have they defined what they mean by a "broken symmetry" in mathematical terms? I mean, if we consider "graph fibrations" as a kind of "symmetry", then naturally the definition of "symmetry breaking" that comes to my mind is a homomorphism of graphs which is not a fibration. I think you already hinted at this here #theory: applied category theory > Networks in Biology @ 💬 , and also here #theory: applied category theory > Networks in Biology @ 💬, or am I misunderstanding something?

view this post on Zulip Adittya Chaudhuri (Jul 14 2025 at 07:46):

I just found that there is a chapter called "quasifibration" (Chapter 13.5.2) in that big book. I will read this portion. Maybe quasifibrations are the right language to describe broken symmetries (although I am not sure).

view this post on Zulip Adittya Chaudhuri (Jul 14 2025 at 08:13):

I just found a paper Quasifibrations of Graphs to Find Symmetries in Biological Networks by Boldi, Leifer and Makse, where I think the notion of quasifibration is introduced.

view this post on Zulip David Corfield (Jul 14 2025 at 08:18):

The quasifibration section is dealing with situations in the noisy biological world where a network is close to being such that there's a fibration from it to a reduced target. I was wondering earlier whether we could leverage our CT knowledge to say something about this situation, with the idea of factorizing a graph morphism into a final functor (or graph equivalent) and discrete fibration.

The broken symmetry idea comes to prominence in Chap. 18: Synthetic Biology Through Broken Symmetries: the Cell as a Computer

We have seen how the functions of biological networks can be pictured as an orchestra of synchronized instruments captured by the fibration symmetry. In this chapter we elaborate on an even stricter and more taxing criterion for functionality of the network. We ask whether minimal biological circuits can perform core logic computational programs. We show this through the mechanism of ‘fibration symmetry breaking’. These circuits can then be used to build computational molecular machineries. This helps in system biology to design biological machines from the bottom up, following a systematic approach from first principles of symmetry and symmetry breaking.

Gene duplication by itself would be pointless

18.3 Gene duplication in evolution mimics lifting in a fibration

Two networks related by a surjective fibration behave identically. Or rather, conditions are such that nodes in the same fibre synchronise.

But a further step brings in symmetry breaking if the copies gain new different inputs. That's the rest of the chapter.

view this post on Zulip David Corfield (Jul 14 2025 at 08:19):

I guess there could be a connection between the two topics: when a network has had a symmetry broken and there's no longer a fibration to the base, one could perhaps think of it as a quasifibration.

view this post on Zulip David Corfield (Jul 14 2025 at 08:26):

Adittya Chaudhuri said:

I just found a paper Quasifibrations of Graphs to Find Symmetries in Biological Networks by Boldi, Leifer and Makse, where I think the notion of quasifibration is introduced.

Right, so that's one approach they advocate. Another is

13.5.3 Pseudo-balanced colorings and repair algorithm without knowledge of balanced coloring

This is in

As I say, could be a place to wield some CT-power.

These two approaches aren't mentioned in the context of symmetry-breaking (chap. 18).

view this post on Zulip Adittya Chaudhuri (Jul 14 2025 at 14:58):

David Corfield said:

The quasifibration section is dealing with situations in the noisy biological world where a network is close to being such that there's a fibration from it to a reduced target. I was wondering earlier whether we could leverage our CT knowledge to say something about this situation, with the idea of factorizing a graph morphism into a final functor (or graph equivalent) and discrete fibration.

Thanks. Yes, I agree. I read the definition of quasifibration (Definition V.1 on page 7 of Quasifibrations of Graphs to Find Symmetries in Biological Networks). From this definition, I felt they measured the noise in the fibration by the degree to which a graph homomorphism fails to be a graph fibration, taking account of

As you suggested before, I also feel that there could be some interesting (at least mathematically) relation between Definition V.1 in Quasifibrations of Graphs to Find Symmetries in Biological Networks and Proposition 2.9 in How Nice is this Functor? Two Squares and Some Homology go a Long Way.

view this post on Zulip Adittya Chaudhuri (Jul 14 2025 at 15:14):

David Corfield said:

Two networks related by a surjective fibration behave identically. Or rather conditions are such so that nodes in the same fibre synchronise.

But a further step brings in symmetry breaking if the copies gain new different inputs. That's the rest of the chapter.

Thanks. Yes, I got your point. I am reading about it from Circuits with broken fibration symmetries perform core logic computations in biological networks

view this post on Zulip Adittya Chaudhuri (Jul 14 2025 at 15:16):

David Corfield said:

I guess there could be a connection between the two topics when a network has had a symmetry broken and there's no longer a fibration to the base, one could think of it as a quasifibration perhaps.

Yes, but I realised that "symmetry breaking" and "quasifibrations" are two very different concepts. Previously, I had misunderstood this.

view this post on Zulip Adittya Chaudhuri (Jul 14 2025 at 15:22):

David Corfield said:

Another is

13.5.3 Pseudo-balanced colorings and repair algorithm without knowledge of balanced coloring

This is in

As I say, could be a place to wield some CT-power.

These two approaches aren't mentioned in the context of symmetry-breaking (chap. 18).

Thanks. I will read about this other approach. Since fibrations correspond to balanced colorings, I would not be surprised if there were an analogue of quasifibrations in terms of colorings: "pseudo-balanced coloring".

view this post on Zulip Adittya Chaudhuri (Jul 14 2025 at 15:30):

Adittya Chaudhuri said:

However, I am not able to see the following:

"if G(,p)G(,q)G(-,p) \cong G(-,q), then pp and qq are in the same fibre".

In this context, I feel Theorem II.1(2) page 3 in Quasifibrations of Graphs to Find Symmetries in Biological Networks is relevant.

view this post on Zulip Adittya Chaudhuri (Jul 14 2025 at 17:03):

@John Baez mentioned Lerman's paper before here #theory: applied category theory > Networks in Biology @ 💬. While reading the "steps" in that complexity reduction paper, I was led to Lerman's paper Modular dynamical systems on networks. The paper looks very interesting!!

view this post on Zulip David Corfield (Jul 15 2025 at 07:48):

Adittya Chaudhuri said:

However, I am not able to see the following:

"if G(,p)G(,q)G(-,p) \cong G(-,q), then pp and qq are in the same fibre".

I don't see where you're getting this from. Isomorphism of input sets isn't enough to say that there's any fibration with pp and qq in the same fibre. The identity map 1G:GG1_G: G \to G is a fibration, and GG could be highly symmetrical, all vertices having isomorphic input trees, but each fibre is a singleton.

There is the notion of a minimal fibration of GG, where two vertices are in the same fibre if and only if their input trees (not just sets) are isomorphic. The base graph here is unique (up to isomorphism).

view this post on Zulip Adittya Chaudhuri (Jul 15 2025 at 10:43):

David Corfield said:

I don't see where you're getting this from. Isomorphism of input sets isn't enough to say that there's any fibration with pp and qq in the same fibre. The identity map 1G:GG1_G: G \to G is a fibration, and GG could be highly symmetrical, all vertices having isomorphic input trees, but each fibre is a singleton.

Yes, I agree. I did not mean that. In that "particular post", I was hoping for a notion of fibration where such a thing happens. I mentioned it here #theory: applied category theory > Networks in Biology @ 💬 .

I wanted to say the following:

Suppose we have a graph GG with an equivalence relation on the set of vertices defined by vwv \sim w iff there exists a bijection f ⁣:G(,v)G(,w)f \colon G(-,v) \to G(-,w) such that s(e)s(f(e))s(e) \cong s(f(e)) for all eG(,v)e \in G(-,v). Then, by Theorem II.1(2), page 3 in Quasifibrations of Graphs to Find Symmetries in Biological Networks, there exists a graph HH and a surjective graph fibration ϕ ⁣:GH\phi \colon G \to H whose fibres are the equivalence classes of \sim.

view this post on Zulip Adittya Chaudhuri (Jul 15 2025 at 11:05):

Then, by Theorem 26 of Fibration of Graphs, the (above) constructed ϕ ⁣:GH\phi \colon G \to H can be chosen to be a minimal fibration.

view this post on Zulip Adittya Chaudhuri (Jul 15 2025 at 11:14):

David Corfield said:

There is the notion of a minimal fibration of GG, where two vertices are in the same fibre if and only if their input trees (not just sets) are isomorphic. The base graph here is unique (up to isomorphism).

Thanks!! I think one direction is true for any fibration, i.e. in any fibration, if two vertices are in the same fibre, then their input trees are isomorphic (Proposition 23 of Fibration of Graphs).

view this post on Zulip David Corfield (Jul 15 2025 at 11:17):

Adittya Chaudhuri said:

If there is a graph GG where we have a equivalence relation on the set of vertices defined as vwv \sim w if there exists a bijection f ⁣:G(,v)G(,w)f \colon G(-,v) \to G(-,w), such that s(e)=s(f(e))s(e)=s(f(e)) for all eG(,v)e \in G(-,v).

Surely, it can't be an identity, s(e)=s(f(e))s(e)=s(f(e)). No, it's not

image.png

It's that the sources are equivalent.

view this post on Zulip Adittya Chaudhuri (Jul 15 2025 at 11:18):

Sorry, yes. I just edited.

view this post on Zulip Adittya Chaudhuri (Jul 15 2025 at 11:29):

David Corfield said:

There is the notion of a minimal fibration of GG, where two vertices are in the same fibre if and only if their input trees (not just sets) are isomorphic. The base graph here is unique (up to isomorphism).

I think the other direction follows because otherwise the base of the minimal fibration would not be fibration prime: there would be a pair of isomorphic universal total graphs, which would violate Corollary 27 of Fibration of Graphs.

view this post on Zulip Adittya Chaudhuri (Jul 15 2025 at 19:00):

John Baez said:

What I really want to think about is not this abstract nonsense but the five-step method of simplifying graphs explained in this paper:

Luis A. Álvarez-García, Wolfram Liebermeister, Ian Leifer and Hernán A. Makse, Complexity reduction by symmetry: uncovering the minimal regulatory network for logical computation in bacteria.

In Step 1, they reduce a directed graph GG (although I think a similar method is applicable to signed graphs) by defining a minimal fibration ϕ ⁣:GB\phi \colon G \to B. So the reduced version of GG in Step 1 is BB.

Now, I have the following questions:

Regarding (2)

If I am not mistaken about anything fundamental, I observed that the information about feedback loops (and hence the homology of the graph) may change in this reduction process. (I tried to construct a simple example of this change in the attached file.)
feedbackinflost.PNG

Question:

Under what conditions does a minimal fibration preserve the homology monoids of directed graphs? I have a feeling that when the total graph is connected, a minimal fibration may preserve homology (I am not sure).

Question:

Under what conditions does a minimal fibration preserve feed-forward information?

However, maybe their focus of attention is very different from "feedback or feed-forward information" during the reduction process.

Regarding (1)

In the attached screenshot
complexityreductiondynamics.png

they said that DeVille and Lerman's result (attached
surjectivefibrationslemma.png
)

imply that ϕ ⁣:GB\phi \colon G \to B induces synchronization of nodes ii and jj in GG if ϕ(i)=ϕ(j)\phi(i)= \phi(j). Then they said, "This guarantees that the dynamics for both the original network and the base network are the same, all the collapsed nodes dynamics are identical to the representative node they were collapsed into."

I am yet to understand how the above implications follow from DeVille and Lerman's result. I will work on it.

view this post on Zulip David Corfield (Jul 15 2025 at 19:58):

For sure, the number of loops will change. But then with the claim of synchronicity each identified loop would be undergoing the same dynamics.

There may be a place for your graph homology at the later stages

Steps IV and V: Small-scale structure inside the SCCs: logic circuits and cycles The last two steps consist of understanding the small-scale structures within the SCCs: the logic circuits (Step IV) and the bigger cycles connecting them (Step V).

view this post on Zulip Adittya Chaudhuri (Jul 15 2025 at 20:12):

Thanks. I got the point you are making.

view this post on Zulip Peva Blanchard (Jul 15 2025 at 20:51):

I didn't follow the whole discussion, but there's been quite some work in distributed computing related to graph coverings (and fibrations). Roughly speaking, they are about characterizing whether some problem (e.g. electing a leader) can be solved on a process network, assuming that processors have limited knowledge. For instance

Also, Vigna collected some notes on his website about fibration-based techniques in computer science.

view this post on Zulip Adittya Chaudhuri (Jul 16 2025 at 08:36):

Interesting!! Thanks for sharing the references on the applications of fibration of graphs.

view this post on Zulip Adittya Chaudhuri (Jul 16 2025 at 21:33):

Hi!! I read Steps 1-3 of Luis A. Álvarez-García, Wolfram Liebermeister, Ian Leifer and Hernán A. Makse, Complexity reduction by symmetry: uncovering the minimal regulatory network for logical computation in bacteria.

Below, I am trying to write down my current understanding of Step 1.

Step 1: Collapsing a graph into its minimal base by surjective fibrations:

A graph HH is said to be fibration prime iff every surjective fibration ϕ ⁣:HH\phi \colon H \to H' is an isomorphism. A fibration of graphs θ ⁣:XY\theta \colon X \to Y is called minimal iff θ\theta is surjective and YY is fibration prime.

We start with a graph G=(E,V,s,t)G=(E, V, s, t). Now, either by Theorem 26 or by section 4.1 of Fibration of graphs, we can construct a fibration prime graph BB along with a minimal fibration ϕ ⁣:GB.\phi \colon G \to B. The reduced version of GG is BB.
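The collapse in Step 1 can be sketched computationally. Below is a rough Python sketch of my own (not from the papers; the function name `minimal_fibre_classes` is made up) that computes the fibres of a minimal fibration by colour refinement: repeatedly recolour each vertex by the multiset of colours of its in-neighbours, and the stable classes group together exactly the vertices with isomorphic input trees.

```python
from collections import defaultdict

def minimal_fibre_classes(vertices, edges):
    """Colour refinement: repeatedly recolour each vertex by the sorted
    multiset of colours of its in-neighbours.  For a finite graph this
    stabilises, and the stable classes group together exactly the vertices
    with isomorphic input trees, i.e. the fibres of the minimal fibration."""
    vertices = list(vertices)
    color = {v: 0 for v in vertices}
    for _ in range(len(vertices)):          # stabilises within |V| rounds
        sig = {v: tuple(sorted(color[u] for (u, w) in edges if w == v))
               for v in vertices}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        color = {v: palette[sig[v]] for v in vertices}
    classes = defaultdict(list)
    for v in vertices:
        classes[color[v]].append(v)
    return sorted(classes.values())

# A 6-cycle: all vertices have isomorphic input trees, so there is a
# single fibre and the minimal base is one vertex with a self-loop.
cycle6 = [(i, (i + 1) % 6) for i in range(6)]
print(minimal_fibre_classes(range(6), cycle6))   # -> [[0, 1, 2, 3, 4, 5]]
```

By contrast, on the feedforward motif A→B, A→C, B→C every vertex gets a distinct colour, so that graph is already its own minimal base under this sketch.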

Justification: Why it is a reasonable notion of reduction:

Thus, I am convinced that by Step 1 we can legitimately reduce a graph GG to a graph BB via a surjective fibration which preserves the dynamical systems, as claimed by the authors:
complexityreductiondynamics.png

I will write about Step 2 and Step 3 tomorrow.

view this post on Zulip John Baez (Jul 16 2025 at 21:45):

This is great stuff! Thanks for working through it and explaining it.

view this post on Zulip Adittya Chaudhuri (Jul 16 2025 at 21:46):

Thanks very much!! I am trying to finish the rest of the steps by tomorrow. Let's see!! I mean by today... It's 3:17 AM here :)

view this post on Zulip Adittya Chaudhuri (Jul 16 2025 at 21:50):

I realised Lerman and DeVille wrote quite a number of papers in this direction. I will read them. I am really enjoying their treatment of interconnections of open networks (via labeling of the vertices of a graph by objects of an appropriate category like Man\mathsf{Man}) in terms of a suitable notion of vector fields on the total phase space, which arises from the graph structure by which those manifolds/objects in that category are interconnected.

view this post on Zulip Adittya Chaudhuri (Jul 16 2025 at 21:57):

Networks of open systems by Lerman

view this post on Zulip Adittya Chaudhuri (Jul 16 2025 at 21:58):

Networks of hybrid open systems by Lerman and Schmidt

view this post on Zulip Adittya Chaudhuri (Jul 16 2025 at 22:02):

I found this interesting paper Symmetry groupoids and patterns of synchrony in coupled cell networks by Stewart et al., which gives an equivalent framework to surjective fibrations.

view this post on Zulip Adittya Chaudhuri (Jul 16 2025 at 22:04):

DeVille and Lerman said that they discovered that the framework in Stewart's paper (via balanced equivalence relations) can be realised in terms of the surjective fibrations of graphs introduced by Boldi and Vigna.

view this post on Zulip David Corfield (Jul 17 2025 at 06:55):

Adittya Chaudhuri said:

(via balanced equivalence relation)

In the "big book" they end up talking about "balanced colorings" and "equitable partitions" of nodes. Lots of terms for the same idea.

view this post on Zulip David Corfield (Jul 17 2025 at 07:02):

Interesting. There are also applications to neural networks:

Intriguing that such a simple idea from the point of view of CT has potentially profound uses. After all, a graph fibration is just a presheaf on the free category of the base graph.

view this post on Zulip David Corfield (Jul 17 2025 at 07:07):

Hmm, so what is the construction of a minimal fibration of GG in the presheaf perspective? It's as though we're given a category (here F(G)F(G)) and want to know for which HH can F(G)F(G) be seen as an element of Presheaf(F(H))Presheaf(F(H)). Like going in the opposite direction to finding the universal cover of a space.

view this post on Zulip David Corfield (Jul 17 2025 at 07:28):

The (potentially infinite) input trees are part of the universal cover of the graph, so we're in the territory of discrete directed topology. I guess if we come from continuous topology, we're not used to there being a minimal space. When we see the line as a helix sitting above a circle as its (unique) universal cover, we know that there are circles below that our circle can cover. But in the discrete case, things bottom out.

view this post on Zulip Adittya Chaudhuri (Jul 17 2025 at 14:10):

David Corfield said:

Interesting. There are also applications to neural networks:

Intriguing that such a simple idea from the point of view of CT has potentially profound uses.

Interesting!! Thanks!! Yes, indeed it is.

view this post on Zulip Adittya Chaudhuri (Jul 17 2025 at 14:18):

David Corfield said:

After all, a graph fibration is just a presheaf on the free category of the base graph.

Yes, in a way I think this "presheaf perspective" on a discrete fibration also tells us that every graph fibration ϕ ⁣:GH\phi \colon G \to H induces a canonical action of the category F(H)F(H) on the category whose objects are the fibres of ϕ\phi and whose morphisms are functions between the fibres (as I discussed this perspective before here #theory: applied category theory > Networks in Biology @ 💬 ).

view this post on Zulip Adittya Chaudhuri (Jul 17 2025 at 14:57):

David Corfield said:

Hmm, so what is the construction of a minimal fibration of GG in the presheaf perspective?

Thanks. Yes, it is a natural question. I am thinking on this.

view this post on Zulip David Corfield (Jul 17 2025 at 15:40):

David Corfield said:

for which HH can F(G)F(G) be seen as an element of Presheaf(F(H))Presheaf(F(H))

That was a loose way of speaking. I mean: for which HH can F(G)F(G) be seen as arising from the Grothendieck construction for some K:Presheaf(F(H))K: Presheaf(F(H)), i.e., F(G)=KF(G) = \int K.

view this post on Zulip Adittya Chaudhuri (Jul 17 2025 at 17:56):

David Corfield said:

David Corfield said:

for which HH can F(G)F(G) be seen as an element of Presheaf(F(H))Presheaf(F(H))

That was a loose way of speaking. I mean for which HH can F(G)F(G) be seen as arising from the Grothendieck constuction for some K:Presheaf(F(H))K: Presheaf(F(H)), i.e., F(G)=KF(G) = \int K.

Thanks, that's fine. Yes, if I am not misunderstanding anything, then a prescription for such an HH is in the proof of Theorem 2 (2), Page 25 of Fibration of graphs.

More precisely, let GG be a graph with an equivalence relation on the set of vertices which satisfies the local in-isomorphism property:
image-3.png

I think we can always define such a relation canonically by defining xyx \sim y iff there is a bijection ψx,y ⁣:G(,x)G(,y)\psi_{x,y} \colon G(-,x) \to G(-,y) such that s(a)s(ψx,y(a))s(a) \cong s(\psi_{x,y}(a)) for all aG(,x)a \in G(-,x). Then, by Theorem 2 (2), Page 25 of Fibration of graphs, there exists a graph BB (whose construction is also given in the proof) and a surjective fibration of graphs ϕ ⁣:GB\phi \colon G \to B whose fibres are the said equivalence classes. Thus, Free(ϕ) ⁣:Free(G)Free(B)\mathsf{Free}(\phi) \colon \mathsf{Free}(G) \to \mathsf{Free}(B) is a discrete fibration. Let pϕ ⁣:Free(B)opSetp_{\phi} \colon \mathsf{Free}(B)^{{\rm{op}}} \to \mathsf{Set} be the presheaf associated to Free(ϕ)\mathsf{Free}(\phi). Then, by Grothendieck's correspondence between discrete fibrations and presheaves (Theorem 2.1.2 of Categorical notions of fibration), we have pϕFree(G)\int p_{\phi} \cong \mathsf{Free}(G).

In fact, more is true:

By Theorem 26 of Fibration of graphs, we can choose BB to be minimal, and also algorithmic construction of such BB is given in the section 4.1 of "Fibration of Graphs" paper by Boldi-Vigna.

view this post on Zulip Adittya Chaudhuri (Jul 17 2025 at 20:48):

Hi!! I am writing down some of the things that I was thinking today:

As I explained here #theory: applied category theory > Networks in Biology @ 💬 , I am convinced that if we have a graph GG, then a reasonable way to reduce the complexity of GG (preserving the dynamics of GG) is to collapse the graph GG to a graph BB by a minimal fibration ϕ ⁣:GB\phi \colon G \to B. Recall, by the definition of a minimal fibration, ϕ\phi is surjective and every surjective fibration ψ ⁣:BB\psi \colon B \to B' is an isomorphism. Although there are some ways (for example, Theorem 26 or section 4.1 of the "Fibration of Graphs" paper by Boldi-Vigna) which give a prescription of the required minimal fibration for the reduction of GG, I was wondering about a particular way of doing the same, in particular, by a suitable action of a group AA on the graph GG.

Let me explain!!

Consider a graph GG and a group AA. Then, according to Section 2.3 of the "Fibration of Graphs" paper by Boldi-Vigna, a left action of AA on GG is defined as a group homomorphism ρ ⁣:AAut(G)\rho \colon A \to {\rm{Aut}}(G). Now, if y=xay=x \cdot a for some aAa \in A, then the action of aa gives a bijection G(,x)G(,y)G(-,x) \cong G(-,y). Hence, the equivalence relation induced by the action ρ\rho of AA on the vertex set VV satisfies the local in-isomorphism property image-3.png

Hence, by Theorem 2 (2), Page 25 of the "Fibration of Graphs" paper by Boldi-Vigna, we have a surjective fibration ϕA,ρ ⁣:GBA\phi_{A, \rho} \colon G \to B_{A}, whose fibres are the orbits of ρ\rho. The vertices of the graph BAB_{A} are the orbits of ρ\rho, and the edges from an orbit ϕA,ρ1(x)\phi_{A, \rho}^{-1}(x) to an orbit ϕA,ρ1(y)\phi_{A, \rho}^{-1}(y) are the edges coming into a chosen element of ϕA,ρ1(y)\phi_{A, \rho}^{-1}(y) from the elements of ϕA,ρ1(x)\phi_{A, \rho}^{-1}(x).
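This orbit-quotient construction can be sketched directly in code (an illustrative sketch of my own, not from the Boldi-Vigna paper; the name `quotient_by_orbits` and the dictionary representation are my choices): given an orbit map on the vertices, build the base graph by taking, for each orbit, the in-edges of one chosen representative.

```python
def quotient_by_orbits(vertices, edges, orbit_of):
    """Base graph of the surjective fibration induced by a group action:
    the vertices are the orbits, and the edges into an orbit are the edges
    into one chosen representative of it, with sources replaced by their
    orbits (following the construction sketched above)."""
    reps = {}
    for v in vertices:
        reps.setdefault(orbit_of(v), v)     # first vertex seen in each orbit
    base_edges = [(orbit_of(u), o)
                  for o, r in reps.items()
                  for (u, w) in edges if w == r]
    return sorted(reps), sorted(base_edges)

# Z/2 acting on the 6-cycle by i |-> i + 3 has orbits {i, i + 3};
# the quotient is the 3-cycle, as expected.
cycle6 = [(i, (i + 1) % 6) for i in range(6)]
print(quotient_by_orbits(range(6), cycle6, lambda i: i % 3))
# -> ([0, 1, 2], [(0, 1), (1, 2), (2, 0)])
```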

Questions:

Answers to the above questions may give us some relationships between reductions of the complexity of a graph via actions of certain groups on the graph.

view this post on Zulip Adittya Chaudhuri (Jul 17 2025 at 21:15):

On a different note, I am not very happy with the definition of a left action of a group on the graph as given in the Section 2.3 of "Fibration of Graphs" paper by Boldi-Vigna.

Let me argue why I think so:

Recall, a morphism of graphs ϕ ⁣:GH\phi \colon G \to H is called a fibration if Free(ϕ) ⁣:Free(G)Free(H)\mathsf{Free}(\phi) \colon \mathsf{Free}(G) \to \mathsf{Free}(H) is a discrete fibration.

In the light of the above definition of a fibration of graphs, it is natural for me to see how an action ρ ⁣:AAut(G)\rho \colon A \to \rm{Aut}(G) of a group AA on the graph GG induces an action on the category Free(G)\mathsf{Free}(G). I think the action is not that interesting because it canonically induces a strict action of the discrete 2-group [AA][A \rightrightarrows A] on Free(G)\mathsf{Free}(G). Thus, I would prefer a higher analogue:

Definition:
Let GG be a graph and A=[A1A0]\mathbb{A}=[A_1 \rightrightarrows A_0] be a strict 2-group. Then a left action of AA on GG is defined as a functor ρ ⁣:A×Free(G)Free(G)\rho \colon \mathbb{A} \times \mathsf{Free}(G) \to \mathsf{Free}(G) such that

Note that the above is a standard definition of an action of a strict 2-group on a category, whose smooth analog are standard in the 2-group gauge theory literature.

I am not sure, but it might be interesting to investigate (at least mathematically) what kind of discrete fibrations over "suitable quotient categories" these higher actions may produce, and under what conditions they give a suitable notion of minimal discrete fibrations.

view this post on Zulip John Baez (Jul 18 2025 at 08:17):

Thanks for working on this, guys! I've been following along but distracted by thinking about hydrogen.

I have some questions about fibrations of graphs and whether we can really think of two graphs GG and HH related by a fibration ϕ:GH\phi: G \to H as describing "the same dynamics" (in some sense).

Define the cyclic graph CnC_n to be the graph with nn vertices - think of them as elements of the cyclic group Z/n\mathbb{Z}/n - and an edge from each vertex ii to the next vertex i+1i+1.

Question: The quotient map Z/mnZ/n\mathbb{Z}/mn \to \mathbb{Z}/n induces a graph homomorphism ϕ:CmnCn\phi : C_{mn} \to C_n. Is this a graph fibration?

Question: Is the graph CnC_n minimal iff nn is prime?

view this post on Zulip David Corfield (Jul 18 2025 at 08:30):

Adittya Chaudhuri said:

Thanks, thats fine. Yes, if I am not misunderstanding anything, then a prescription of such HH is in the proof of the Theorem 2 (2), Page 25 of Fibration of graphs.

Yes, we know that there's an algorithm to find a minimal fibration. My speculation was a more abstract one, beyond the case of graphs, of wondering how to reconstruct a general category as a [[category of elements]]. This seems rather similar to finding out about a space which spaces it covers.

view this post on Zulip David Corfield (Jul 18 2025 at 08:37):

Adittya Chaudhuri said:

I was wondering about a particular way of doing the same, in particular, by a suitable action of a group AA on the graph GG.

I have doubts about this approach. The "big book" shows many cases where a graph has lots of fibrational symmetry but little global symmetry. But the group actions you mention are in terms of global symmetries, i.e. elements of Aut(G)Aut(G)

Adittya Chaudhuri said:

a left action of AA on GG is defined as a group homomoprhism ρ ⁣:AAut(G)\rho \colon A \to {\rm{Aut}}(G).

Sure, group actions generate surjective fibrations, but rarely minimal ones. That's the mantra of the book, that we need to generalise from global symmetry to local (input) symmetry.

So the answer to your questions

Adittya Chaudhuri said:

are both 'No'.

view this post on Zulip John Baez (Jul 18 2025 at 08:39):

By the way, my claimed example of a fibration ϕ:CmnCn\phi: C_{mn} \to C_n should be one that does come from a group action, namely the obvious action of Z/m\mathbb{Z}/m on CmnC_{mn}.

view this post on Zulip David Corfield (Jul 18 2025 at 08:45):

John Baez said:

Question: Is the graph CnC_n minimal iff nn is prime?

I think in this case the minimal fibration will be to a single vertex and arrow. All the vertices of your graph have isomorphic input trees.

view this post on Zulip John Baez (Jul 18 2025 at 08:46):

John Baez said:

Question: Is the graph CnC_n minimal iff nn is prime?

I think not: I think CnC1C_n \to C_1 is always a fibration!

view this post on Zulip John Baez (Jul 18 2025 at 08:46):

Okay, our messages crossed in the aether.

view this post on Zulip John Baez (Jul 18 2025 at 08:47):

This problem thinks 11 is prime. :upside_down:

view this post on Zulip David Corfield (Jul 18 2025 at 08:48):

Your construction does appear in a nice paper I was reading yesterday

image.png

view this post on Zulip John Baez (Jul 18 2025 at 08:54):

Oh, great! They even had the wisdom to use the best notation.

Personally I don't feel going around and around C30C_{30} in a loop of size 30 "computes the same thing" or "is biologically isomorphic" to staying in place in C1C_1. This is the simplest of many examples along these lines. So I think we need to be careful about what we claim when we simplify a network to a smaller network using fibrations. For example, the fact that we can dramatically simplify the gene regulatory network of E. coli using fibrations doesn't imply that all the extra complexity of the full E. coli network is irrelevant to its behavior.

I still think fibrations are important.

There's probably a paper that spells this out carefully; I haven't had time to read much. I felt the paper on E. coli went a wee bit overboard.

view this post on Zulip David Corfield (Jul 18 2025 at 10:11):

John Baez said:

Personally I don't feel going around and around C30C_{30} in a loop of size 30 "computes the same thing" or "is biologically isomorphic" to staying in place in C1C_1.

Right. So this marks the difference between a general approach to discrete dynamical systems, as in the article I just mentioned, and this fibration program. The latter is sold on the idea of synchronisation, which duly appears in the subtitle of the "big book", and goes back to Stewart and Lerman's work from several years ago.

Clearly, one could have the nodes of the C30C_{30} network as synchronised, especially if they're started off in the same state. But is there some particular reason for synchrony in biology? I guess we need to read

image.png

view this post on Zulip David Corfield (Jul 18 2025 at 10:34):

They seem to be looking at things from the other end.

"how can a predictable pattern of synchrony, which is essential for survival, be achieved in biology?"

We know there is life. This must involve robust (approximate) synchronization. No life without synchrony. In a network this is only possible when there's fibrational symmetry.

Nature uses the trick of lifting a network by gene duplication to a new network with the same dynamics. But then the functions of these copies can diverge as new connections form. But without synchronization persisting over the course of the duplication, there would be very little chance of a biological system maintaining viability.

They're claiming also that empirically we see this synchronization:

20.4 Gene coexpression synchronization analysis from massive dataset

view this post on Zulip John Baez (Jul 18 2025 at 13:17):

Interesting! I'm skeptical of "gene coexpression synchronization" in anything but a very loose sense of the word "synchronization", or maybe some biological concept of synchronization that's unlike what we normally think of from that word. I'll have to read 20.4.

view this post on Zulip Adittya Chaudhuri (Jul 18 2025 at 16:15):

John Baez said:

Define the cyclic graph CnC_n to be the graph with nn vertices - think of them as elements of the cyclic group Z/n\mathbb{Z}/n - and an edge from each vertex ii to the next vertex i+1i+1.

Question: The quotient map Z/mnZ/n\mathbb{Z}/mn \to \mathbb{Z}/n induces a graph homomorphism ϕ:CmnCn\phi : C_{mn} \to C_n. Is this a graph fibration?

Thanks. Yes, I think it is a surjective fibration. In particular, I think it is minimal if and only if n=1n=1, as @David Corfield mentioned here #theory: applied category theory > Networks in Biology @ 💬

view this post on Zulip Adittya Chaudhuri (Jul 18 2025 at 16:20):

David Corfield said:

Yes, we know that there's an algorithm to find a minimal fibration. My speculation was a more abstract one, beyond the case of graphs, of wondering how to reconstruct a general category as a [[category of elements]]. This seems rather similar to finding out about a space which spaces it covers.

Thanks!! I see. But are you not talking about my construction of pϕ ⁣:Free(B)opSetp_{\phi} \colon \mathsf{Free}(B)^{\rm{op}} \to \mathsf{Set} here #theory: applied category theory > Networks in Biology @ 💬 ?

view this post on Zulip Adittya Chaudhuri (Jul 18 2025 at 16:28):

David Corfield said:

Adittya Chaudhuri said:

I was wondering about a particular way of doing the same, in particular, by a suitable action of a group AA on the graph GG.

I have doubts about this approach. The "big book" shows many cases where a graph has lots of fibrational symmetry but little global symmetry. But the group actions you mention are in terms of global symmetries, i.e. elements of Aut(G)Aut(G)

Adittya Chaudhuri said:

a left action of AA on GG is defined as a group homomoprhism ρ ⁣:AAut(G)\rho \colon A \to {\rm{Aut}}(G).

Sure, group actions generate surjective fibrations, but rarely minimal ones. That's the mantra of the book, that we need to generalise from global symmetry to local (input) symmetry.

So the answer to your questions

Adittya Chaudhuri said:

are both 'No'.

Interesting!! Thanks!! I realised the flaws in my way of thinking about a relation between global symmetries and minimal fibration symmetries, especially in the world of biology. I agree with your points. For example, a counterexample for question (1) could be ϕ:CmnCn\phi : C_{mn} \to C_n, where n1n \neq 1 and Z/n\mathbb{Z}/n acts in the usual way on CmnC_{mn}.

view this post on Zulip Adittya Chaudhuri (Jul 18 2025 at 16:36):

John Baez said:

By the way, my claimed example of a fibration ϕ:CmnCn\phi: C_{mn} \to C_n should be one that does come from a group action, namely the obvious action of Z/m\mathbb{Z}/m on CmnC_{mn}.

Yes, I agree. I think you meant the action of Z/n\mathbb{Z}/n on CmnC_{mn}, or am I misunderstanding something?

view this post on Zulip Adittya Chaudhuri (Jul 18 2025 at 16:41):

David Corfield said:

image.png

Thanks!! The paper looks interesting.

view this post on Zulip Adittya Chaudhuri (Jul 18 2025 at 16:53):

John Baez said:

Personally I don't feel going around and around C30C_{30} in a loop of size 30 "computes the same thing" or "is biologically isomorphic" to staying in place in C1C_1. This is the simplest of many examples along these lines. So I think we need to be careful about what we claim when we simplify a network to a smaller network using fibrations.

Yes, I fully agree with this point. I also realised "my intuition for minimal fibration is not great", as pointed out by @David Corfield here #theory: applied category theory > Networks in Biology @ 💬 . So I thought of improving my "intuition" for minimal fibrations, and thus I tried to see how minimal fibrations look for some common graphs like feedback loops, feedforward loops, and the \infty graph (attached
Minimalfibration.PNG
)

I realised that although the "length" of both the feedback loop and the feedforward loop is lost during reduction, unlike the feedback loop, one may still recover the whole feedforward loop ((B) in the attached file) from the reduced minimal graph.

I constructed the above minimal fibrations using the proof of Theorem 26 in the paper "Fibration of graphs" by Boldi and Vigna. In particular, the construction uses xyx \sim y iff the input trees of xx and yy are isomorphic.

view this post on Zulip Adittya Chaudhuri (Jul 18 2025 at 17:40):

John Baez said:

For example, the fact that we can dramatically simplify the gene regulatory network of E. coli using fibrations doesn't imply that all the extra complexity of the full E. coli network is irrelevant to its behavior.

I still think fibrations are important.

There's probably a paper that spells this out carefully; I haven't had time to read much. I felt the paper on E. coli went a wee bit overboard.

I think the paper is Fibration symmetries uncover the building blocks of biological networks by Flaviano Morone, Ian Leifer, Hernán A. Makse.

At the moment, my understanding of the importance of fibrational symmetry can be summarized in the following screenshots:
fibr.png

Taken from this paper

Coupling.png

Taken from this paper

importanceoffibrations.png

Taken from this paper

surfib.png
infib.png

Taken from this paper

I think the importance of fibrations depends on how we model the total phase space of regulatory networks from the individual phase spaces of the nodes. If we model it according to Lerman or Stewart, or in a similar approach, then the notion of minimal fibration may become a natural way of reducing the complexity of the network while preserving "synchrony" (although I am yet to understand the word synchrony in a proper way).

I think the real question may be

"If we model the dynamics of our regulatory network in Lerman or Stewart's way, how realistic will the results be from the point of view of biology?"

view this post on Zulip Adittya Chaudhuri (Jul 18 2025 at 17:43):

David Corfield said:

They seem to be looking at things from the other end.

"how can a predictable pattern of synchrony, which is essential for survival, be achieved in biology?"

We know there is life. This must involve robust (approximate) synchronization. No life without synchrony. In a network this is only possible when there's fibrational symmetry.

Nature uses the trick of lifting a network by gene duplication to a new network with the same dynamics. But then the functions of these copies can diverge as new connections form. But without synchronization persisting over the course of the duplication, there would be very little chance of a biological system maintaining viability.

They're claiming also that empirically we see this synchronization:

20.4 Gene coexpression synchronization analysis from massive dataset

Thanks!! Interesting!! I will read your suggested portion.

view this post on Zulip Peva Blanchard (Jul 18 2025 at 21:08):

I don't know if it will help, but just a word about synchronization.

In the distributed computing literature, "synchronization" is usually expressed in terms of a scheduler. Intuitively, the scheduler specifies how events occur in a system: e.g., all processes proceed simultaneously, or only one at a time, or something else. There are many ways to formalize this, but the easiest I know is as a set of (legal) executions.

Let S(G) be the set of legal executions on a network G for the scheduler S. In the presence of a fibration G \to H, we can, quite often, lift legal executions on H to legal executions on G, i.e. we have a map S(H) \to S(G) (or a multi-valued map S(H) \to \mathcal{P}S(G)). But this really depends on the chosen scheduler S.

So, from the distributed computing perspective, strictly speaking, the concept of fibration is not enough to capture patterns of synchrony: you need to specify the scheduler too. I don't know how this remark translates to biology, nor to category theory.

ps: there are other formalizations where the scheduler is stochastic. In that case, the executions you can lift from H can land in a zero-probability subset. So it can be tricky.
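Under a synchronous scheduler, the lifting S(H) \to S(G) described above is just precomposition of each global state with the fibration. A minimal Python sketch (the fibration, node names, and run are all made-up toy data, and it assumes the synchronous case where lifting is this simple):

```python
def lift_execution(phi, execution_on_H):
    """Lift an execution (a list of states, each a dict node -> value)
    along a graph fibration phi : nodes(G) -> nodes(H).

    Each lifted state assigns to a node of G the value of its image under
    phi in the base execution, so all nodes in the same fibre stay
    synchronized by construction.
    """
    return [{v: state[phi[v]] for v in phi} for state in execution_on_H]

# Toy fibration: B1 and B2 both sit over b, so they evolve in lockstep.
phi = {"A": "a", "B1": "b", "B2": "b"}
base_run = [{"a": 0, "b": 0}, {"a": 1, "b": 0}, {"a": 1, "b": 1}]
print(lift_execution(phi, base_run)[2])  # → {'A': 1, 'B1': 1, 'B2': 1}
```

With an asynchronous or stochastic scheduler, as noted above, such a naive lift need not produce a legal (or positive-probability) execution.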

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 06:18):

Thanks!! That's interesting!!!

Although I am yet to understand this fully, I think what you call the "set of legal executions of a network" is what DeVille-Lerman in this paper described as the groupoid-invariant virtual vector fields of a network (Definition 3.2.9 in DeVille-Lerman's paper). And what you said about "lifting legal executions on the base network to legal executions on the network above" in the presence of a fibration, DeVille-Lerman described as "fibrations of networks send groupoid-invariant virtual vector fields of the base network to groupoid-invariant virtual vector fields of the network above" (Proposition 4.2.1 in the paper), which I think ultimately culminates in their main Theorem 4.3.1.

I think the results in "the biology paper on network reduction" that we were discussing before are based on one of the consequences (Lemma 5.1.1) of Theorem 4.3.1.

I think DeVille-Lerman described the "chosen scheduler" of a graph G = (E, V, s, t) as the choice of a map p \colon V \to \mathsf{Man} (Definition 2.1.4 in the paper), which assigns a phase-space manifold to each node of the network.

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 06:40):

Now, what I really find interesting about fibrations of graphs is the fact that even if we do not model the scheduler in the way Lerman/Stewart suggested, the notion is still good enough for other things, such as the distributed systems that @Peva Blanchard discussed here #theory: applied category theory > Networks in Biology @ 💬 . So now I am even more curious to get an answer to the question that I asked yesterday.

Adittya Chaudhuri said:

I think the real question may be

"If we model the dynamics of our regulatory network in Lerman or Stewart's way, how realistic will the results be from the point of view of biology?"

As @David Corfield mentioned here #theory: applied category theory > Networks in Biology @ 💬 , I think the answer to the above question may lie in Chapter 20 of that "big book".

view this post on Zulip Peva Blanchard (Jul 19 2025 at 07:25):

Adittya Chaudhuri said:

Although I am yet to understand fully, I think what you said as the "set of legal executions of a network", DeVille-Lerman in this paper described them as groupoid invariant virtual vector fields of a network (Definition 3.2.9 in DeVille-Lerman's paper).

Interesting, thank you. I just skimmed through the paper. If I try to connect the dots, I am more inclined to say that a legal execution is a solution to the ODEs specified by the data at hand, i.e. the flow of a groupoid-invariant virtual vector field (if I understood correctly). The scheduler is implicitly chosen when the authors choose their "target category" to be a category of continuous-time dynamical systems (every process evolves continuously based on its current state and the states of its in-neighbors, but overall all processes share a common notion of time).

view this post on Zulip Peva Blanchard (Jul 19 2025 at 07:32):

By the way, arguments based on fibrations had limited practical applications for distributed computing (at least as far as I knew back when I was doing research in that field) because "symmetry is usually easy to break in distributed systems" (e.g. with identifiers). It is interesting to see that the big book somehow addresses a similar question for biology, see Chapter 8 "Stability and synchronizability".

This means that, dynamically, if we set the initial condition of the system to respect the fibers—that is, if the nodes in each fiber initially have the same state—then the same clusters remain synchronous for all forward time. However, this property alone is irrelevant to biology, because biological systems must constantly adapt to a changing environment. What matters for biology is stable cluster synchronization; that is, synchronizability. Even when initial conditions are not synchronous, the dynamic trajectory should converge to the cluster state for all initial conditions in some neighborhood of that state.

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 08:06):

Peva Blanchard said:

Interesting, thank you. I just skimmed through the paper. If I try to connect the dots, I am more inclined to say that a legal execution is a solution to the ODEs specified by the data at hand, i.e. the flow of a groupoid-invariant virtual vector field (if I understood correctly).

Thanks!! I think what you said is true.

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 08:11):

Peva Blanchard said:

Adittya Chaudhuri said:

Although I am yet to understand fully, I think what you said as the "set of legal executions of a network", DeVille-Lerman in this paper described them as groupoid invariant virtual vector fields of a network (Definition 3.2.9 in DeVille-Lerman's paper).

The scheduler is implicitly chosen when the authors choose their "target category" to be a category of continuous-time dynamical systems (every process evolves continuously based on its current state and the states of its in-neighbors, but overall all processes share a common notion of time).

Hmm.. Even if you consider a nice target category C, I think we additionally need to choose the specific function p \colon V \to \mathrm{Obj}(C) to "specify the scheduler". Or am I misunderstanding?

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 08:15):

Peva Blanchard said:

What matters for biology is stable cluster synchronization; that is, synchronizability. Even when initial conditions are not synchronous, the dynamic trajectory should converge to the cluster state for all initial conditions in some neighborhood of that state

Interesting!! It somehow reminds me of the notion of attractors in Boolean networks.

view this post on Zulip David Corfield (Jul 19 2025 at 08:45):

Interesting, that chapter 8 on stability. Stewart even gets to go back to his catastrophe theory interests, dating from the 1970s, when that was all the rage at the University of Warwick:

image.png

I guess one shouldn't expect that there would be a neat 'networks for biology' account that covers everything. There needs also to be a study of the biochemical dynamics that keeps the show on the road. But then how separable are these studies? In a more extreme case, one can think about computing without knowing anything about the physics of semiconductors.

Talking of computing, there's the interesting Chap. 18: Synthetic Biology Through Broken Symmetries: the Cell as a Computer

We have seen how the functions of biological networks can be pictured as an orchestra of synchronized instruments captured by the fibration symmetry. In this chapter we elaborate on an even stricter and more taxing criterion for functionality of the network. We ask whether minimal biological circuits can perform core logic computational programs. We show this through the mechanism of ‘fibration symmetry breaking’. These circuits can then be used to build computational molecular machineries. This helps in system biology to design biological machines from the bottom up, following a systematic approach from first principles of symmetry and symmetry breaking.

view this post on Zulip John Baez (Jul 19 2025 at 12:14):

All very interesting!

I guess one shouldn't expect that there would be a neat 'networks for biology' account that covers everything.

Definitely not! I believe that to understand biology mathematically we need lots of new abstractions: 'networks' are just one of the first that comes to mind. (Also, 'networks' means lots of different things.) I imagine biology will always elude our full grasp, just as even chemistry does. Nobody yet has clearly explained the periodic table! But we can keep understanding more and more.

these circuits can then be used to build computational molecular machineries. This helps in system biology to design biological machines from the bottom up, following a systematic approach from first principles of symmetry and symmetry breaking.

Whenever people start talking about biology in terms of 'machines' I get unhappy, because machines are designed to do what we want, while biology is not 'designed' in that sense, and it doesn't do what someone wants - so the 'machine' metaphor can be misleading and even destructive.

If they're talking here about using ideas from biology to design machines, then they're not necessarily making the mistake of analogizing biological systems with machines. But I'm hoping people will eventually try to learn the ways of biology instead of trying to use ideas from biology to expand the world of machines. (So far, the ever-expanding world of machines is killing the world of biology.)

view this post on Zulip David Corfield (Jul 19 2025 at 13:56):

ARIA, my funder, has another program Nature computes better. Embodied Cognition in Single Celled Organisms sounds fun. I hope there's a good sprinkling of Heidegger and Merleau-Ponty in the outputs.

view this post on Zulip John Baez (Jul 19 2025 at 14:11):

Very interesting - thanks for pointing that out!

I am not very optimistic from the proposals listed on this page that any Heideggerian ideas about going beyond nature as ready-to-hand inform any of this work. I feel the culture that's funding these grants is not wanting to change our attitude toward the natural world: it's still following the "understand the natural world in order to exploit it" paradigm.

I would like to break out of this, but it's extremely hard to do while doing science: any good scientific ideas tend to get fed into that paradigm. It's sad, but it's possible the only way to avoid that is to step aside and not publish research on biology! I should figure out my attitude before I do anything more on this subject. I want to "understand the natural world in order to help it".

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 14:17):

John Baez said:

I want to "understand the natural world in order to help it".

Thanks!! I also want the same. I feel "our social system, our economic system, our ecosystem and our created systems of artefacts" are not interconnected in a way that is helpful for the stability of the whole interconnected system. I want to help in this direction.

view this post on Zulip John Baez (Jul 19 2025 at 14:19):

Great! The big question is whether and how we can do this - for example while doing mathematical studies as we are doing here.

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 14:21):

Thanks. Yes, I also wonder whether I can align my mathematical activities in that direction. At the moment, I do not have many ideas on how to approach this big question.

view this post on Zulip John Baez (Jul 19 2025 at 14:21):

I would like to think that deeply understanding the world would help us understand what we're doing wrong - why "our social system, our economic system, our ecosystem and our created systems of artefacts are not interconnected in a way that is helpful for the stability of the whole interconnected system".

But what sort of research goes toward this deep understanding, instead of just figuring out to make new kinds of biologically inspired gadgets, etc?

view this post on Zulip John Baez (Jul 19 2025 at 14:22):

Anyway, it's good to keep this in mind....

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 14:25):

John Baez said:

I would like to think that deeply understanding the world would help us understand what we're doing wrong - why "our social system, our economic system, our ecosystem and our created systems of artefacts are not interconnected in a way that is helpful for the stability of the whole interconnected system".

Thanks very much!! I fully agree with this point. I have often thought that if we somehow mathematically show that our interconnected system is faulty, and then demonstrate the significance of the developed mathematical framework through biology and through historical events, then maybe someday people would rethink and redesign social/political/economic etc. laws so that they align with the mathematical framework that ensures stability of the whole interconnected system.

view this post on Zulip John Baez (Jul 19 2025 at 14:31):

Great. This is why I was (and still actually am) excited about the idea of "bad motifs" in causal loop diagrams: it raises the possibility that there's some invariant concept of what makes a system function "badly" - a concept that is not merely of the sort "X is bad if X causes Y" where Y is some specific consequence we have already decided for external reasons counts as "bad", like "the cell become cancerous". Even if the bad motifs in the book System Archetypes don't provide the sort of invariant concept of "bad motif" that I'm searching for, they're still an interesting attempt.

view this post on Zulip John Baez (Jul 19 2025 at 14:34):

I don't think "stability" is quite the right concept of "well functioning system": a system can become stable by having everything die. But one kind of "badly functioning system" we often see these days, is when one component starts to exponentially grow, crushing everything in its way.... and then ultimately overshoots the available resources and collapses. So this is a particular kind of "bad" behavior that's unstable.

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 14:36):

I am feeling drawn to a notion of "equilibrium in a system" rather than the notion of stability.

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 14:38):

I mean: "Is there always a notion of equilibrium for every sufficiently interconnected system?", which may give a measure of how far the system is from functioning badly.

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 14:41):

John Baez said:

Great. This is why I was (and still actually am) excited about the idea of "bad motifs" in causal loop diagrams: it raises the possibility that there's some invariant concept of what makes a system function "badly" - a concept that is not merely of the sort "X is bad if X causes Y" where Y is some specific consequence we have already decided for external reasons counts as "bad", like "the cell become cancerous". Even if the bad motifs in the book System Archetypes don't provide the sort of invariant concept of "bad motif" that I'm searching for, they're still an interesting attempt.

Thanks very much!! I am very glad to know that. I also feel similar.

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 14:43):

John Baez said:

Anyway, it's good to keep this in mind....

Thanks!! Yes.

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 14:48):

John Baez said:

I don't think "stability" is quite the right concept of "well functioning system": a system can become stable by having everything die.

I agree. For example, programmed cell death (apoptosis) is essential for the stability of the whole system representing the organism. It makes me think that the notion of stability/equilibrium is an emergent property of the interconnected system.

view this post on Zulip John Baez (Jul 19 2025 at 14:51):

Here the death of a component aids the well-being of the organisms. And the death of individual organisms appears to be a necessary part of the well-being of the overall ecosystem. (Some people want to be immortal and regard death as a bad thing, but I believe they're deluded.)

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 14:53):

Yes, that's very true. I agree. It seems then that "stability/equilibrium" is a notion more appropriate for an n-level network where n tends to \infty.

view this post on Zulip John Baez (Jul 19 2025 at 14:54):

Or I might say it this way: there are multiple notions of "health" operating at different levels. (I'm still not sure "stability" is the word we want here. As a physicist I know a lot about stability, but unfortunately it may not be the relevant concept here!)

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 14:55):

I understand your point. I agree. I do not know much physics, so I am just using the word "stability" in a very naive way.

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 14:56):

I find your idea of "there are multiple notions of "health" operating at different levels" super interesting and also very natural.

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 14:58):

I also feel there should be some interconnections between the notions of health at different levels.

view this post on Zulip John Baez (Jul 19 2025 at 14:59):

That sounds good.

Btw, my former student @Joe Moeller has just finished writing two papers on stability from a category-theoretic perspective.

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 15:01):

Thanks!! That's amazing!! I will read them to understand more about stability from the point of view of category theory.

view this post on Zulip John Baez (Jul 19 2025 at 15:03):

The introduction of Joe's paper Categorical Lyapunov theory II: stability of systems may be a good quick way to start. It reviews the classical Lyapunov theorem on stability before generalizing it.

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 15:04):

Thanks very much!! I will read the introduction part.

view this post on Zulip David Corfield (Jul 19 2025 at 19:50):

Hmm, will mathematics tell apart what makes for the stability of a diversely-populated rock pool from the stability of Rome through the 200 years of the Pax Romana period?

view this post on Zulip Adittya Chaudhuri (Jul 19 2025 at 22:26):

I have a question:

According to my understanding, network motifs are crucial in understanding the behaviour of regulatory networks. Network motifs usually make sense when we are working with signed graphs, or more generally monoid-labeled graphs, because we also need to consider the sign patterns. If we think of the "dynamics of a regulatory network" as a particular behaviour of the regulatory network, then it may make sense to ask "how is the dynamics of the network related to the dynamics of the various motifs that occur in the network?"

Now, I think we can generalise the way Lerman/Stewart defined a dynamical system for a graph to the framework of signed graphs. However, it seems to me that unless we suitably generalise/modify several things, their existing approach may not be sufficient to investigate possible interrelations between the dynamics of recurring patterns in regulatory networks and the dynamics of the whole network.

According to my current understanding, I am expecting some interesting interrelationships between the "interconnected dynamics of various motifs that occur in the network" and "the dynamics of the whole network".

For simplicity, I am first restricting my attention to only basic known motifs like feedback loops, feedforward loops, etc.

Am I misunderstanding something fundamentally?

view this post on Zulip David Corfield (Jul 20 2025 at 10:38):

Adittya Chaudhuri said:

I think we can generalise the way Lerman/Stewart defined a dynamical system for a graph to the framework of signed graphs

The big book does consider typed graphs (Def 3.4). This is picked up in Sec 9.5:

image.png

image.png

view this post on Zulip Adittya Chaudhuri (Jul 20 2025 at 11:53):

Thanks. I will read that section.

view this post on Zulip David Corfield (Jul 21 2025 at 13:31):

Once we've sorted out synchronisation in biology in terms of fibrations, we can turn to psychotherapy:

Mind you, that's not a great distance from the big book:

25 Fibration Theory of the Brain III: the Human Language Network

view this post on Zulip David Corfield (Jul 21 2025 at 15:45):

Some very recent papers authored by Ian Stewart et al.:

Might be something there in the pursuit of ways to consider stability.

view this post on Zulip John Baez (Jul 21 2025 at 19:04):

Thanks! It'll take me a while to dig into this stuff... I'm still blogging about the hydrogen atom and then I need to dive into writing about applied double category theory! But I definitely want to learn this material. I'll have to create a reading list.

view this post on Zulip Adittya Chaudhuri (Jul 21 2025 at 21:24):

David Corfield said:

Some very recent papers authored by Ian Stewart et al.:

Might be something there in the pursuit of ways to consider stability.

Thanks very much!! They look very interesting. I will read.

view this post on Zulip David Corfield (Jul 22 2025 at 09:47):

I still reckon the theory based around symmetry fibrations and synchronization is a promising path to follow to understand the functions of networks, especially since these fibrations sit so centrally in CT.

But looking to stability of networks, there's more interesting work by Makse (co-author of the big book), this time in a book titled The Science of Influencers and Superspreaders

image.png

view this post on Zulip David Corfield (Jul 22 2025 at 10:29):

But that chapter ends by pointing the reader back to the big book:

image.png

view this post on Zulip Adittya Chaudhuri (Jul 22 2025 at 10:41):

David Corfield said:

I still reckon the theory based around symmetry fibrations and synchronization is a promising path to follow to understand the functions of networks, especially since these fibrations sit so centrally in CT.

I believe so too, and I find the concept of graph fibrations very interesting.

view this post on Zulip Adittya Chaudhuri (Jul 22 2025 at 10:41):

David Corfield said:

But looking to stability of networks, there's more interesting work by Makse (co-author of the big book), this time in a book titled The Science of Influencers and Superspreaders

image.png

Thanks!!

view this post on Zulip Adittya Chaudhuri (Jul 23 2025 at 19:05):

Hi!! Today, I met and talked with a Biologist named Suman Kumar Banik about several things.

I find the following aspects of his work interesting (although I am yet to fully understand what he said):

He is working on "synergistic effects" of coherent and incoherent feedforward loops from the point of view of fluctuations, noise and Shannon entropy. What I find interesting is his claim that we cannot just add the effects of the edges x \xrightarrow{l(e)} y and x \xrightarrow{l(e')} y as l(e) + l(e') (when the labeling monoid is, of course, commutative). He said there is something called a "cross-term" which arises from the "gluing of such effects", and it is very important in biology to consider these cross-terms. So the cumulative effect would be l(e) + l(e') + \text{cross-terms}. He said Uri Alon explained this in a very implicit way in his book. Currently, he is working on understanding these cross-terms for both coherent and incoherent feedforward loops.

view this post on Zulip Kevin Carlson (Jul 23 2025 at 20:06):

Interesting! But have you guys been trying to just sum up effects over parallel edges like that?

view this post on Zulip John Baez (Jul 24 2025 at 07:50):

We guys haven't been working much on semantics for monoid-labeled graphs yet! One could easily write a whole paper on that.

view this post on Zulip John Baez (Jul 24 2025 at 07:50):

But we have talked about morphisms of commutative-monoid-labeled graphs where when you map several edges down to one, you sum their edge labels. We've been calling these 'additive morphisms'.

view this post on Zulip David Corfield (Jul 24 2025 at 08:01):

Adittya Chaudhuri said:

the cumulative effect would be l(e) + l(e') + \text{cross-terms}

Are the cross-terms dependent only on l(e) and l(e')? Could we think of l(e) + l(e') + \text{cross-terms} as a new binary operation on the monoid?

view this post on Zulip John Baez (Jul 24 2025 at 08:06):

In a direct message I said to Adittya that I'd like to see a concrete example of this cross-term business. I can't understand this sort of thing without examples.

view this post on Zulip Peva Blanchard (Jul 24 2025 at 10:46):

It reminds me of the entropy of a joint distribution

H(X,Y) = H(X) + H(Y) - I(X,Y)

where I(X,Y) is the mutual information.

But indeed, it would be interesting to have a concrete example.
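The identity above is easy to check numerically. A small self-contained Python example with toy data (the samples are made up; this is just the empirical joint-entropy computation, not anything from Banik's paper):

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (in bits) of the empirical distribution of `samples`."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

# Joint samples of (X, Y); here Y copies X, so the whole 1 bit of H(X)
# is shared, and the "cross-term" I(X;Y) = H(X) + H(Y) - H(X,Y) is 1.
pairs = [(0, 0), (1, 1), (0, 0), (1, 1)]
xs = [x for x, _ in pairs]
ys = [y for _, y in pairs]
mutual_info = entropy(xs) + entropy(ys) - entropy(pairs)
print(mutual_info)  # → 1.0
```

If the edge effects were genuinely independent, the cross-term would vanish and the contributions would simply add.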

view this post on Zulip Adittya Chaudhuri (Jul 24 2025 at 11:31):

John Baez said:

In a direct message I said to Adittya that I'd like to see a concrete example of this cross-term business. I can't understand this sort of thing without examples.

Identifying the sources of noise synergy and redundancy in the gene expression of feed-forward loop motif: this is his paper, and the cross-term he was talking about is shown in the attached screenshot from the paper (although I am yet to read and understand the concepts).
Crosstermfeedforward.png

view this post on Zulip Adittya Chaudhuri (Jul 24 2025 at 11:34):

David Corfield said:

Adittya Chaudhuri said:

the cumulative effect would be l(e) + l(e') + \text{cross-terms}

Are the cross-terms dependent only on l(e) and l(e')? Could we think of l(e) + l(e') + \text{cross-terms} as a new binary operation on the monoid?

Thanks! Possibly!! I have not thought in this direction yet. I think finding such a binary operation will also involve formulating precisely "what a cross-term is" from the point of view of biology.

view this post on Zulip Adittya Chaudhuri (Jul 25 2025 at 20:26):

Hi!! Below, I am writing down my thoughts about "what I think is an anticipatory system" from the point of a discrete network.

Naively generalising what @John Baez defined here #theory: applied category theory > Networks in Biology @ 💬

Definition:
A discrete network X(V) on a finite set V consists of the following:

A state of X(V) is a function S \colon V \to X, and the n-th time evolution of X(V) is given by the n-th composite of the function F \colon X^{V} \to X^{V} induced from the state functions \phi_{v}.

Now, I am defining an anticipatory discrete network AX(V) on a finite set V as a set V whose state functions are defined in such a way that the (t+1)-th state of v is dependent on

I imagine the situation as attached:
anticipatorysystems.PNG

So, the (t+1)-th state of v is dependent on the t-th states of the blue nodes (which influence v) and the t-th time evolution of the states of the red nodes (which get influenced by v).

In a way, I feel the above is more realistic than the usual discrete dynamical system. I may be mistaken about something.
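The non-anticipatory definition above (local rules \phi_v inducing a global F \colon X^V \to X^V) can be sketched in a few lines of Python. The nodes and the Boolean update rules below are made-up toy choices, not from any paper:

```python
def make_F(phi):
    """Build the global update F : X^V -> X^V from local update rules.

    `phi` maps each node v to a function that takes the current global
    state (a dict node -> value) and returns the next value at v.
    """
    return lambda state: {v: phi_v(state) for v, phi_v in phi.items()}

def evolve(F, state, n):
    """n-th time evolution: apply F to the state n times."""
    for _ in range(n):
        state = F(state)
    return state

# Toy Boolean network on V = {u, v}: u negates itself, v copies u.
phi = {"u": lambda s: 1 - s["u"], "v": lambda s: s["u"]}
F = make_F(phi)
print(evolve(F, {"u": 0, "v": 0}, 2))  # → {'u': 0, 'v': 1}
```

An anticipatory version would additionally let each \phi_v see the evolution so far of the nodes that v influences, i.e. the rule would take a history of states rather than a single state.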

view this post on Zulip Adittya Chaudhuri (Jul 25 2025 at 20:34):

I am modelling anticipation in the following way:

The state of v is dependent not only on the "just previous" states of the nodes that influence v, but also on the "so far" evolution of the states of the nodes that v can influence.

I am thinking like this because usually our actions depend both on past experiences of the things that influenced us and on some imagined consequences of our present choice of action. Interestingly, these imagined consequences are the "data that we have" about "how our actions have influenced other things in the past".

view this post on Zulip Adittya Chaudhuri (Jul 25 2025 at 20:36):

Has there been any work on something like "anticipatory dynamics" of networks?

view this post on Zulip Adittya Chaudhuri (Jul 25 2025 at 20:52):

On a different note: although I could not properly understand what Rosen meant by "anticipatory systems" here, he indeed conjectured that biological systems are anticipatory in nature. So, instead of going into the details of what he said, I just wanted to create the "simplest" framework where such anticipation may make sense mathematically, and a bit biologically too.

view this post on Zulip Adittya Chaudhuri (Jul 25 2025 at 20:55):

This is what Rosen wrote at the beginning of that book:

anticipatory.png

view this post on Zulip Adittya Chaudhuri (Jul 25 2025 at 23:23):

On a different note

Although I should have realised it before, I think any discrete network X(V) (as defined here #theory: applied category theory > Networks in Biology @ 💬 ) canonically determines a coalgebra f_{X,V} \colon X^{V} \to X^{V} of the identity functor \mathrm{Id} \colon \mathsf{Set} \to \mathsf{Set}. Thus f_{X,V} can be thought of as the discrete dynamical system associated to the discrete network X(V). On the other hand, X(V) equivalently determines a functor P \colon B\mathbb{N} \to \mathsf{Set} such that P(1) = f_{X,V}, where B\mathbb{N} is the one-object category of the additive monoid of natural numbers. Thus the n-th time evolution of X(V) is f^{n}_{X,V} = P(1) \circ P(1) \circ \cdots \circ P(1) (n times), and the functor P is completely determined by the coalgebra f_{X,V}. I think the functor category \mathsf{Set}^{B\mathbb{N}} is equivalent to the category of coalgebras of the identity functor \mathrm{Id} \colon \mathsf{Set} \to \mathsf{Set}. The authors of the paper Categorical foundations of discrete dynamical systems described discrete dynamical systems in terms of functors P \colon B\mathbb{N} \to \mathsf{Set}. If I am not misunderstanding, it is a little surprising to me that they have not used the term coalgebra anywhere in that paper.
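The correspondence between the two descriptions is just the standard unpacking; in LaTeX (nothing here goes beyond what the paragraph above says):

```latex
% An Id-coalgebra is a pair (X, f) with f : X -> X. A functor
% P : B\mathbb{N} \to \mathsf{Set} is determined by the set P(*) and the
% endofunction P(1), since functoriality forces
%     P(0) = \mathrm{id}_{P(*)}, \qquad P(m+n) = P(m) \circ P(n).
% The translations
%     (X, f) \longmapsto P \ \text{with}\ P(*) = X,\ P(n) = f^{\circ n},
%     P \longmapsto (P(*),\, P(1))
% are mutually inverse, giving the expected equivalence
%     \mathsf{Set}^{B\mathbb{N}} \simeq \mathsf{Coalg}(\mathrm{Id}_{\mathsf{Set}}).
```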

view this post on Zulip Adittya Chaudhuri (Jul 25 2025 at 23:57):

John Baez said:

Btw, my former student Joe Moeller has just finished writing two papers on stability from a category-theoretic perspective.

Now, I think I am slowly grasping the F-coalgebra perspective on systems. Example 3.4 in Joe Moeller's paper gives precisely the definition of a discrete dynamical system in terms of {\rm Id}-coalgebras, which are basically the same as the functors P \colon B\mathbb{N} \to \mathsf{Set} considered in the paper Categorical foundations of discrete dynamical systems. Interesting!! Now, I am getting a feeling that maybe the right language for describing the dynamics of regulatory networks is coalgebras. I will read Joe Moeller's paper in detail to understand more about this perspective.

view this post on Zulip Adittya Chaudhuri (Jul 26 2025 at 19:08):

Hi!!

Recently, I have started working on an idea which combines two things in a single framework.

First of all, thanks very much @John Baez for giving me so much helpful feedback on the initial version of the concepts, which we discussed in direct messages.

The motivation behind this idea lies in the fact that "the state of a vertex v depends not only on the states of the neighbouring vertices but also on the types of influences those neighbouring vertices have on v". I call this notion a labeled discrete network. Below, I briefly describe the concept.

Let M be a monoid. An M-labeled discrete network X(V, M) consists of the following data:

From the above data,

The reason for choosing a labeling monoid M over a `labeling set' is that later I want to study the motifs in the underlying graph (G, \ell) of X(V, M).

I will be very glad to get feedback about my notion of a labeled discrete network.

Thanks everyone in advance.

view this post on Zulip Adittya Chaudhuri (Jul 27 2025 at 09:04):

As I mentioned above: my main motivation for defining the notion of a labeled discrete network is to incorporate the notion of "types" of influences of the neighbouring vertices into the framework of a usual discrete network, where such types are usually ignored. Hence, it is natural to expect that if we ignore "such types of influences", then my "labeled discrete network" should reduce to a usual discrete network. Indeed, this is the case: in my definition of a labeled discrete network, we just take the trivial monoid M = \lbrace 1 \rbrace.

Furthermore, if we take X = \mathbb{B} = \lbrace 0, 1 \rbrace and M = \lbrace 1 \rbrace, then we recover the definition of a Boolean network as @John Baez defined here #theory: applied category theory > Networks in Biology @ 💬.
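Since the data list in the definition above was elided in the archive, here is my reading of it as a Python sketch (the names N, i, phi are illustrative, not notation fixed by the thread): a synchronous step sends each vertex's state to phi_v applied to the (state, label) pairs of its in-neighbours, and with the trivial monoid and X = \mathbb{B} this is exactly a Boolean network update.

```python
# My reading of one synchronous step of an M-labeled discrete network
# (illustrative names, not fixed notation):
#   N[v]      : the in-neighbourhood of v,
#   i[(w, v)] : the label in M of the influence of w on v,
#   phi[v]    : the state function of v, taking (state, label) pairs.
def step(V, N, i, phi, config):
    return {v: phi[v](tuple((config[w], i[(w, v)]) for w in N[v]))
            for v in V}

# Trivial monoid M = {1} and X = {0, 1}: a Boolean network.  Toy
# example where each node is the negation of the other.
V = ("a", "b")
N = {"a": ("b",), "b": ("a",)}
i = {("b", "a"): 1, ("a", "b"): 1}           # all labels trivial
phi = {v: (lambda args: 1 - args[0][0]) for v in V}
```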

view this post on Zulip David Corfield (Jul 27 2025 at 09:20):

I'm a bit confused by this:

Adittya Chaudhuri said:

Definition:
A discrete network X(V) on a finite set V consists of the following:

Shortly after, you discuss a directionality. So is it that, for a node v, the nodes among the neighbours of v, N_v, are nodes that influence v, hence the red arrows to v; and for the nodes w such that v \in N_w, there is a blue arrow from v to w?

If so, have we gained anything over starting out with a directed graph?

view this post on Zulip Adittya Chaudhuri (Jul 27 2025 at 09:28):

David Corfield said:

I'm a bit confused by this:

Adittya Chaudhuri said:

Definition:
A discrete network X(V) on a finite set V consists of the following:

Shortly after, you discuss a directionality. So is it that, for a node v, the nodes among the neighbours of v, N_v, are nodes that influence v, hence the red arrows to v; and for the nodes w such that v \in N_w, there is a blue arrow from v to w?

Here, N_{v} consists of the vertices which influence v: there is a unique edge from every element of N_{v} to v, represented by blue arrows in my diagram. The red arrows represent how v influences other vertices.

view this post on Zulip Adittya Chaudhuri (Jul 27 2025 at 09:31):

David Corfield said:

I'm a bit confused by this:

Adittya Chaudhuri said:

Definition:
A discrete network X(V) on a finite set V consists of the following:

If so, have we gained anything over starting out with a directed graph?

I think the directed graph does not carry the full information of the state functions \phi_{v}.

view this post on Zulip Adittya Chaudhuri (Jul 27 2025 at 09:33):

That is why I would like to think that we can construct a directed graph out of it, rather than starting with a directed graph in the first place.

view this post on Zulip David Corfield (Jul 27 2025 at 09:34):

Sure, we need to add in states and dynamics. But I don't see why specification of in-neighbourhoods is the best way to describe the topology of the network, especially when you're going on to involve the out-neighbourhoods in the dynamics.

view this post on Zulip Adittya Chaudhuri (Jul 27 2025 at 09:38):

I understand your point, and I agree. First, I wanted to define a simpler notion which naively generalises the Boolean-network perspective; then I wanted to incorporate the "out-neighbourhood" perspective by appropriately redefining the state functions \phi_{v}, so that they depend on the time evolution of the vertices in the out-neighbourhood.

view this post on Zulip Adittya Chaudhuri (Jul 27 2025 at 09:41):

I am still working on the idea of incorporating the "out-neighbourhood" into the dynamics. The idea is not yet concrete enough to write down a definition.

view this post on Zulip Adittya Chaudhuri (Jul 27 2025 at 09:44):

I am interested in a two-way generalisation of the usual discrete dynamics:

Here #theory: applied category theory > Networks in Biology @ 💬 , I worked on (1).

I am still thinking about (2).

view this post on Zulip Adittya Chaudhuri (Jul 27 2025 at 10:49):

I may be misunderstanding a lot of things, but below I describe the biological motivation that I have in mind for defining a notion of "labeled discrete network", i.e. (1) above.

Step 1:
Say we start with a set V of biomolecules, and through experimental and analytical studies we know how they influence each other.

In other words, for a monoid M, we can define an influence function

i \colon V \times V \to M \cup \lbrace 0 \rbrace,

where M \cup \lbrace 0 \rbrace is the unique monoid which extends M by adding 0 as an absorbing element.

Here, the monoid structure gives us indirect influences of one biomolecule on another. Also, I impose the extra condition that i(w,v) = 0 if and only if we find out, by experimental or other studies, that the biomolecule w has no influence on v.
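As a sketch of the construction M \cup \lbrace 0 \rbrace (with the sign monoid (\lbrace +1, -1 \rbrace, \times) as an illustrative choice of M, not one fixed by the thread):

```python
ZERO = "0"  # the adjoined absorbing element (any fresh token works)

def extend_mul(mul):
    """Extend a monoid multiplication on M to M ∪ {0},
    with 0 absorbing: 0*m = m*0 = 0."""
    def mul0(a, b):
        if a == ZERO or b == ZERO:
            return ZERO
        return mul(a, b)
    return mul0

# Illustrative choice: the sign monoid ({+1, -1}, *), so that
# i(w, v) = 0 encodes "w has no influence on v", while signs
# compose along paths for indirect influences.
sign_mul = extend_mul(lambda a, b: a * b)
```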

Step 2:
We realise that Step 1 alone is not sufficient for understanding the interactions, because each biomolecule has various states depending on its level of concentration, and these states evolve over time. For example, when we work with the Booleans \mathbb{B}, a state 0 on a biomolecule v may mean that v is unavailable for playing its role, i.e. influencing others, and the state 1 may mean that it is available for influencing others.

Step 3:

Thus, I am proposing my definition of "labeled discrete networks" #theory: applied category theory > Networks in Biology @ 💬 , which carries the following information:

Note that in my definition of a labeled discrete network here #theory: applied category theory > Networks in Biology @ 💬 , I assumed that the type of influence of one vertex on another does not change with time (the function i \colon V \times V \to M); only the states of the vertices change with time. Thus, the types of influences are preserved under the dynamics. However, the types of influences do play a role in the change of states.

If I am not misunderstanding, this assumption seems reasonable, at least in the biological sense. However, from a social-dynamics perspective, it may be better to have a definition which does not assume that the type of influence of one vertex on another is fixed in time.

view this post on Zulip Adittya Chaudhuri (Jul 27 2025 at 11:42):

I also want to clarify that the kind of \mathbb{Z}/2-labeling on the directed graph obtained from a Boolean network that biologists usually consider, such as here
"a screenshot from the published version" as here
IMG_0393.PNG

is not the same as the underlying \mathbb{Z}/2-labeled graph that I considered here #theory: applied category theory > Networks in Biology @ 💬. My \mathbb{Z}/2-labeling records influence types, whereas their \mathbb{Z}/2-labeling represents the Boolean logic, and thus they have to restrict themselves to the state set X = \mathbb{B} to obtain their \mathbb{Z}/2-labeling.

I think Kauffman and others considered the "Boolean logic" kind, and showed its significance in studying gene regulation.

view this post on Zulip John Baez (Jul 27 2025 at 11:50):

I don't know what theorems you plan to prove using your definition, so it's hard to say anything about your definition. When I state definitions, I usually also state a bunch of theorems that I plan to prove using these definitions.

(What I do is look at examples of things, and notice that whenever X is true then Y and Z tend to be true. Then I make X into a definition and conjecture that X implies Y and Z. I usually don't just think of concepts and state them as definitions.)

view this post on Zulip Adittya Chaudhuri (Jul 27 2025 at 11:54):

Thanks!! I understand your point. The motivation for my definition came from

I am now working on developing statements of some mathematical theorems on the basis of the above, so that the framework becomes mathematically reasonable. In doing so, I may change my definition accordingly in the future.

view this post on Zulip John Baez (Jul 27 2025 at 11:56):

Okay. I think it's better to figure out the theorems before stating the definitions. That is: figure out the patterns that hold, and turn these into theorems (or at least conjectures), and then notice that these theorems are easiest to state if one makes certain definitions.

view this post on Zulip John Baez (Jul 27 2025 at 11:57):

The physicist John Wheeler said something similar: "never do a calculation until you already know the answer". Of course he was joking somewhat. But he meant that it helps to know where you're going before you start calculating.

view this post on Zulip Adittya Chaudhuri (Jul 27 2025 at 11:57):

Thanks very much!! I agree. I will work in the way you suggested.

view this post on Zulip Adittya Chaudhuri (Jul 27 2025 at 20:21):

I think I figured out something interesting. I will try to write out my ideas below:

My goal is to study the discrete dynamics of graphs with polarities. However, I want to reframe my goal in a slightly different way:

We start with a set V, whose elements are called nodes. When we choose two nodes u and v, either we know that u influences v in a specific way, or we know that u has no influence on v. Also, nodes can have indirect influences: if u influences v and v influences w, then there is an indirect influence of u on w. Then, I want a notion of states of nodes, and I want the state of a node v to depend on the states of those vertices which can influence v. This is what I mean by a "graph with polarities equipped with states". I am calling it a monoid-labeled discrete network. What I just said in words is written formally as X(V, M) here #theory: applied category theory > Networks in Biology @ 💬 , except for a missing condition that I am now adding to incorporate the indirect influences:

I want the map i \colon V \times V \to M to satisfy the condition that if i(u,v) = m and i(v,w) = n, then i(u,w) = mn.

So, restating everything formally:

Let M be a monoid. An M-labeled discrete network X(V, M) consists of the following data:

Next, given a monoid M, we can define a morphism f \colon X(V, M) \to X'(V', M) as a pair \big( f_{V} \colon V \to V', f_{X} \colon X' \to X \big), which satisfies certain natural compatibility conditions with the neighbourhoods, influence functions and state functions. I have verified that the usual composition of functions gives us D_{M}, the category of M-labeled discrete networks.

Next, I am taking the functor category \mathsf{Set}^{B\mathbb{N}} as my definition of the category of discrete dynamical systems. Then, given an M-labeled discrete network X(V, M), I have constructed a presheaf F_{(X,V,M)} \colon B\mathbb{N} \to \mathsf{Set}.

Next, I defined the following:

Definition:
Let M be a monoid. We define the discrete dynamics of an M-labeled discrete network X(V, M) as the presheaf F_{(X,V,M)} \colon B\mathbb{N} \to \mathsf{Set}.

Conjecture 1
For any monoid M, there is a functor \mathcal{F} \colon D_{M}^{\rm op} \to \mathsf{Set}^{B\mathbb{N}} which takes X(V, M) to F_{(X,V,M)}.

On the other hand, given an M-labeled discrete network X(V, M), I constructed an M-labeled graph (G_{X,V,M}, \ell).

Next, I defined the following:

Definition:
Let M be a monoid. We define the underlying M-labeled graph of X(V, M) as the M-labeled graph (G_{X,V,M}, \ell).

Conjecture 2
For any monoid M, there is a functor \mathcal{G} \colon D_{M} \to \mathsf{Gph}/G_{M} which takes X(V, M) to (G_{X,V,M}, \ell).

Observe that the M-labeled graph (G_{X,V,M}, \ell) contains a lot of important information about X(V, M), my object of interest.

Now, let us assume M is commutative. We consider the slice category \mathsf{ComMon}/M. We know from our last paper that given an M-labeled graph (G, \ell), we can define a feedback homomorphism p \colon H_1(G, \mathbb{N}) \to M.

Conjecture 3:
Given a commutative monoid M, there is a functor \mathcal{H} \colon \mathsf{Gph}/G_{M} \to \mathsf{ComMon}/M which takes an M-labeled graph (G, \ell) to its feedback.

Combining Conjecture 2 with Conjecture 3, we get

Conjecture 4:

Given a commutative monoid M, there is a functor \mathcal{\bar{H}} \colon D_{M} \to \mathsf{ComMon}/M which takes an M-labeled discrete network to the feedback of its underlying M-labeled graph (G, \ell).

At the moment, I want to study the functors in Conjecture 1 and Conjecture 4, and later, if everything goes well, I want to see whether I can lift these functors to double functors and study Conjectures 1 and 4 compositionally via standard ACT results on structured/decorated cospans.

view this post on Zulip Peva Blanchard (Jul 28 2025 at 09:14):

Next, given an M-labeled discrete network X(V, M), I have constructed a presheaf F_{(X,V,M)} \colon B\mathbb{N} \to \mathsf{Set}.

If I understood correctly, the presheaf FF

\gamma'(v) = \phi_v\big( (\gamma(w), i(v,w))_{w \in N_v} \big)

Is that correct?

view this post on Zulip Peva Blanchard (Jul 28 2025 at 09:14):

(back to our short discussion earlier about schedulers: if I understood correctly, the above definition is a way to specify a synchronous scheduler)

view this post on Zulip Adittya Chaudhuri (Jul 28 2025 at 09:53):

Peva Blanchard said:

Next, given an M-labeled discrete network X(V, M), I have constructed a presheaf F_{(X,V,M)} \colon B\mathbb{N} \to \mathsf{Set}.

If I understood correctly, the presheaf FF

\gamma'(v) = \phi_v\big( (\gamma(w), i(v,w))_{w \in N_v} \big)

Is that correct?

Hi! I think you meant the generator arrow 1 of the one-object category B\mathbb{N} (which contains countably many arrows, giving us the time evolution) associated to the additive monoid of natural numbers \mathbb{N}. Yes, your description looks correct, assuming you meant \gamma(w) as \gamma|_{N_{v}}(w).

view this post on Zulip Peva Blanchard (Jul 28 2025 at 09:54):

Oh yes, right (the generator arrow 1). Thanks

view this post on Zulip Adittya Chaudhuri (Jul 28 2025 at 09:56):

Peva Blanchard said:

(back to our short discussion earlier about schedulers: if I understood correctly, the above definition is a way to specify a synchronous scheduler)

Interesting. So, if I am understanding correctly, did you mean that my definition of an M-labeled discrete network is equipped with a synchronous scheduler, i.e. F_{X,V,M}?

view this post on Zulip Peva Blanchard (Jul 28 2025 at 10:01):

Yes, indeed.

view this post on Zulip Adittya Chaudhuri (Jul 28 2025 at 10:04):

I see. Thanks. I have read that in biological systems it is often more realistic to consider an asynchronous scheduler (see page 3 here), where the updates of the nodes do not happen synchronously, i.e. at the same time.

view this post on Zulip Adittya Chaudhuri (Jul 28 2025 at 10:10):

How do you usually handle "asynchronous schedulers" in computer science? Are there any standard techniques?

view this post on Zulip Peva Blanchard (Jul 28 2025 at 10:25):

This is a risky rabbit-hole. There are usually not many different ways to specify a synchronous scheduler, but there are many, many different ways to specify an "asynchronous" scheduler, depending on the underlying computation model.

Let me try to sketch a few variants:

  1. You can assume that at every time step t, only a subset V_t \subseteq V of nodes is "triggered". Hence, for two consecutive configurations:

\gamma_{t+1}(v) = \begin{cases} \phi_v\big( (\gamma_{t}(w), i(v,w))_{w \in N_v} \big) & \text{if } v \in V_t \\ \gamma_t(v) & \text{if } v \not\in V_t \end{cases}

If you allow any sequence (V_t)_{t \ge 0}, you stumble on trivial executions (e.g., no node is ever triggered) or unfair executions (e.g., some subset of nodes is only triggered finitely many times). So you may want to include "fairness" conditions: e.g., every node is triggered infinitely many times, or every node is triggered uniformly at random, independently at every time step.

  2. Another variant is "asynchronous message-passing", probably the wildest; this one is further from your setting. Imagine that, instead of globally defined time events, you have "send-message" and "receive-message" events. A node can send a message to an out-neighbour, or receive a message from an in-neighbour. The intertwining of these events defines an execution (aka a schedule). You may want to impose at least some restrictions on the ordering of these events: e.g. you cannot receive a message that has not been sent before; see for instance Lamport's "happened-before" relation. In this case, a scheduler is a set of "event posets".
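Variant 1 can be sketched in Python as follows (names are mine, not fixed by the thread; the fairness condition is approximated by independent coin flips per node per step):

```python
import random

def async_run(V, N, i, phi, config, schedule):
    """Variant 1: at step t only the nodes in schedule[t] fire; all
    other nodes keep their state.  Neighbour states are read from the
    previous configuration (one common convention)."""
    for fired in schedule:
        config = {
            v: phi[v](tuple((config[w], i[(w, v)]) for w in N[v]))
               if v in fired else config[v]
            for v in V
        }
    return config

def fair_random_schedule(V, steps, seed=0):
    """Probabilistically fair: each node fires at each step with
    probability 1/2, independently of everything else."""
    rng = random.Random(seed)
    return [{v for v in V if rng.random() < 0.5} for _ in range(steps)]
```

Taking schedule = [V, V, V, ...] recovers the synchronous case.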

view this post on Zulip Adittya Chaudhuri (Jul 28 2025 at 11:19):

Thanks!!

I think the data of the presheaf F_{X,V,M} is determined by the family of functions \lbrace \phi_{v} \rbrace_{v \in V}. From your description of a scheduler in your previous post #theory: applied category theory > Networks in Biology @ 💬 , it seems to me that \lbrace \phi_{v} \rbrace_{v \in V} is a scheduler for the M-labeled graph induced by the data of V, \lbrace N_{v} \rbrace_{v \in V} and i \colon V \times V \to M \cup \lbrace 0 \rbrace.

So, I think whether the scheduler is synchronous or asynchronous should depend on the definition of \lbrace \phi_{v} \rbrace_{v \in V}. Right?

On the other hand, I think F_{X,V,M} \colon B\mathbb{N} \to \mathsf{Set} is just a state-updating scheme over discrete time intervals (induced by the scheduler \lbrace \phi_{v} \rbrace_{v \in V}) for the M-labeled graph induced by the data of V, \lbrace N_{v} \rbrace_{v \in V} and i \colon V \times V \to M \cup \lbrace 0 \rbrace.

view this post on Zulip Peva Blanchard (Jul 28 2025 at 11:52):

Mmh, it is a bit strange to me because I never tried to formulate the notion of scheduler in category-theoretic terms, so I may be wrong in connecting the dots. (But it is an interesting exercise!).

So, I think whether the scheduler is synchronous or asynchronous should depend on the definition of \lbrace \phi_{v} \rbrace_{v \in V}. Right?

I am thinking that a scheduler should "naturally depend on" the local update rules \{\phi_v\}_{v \in V}. Maybe it is best captured in the (yet to be proven) fact that {\cal F} \colon D^{\rm op}_M \to \mathsf{Set}^{B\mathbb{N}} is a functor. I could name this functor "the synchronous scheduler functor".

view this post on Zulip Adittya Chaudhuri (Jul 28 2025 at 12:33):

Peva Blanchard said:

Mmh, it is a bit strange to me because I never tried to formulate the notion of scheduler in category-theoretic terms, so I may be wrong in connecting the dots. (But it is an interesting exercise!).

So, I think whether the scheduler is synchronous or asynchronous should depend on the definition of \lbrace \phi_{v} \rbrace_{v \in V}. Right?

I am thinking that a scheduler should "naturally depend on" the local update rules \{\phi_v\}_{v \in V}. Maybe it is best captured in the (yet to be proven) fact that {\cal F} \colon D^{\rm op}_M \to \mathsf{Set}^{B\mathbb{N}} is a functor. I could name this functor "the synchronous scheduler functor".

Okay. So the functor {\cal F} \colon D^{\rm op}_M \to \mathsf{Set}^{B\mathbb{N}} contains not only the data of how networks are updated (at the object level); it also contains the data of how the synchrony-preserving transformations of networks, i.e. those preserving \lbrace \phi_{v} \rbrace_{v \in V} (at least I constructed the definition of morphisms in that way), are updated (at the morphism level). Interesting!!

Although I will think more about it, at the moment I like to think of the functor {\cal F} \colon D^{\rm op}_M \to \mathsf{Set}^{B\mathbb{N}} as a functorial updating scheme for the network (the underlying M-labeled graph) equipped with a scheduler \lbrace \phi_{v} \rbrace_{v \in V}.

For me, the synchrony is still most closely tied to \lbrace \phi_{v} \rbrace_{v \in V}. I am thinking of \lbrace \phi_{v} \rbrace_{v \in V} as an analogue of the vector fields associated to each node in a network of manifolds, as studied by Lerman-DeVille here.

view this post on Zulip Adittya Chaudhuri (Jul 28 2025 at 18:04):

Regarding examples of the monoid-labeled discrete network that I defined here #theory: applied category theory > Networks in Biology @ 💬 :

I already mentioned that any Boolean network is a special case of an M-labeled discrete network X(V, M), when we choose X = \mathbb{B} and M = \lbrace 1 \rbrace. However, the main motivation for my definition of M-labeled discrete networks lies in the study of the dynamics of regulatory networks modeled as rig-labeled graphs. More precisely, a large class of examples of monoid-labeled discrete networks arises from rig-labeled graphs equipped with the data of schedulers, which may make my notion of monoid-labeled discrete network a good candidate for describing the discrete dynamics of regulatory networks.

First, I recall the notion of a rig (R, +, 0, \times, 1)-labeled graph that @John Baez and I studied in this paper.

Definition:
Given a rig (R, +, 0, \times, 1), we define an R-labeled graph as a pair (G, \ell \colon E \to R).

Next, I define a notion of morphism of R-labeled graphs (which we did not explicitly define in our paper).

Definition:
Given a rig R, a morphism \phi \colon (G, \ell) \to (G', \ell') is defined as an additive morphism of (R, +, 0)-labeled graphs.

Next, I introduce a notion of an R-labeled graph equipped with a scheduler:

Definition:
Let (R, +, 0, \times, 1) be a rig. We define an R-labeled graph equipped with a scheduler as a pair \big( (G, \ell), \lbrace \phi_{v} \colon (X \times R)^{N_{v}} \to X \rbrace_{v \in V} \big), where N_{v} = \lbrace w \in V \colon \text{there is an edge } e \in E \text{ such that } s(e) = w \text{ and } t(e) = v \rbrace. We say the R-labeled graph (G, \ell) is equipped with the scheduler \lbrace \phi_{v} \colon (X \times R)^{N_{v}} \to X \rbrace_{v \in V}.

This is the definition where I am making an analogy with what Lerman-DeVille defined as a network in their paper. I learnt the word "scheduler" for the first time from @Peva Blanchard. Thanks!! Now, the justification for the word "scheduler" will probably come from the Lemma below (which I observed today).

Lemma:

For any R-labeled graph with scheduler \big( (G, \ell \colon E \to R), \lbrace \phi_{v} \colon (X \times R)^{N_{v}} \to X \rbrace_{v \in V} \big), there is an (R, +, 0)-labeled discrete network X(V, (R, +, 0)).

The construction follows by observing that the map

i \colon V \times V \to (R, +, 0) \cup \lbrace \bar{0} \rbrace can be defined in the following way:

Let v_1, v_2 \in V. If \text{path}(v_1, v_2) is non-empty, then define

i(v_1, v_2) := \sum_{\gamma \in \text{path}(v_1, v_2)} \ell(\gamma);

otherwise, set i(v_1, v_2) := \bar{0} when \text{path}(v_1, v_2) is empty.

Note that in the definition of \ell(\gamma), I have used the multiplicative monoid (R, \times, 1).

Thus (if I am not making any mistakes), we now have a large class of examples of monoid-labeled discrete networks arising from models of regulatory networks.
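Here is a sketch of this cumulative-influence map for a finite *acyclic* R-labeled graph, with the rig (\mathbb{N}, +, 0, \times, 1) as an illustrative choice; acyclicity keeps the sum over paths finite (the issue with general graphs is discussed further below).

```python
def influence(edges, u, v, add=lambda a, b: a + b,
              mul=lambda a, b: a * b, zero=0):
    """Sum, over all directed paths from u to v, of the product of the
    edge labels along the path (nonempty paths only).  `edges` maps a
    pair (source, target) to its label; parallel edges are omitted
    for simplicity.  Terminates because the graph is assumed acyclic."""
    total = zero
    for (a, b), label in edges.items():
        if a != u:
            continue
        if b == v:
            total = add(total, label)  # the one-edge path u -> v
        # paths that continue through b
        total = add(total, mul(label, influence(edges, b, v, add, mul, zero)))
    return total

# Toy graph: u -> v -> w and a direct edge u -> w.
edges = {("u", "v"): 2, ("v", "w"): 3, ("u", "w"): 1}
```

For instance, i(u, w) sums the path u → v → w (product 2·3 = 6) and the direct edge u → w (label 1).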

Now, using the additive morphisms, I think (though I have yet to check everything in detail) there is a natural way to define a notion of morphism of R-labeled graphs with schedulers. We denote the corresponding category of R-labeled graphs with schedulers by R_{sch}\text{-}\mathsf{Gph}.

Now, I conjecture the following:

Conjecture:
Given a rig R, there is a functor F_{R} \colon R_{sch}\text{-}\mathsf{Gph} \to D_{(R, +, 0)} which takes an R-labeled graph with scheduler to the associated (R, +, 0)-labeled discrete network.

view this post on Zulip Adittya Chaudhuri (Jul 28 2025 at 18:14):

Also, it is important to mention that I have restricted my attention to finite rig-labeled graphs, for technical reasons.

view this post on Zulip Adittya Chaudhuri (Jul 28 2025 at 18:32):

Thus, if we combine the conjecture here #theory: applied category theory > Networks in Biology @ 💬 with Conjecture 1 here #theory: applied category theory > Networks in Biology @ 💬 , we get a discrete dynamics of regulatory networks (the goal that I stated earlier).

view this post on Zulip Peva Blanchard (Jul 28 2025 at 20:48):

I don't want to argue too much about terminology, but, from a distributed-computing perspective, it feels a bit strange to call \{\phi_v\} the scheduler. I picture \phi_v as the local algorithm that runs on the node v, and the collection \{\phi_v\}_{v \in V} constitutes the distributed algorithm.

Of course, this terminological remark doesn't affect the relevance of the object ((G, \ell \colon E \to R), \{\phi_v\}_{v \in V}). I'm trying to understand your lemma, but I don't understand why the function i is defined the way it is (using a sum of products of labels along paths), nor why you think the lemma justifies the name "scheduler".

view this post on Zulip Adittya Chaudhuri (Jul 29 2025 at 02:43):

Peva Blanchard said:

I don't want to argue too much about terminology, but, from a distributed-computing perspective, it feels a bit strange to call \{\phi_v\} the scheduler. I picture \phi_v as the local algorithm that runs on the node v, and the collection \{\phi_v\}_{v \in V} constitutes the distributed algorithm.

Okay, I see. Thanks. Maybe I am misunderstanding the concept of a "scheduler".

What I wanted to say is that \lbrace \phi_{v} \rbrace_{v \in V} is the extra data we add to (G, \ell) that enables us to change the state S \colon V \to X of all the vertices of (G, \ell) over discrete time intervals. If the meaning of "scheduler" does not match what I said, then probably I should not call \lbrace \phi_{v} \rbrace_{v \in V} a scheduler.

view this post on Zulip Adittya Chaudhuri (Jul 29 2025 at 02:48):

Peva Blanchard said:

I'm trying to understand your lemma, but I don't understand why the function ii is defined the way it is (using sum of products of labels along paths)

I defined it like that because I wanted i(v_1, v_2) to represent the cumulative influence of the vertex v_1 on the vertex v_2, comprising both direct influences (directed edges) and indirect influences (directed paths).

However, I think there is a flaw in my argument: even if G is assumed to be finite, the sum over \text{path}(v_1, v_2) can be infinite and thus undefined when we do not restrict our attention to special rigs. So probably my Lemma will hold only for certain special classes of rigs, e.g. quantales, which ensure that such sums are always well-defined.

I am working on this issue: whether to restrict our attention to special classes of rigs, or to redefine my labeled discrete networks by removing the condition i(u,v) \cdot i(v,w) = i(u,w) and imposing i(u,v) = 0 if and only if there is no directed edge from u to v. Redefining X(V, M) in this way also seems not a bad option. I have to think more about it.

view this post on Zulip Adittya Chaudhuri (Jul 29 2025 at 06:57):

I think I have figured out something more fundamental which will simplify (probably in the right way) all the things that I have said so far about the relationships between labeled graphs and labeled discrete networks. I am working on it. I need a little more time before I can spell it out nicely.

Loosely speaking, my line of thought follows from the (probably correct) observation that for any set L, a finite L-labeled graph (G = (E, V, s, t), \ell \colon E \to L) is the same as a function i \colon V \times V \to L^{\mathbb{N}}.

view this post on Zulip Peva Blanchard (Jul 29 2025 at 07:15):

Adittya Chaudhuri said:

I defined it like that because I wanted i(v_1, v_2) to represent the cumulative influence of the vertex v_1 on the vertex v_2, comprising both direct influences (directed edges) and indirect influences (directed paths).

Oh I see. But shouldn't the \phi_v be compatible with the rig operations in some sense? Because if the state space is a singleton X = \{\bullet\}, then any node v only depends on the labels of its direct incoming edges.

view this post on Zulip Peva Blanchard (Jul 29 2025 at 07:25):

I think your idea works in the case X = R, with

\phi_v\big( x, (l_w, x_w)_{w \in N_v} \big) = x + \sum_{w \in N_v} l_w \cdot x_w

Maybe we can look at this expression and make it "free". E.g., X is a set of rooted trees with edges labeled by R. Then we interpret x + \sum_{w \in N_v} l_w \cdot x_w as grafting every tree x_w as a child of x with edge label l_w, or replacing the existing child if any (something like that).
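The first case (X = R) can be sketched in Python with the rig (\mathbb{N}, +, 0, \times, 1) as an illustrative choice; the helper name phi_linear is mine, not from the thread.

```python
# Peva's suggested update rule in the case X = R:
#   phi_v(x, (l_w, x_w)_{w in N_v}) = x + sum_w l_w * x_w,
# written generically over the rig operations add and mul.
def phi_linear(x, labelled_states, add=lambda a, b: a + b,
               mul=lambda a, b: a * b):
    for l_w, x_w in labelled_states:
        x = add(x, mul(l_w, x_w))
    return x
```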

view this post on Zulip John Baez (Jul 29 2025 at 11:55):

Adittya Chaudhuri said:

Loosely speaking, my line of thought follows from the (probably correct) observation that for any set L, a finite L-labeled graph (G = (E, V, s, t), \ell \colon E \to L) is the same as a function i \colon V \times V \to L^{\mathbb{N}}.

That sounds wrong: suppose V = \{x, y\} and i(x,x), i(x,y) and i(y,y) all equal the function from \mathbb{N} to L sending every natural number to the same element \ell \in L. What finite L-labeled graph does that correspond to?

view this post on Zulip Adittya Chaudhuri (Jul 29 2025 at 13:55):

Sorry!! I agree. I meant to say the formal linear combinations \mathbb{N}[L] instead of L^{\mathbb{N}}. However, I think the notion is still not good, because it does not work at the level of morphisms.

view this post on Zulip John Baez (Jul 29 2025 at 14:07):

Right: an $L$-labeled graph in our sense with vertex set $V$ is different from a map $V \times V \to \mathbb{N}[L]$, because in the latter there's no set of edges. For our kind of $L$-labeled graph there are non-identity morphisms that are the identity on vertices, but not on edges. But for maps $V \times V \to \mathbb{N}[L]$, the most obvious sort of morphism is forced to be the identity if it's the identity on vertices.

This is a standard issue: e.g. there's a definition of Petri net where there's a set of arcs from any state to any transition, and another definition where there's just a number of them.

You get to decide which definition you want, depending on what you are doing with it.

view this post on Zulip Adittya Chaudhuri (Jul 29 2025 at 14:14):

Thanks! I agree.

Now, since my goal is to study the dynamics of regulatory networks, I am wondering why we can't just start with a pair $\big( (G, \ell \colon E \to R), \lbrace \phi_{v} \colon (X \times R)^{N_{v}} \to X \rbrace_{v \in V} \big)$, where

Then, we define an appropriate notion of morphisms for these. Let $R^{*}\text{-}\mathsf{Gph}$ denote the associated category.

Then, directly, we can define a functor $F \colon R^{*}\text{-}\mathsf{Gph} \to \mathsf{Set}^{B \mathbb{N}}$ whose object-level map is induced from the data of the $\phi_{v}$?

More importantly, I no longer feel the necessity of defining a discrete labeled network, as I had done before, for the purpose of studying the dynamics of regulatory networks.

Here $R$ is a rig.

view this post on Zulip Adittya Chaudhuri (Jul 29 2025 at 14:36):

I am more interested in constructing those kinds of $\lbrace \phi_{v} \rbrace$ which give us steady-state dynamics, i.e. there exists $k \in \mathbb{N}$ such that for all $n \geq k$ we have $F^{n} = \mathrm{Id} \colon X^{V} \to X^{V}$.

From the point of view of biology, I feel such a thing can be important (often responsible for producing various cellular behaviours), as explained here.

view this post on Zulip John Baez (Jul 29 2025 at 14:40):

Do you really want to study cells where nothing happens?

view this post on Zulip Adittya Chaudhuri (Jul 29 2025 at 14:42):

Ok!! I should rephrase my problem in terms of "oscillatory dynamics": there exist $k' \in \mathbb{N}$ and $r \leq k' \in \mathbb{N}$ such that $f^{n} = f^{n+r}$ for all $n \geq k'$... something along this direction.
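For a self-map of a finite set, eventual periodicity of a single orbit can be detected by iterating until a state repeats. This is only a per-orbit sketch (the condition $f^{n} = f^{n+r}$ as equality of maps would need this check from every start state); the function name and the example map are made up for illustration.

```python
# Detect the eventual period of an orbit of a self-map f on a finite set:
# iterate from x0 until a state repeats; the gap between the two visits
# is the period r, and the first-visit index is the transient length k.

def eventual_period(f, x0):
    seen = {}              # state -> time of first visit
    x, t = x0, 0
    while x not in seen:
        seen[x] = t
        x, t = f(x), t + 1
    k = seen[x]            # length of the transient before the cycle
    r = t - seen[x]        # length of the cycle
    return k, r

# Example: transient 5 -> 4 -> 3, then the 3-cycle 3 -> 1 -> 2 -> 3.
f = {5: 4, 4: 3, 3: 1, 1: 2, 2: 3, 0: 0}.get
k, r = eventual_period(f, 5)
```

For a steady state (fixed point) the detected cycle length is $r = 1$.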

view this post on Zulip John Baez (Jul 29 2025 at 14:57):

I guess it's fine to prove some theorems about simple situations like fixed points or oscillatory solutions, and later prove some more complicated theorems about more realistic situations.

view this post on Zulip Adittya Chaudhuri (Jul 29 2025 at 14:57):

My point is: what kind of $\lbrace \phi_{v} \rbrace$ produce certain bounds on the dynamics? If those bounds are interesting from the point of view of biology, then these bounds may correspond to some cellular behaviour?

view this post on Zulip Adittya Chaudhuri (Jul 29 2025 at 14:58):

John Baez said:

I guess it's fine to prove some theorems about simple situations like fixed points or oscillatory solutions, and later prove some more complicated theorems about more realistic situations.

Thanks. Yes, I agree.

view this post on Zulip Adittya Chaudhuri (Jul 29 2025 at 17:13):

Peva Blanchard said:

But shouldn't the $\phi_v$ be compatible with the rig operations in some sense?

Yes, it is. If you see my current definition of the idea here #theory: applied category theory > Networks in Biology @ 💬 , $\phi_{v}$ depends on $\theta_{v}$, which in turn depends on the rig operations.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 09:47):

Hi!!

I am viewing my new definition of a rig-labeled graph equipped with local state-change data, i.e. for a rig $R$ and a set $X$, a pair $\big( (G, \ell \colon E \to R), \lbrace \phi_{v} \colon (X \times R)^{N_{v}} \to X \rbrace_{v \in V} \big)$ as I had considered here #theory: applied category theory > Networks in Biology @ 💬 , as an analogue of Petri nets equipped with rates. Just as Petri nets equipped with rates give rise to continuous dynamical systems, as explained in Section 6.4 of Structured Versus Decorated Cospans by @John Baez , Kenny Courser and Christina Vasilakopoulou, my rig-labeled graph equipped with local state-change data gives rise to a discrete dynamical system. In biological terms, a regulatory network equipped with local state-change data gives rise to a discrete dynamical system.

I hope I am not misunderstanding the way I am interpreting my notion of a rig-labeled graph equipped with local state-change data considered here #theory: applied category theory > Networks in Biology @ 💬.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 16:24):

It may be interesting that, since my definition of $\phi_{v}$ in #theory: applied category theory > Networks in Biology @ 💬 depends on the labeling of the underlying rig-labeled graph, it is expected that in the attached file

variousdynamics.PNG

the discrete dynamics of (a), (b) and (c) will be different, and similarly the dynamics of (d) and (e) will be different, although the underlying directed graph structure is the same for (a), (b) and (c), and the same for (d) and (e).

view this post on Zulip John Baez (Jul 30 2025 at 17:02):

Right now the rig structure of the label set $R$ is not connected at all to the maps $\phi_v \colon (X \times R)^{N_v} \to X$ that determine the dynamics. That is, you could replace the rig with a mere set, and the dynamics would not change.

You probably have ideas about how the rig structure is relevant, but they are not in the definition.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 17:25):

I thought my dynamics depends on $\theta_{v}$:

$\theta_{v} \colon N_{v} \to R$, $w \mapsto \sum_{e \in \mathrm{edge}(w,v)}\ell(e)$, as defined here #theory: applied category theory > Networks in Biology @ 💬 , and $\theta_{v}$ depends on the rig structure.

The corresponding dynamics is defined as follows:

$F \colon X^{V} \to X^{V}$, which takes $S \colon V \to X$ to

$F(S) \colon V \to X$, $v \mapsto \phi_{v}(S|_{N_{v}} \times \theta_{v})$, where

$S|_{N_{v}} \times \theta_{v} \colon N_{v} \to X \times R$, $w \mapsto (S|_{N_{v}}(w), \theta_{v}(w))$

and hence, my dynamics depends on the rig structure.

$\lbrace \theta_{v} \rbrace_{v \in V}$ is part of my definition here #theory: applied category theory > Networks in Biology @ 💬 .

Am I misunderstanding anything?

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 17:32):

I agree that $\phi_{v}$ and $\theta_{v}$ are separate structures, but both are used in the dynamics, in such a way that $\phi_{v}$ depends on $\theta_{v}$.

view this post on Zulip John Baez (Jul 30 2025 at 17:52):

Okay, I was just looking at this:

I am seeing my new definition of a rig-labeled graph equipped with local state-change data, i.e. for a rig $R$ and a set $X$, a pair $\big( (G, \ell \colon E \to R), \lbrace \phi_{v} \colon (X \times R)^{N_{v}} \to X \rbrace_{v \in V} \big)$

view this post on Zulip John Baez (Jul 30 2025 at 17:52):

I thought ϕv\phi_v was an arbitrary map of that type.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 17:54):

Yes, I understand!! Sorry, I should have written about $\theta_{v}$ in the definition.

view this post on Zulip John Baez (Jul 30 2025 at 17:54):

I don't see how ϕv\phi_v is defined here:

The corresponding dynamics is defined as follows:

$F \colon X^{V} \to X^{V}$, which takes $S \colon V \to X$ to

$F(S) \colon V \to X$, $v \mapsto \phi_{v}(S|_{N_{v}} \times \theta_{v})$, where

$S|_{N_{v}} \times \theta_{v} \colon N_{v} \to X \times R$, $w \mapsto (S|_{N_{v}}(w), \theta_{v}(w))$

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 17:55):

$\phi_{v} \colon (X \times R)^{N_{v}} \to X$

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 17:57):

Since $(S|_{N_{v}} \times \theta_{v} \colon N_{v} \to X \times R) \in (X \times R)^{N_{v}}$, the definition of $\phi_{v}$ makes sense.

view this post on Zulip John Baez (Jul 30 2025 at 17:58):

Okay. I still don't understand it. I think more words and fewer symbols would help me understand what you're trying to do.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:01):

$\theta_{v}(w)$ computes the cumulative direct influence of the vertex $w$ on $v$ using the underlying rig-labeled finite graph structure. My dynamics depends on $\theta_{v}$; hence, my dynamics depends on the rig structure.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:04):

Here, $N_{v} := \lbrace w \in V \colon \text{there is an edge } e \in E \text{ such that } s(e)=w \text{ and } t(e)=v \rbrace$

view this post on Zulip John Baez (Jul 30 2025 at 18:05):

It was nice and clear, and then you had to spoil it with a bunch of symbols. :wink:

So: first you use the rig-labeled finite graph to compute a total influence of a vertex $w$ on the vertex $v$. I'm guessing you simply sum up the labels of all edges from $w$ to $v$. You call this total $\theta_v(w)$. If I'm right in my guess, you are only using the addition in the rig here.

Then what do you do with this total?

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:10):

$\theta_{v}$ satisfies the following compatibility condition for any $\zeta \colon N_{v} \to X \times R$:

thetav.png

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:11):

Now, my dynamics is given by $F(S) \colon V \to X$, $v \mapsto \phi_{v}(S|_{N_{v}} \times \theta_{v})$,

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:12):

where $S|_{N_{v}} \times \theta_{v} \colon N_{v} \to X \times R$, $w \mapsto (S|_{N_{v}}(w), \theta_{v}(w))$.
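This dynamics can be transcribed directly into Python. The sketch below is only illustrative: the edge encoding (triples of source, target, label), the choice of the rig $(\mathbb{N}, +, 0, \times, 1)$ for concreteness, and all names are assumptions, not part of the definition.

```python
# theta_v : N_v -> R sums, in the rig, the labels of all edges w -> v;
# F : X^V -> X^V applies each phi_v to the paired data (S|_{N_v} x theta_v).

def theta(v, edges, rig_add, rig_zero):
    """Cumulative direct influence of each in-neighbour w on v."""
    out = {}
    for (w, u, l) in edges:
        if u == v:
            out[w] = rig_add(out.get(w, rig_zero), l)
    return out

def F(S, edges, phi, rig_add, rig_zero):
    """One step of the dynamics: v |-> phi[v]({w: (S(w), theta_v(w))})."""
    new_S = {}
    for v in S:
        th = theta(v, edges, rig_add, rig_zero)
        new_S[v] = phi[v]({w: (S[w], th[w]) for w in th})
    return new_S

# Example over the rig (N, +, 0, *, 1), with parallel edges a -> b and a
# self-edge b -> b; phi_b sums x_w * r_w over the in-neighbours of b.
edges = [("a", "b", 2), ("a", "b", 1), ("b", "b", 1)]
phi = {"a": lambda nbrs: 0,
       "b": lambda nbrs: sum(x * r for (x, r) in nbrs.values())}
S = {"a": 1, "b": 5}
S1 = F(S, edges, phi, lambda p, q: p + q, 0)
```

Note that the parallel edges a → b get summed in the rig by `theta`, so only their cumulative label reaches `phi`.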

view this post on Zulip John Baez (Jul 30 2025 at 18:12):

Okay; can you explain that using words, so I can understand what you're trying to do?

view this post on Zulip John Baez (Jul 30 2025 at 18:13):

(The human mind really does think using sentences to a large extent, so anything that can't be expressed in sentences is not fully understood.)

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:16):

The $(t+1)$-th state of $v$ depends on the $t$-th states of the neighbours of $v$, as well as on the cumulative direct influences of the neighbouring vertices on $v$.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:17):

This is the reason why I said this:

variousdynamics.PNG

the discrete dynamics of (a), (b) and (c) will be different, and similarly the dynamics of (d) and (e) will be different, although the underlying directed graph structure is the same for (a), (b) and (c), and the same for (d) and (e).

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:18):

John Baez said:

(The human mind really does think using sentences to a large extent, so anything that can't be expressed in sentences is not fully understood.)

I agree. I am trying.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:20):

$\phi_{v}$ gives the $(t+1)$-th state of $v$.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:21):

$\phi_{v}$ depends on $\theta_{v}$ (i.e. the cumulative influences) and on $S|_{N_{v}}$ (the $t$-th states of the neighbouring vertices), because we are computing $\phi_{v}(S|_{N_{v}} \times \theta_{v})$.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:22):

John Baez said:

So: first you use the rig-labeled finite graph to compute a total influence of a vertex $w$ on the vertex $v$. I'm guessing you simply sum up the labels of all edges from $w$ to $v$. You call this total $\theta_v(w)$. If I'm right in my guess, you are only using the addition in the rig here.

Yes.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:30):

My state functions are $\phi_{v} \colon (X \times R)^{N_{v}} \to X$. But any function $\zeta \colon N_{v} \to X \times R$ basically depends on the rig structure, because the second coordinate is computed from the cumulative influences via the underlying rig structure. I expressed this fact in the following commutative diagram:

thetav.png

view this post on Zulip John Baez (Jul 30 2025 at 18:31):

I didn't look at your diagrams and formulas, just this sentence:

Adittya Chaudhuri said:

The $(t+1)$-th state of $v$ depends on the $t$-th states of the neighbours of $v$, as well as on the cumulative direct influences of the neighbouring vertices on $v$.

Okay, that makes sense.

But does it do so in an arbitrary way, or in a way that must obey some limitations? If the next state of $v$ can depend on the present states of its neighbors in an arbitrary way, there's no need to include the "cumulative direct influence" in your definition!

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:33):

For me, "states are concentration levels of biochemical compounds", so the concentration level of a vertex depends on the concentration levels of its neighbouring vertices, as well as on the nature of the cumulative influences of the neighbouring vertices on the vertex.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:35):

That is why I think the dynamics of a positive feedforward loop is likely to be different from the dynamics of a negative feedforward loop.

view this post on Zulip John Baez (Jul 30 2025 at 18:37):

I argued that if the next state of $v$ can depend on the present states of its neighbors in an arbitrary way, there's no point in saying it also depends on the "cumulative direct influence" you compute from the edge labels. You didn't answer me. But now I don't believe what I said anymore. The edge labels are independent from the states of those neighboring vertices.

view this post on Zulip John Baez (Jul 30 2025 at 18:39):

For me "states are concentration levels of biochemical compounds"

Yes, I am happy with that example. This assumption is not built into your formalism; it's just an example of your formalism.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:40):

Thanks

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:41):

I wanted to say that "the next state of a vertex" depends on two things: the nature of the cumulative influences of the neighbours, and the previous states of the neighbours.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:42):

So, the cumulative influence depends on the rig structure.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:43):

That was my point.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:45):

Now, I am saying that here the rig-labeled graph structure along with $\phi_{v}$ generates a discrete dynamical system, just as your Petri nets along with rates generate continuous dynamical systems.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:47):

Thus, I was thinking of my setup as "regulatory networks (modelled as rig-labeled graphs) equipped with $\phi_{v}$" generating a discrete dynamical system.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:53):

Am I understanding correctly about the analogy I made?

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:57):

The same principle holds for the Lerman-DeVille framework: a graph equipped with vector fields on the manifolds associated to each node generates a continuous dynamical system.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:57):

My inspiration came from the Lerman-DeVille framework.

view this post on Zulip John Baez (Jul 30 2025 at 18:58):

Yes, I think I get it.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 18:59):

I am not sure whether my framework is interesting or not. At the moment, I am not finding anything wrong with it, especially when I think of my states as concentration levels.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 19:03):

Also, today I could prove that my definition of "morphism of rig-labeled graphs equipped with states" (still written in a very vague way) gives a natural transformation in the functor category $\mathsf{Set}^{B \mathbb{N}}$.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 19:04):

So, probably, the association I described will give a functor from the category of $R$-labeled graphs with states to $\mathsf{Set}^{B \mathbb{N}}$.

view this post on Zulip John Baez (Jul 30 2025 at 19:04):

It'll become more interesting if you prove some nontrivial theorems.

There are a bunch of trivial theorems, some of which take work to state. For example, you can construct a double category of 'open' dynamical networks of your sort. And you can construct a semantics for them: a double functor from them to a double category of open discrete dynamical systems. We've proved this kind of theorem so often I consider them trivial.

There are similar trivial theorems for open Petri nets with rates.

A slightly less trivial theorem for open Petri nets was constructing a 'black-boxing' functor that mapped them to relations that say what happens in a steady state.

view this post on Zulip John Baez (Jul 30 2025 at 19:05):

A really interesting theorem would say something interesting about the dynamics of some class of dynamical networks of your sort.

view this post on Zulip John Baez (Jul 30 2025 at 19:06):

For example, in the world of Petri nets with rates ($\approx$ reaction networks), the theorems about complex balanced equilibria are nontrivial and really interesting.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 19:07):

Thanks very much!!

view this post on Zulip John Baez (Jul 30 2025 at 19:07):

I'm talking about theorems by Horn, Jackson and Feinberg, which you can find in my book.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 19:07):

Book ?

view this post on Zulip John Baez (Jul 30 2025 at 19:07):

Yeah, my book on Petri nets.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 19:08):

Thanks very much!! I will download and read the portion.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 19:09):

Also, I was thinking about checking whether the "fibration of underlying rig-labeled graphs" (I first have to define this in a suitable way) can preserve discrete dynamical systems, the way it does in the continuous case, as shown by Lerman-DeVille.

view this post on Zulip John Baez (Jul 30 2025 at 19:10):

Because those Horn-Jackson-Feinberg theorems are nontrivial, they will be hard or impossible to adapt to discrete dynamical networks of your sort.

I'm talking about things like the Deficiency Zero Theorem.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 19:11):

I see. I will read about these. I have not read much about them.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 19:15):

I hope you are talking about this book of yours, right?

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 19:17):

John Baez said:

I'm talking about things like the Deficiency Zero Theorem.

Yes, I found it!! Section 17 of your book. I will read it.

view this post on Zulip John Baez (Jul 30 2025 at 19:18):

There's a marvelous theory of chemical reaction networks, starting with the Deficiency Zero Theorem and leading to the Global Attractor Conjecture, which is really beautiful and apparently very hard to prove. My friend Gheorghe Craciun claimed to have proved it, and now he's working on writing up a proof of that and some even harder conjectures; he's been working on that for about a decade.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 19:19):

Thanks very much!! Very interesting!!

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 19:23):

Since a presheaf $B \mathbb{N} \to \mathsf{Set}$ is the same as an $\mathrm{id}$-coalgebra, I was also thinking of using some control theory of systems as $F$-coalgebras, developed by Joe Moeller and his collaborators in this paper, the paper you shared with me some days back.

view this post on Zulip John Baez (Jul 30 2025 at 19:25):

If I were you I'd focus on concrete examples and look for patterns, instead of trying to apply general formalisms. That's how you can discover really new mathematics. If I were you, I'd start reading papers on Boolean networks, which often study concrete examples - but I also believe they have some general theorems. This seems like a good source of 'raw material'.

view this post on Zulip John Baez (Jul 30 2025 at 19:26):

If you float too high in the world of abstract theory, and don't put your feet down in the study of concrete examples, biologists won't find your work interesting.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 19:27):

I agree!!! Thanks very much!! I got your point. I will read papers on Boolean networks.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 19:28):

John Baez said:

If I were you I'd focus on concrete examples and look for patterns, instead of trying to apply general formalisms. That's how you can discover really new mathematics.

I find this line very very helpful and inspiring!!! Thanks very much!!

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 19:31):

John Baez said:

If you float too high in the world of abstract theory, and don't put your feet down in the study of concrete examples, biologists won't find your work interesting.

I agree. I have already experienced a similar situation.

view this post on Zulip Adittya Chaudhuri (Jul 30 2025 at 19:35):

John Baez said:

If I were you, I'd start reading papers on Boolean networks, which often study concrete examples - but I also believe they have some general theorems. This seems like a good source of 'raw material'.

Thanks again!! Yes I fully agree!!

view this post on Zulip Peva Blanchard (Jul 30 2025 at 19:56):

Adittya Chaudhuri said:

For me, "states are concentration levels of biochemical compounds", so the concentration level of a vertex depends on the concentration levels of its neighbouring vertices, as well as on the nature of the cumulative influences of the neighbouring vertices on the vertex.

Ah, I understand better now! Just to be sure, let me unfold the example.
Say we interpret influence as a rate, e.g. a real number ($R = \mathbb{R}$), and a state as a concentration level, say a nonnegative real number ($X = [0, \infty)$). One example of $\phi_v$ is

$\phi_v(x, (x_w, l_w)_{w \in N_v}) = \max\left(0, x + \sum_{w \in N_v} l_w \cdot x_w\right)$

The intent is the following. Say the concentration levels at time $t$ are $x_v$ at $v$, and $x_w$ at every in-neighbor $w$ of $v$. Then at time $t + 1$, each in-neighbor $w$ makes a contribution to $v$ that is proportional to its concentration level $x_w$ and its influence $l_w$ on $v$. Adding all these contributions to the current concentration level of $v$ yields the formula above.

Does this example make sense?

ps: I think I implicitly assumed that there is a single edge from every in-neighbor $w$ to $v$. But it is easy to adapt this example to take multiple edges into account.
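This concrete rule is short enough to sketch directly (an illustration only; the function name and the sample numbers are made up):

```python
# phi_v(x, (x_w, l_w)_{w in N_v}) = max(0, x + sum_w l_w * x_w),
# with X = [0, inf) the concentration levels and R = the reals the influences;
# the max with 0 keeps concentrations nonnegative.

def phi(x, neighbours):
    """neighbours: iterable of (x_w, l_w) pairs for the in-neighbours w of v."""
    return max(0.0, x + sum(l_w * x_w for (x_w, l_w) in neighbours))

# An inhibitor at high concentration drives v's level down to the floor 0:
level = phi(1.0, [(3.0, -0.7)])   # max(0, 1.0 - 2.1)
```

The clamping at 0 is what makes the codomain land back in $X = [0, \infty)$ even when the net influence is strongly negative.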

view this post on Zulip Adittya Chaudhuri (Jul 31 2025 at 05:46):

I am not sure I understand your example. For me, $\phi_{v}$ is a map from $(X \times R)^{N_{v}}$ to $X$. Also, I am not sure I fully like the idea that "influences are rates". The reason is that if we know that much information, then from the point of view of a biologist, a continuous dynamical system is preferable because it is more accurate (at least according to what they usually say). In fact, biologists usually go for a discrete dynamical system, e.g. a Boolean network, when they do not have the data necessary for constructing a system of ODEs. That is why I want my framework to be useful when less information/data is available to the biologists.

The underlying rig-labeled graph is obtained from several years of experimental and other analysis, which says how various biochemical compounds influence each other (like activation, inhibition, necessary stimulation, etc.). On the other hand, the set of states may consist of $\lbrace 0,1 \rbrace$, representing whether the concentration is at threshold or not, or perhaps $\lbrace \text{low concentration, medium concentration, high concentration}, \text{etc.} \rbrace$. I prefer this because, according to biologists, finding the exact concentration level of a biomolecule is itself a challenge, and they usually do not have the exact data. The same is true for finding information on rates.

However, it is also possible, I am misunderstanding your example.

view this post on Zulip Adittya Chaudhuri (Jul 31 2025 at 05:52):

I think this is the main difference between chemistry and biology (lack of information). Personally, I do not know (since I am neither a biologist nor a chemist), but I used to hear these words from biologists.

view this post on Zulip Peva Blanchard (Jul 31 2025 at 06:24):

Oh, indeed, in my example $\phi_v \colon X \times (X \times R)^{N_v} \to X$. I implicitly assumed that a node updates its state based on the states and influences of its in-neighbors and on its own previous state. This is a usual assumption when designing local update rules (see e.g. Conway's game of life).

Is there a specific reason to have a node "unaware" of its own state?

view this post on Zulip Peva Blanchard (Jul 31 2025 at 06:33):

You're right, I probably shouldn't have used the term "rate". My intuition is that the "contribution" of an in-neighbor $w$ of $v$ is proportional to its concentration level $x_w$ and its influence $l_w$, hence the formula.

Do you have a toy example, involving e.g. $X = \{ \text{low concentration}, \text{high concentration} \}$? Which rig $R$ would be used then?

view this post on Zulip John Baez (Jul 31 2025 at 08:56):

Peva Blanchard said:

Is there a specific reason to have a node "unaware" of its own state?

No! It's very important that the next state of a node depends on its current state. So, thanks for pointing that out!

One easy solution could be to work with [[reflexive graphs]], where every node has a distinguished edge from that node to itself.

Reflexive graphs are better than graphs in a number of ways: e.g. products work better, and the terminal reflexive graph is also the walking node (the free reflexive graph on a node).

(Lately I've been saying 'vertex' instead of 'node', but they mean the same thing.)

view this post on Zulip Adittya Chaudhuri (Jul 31 2025 at 10:45):

Peva Blanchard said:

Oh, indeed, in my example $\phi_v \colon X \times (X \times R)^{N_v} \to X$. I implicitly assumed that a node updates its state based on the states and influences of its in-neighbors and on its own previous state. This is a usual assumption when designing local update rules (see e.g. Conway's game of life).

Is there a specific reason to have a node "unaware" of its own state?

Yes, thanks!! As @John Baez explained, I also think it is absolutely essential to work only with reflexive graphs.

view this post on Zulip Adittya Chaudhuri (Jul 31 2025 at 11:57):

Peva Blanchard said:

You're right, I probably shouldn't have used the term "rate". My intuition is that the "contribution" of an in-neighbor $w$ of $v$ is proportional to its concentration level $x_w$ and its influence $l_w$, hence the formula.

Ah!! I see. Thanks. I think your example is nice when we are working in a framework where we know the concentrations and the rates.

I like to reword your statement:

My intuition is that the "contribution" of an in-neighbor $w$ of $v$ is proportional to its concentration level $x_w$ and its influence $l_w$

as

My intuition is that the "contribution" of an in-neighbor $w$ of $v$ depends on the concentration level (state) $x_w$ and its influence $l_w$

I want to keep this liberty because we may put various different structures, e.g. monoid, rig, etc., on $X$ according to the situation. For example, if we take $X = \mathbb{B}$ (the multiplicative Boolean monoid) and $R = \mathbb{B}$ (the Boolean rig), then using Boolean logic (AND, OR, NOT, etc.) we may define $\phi_{v} \colon (\mathbb{B} \times \mathbb{B})^{N_{v}} \to \mathbb{B}$. Here, $0$ means the concentration is low and $1$ means the concentration is high (when we view $X = \mathbb{B}$).

I agree that in every specific problem the elements of $X$ and $R$ should be related in some way. However, I am not specifying the relationship in the definition itself.

I am trying to come up with a better example.
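One possible Boolean $\phi_v$ of the kind mentioned above can be sketched as follows. This particular rule ("$v$ becomes high iff some high in-neighbour actually influences it") is only an illustration, not a rule from the thread or the paper:

```python
# A candidate phi_v : (B x B)^{N_v} -> B built from Boolean logic,
# with states and influences both in {0, 1}:
# v becomes 1 iff some in-neighbour is high AND its influence on v is 1.

def phi_bool(neighbours):
    """neighbours: iterable of (state, influence) pairs, each in {0, 1}."""
    return int(any(x and r for (x, r) in neighbours))

high = phi_bool([(1, 1), (0, 1)])   # a high, influencing neighbour
low = phi_bool([(1, 0), (0, 1)])    # the high neighbour has zero influence
```

Other Boolean combinations (AND over all neighbours, majority rules, etc.) give different candidate $\phi_v$ of the same type.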

view this post on Zulip Adittya Chaudhuri (Jul 31 2025 at 16:26):

Peva Blanchard said:

Do you have a toy example, involving e.g. X={low concentration,high concentration}X = \{ \text{low concentration}, \text{high concentration} \}? Which rig RR would be used then?

Hi! The rig that was on my mind is the four-element rig $S = \lbrace +1, 0, -1, i \rbrace$ that @John Baez and I constructed in Example 6.4 of our paper, where $+1, 0, -1, i$ denote, respectively, positive influence, no influence, negative influence and indeterminate influence. For $X$, I was thinking about $X = \lbrace \text{low concentration}, 0\text{-concentration}, \text{high concentration}, \text{indeterminate concentration} \rbrace$.

However, as of now, I am not able to construct (satisfactorily) maps $\phi_{v} \colon (X \times R)^{N_{v}} \to X$ which match our intuition for any interesting graph.

I think your example works fine. Thanks. If we know the rates and concentrations, then I think your example is a perfectly fine example for that scenario. However, I wanted to handle the scenario where we do not have such quantitative data, but instead have qualitative data like positive influence, no influence, negative influence and indeterminate influence, and $\lbrace \text{low concentration}, 0\text{-concentration}, \text{high concentration}, \text{indeterminate concentration} \rbrace$.

At the moment, I am also doubting whether my framework is suitable for the things that I want to study (i.e. qualitative information of a regulatory network generating discrete dynamics).

I am a bit confused at the moment. I am thinking!

view this post on Zulip John Baez (Jul 31 2025 at 17:20):

It's very good to be studying concrete examples like this. Those feelings of doubt and confusion are good, because they mean you are confronting real problems.

view this post on Zulip Adittya Chaudhuri (Jul 31 2025 at 17:48):

Thanks very much!!

view this post on Zulip Peva Blanchard (Aug 01 2025 at 07:00):

Maybe we can restate what we expect from XX and RR

It seems that:

A simpler case (I guess) is $X = C$ (identifying concentration level with contribution) being an additive monoid, with a monoid action $R \times X \to X$.

view this post on Zulip Adittya Chaudhuri (Aug 01 2025 at 15:10):

Thanks!! Below, I am trying to incorporate your ideas into the definition of mine.

First, I am recalling my definitions:

Definition:
Let $(R, +, 0, \times, 1)$ be a rig and $X$ be a set. Then we define an $R$-labeled graph with states valued in $X$ as a pair $\big( (G, \ell \colon E \to R), \lbrace \phi_{v} \colon (X \times R)^{N_{v}} \to X \rbrace_{v \in V} \big)$, where

Lemma:
$\big( (G, \ell \colon E \to R), \lbrace \phi_{v} \colon (X \times R)^{N_{v}} \to X \rbrace_{v \in V} \big)$ defines a functor $F \colon B \mathbb{N} \to \mathsf{Set}$.

Proof: We define a functor $F \colon B \mathbb{N} \to \mathsf{Set}$ which takes the unique object $*$ to the set $X^{V}$, and takes $1$ to $F(1) \colon X^{V} \to X^{V}$, which takes $S \colon V \to X$ to $F(S) \colon V \to X$ defined by $v \mapsto \phi_{v}(S|_{N(v)} \times \theta_{v})$.

Now, I want to incorporate your ideas to construct a class of examples:

Example:
Let $(R, +, 0, \times, 1)$ be a rig and $X$ a commutative monoid. Suppose $\rho$ is an action of the rig $R$ on the commutative monoid $X$ (i.e. it is an action with respect to both $(R, \times, 1)$ and $(R, +, 0)$, and the actions are compatible with the commutative monoid structure of $X$). Now, consider an $R$-labeled finite reflexive graph $(G, \ell \colon E \to R)$. Let $N_{v} = \lbrace w \in V \colon \text{there is an edge } e \in E \text{ such that } s(e)=w \text{ and } t(e)=v \rbrace$. For each $v \in V$, $\theta_{v} \colon N_{v} \to R$ is given by $w \mapsto \sum_{e \in \mathrm{Edge}(w,v)} \ell(e)$ for all $w \in N_{v}$. Now, suppose for each $v \in V$ we define a function $\phi_{v} \colon (X \times R)^{N_{v}} \to X$ by $\big( (x_1,r_1), (x_2,r_2), \ldots, (x_{|N_{v}|}, r_{|N_{v}|}) \big) \mapsto \sum^{|N_{v}|}_{i=1} x_i r_i$ (here I am using @Peva Blanchard's idea of contribution). Then it can be verified that $\big( (G, \ell), \lbrace \phi_{v} \rbrace_{v \in V} \big)$ defines an $R$-labeled graph with $X$-valued states.
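As a sanity check of this example, here is a minimal sketch in Python, taking both $R$ and $X$ to be the rig of non-negative integers (acting on itself by multiplication); the graph, labels and initial state are made-up illustrative data:

```python
# Sketch of an R-labeled graph with X-valued states, with R = X = the rig of
# non-negative integers and phi_v((x_i, r_i)_i) = sum_i x_i * r_i.
# The edge list, labels and initial state below are made-up illustrative data.

# edges: (source, target, label in R); the reflexive loops let v "remember" itself
edges = [("a", "a", 1), ("b", "b", 1),
         ("a", "b", 2), ("a", "b", 1), ("b", "a", 3)]
V = ["a", "b"]

def theta(w, v):
    """theta_v(w) = sum of the labels of all edges from w to v."""
    return sum(l for (s, t, l) in edges if s == w and t == v)

def neighbours(v):
    """N_v = sources of edges into v."""
    return sorted({s for (s, t, _) in edges if t == v})

def step(state):
    """One application of F(1): X^V -> X^V, updating every node synchronously."""
    return {v: sum(state[w] * theta(w, v) for w in neighbours(v)) for v in V}

S = {"a": 1, "b": 1}   # initial state S: V -> X
S1 = step(S)           # a <- 1*theta(a,a) + 1*theta(b,a) = 1 + 3 = 4, similarly b
print(S1)              # {'a': 4, 'b': 4}
```

Iterating `step` $n$ times computes $F(n)$, matching the functoriality in the Lemma above.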

Is the above discussion making sense?

view this post on Zulip Adittya Chaudhuri (Aug 01 2025 at 16:32):

However, the kind of example that I really want to construct is the following:

Let $R = \lbrace 1, -1, 0, i \rbrace$ be the four-element rig that @John Baez and I constructed in Example 6.4 of our paper. Let $X = \lbrace \text{AT}, \text{BT}, \text{IT} \rbrace$, where AT, BT and IT denote above-threshold concentration, below-threshold concentration and indeterminate concentration, respectively. Then I consider an $R$-labeled finite reflexive graph $(G, \ell)$.

What I am trying:

I am trying to incorporate suitable compatibility between $X$ and $R$, or some additional structure/data, so that I can construct biologically meaningful $\lbrace \phi_{v} \colon (X \times R)^{N_{v}} \to X \rbrace_{v \in V}$ so that $\big( (G, \ell), \lbrace \phi_{v} \rbrace_{v \in V} \big)$ is an $R$-labeled graph with $X$-valued states as defined here #theory: applied category theory > Networks in Biology @ 💬 .
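For what it's worth, here is a Python sketch of one plausible set of addition and multiplication tables for a four-element sign rig $\lbrace 1, -1, 0, i \rbrace$ (reading $i$ as "indeterminate"); these tables are my guess at the structure and should be checked against Example 6.4 of the paper:

```python
# A guess at the four-element sign rig {1, -1, 0, i} ("i" = indeterminate),
# to be checked against Example 6.4 of "Graphs with polarities".
P, N, Z, I = "1", "-1", "0", "i"
ELEMS = [P, N, Z, I]

def add(x, y):
    """Qualitative addition: 0 is the identity; opposite signs give i; i absorbs."""
    if x == Z: return y
    if y == Z: return x
    if x == y: return x
    return I          # 1 + (-1) = i, and i + (anything nonzero) = i

def mul(x, y):
    """Qualitative multiplication: 0 absorbs; 1 is the identity; i absorbs nonzeros."""
    if x == Z or y == Z: return Z
    if x == P: return y
    if y == P: return x
    if x == N and y == N: return P
    return I          # i * (nonzero) = i

# Exhaustively verify the rig axioms for these tables.
for x in ELEMS:
    for y in ELEMS:
        assert add(x, y) == add(y, x) and mul(x, y) == mul(y, x)
        for z in ELEMS:
            assert add(add(x, y), z) == add(x, add(y, z))
            assert mul(mul(x, y), z) == mul(x, mul(y, z))
            assert mul(x, add(y, z)) == add(mul(x, y), mul(x, z))
```

Here $1 + (-1) = i$ records that the combined effect of an activation and an inhibition is indeterminate; the exhaustive loop checks commutativity, associativity and distributivity for these tables.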

Although I am not sure whether what I am asking is actually feasible!!

view this post on Zulip Adittya Chaudhuri (Aug 01 2025 at 20:21):

I am writing down what I am thinking:

Usual approach:

We usually first consider a Boolean network (or generalised versions of them) as defined by @John Baez here #theory: applied category theory > Networks in Biology @ 💬 , and then this definition allows us to construct a directed graph. The signs ($+$, $-$) on the directed graph come from the Boolean logic. In most settings, including Kauffman's, I think this kind of approach is taken.

The approach that I was talking for some days:

We start with a labeled graph, and we equip the nodes of the graph with the information of how their present states depend on the previous states and on the nature of the influences of their neighbouring nodes. From this data, we can construct a discrete dynamical system. I think this approach is not very common. The closest existing works in this direction that I could find are

I am now doubting my approach in the following ground:

At the moment, I do not have proper answers to the above two questions.

view this post on Zulip Peva Blanchard (Aug 02 2025 at 07:42):

Adittya Chaudhuri said:

Instead of finding the biology papers that fit the framework, I guess it could be more fruitful to find a few reference biology papers, or better, biologists, and understand what kinds of questions they are interested in. (It usually takes a lot of time, however, to get familiar with a new field of knowledge.)

view this post on Zulip Adittya Chaudhuri (Aug 02 2025 at 10:13):

The motivation for my framework is that "a monoid-labeled graph model" of a regulatory network is necessary but not sufficient for the purpose of understanding how molecular interactions give rise to cellular function (which is what interests me most in biology), because concentration levels of biochemical compounds change all the time, and unless a biochemical entity reaches a certain level of concentration, it cannot perform what it is supposed to perform (i.e. influencing other biochemical entities in particular ways). Interestingly, I think the dynamics of concentration levels of interconnected biochemical compounds are so nicely engineered by Nature that after a certain amount of time the said dynamics settle into certain patterns, which become the reason explaining cellular behaviour. I may be wrong, but this is my current understanding. At the moment it is difficult for me to throw away what I realised unless I get a convincing counter-argument.

But at the same time, I found your suggestions reasonable, meaningful and helpful. Thanks!!

Probably, it is better to work on two problems: (1) in the direction of my realisations, and (2) on problems whose theme will interest a theoretical biologist and whose methods will interest an applied category theorist.

view this post on Zulip Adittya Chaudhuri (Aug 02 2025 at 11:44):

I should emphasise: I am not claiming that my framework will help in understanding "how molecular interactions give rise to cellular function". I just said "this is the motivation for my framework". It is also very possible that my framework is inappropriate for the goal that I have.

So, I am very open to changing my framework and exploring other frameworks/problems which can help me progress with what I am interested in, i.e. studying how molecular interactions give rise to cellular functions.

view this post on Zulip Adittya Chaudhuri (Aug 02 2025 at 16:03):

Peva Blanchard said:

Adittya Chaudhuri said:

Instead of finding the biology papers that fit the framework, I guess it could be more fruitful to find a few reference biology papers, or better, biologists, and understand what kinds of questions they are interested in. (It usually takes a lot of time, however, to get familiar with a new field of knowledge.)

I agree. I feel the question "how molecular interactions give rise to cellular function" is too vague at the moment. I need to first make my question much more precise from the point of view of Biology, and then translate it into a mathematical question. I really need to learn some good Biology if I want to produce Mathematics that contributes to the understanding of biological systems. @John Baez had suggested I do this before. Somehow I got distracted again!! Instead of thinking from the point of view of Biology, I started trying to "somehow connect the ideas of graphs with polarities to discrete dynamical systems". Thanks for making me realise this.

view this post on Zulip John Baez (Aug 03 2025 at 11:57):

Good. Since "learning biology" is potentially a huge job, you might start by reading a book and some papers on gene regulatory networks, or whatever specific kind of network you wish to study mathematically. (It's probably too confusing to try to learn about all kinds of biological networks at once!)

view this post on Zulip John Baez (Aug 03 2025 at 11:58):

In our paper I mentioned this book:

view this post on Zulip Adittya Chaudhuri (Aug 03 2025 at 13:39):

John Baez said:

Good. Since "learning biology" is potentially a huge job, you might start by reading a book and some papers on gene regulatory networks, or whatever specific kind of network you wish to study mathematically. (It's probably too confusing to try to learn about all kinds of biological networks at once!)

Thanks!! I agree. Yesterday, I started learning about transcription networks. I was reading the first chapter of Uri Alon's book "An Introduction to Systems Biology". In 1-2 days, I will share my understanding of transcription networks here. In parallel, I will also read some papers about gene regulatory networks, as you suggested.

view this post on Zulip Adittya Chaudhuri (Aug 03 2025 at 13:40):

John Baez said:

In our paper I mentioned this book:

Thanks very much!! Yes, I will read this book .

view this post on Zulip John Baez (Aug 03 2025 at 13:42):

In 1-2 days, I will share my understanding of transcription networks here.

Great!

view this post on Zulip Adittya Chaudhuri (Aug 03 2025 at 13:43):

Thank you!!

view this post on Zulip Adittya Chaudhuri (Aug 05 2025 at 19:41):

Hi!!
For the last 1-2 days, I mostly focused on two things

Below, I am briefly writing down my understanding:

Regarding 1)

I was trying to understand how a cell processes information coming in the form of biochemical signalling molecules known as ligands, and reacts to them. A cell receives extracellular and intracellular signals and reacts to these signals by producing specific proteins and/or RNA molecules which perform certain precise functions. For example, an intracellular signal can tell the cell that "DNA is damaged and needs repair". Then, as a response to the said intracellular signal, the cell produces proteins and/or RNA molecules that repair DNA. On the other hand, an extracellular signal can come from outside the cell, asking the cell to produce certain proteins and/or RNA molecules for performing certain specific tasks. Cells execute these tasks with precision. For example, Epidermal Growth Factor (EGF) is an extracellular signal that asks the cell to produce proteins responsible for cell proliferation. Below, I am trying to give a brief overview of cellular response to signals, which to me is a particular instance of my question "how do molecular interactions give rise to cellular behaviour?".

Step 1:
Ligands bind with specific receptors present in the cell. Note that for extracellular signals the receptors usually lie in the cell membrane, but for intracellular signals the receptors usually lie inside the cell (e.g. steroid hormone receptors).

Step 2:
The binding of signalling molecules with the receptors causes changes in the configuration of the receptors, which trigger chains of biochemical events known as signalling pathways. When different signalling pathways interact (crosstalk), we get a network called a signal transduction network. The main function of this network is to relay the signal (often via amplification/damping) by activating various proteins (e.g. via phosphorylation), producing second messengers (e.g. releasing calcium ions into the cytosol), and finally activating special kinds of proteins called transcription factors (for example, p53 is a transcription factor activating genes that produce proteins responsible for tumour suppression), which are responsible for regulating genes in the DNA to produce specific proteins and/or RNA molecules that will perform the biological task asked for by the signalling molecules.

Step 3:

When a transcription factor gets activated by a signal transduction network, it regulates (often stimulates/represses) the activity of genes, which produce either non-coding RNA molecules or messenger RNA containing a copy of the information present in the genes. This process of copying the gene-encoded message into RNA molecules is called transcription. The non-coding RNA molecules have important biological roles of their own, but they do not get converted to proteins. However, messenger RNA molecules (whose function is mostly to carry the gene's instructions) can enter ribosomes for protein synthesis via a process called translation. Often the produced protein is another transcription factor for some other gene(s); then it regulates other genes, which may produce proteins and/or RNA molecules. Again, the produced proteins could be transcription factors for some other genes, and so on. This results in a network called a transcription network (often called a gene regulatory network) which regulates gene expression. The expression of a gene is the amount of protein produced by the gene at a given time.

So now my initial question "how molecular interactions give rise to cellular function" has taken a more precise form: how do signalling molecules produce gene expression? The answer lies in Steps 1-3.

Regarding 2)
According to Uri Alon, transcription networks can be modelled as directed signed graphs whose edges are labeled by $+$ and $-$. An edge $a \xrightarrow{+} b$ means the product of gene $a$ is a transcription factor that can stimulate the rate at which gene $b$ is transcribed. Similarly, an edge $a \xrightarrow{-} b$ means the product of gene $a$ is a transcription factor that can repress the rate at which gene $b$ is transcribed. This rate depends both on the concentration of the transcription factor (produced by $a$) and on the chemical affinity for the binding of the transcription factor to the promoter of gene $b$.

Now, the arrows in the transcription network vary over time due to mutations in the DNA (which have been experimentally shown to be random events). According to Uri Alon, network motifs are the patterns that survived evolution, that is, random mutations. According to Uri Alon, a way to identify such motifs is by identifying patterns in gene regulatory networks that occur much more often than in randomized networks having the same number of nodes and arrows. For example, negative autoregulation (a one-node negative feedback loop), positive autoregulation (a one-node positive feedback loop), feedforward loops, etc. are examples of such motifs.

Next, the big question: if these patterns have been selected via natural selection, then they must serve some purpose. For example, Uri Alon demonstrated the purpose of negative autoregulation by showing that its dynamics (the rate of change of the concentration of the produced gene product, governed by the transcription factors and the binding strength of the transcription factor with the promoter of the target gene) are very special (via the Hill equation), and play a crucial role in achieving and maintaining steady states of the concentration of the proteins produced by the genes activated by transcription factors.

A question in this regard:
I agree that there are patterns in gene regulatory networks chosen by natural selection. Biologists found some and showed their significance in terms of individual dynamics (at least in the portion I have read so far). However, it is not clear to me how the dynamics of these motifs would behave when many of them are placed inside a very large gene regulatory network responsible for the regulation of certain genes.
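As an illustration of the kind of dynamics involved, here is a Python sketch (all parameter values made up) comparing simple regulation $\dot{X} = \beta - \alpha X$ with negative autoregulation $\dot{X} = \beta K^n/(K^n + X^n) - \alpha X$, both integrated with the Euler method; Alon's point is that negative autoregulation speeds up the rise to steady state:

```python
# Euler-method sketch of Alon's comparison between simple regulation and
# negative autoregulation (all parameter values below are made up).
beta, alpha = 2.0, 1.0       # max production rate, degradation/dilution rate
K, n = 1.0, 4                # Hill repression threshold and coefficient
dt, T = 0.001, 10.0

def simulate(production):
    """Integrate dX/dt = production(X) - alpha*X from X(0) = 0."""
    X, xs = 0.0, []
    for _ in range(int(T / dt)):
        X += dt * (production(X) - alpha * X)
        xs.append(X)
    return xs

simple = simulate(lambda X: beta)                      # dX/dt = beta - alpha*X
nar = simulate(lambda X: beta * K**n / (K**n + X**n))  # negative autoregulation

def half_time(xs):
    """Number of steps needed to reach half of the final (steady-state) value."""
    target = 0.5 * xs[-1]
    return next(i for i, x in enumerate(xs) if x >= target)

print(half_time(nar) < half_time(simple))   # True: autoregulation rises faster
```

With these parameters the simple system settles at $\beta/\alpha = 2$ while the autoregulated one settles at the solution of $\beta K^n/(K^n + X^n) = \alpha X$ (here $X = 1$), and the autoregulated system reaches half of its steady state sooner.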

view this post on Zulip Adittya Chaudhuri (Aug 05 2025 at 19:54):

I felt signal transduction networks are more like the kind of networks treated in Chapter 5 of this, and gene regulatory networks are more like $\mathbb{Z}/2$-labeled graphs. If we have to understand Steps 1-3 in a single framework, then we may need to consider/develop a more general framework that encompasses both monoid-labeled graphs and signal flow graphs.

view this post on Zulip Adittya Chaudhuri (Aug 05 2025 at 21:19):

The way Uri Alon or Tyson describe cells, it seems to me that cells are just biological machines which take signals as inputs and give gene expressions as outputs. However, according to a recent study (I think ongoing) https://mpinb.mpg.de/en/research-groups/groups/cellular-computations-and-learning/news-ccl-eng/koseska-synergygrant-celearn.html: individual cells possess the ability to learn from their environment. Traditionally, cells have been viewed as passive executors of genetic instructions, responding to environmental signals often in a predetermined manner. This research challenges that paradigm by proposing that cells actively generate internal representations of their complex environments.

I found this perspective of a cell quite interesting!!

view this post on Zulip John Baez (Aug 05 2025 at 21:25):

All this is fascinating, and I think ultimately it will lead to some great mathematics.

You wrote:

According to Uri Alon, a way to identify such motifs is by identifying patterns in gene regulatory networks that occur much more often than in randomized networks having the same number of nodes and arrows. For example, negative autoregulation (a one-node negative feedback loop), positive autoregulation (a one-node positive feedback loop), feedforward loops, etc. are examples of such motifs.

What are some non-motifs? I.e., what are some patterns that occur less often than in randomized networks having the same number of nodes and arrows? (If some occur more often than in randomized networks, others must occur less often.)

view this post on Zulip Adittya Chaudhuri (Aug 05 2025 at 21:29):

John Baez said:

All this is fascinating, and I think ultimately it will lead to some great mathematics.

Thanks very much!!! I also hope so!!

view this post on Zulip John Baez (Aug 05 2025 at 21:30):

Continuing my remarks:

We can deduce from what Uri Alon wrote that these non-motifs - patterns that occur less often than in randomized networks having the same number of nodes and arrows - either serve no purpose, or are actually harmful.

I would like to study these non-motifs. Here's an easy way to start. If you look at signed graphs with a very small number of nodes and arrows, and find which ones Alon calls 'motifs', then the rest should be 'non-motifs'.

He is interested in motifs, but non-motifs should also be interesting.

view this post on Zulip Adittya Chaudhuri (Aug 05 2025 at 21:32):

Interesting!! Yes, it makes sense. In a way, I think you are asking "why is natural selection saying that non-motifs are harmful/non-useful"?

view this post on Zulip John Baez (Aug 05 2025 at 21:33):

Right, but first I want to know what they are, and see if I can learn anything from that.

view this post on Zulip Adittya Chaudhuri (Aug 05 2025 at 21:35):

Thanks!! Yes, I got your point. I will try to understand what Uri Alon means by a pattern which is not a motif. I will start with some very small signed graphs, as you suggested.

view this post on Zulip John Baez (Aug 05 2025 at 21:37):

Yes: just list all signed graphs with n nodes and m edges for small natural numbers n and m, cross off the ones that Alon calls motifs, and make a list of the rest.
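This listing can be done mechanically; as a sketch in Python, here are all directed signed graphs on n labelled nodes with m signed edges (self-loops allowed, no parallel edges), from which the motifs named in Alon's book would then be crossed off:

```python
from itertools import combinations, product

def signed_graphs(n, m):
    """All directed signed graphs on n labelled nodes with m edges:
    choose m ordered pairs (self-loops allowed), then a sign for each."""
    pairs = [(u, v) for u in range(n) for v in range(n)]
    for edge_set in combinations(pairs, m):
        for signs in product("+-", repeat=m):
            yield tuple(zip(edge_set, signs))

graphs = list(signed_graphs(2, 2))
# C(4, 2) choices of edge pairs times 2^2 sign patterns = 6 * 4 = 24
print(len(graphs))   # 24
```

Note this counts labelled graphs; to list them up to isomorphism one would further quotient by permutations of the nodes.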

view this post on Zulip Adittya Chaudhuri (Aug 05 2025 at 21:38):

Thanks!! Yes, I will do as you suggested.

view this post on Zulip Adittya Chaudhuri (Aug 06 2025 at 15:33):

Hi!!

I think I have an example of a pattern in Biology which did not survive evolution. I was trying to understand such patterns in general.

Below, I am writing down my current understanding:

Just as Uri Alon defined a motif in a particular network as a pattern that appears much more frequently than in a random network with the same number of nodes and arrows, he used the term anti-motif for a pattern that appears much less frequently than in a random network with the same number of nodes and arrows. He also argued that this means Natural Selection has deliberately rejected such patterns. So from his definition it seems that "not a motif" $\neq$ "anti-motif", but "anti-motif" $\Rightarrow$ "not a motif". Also, it seems from the definition itself that the concept of motif/anti-motif is defined with respect to a particular network.

Example of an anti-motif:
The example is the three-vertex feedback loop, or directed 3-cycle. This is a big surprise to me, as I found such patterns very common in systems dynamics diagrams. The example comes from the neuronal network of C. elegans, a tiny worm composed of about 1000 cells, of which 300 are neurons. The 300 neurons are connected by about 5000 synaptic connections. Studies tell us that directed 3-cycles occur much less frequently in the neuronal network of C. elegans than in a random network. More surprisingly, according to Uri Alon, three-node feedforward loops (a direct influence together with an indirect influence) are found in much greater abundance than in a random network. Hence, he argued that although a directed 3-cycle and a three-node feedforward loop consist of the same number of nodes and arrows, Natural Selection has chosen three-node feedforward loops but rejected directed 3-cycles in the design of the neuronal network of C. elegans. However, he argued that networks from different fields (transcription networks, neuronal networks, social networks, food webs, etc.) usually display different network motifs, because each field has its own functions and its own constraints. Despite this, he said it is very surprising that three-node feedforward loops are motifs in both transcription networks and neuronal networks. He gave a hypothetical explanation of this similarity: he said that neuronal networks and transcription networks are similar in function because both process information. More precisely, neuronal networks process information from sensory neurons to motor neurons, whereas transcription networks process information in the form of signals coming through the receptors of the cell to produce gene expression. Uri Alon said the following:

This similarity in function raises the hypothesis that evolution converged on similar network motifs in both networks to perform important signal-processing tasks.

Uri Alon discussed these on pages 88-89 of the second edition of his book.
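The comparison can also be sketched numerically: count directed 3-cycles and feedforward loops in a given digraph and compare with the counts in random digraphs having the same numbers of nodes and arrows. The sketch below (in Python, with a made-up example graph) uses a uniform-random null model, which is cruder than the degree-preserving randomization Alon actually uses:

```python
import random
from itertools import permutations

def count_triads(adj, n):
    """Count directed 3-cycles (a->b->c->a) and feedforward loops
    (a->b, b->c, a->c) over ordered triples of distinct nodes."""
    cycles = ffls = 0
    for a, b, c in permutations(range(n), 3):
        if (a, b) in adj and (b, c) in adj:
            if (c, a) in adj: cycles += 1
            if (a, c) in adj: ffls += 1
    return cycles // 3, ffls   # each 3-cycle is found 3 times

def random_digraph(n, m, rng):
    """Erdos-Renyi-style null model: m arrows placed uniformly at random."""
    pairs = [(u, v) for u in range(n) for v in range(n) if u != v]
    return set(rng.sample(pairs, m))

rng = random.Random(0)
# A made-up graph built from overlapping feedforward loops:
g = {(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4), (0, 4)}
cycles, ffls = count_triads(g, 5)
null = [count_triads(random_digraph(5, len(g), rng), 5) for _ in range(200)]
avg_cycles = sum(c for c, _ in null) / len(null)
avg_ffls = sum(f for _, f in null) / len(null)
print(cycles, ffls, avg_cycles, avg_ffls)
```

In this made-up graph the feedforward loop is over-represented (3 copies, no 3-cycles) relative to the random baseline, which is the shape of the evidence Alon describes, on a toy scale.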

Inspired by @John Baez (and the concept of systems archetypes), I have a hypothesis in this direction:

Evolution also converged on similar network motifs and anti-motifs in all the biological networks which perform similar functions. So, if we see differences in the network motifs or anti-motifs between two biological networks, then we may conclude that the two biological networks perform different biological functions.

motifsinvariousnetworks.png

It is interesting to me to note that in human-made electronic circuits we have directed 3-cycles as motifs (as mentioned in the screenshot taken from page 88 of Uri Alon's book), but such patterns are considered anti-motifs in the neuronal network of C. elegans.

Uri Alon and Shalev Itzkovitz attempted to develop a general mathematical formalism (based on random networks) for network motifs in Subgraphs and network motifs in geometric networks, but their study showed that their model is insufficient to capture the reasoning behind natural selection in biological networks.

I am now getting a feeling that systems archetypes are the wrong patterns in human-made systems (modelled as signed graphs) found by humans, whereas anti-motifs are the wrong patterns in biological systems (modelled as signed graphs) found by Natural Selection.

Possibly there are many more anti-motifs besides the 3-cycle example. I will search for more such examples.

view this post on Zulip John Baez (Aug 06 2025 at 15:41):

This is very interesting! I think with more examples, and more thought about the various choices of semantics of signed graphs, one might be able to figure out what makes certain signed graphs into motifs or anti-motifs (in a way that depends on what they're used for).

view this post on Zulip Adittya Chaudhuri (Aug 06 2025 at 15:43):

Thanks very much!! Yes, I completely agree.

view this post on Zulip Adittya Chaudhuri (Aug 06 2025 at 15:45):

I have a feeling that dynamics may not be the only way to find the importance of motifs or the harmfulness of anti-motifs. There may exist something else too, although at the moment I do not have much idea of what else it could be.

view this post on Zulip John Baez (Aug 06 2025 at 15:52):

I feel their usefulness or harmfulness must somehow depend on what they "mean" - and thus, on some semantics. This semantics may not be dynamics. But as a physicist, I can't help but feel it's dynamics - that is, what the motifs "do".

When you read more of that book, you'll probably learn more about what motifs "do", or "mean". And that will be a major clue.

view this post on Zulip Adittya Chaudhuri (Aug 06 2025 at 17:01):

Thanks!! Yes, I got your point. I will read more of that book, and also will try to find out more examples of anti-motifs in biology.

view this post on Zulip Adittya Chaudhuri (Aug 06 2025 at 21:10):

Hi!!

I just found out that the concept of motifs/anti-motifs in biological networks (in the way Uri Alon discussed it) has some controversies. For example, see here (references 70 to 77), which argued against the idea that motifs are chosen by natural selection for some purpose and anti-motifs are rejected by natural selection for some purpose. For example, in On the origin of distribution patterns of motifs in biological networks, Arun S. Konagurthu and Arthur M. Lesk argued, in the Discussion and conclusion section, against Uri Alon's hypothesis that 3-cycles are anti-motifs and feedforward loops are motifs. Similarly, in the paper Network motifs: structure does not determine function, the authors argued that there is no characteristic behaviour for a motif, and that with the correct choice of parameters and of internal structure, very different, indeed even opposite, behaviours may be obtained.

Now I am very confused!! Whom should I believe: Uri Alon's motifs, or the scientists who criticised the idea of motifs? I am more confused because I am not a biologist by training.

view this post on Zulip Kevin Carlson (Aug 06 2025 at 21:37):

Well, who seems more correct?

view this post on Zulip Peva Blanchard (Aug 07 2025 at 05:39):

This counting of motifs/anti-motifs in a structure remind me of Ramsey theory.

view this post on Zulip Adittya Chaudhuri (Aug 07 2025 at 05:59):

Kevin Carlson said:

Well, who seems more correct?

At the moment, it is difficult for me to answer your question in a concrete way because it has been only a few days since I started learning about Uri Alon's work on network motifs, and it was only yesterday that I came across those papers arguing against it. However, below I am trying to write down my current understanding of the issue, which may of course evolve in the coming days after learning more about both Uri Alon's work and the work of the scientists who oppose it.

I find the following idea of Uri Alon interesting: defining a motif in a particular network as a pattern that appears much more frequently than in a random network with the same number of nodes and arrows, and defining an anti-motif in a particular network as a pattern that appears much less frequently than in a random network with the same number of nodes and arrows, as indicators of natural selection and natural rejection respectively.

Although I have to read them in more detail, I think papers like On the origin of distribution patterns of motifs in biological networks argued that the over-representation and under-representation of various patterns are actually consequences of the choice of mathematical models (underlying graphs) used in the study of gene regulatory networks, rather than of natural selection or natural rejection.

I am convinced (at least from the portion I have read so far) that the dynamics of certain over-represented patterns like negative autoregulation, positive autoregulation and three-node feedforward loops can be shown to be interesting (as they serve specific purposes in gene regulation) when we model the dynamics using, e.g., the Hill equation.

However, it is not clear to me whether, if we place these motifs inside a very large network, the special dynamics of these patterns will remain special. (In reality this is the case of interest, because motifs do not exist in isolation; they usually occur within some big network in biology.) This issue has been raised, for example, in the paper Do motifs reflect evolved function?--No convergent evolution of genetic regulatory network subgraph topologies. Although I am yet to read the paper (it is not accessible to me), from the abstract it seems they mean to say that motifs are not of greater importance to the functioning of the network than arbitrary subgraph patterns.

However, my previous exposure to the systems dynamics literature, which discusses the importance of feedback loops and systems archetypes in causal loop diagrams, was very convincing, and it created a personal bias towards Uri Alon's theory of motifs and anti-motifs. That said, I have not studied how the dynamics of the various feedback loops in a large causal loop diagram affect the dynamics of the whole network. I think it could be interesting to see whether the dynamics of whole causal loop diagrams have any special relationship with the dynamics of the underlying feedback loops. If the answer is yes, then I may have a bias towards Uri Alon's work; otherwise, in the other direction. However, I think it is also important to understand the methodology that the scientists opposing Alon's work are using to argue against him. It may also be possible that their conclusions are biased by their choice of mathematical models.

view this post on Zulip Kevin Carlson (Aug 07 2025 at 06:02):

Yes, it's a very good question whether anybody has actually established with any sort of rigor that small feedback loops in a large CLD are actually relevant to the long-term dynamics. I guess you can choose parameters to force them to be relevant but perhaps there's some way to make this question non-trivial.

view this post on Zulip Adittya Chaudhuri (Aug 07 2025 at 06:02):

Peva Blanchard said:

This counting of motifs/anti-motifs in a structure remind me of Ramsey theory.

Thanks!! This is interesting. Although I have not read much on Ramsey theory, from a quick glimpse I think it also argues against Uri Alon's work on motifs: Ramsey-type results say that sufficiently large structures unavoidably contain certain patterns, and network sizes are usually very large in Biology.

view this post on Zulip Adittya Chaudhuri (Aug 07 2025 at 06:04):

Kevin Carlson said:

Yes, it's a very good question whether anybody has actually established with any sort of rigor that small feedback loops in a large CLD are actually relevant to the long-term dynamics. I guess you can choose parameters to force them to be relevant but perhaps there's some way to make this question non-trivial.

Thanks! Yes, I agree to your points.

view this post on Zulip Adittya Chaudhuri (Aug 07 2025 at 07:29):

Another thing I often find a little awkward in biology is that examples start with "very simple examples" (from the point of view of biology), and if some patterns are discovered in these simple examples, statements are hypothesised for general complex systems. I know that, due to computational complexity, we do not have many choices in this regard at the moment, but it still feels a little strange to me.

view this post on Zulip Adittya Chaudhuri (Aug 07 2025 at 07:49):

I just found an interesting study, Network connectivity during mergers and growth: optimizing the addition of a module, which argues that the importance of motifs lies in the way these motifs are connected to the entire network.

In particular, the author wrote the following at the end:

In contrast to several studies showing that global dynamics of a system can depend on the structure of subgraph motifs, our results suggest that “how” motifs are connected in the network may be as important as their structure.

I will read more of this paper.

view this post on Zulip David Corfield (Aug 07 2025 at 08:39):

Sorry, I haven't been following closely, but just to remind us that the big book was warning against leaning too heavily on motifs as an indicator of function in biology.

However, statistical abundance by itself does not imply that these circuits are fundamental bricks of biological systems. Fundamental circuits of biological networks can be underrepresented and appear only once in the network, yet they can have overwhelming importance for the functioning of the cell.

Motifs are good for finding basic building blocks that are statistically repeated ubiquitously. However, the functional significance of these network motifs has been debated, and results indicate that the link between motif and function does not exist (Ingram et al., 2006; Payne and Wagner, 2015; Macıa et al., 2009; Ahnert and Fink, 2016). Indeed, except for the FAN motif, other motifs found by Alon and coworkers (Milo et al., 2002) do not lead to synchronization or other clear functional roles. Thus, the mere statistical significance of the network motif may not confer a definitive functionality within the cell machinery. (p. 332)

The chapter (17) then goes on to give statistical analyses of motif abundance. Naturally, fibrations appear to them to be much more important.

view this post on Zulip John Baez (Aug 07 2025 at 09:01):

Adittya Chaudhuri said:

Another thing I often find a little awkward in biology examples is that they start with "very simple examples" (from the point of view of biology), and if they discover some patterns in these "simple examples", they hypothesize statements about general complex systems. I know that, due to computational complexity, we do not have much choice in this regard at the moment, but it still feels a little strange to me.

Science proceeds by being bold and making simple guesses, then later correcting them as needed. It's a good method. Simple laws are often reasonably good approximations to the truth. But it's not like math, where our papers are very likely to be 100% correct, because we can prove things. In science you expect to be wrong a lot of the time.

view this post on Zulip John Baez (Aug 07 2025 at 09:05):

Motifs are good for finding basic building blocks that are statistically repeated ubiquitously. However, the functional significance of these network motifs has been debated, and results indicate that the link between motif and function does not exist (Ingram et al., 2006; Payne and Wagner, 2015; Macıa et al., 2009; Ahnert and Fink, 2016). Indeed, except for the FAN motif, other motifs found by Alon and coworkers (Milo et al., 2002) do not lead to synchronization or other clear functional roles.

It would be good to read these papers (although I don't have time now). I suspect that "synchronization" is less important than people want it to be. Human machines often require events to happen in a synchronized schedule, requiring some sort of clock. But I suspect biology can't afford to be so precise. Or maybe biologists mean something much less precise when they talk about "synchronization".

view this post on Zulip David Corfield (Aug 07 2025 at 09:37):

Yes, chap. 21 for that:

One of the main advantages of using symmetries to characterize dynamics is that the synchrony patterns that we predict are combinatorial/topological properties of the network, thus largely independent of the explicit form of the dynamical equations. Therefore, we do not need to specify the equations that describe the dynamics of the system—be it a neural system, a genetic network, or any other physical, chemical, or biological system—beyond the general form from Definition 3.10, as long as the ODEs are admissible for the graph. We, therefore, derive all our results for a general type of dynamics, which can then be specialized to particular systems. However, particular systems are characterized by a large number of parameters and most of them are required to be the same for symmetry to exist. Unfortunately, this ‘uniformity condition’ is never strictly realized in biology. How is the coherent synchronization necessary for life to emerge?

We treat this question below. We first elaborate on an ideal model of gene expression that predicts perfect synchrony. Then we show that the conditions for this model are not realized in biology due to the broad parameter space of interactions among genes, proteins, and small molecules. We discuss how to reconcile this result with the observed (quasi-perfect) synchrony in real systems.
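To make the quoted point concrete, here is a minimal toy sketch (my own, not from the book, with made-up dynamics and parameter names): two nodes with the same in-neighbourhood and the same admissible dynamics stay exactly synchronized, but the synchrony is lost as soon as their parameters differ, which is the "uniformity condition" the authors mention.

```python
import math

def simulate(a1, a2, steps=5000, dt=0.01):
    """Euler-integrate a 3-node network in which node 0 drives nodes 1 and 2.

    Nodes 1 and 2 have the same in-neighbourhood {0} and the same functional
    form, so they are symmetric in the fibration sense. The decay rates a1, a2
    play the role of the parameters that must be equal for the symmetry to hold.
    """
    x0, x1, x2 = 0.0, 0.2, 0.2            # nodes 1 and 2 start in the same state
    for _ in range(steps):
        dx0 = 1.0 - x0                    # node 0 relaxes towards 1
        dx1 = -a1 * x1 + math.tanh(x0)    # identical form for nodes 1 and 2
        dx2 = -a2 * x2 + math.tanh(x0)
        x0, x1, x2 = x0 + dt * dx0, x1 + dt * dx1, x2 + dt * dx2
    return abs(x1 - x2)                   # synchronization error at the end

# Equal parameters ("uniformity condition" holds): exact synchrony.
print(simulate(1.0, 1.0))   # 0.0
# Unequal parameters: the symmetry, and hence the synchrony, is broken.
print(simulate(1.0, 1.3))   # roughly 0.18
```

Note that the synchrony prediction uses only the graph structure (equal in-neighbourhoods) and the uniformity of parameters, not the particular choice of tanh; that is exactly the "independent of the explicit form of the dynamical equations" point in the quote.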

view this post on Zulip Adittya Chaudhuri (Aug 07 2025 at 15:30):

John Baez said:

Science proceeds by being bold and making simple guesses, then later correcting them as needed. It's a good method. Simple laws are often reasonably good approximations to the truth. But it's not like math, where our papers are very likely to be 100% correct, because we can prove things. In science you expect to be wrong a lot of the time.

Thanks very much!! I take your points. I agree. I am trying to adopt this mindset.

view this post on Zulip Adittya Chaudhuri (Aug 07 2025 at 15:41):

John Baez said:

Motifs are good for finding basic building blocks that are statistically repeated ubiquitously. However, the functional significance of these network motifs has been debated, and results indicate that the link between motif and function does not exist (Ingram et al., 2006; Payne and Wagner, 2015; Macıa et al., 2009; Ahnert and Fink, 2016). Indeed, except for the FAN motif, other motifs found by Alon and coworkers (Milo et al., 2002) do not lead to synchronization or other clear functional roles.

It would be good to read these papers (although I don't have time now). I suspect that "synchronization" is less important than people want it to be. Human machines often require events to happen in a synchronized schedule, requiring some sort of clock. But I suspect biology can't afford to be so precise. Or maybe biologists mean something much less precise when they talk about "synchronization".

After reading https://mpinb.mpg.de/en/research-groups/groups/cellular-computations-and-learning/news-ccl-eng/koseska-synergygrant-celearn.html, I was wondering whether "internal representations" of the external environment of cells (whatever that means to biologists) are the reason behind this imprecision: cells behave according to their internal representations, which vary, and this in turn makes them less synchronized. For example, while learning a concept, our internal representation of it is often wrong in the beginning (which may lead to imperfections, asymmetry, etc.), but it slowly tends to get better, though never perfect.

view this post on Zulip Adittya Chaudhuri (Aug 07 2025 at 15:44):

David Corfield said:

Yes, chap. 21 for that:

One of the main advantages of using symmetries to characterize dynamics is that the synchrony patterns that we predict are combinatorial/topological properties of the network, thus largely independent of the explicit form of the dynamical equations. Therefore, we do not need to specify the equations that describe the dynamics of the system—be it a neural system, a genetic network, or any other physical, chemical, or biological system—beyond the general form from Definition 3.10, as long as the ODEs are admissible for the graph. We, therefore, derive all our results for a general type of dynamics, which can then be specialized to particular systems. However, particular systems are characterized by a large number of parameters and most of them are required to be the same for symmetry to exist. Unfortunately, this ‘uniformity condition’ is never strictly realized in biology. How is the coherent synchronization necessary for life to emerge?

We treat this question below. We first elaborate on an ideal model of gene expression that predicts perfect synchrony. Then we show that the conditions for this model are not realized in biology due to the broad parameter space of interactions among genes, proteins, and small molecules. We discuss how to reconcile this result with the observed (quasi-perfect) synchrony in real systems.

Thanks!! Interesting!! I will read Chapter 21.