You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Let's talk about Networks in Biology in this thread!!
In our recent paper Graphs with polarities, @John Baez and I studied how monoid labeled directed graphs can model regulatory networks in biology when the monoid is suitably chosen. We discussed the work related to this paper here: #theory: applied category theory > Graphs with polarities. Now I am trying to understand the content of this interesting paper Cell Fate Reprogramming by Control of Intracellular Network Dynamics by Jorge G. T. Zañudo and Réka Albert (suggested by @John Baez). Let me start by spelling out my very initial understanding of the paper. I apologise in advance if my understanding sounds very naive and immature at the moment; I expect it to get better in the coming days. In this thread, I am assuming the notation used in our paper Graphs with polarities.
How does the framework in the paper Cell Fate Reprogramming by Control of Intracellular Network Dynamics relate to our framework in Graphs with polarities?
For a monoid M, let G be an M-labeled graph. Although in Graphs with polarities we modelled regulatory networks as M-labeled graphs for suitable choices of the monoid M, we have not studied the dynamics of M-labeled graphs. I feel biologists are usually interested in three kinds of dynamics:
For a detailed discussion related to "how the above three classes of dynamics modelling approach differ and how each has its own advantages", please see the paper Quantitative and logic modelling of gene and molecular networks by Nicolas Le Novère.
I feel the paper Cell Fate Reprogramming by Control of Intracellular Network Dynamics discusses certain interesting as well as useful aspects of logic-based dynamics of our M-labeled graphs from Graphs with polarities, phrased in terms of Boolean networks. In particular, the authors studied how stabilizing the expression or activity of a few select nodes of a regulatory network can drive the cell towards a desired fate or away from an undesired fate. I may be wrong, but next I will try to argue why I think a Boolean network describes a logic-based dynamics of our M-labeled graphs!!
Below, I am trying to write my understanding of a Boolean model of the discrete dynamics of a regulatory network, as explained in An effective network reduction approach to find the dynamical repertoire of discrete dynamic networks, adapted to the framework of monoid labeled graphs.
Consider an M-labeled graph G with vertex set V. Now, for each vertex v in V, consider a {0,1}-valued function s_v of time, the state function of the vertex v. We define the state of the vertex v at time t as s_v(t).
If s_v(t) = 0, we will say the vertex v is off at time t.
If s_v(t) = 1, we will say the vertex v is on at time t.
We define the tuple s(t) = (s_v(t))_{v in V} as the state of G at time t.
Now for each vertex v, we have to equip an updating function f_v which takes the state of G at time t and returns the state of v at time t+1:
s_v(t+1) = f_v(s(t)).
Thus, we can extend the above definition to an updating family of functions F = (f_v)_{v in V}, which updates the state of G from s(t) to s(t+1).
Thus, I am defining a Boolean model of the discrete dynamics of G as the pair (G, F), with a given initial state s(0) at t = 0.
Above is an example of a synchronous updating scheme, where the state of every vertex is updated simultaneously. However, in many cases the updating scheme is asynchronous, in which case we have to define the updating function accordingly. (I have to think about how to talk about it in a formal way.)
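To make the synchronous scheme concrete, here is a tiny Python sketch. The three nodes A, B, C and their update rules are invented for illustration (they are not from any of the papers); the point is only that every vertex reads the same time-t snapshot and updates simultaneously:

```python
# Toy Boolean network with three invented nodes A, B, C and invented rules.
# Each updating function reads the whole time-t state and returns the
# node's state at time t+1.
def f_A(s):
    return s["C"]                    # A turns on when C is on

def f_B(s):
    return s["A"] and not s["C"]     # B needs A on and C off

def f_C(s):
    return not s["B"]                # C is inhibited by B

update_fns = {"A": f_A, "B": f_B, "C": f_C}

def synchronous_step(state):
    """All vertices are updated simultaneously from the same snapshot."""
    return {v: bool(f(state)) for v, f in update_fns.items()}

state = {"A": False, "B": False, "C": True}   # initial state at t = 0
for t in range(5):
    state = synchronous_step(state)
```

Here the trajectory settles into a fixed point with A and C on and B off. An asynchronous scheme would instead update one vertex at a time (chosen randomly or by some order), which can change which attractors are reachable.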
Next I want to understand the concept of an attractor: a set of states in which the above dynamics eventually settles down. According to the authors of Cell Fate Reprogramming by Control of Intracellular Network Dynamics,
"The attractors of intracellular networks have been found to be identifiable with different cell fates, cell behaviors, and stable patterns of cell activity"
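For a small network one can actually find all the attractors of the synchronous dynamics by brute force: iterate the update map from every one of the 2^N states and record the cycle each trajectory falls into. A sketch, with an invented 3-node network (rules: A gets C, B gets A AND NOT C, C gets NOT B):

```python
from itertools import product

# Invented 3-node network; a state is a tuple (a, b, c) of 0/1 values.
# Rules (invented): A <- C, B <- A AND NOT C, C <- NOT B.
def step(s):
    a, b, c = s
    return (c, a & (1 - c), 1 - b)

def attractor_from(s):
    """Iterate the synchronous update until a state repeats;
    return the cycle the trajectory falls into."""
    seen = {}
    t = 0
    while s not in seen:
        seen[s] = t
        s = step(s)
        t += 1
    start = seen[s]                  # time at which the cycle begins
    return frozenset(x for x, i in seen.items() if i >= start)

# Brute force over all 2^N states (feasible only for small N):
attractors = {attractor_from(s) for s in product((0, 1), repeat=3)}
```

For this toy network the search finds two attractors: the fixed point (1, 0, 1) and the 2-cycle {(0, 1, 1), (1, 0, 0)}. Already at N around 30 the 2^N enumeration becomes hopeless, which is exactly the intractability the paper's passage below is about.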
It might be good to start by focusing on the synchronous updating scheme for boolean networks, even though it's oversimplified, and look at various biologically interesting behaviors that can naturally arise from this scheme. The paper you're writing about claims to make progress in this:
As it was pointed out in section II A, the state space of a Boolean network with N nodes contains 2^N states. This exponential dependence on the number of nodes of the state space's size makes the problem of finding the attractors of a Boolean network computationally intractable, which means a full search of the state space can be performed, in practice, only for small networks.
To overcome this challenge several types of methods have been proposed to simplify the search space. The most prominent of these approaches, the so-called network reduction methods [35-38], shrink the effective network size by removing frozen nodes (nodes that reach the same steady state regardless of initial conditions) [38-40] and dynamically irrelevant nodes (such as simple mediator nodes or nodes with no outputs), and by simplifying the Boolean functions (for example, due to the presence of a node with a fixed state), thus reducing the effective state space.
Although the reduction methods developed so far have been successfully applied to several biological networks [35-37, 41], they have some limitations [....]
The method that we propose is based on the idea that certain components of the network can only stabilize in one or a small number of attractors. This idea is itself not new and is closely related to R. Thomas' rules linking feedback loops with the appearance of complex dynamical behavior in biological networks [19]. For example, an approach that connects the dynamics of certain network motifs to construct the attractors of the full Boolean network was recently proposed by Siebert [28]. The novelty of our method is the efficiency of identifying every motif that stabilizes in an asynchronous attractor despite being coupled to the rest of the network...
This makes me want to figure out what their method is. I couldn't succeed in a few minutes of work. I don't even know what it means to say a motif stabilizes in an "asynchronous attractor", given that they're studying the synchronous updating scheme. But it would definitely be worth spending an hour or two trying to figure out their method! If it's too hard, maybe some of the papers they refer to are easier.
By the way, here is a free version of the paper you linked to, on the arXiv:
The version you pointed to doesn't seem to be open-access.
John Baez said:
It might be good to start by focusing on the synchronous updating scheme for boolean networks, even though it's oversimplified, and look at various biologically interesting behaviors that can naturally arise from this scheme.
Thank you!! Yes, I agree.
I bounced off the complicated notation for your updating function at first glance. Maybe you can just say what you mean in words too? There's a graph floating around here and yet nobody said anything like "the state depends on the state of all the neighbors"!
Kevin Carlson said:
I bounced off the complicated notation for your updating function at first glance. Maybe you can just say what you mean in words too? There's a graph floating around here and yet nobody said anything like "the state depends on the state of all the neighbors"!
Yes, I meant to say that the transition from one state of a vertex to another depends on two things:
1) The states of the neighbours of v at time t (here the neighbours of v include v itself).
2) The regulatory nature (stimulation, inhibition, etc.) of the neighbours of v on v.
John Baez said:
This makes me want to figure out what their method is. I couldn't succeed in a few minutes of work. I don't even know what it means to say a motif stabilizes in an "asynchronous attractor", given that they're studying the synchronous updating scheme. But it would definitely be worth spending an hour or two trying to figure out their method! If it's too hard, maybe some of the papers they refer to are easier.
Thanks!! Yes, I completely agree. I am trying to understand what they meant by "a motif stabilizes in an 'asynchronous attractor', given that they're studying the synchronous updating scheme".
John Baez said:
By the way, here is a free version of the paper you linked to, on the arXiv:
The version you pointed to doesn't seem to be open-access.
Thanks very much for this version. Somehow, I could not find it yesterday and I downloaded the previous version from my institutional account.
John Baez said:
The method that we propose is based on the idea that
certain components of the network can only stabilize in
one or a small number of attractors. This idea is itself
not new and is closely related to R. Thomas' rules
linking feedback loops with the appearance of complex
dynamical behavior in biological networks [19].
I am trying to understand what they meant by the above paragraph.
John Baez said:
Although the reduction methods developed so far have
been successfully applied to several biological networks
[35-37, 41], they have some limitations [....]
I need to understand what they meant by their reduction method, and what its limitations are.
Kevin Carlson said:
I bounced off the complicated notation for your updating function at first glance. Maybe you can just say what you mean in words too? There's a graph floating around here and yet nobody said anything like "the state depends on the state of all the neighbors"!
Adittya mentioned a graph, but I don't see why. Maybe the paper talks about a graph? I prefer to think about a boolean network: a set of vertices and, for each vertex, a set of input vertices together with a boolean function telling us the state of that vertex as a function of the states of the vertices in that set. If we start by assigning each vertex a boolean at time zero, we can update the state at each later time using these functions in the hopefully obvious way.
At least this is my impression of how boolean networks work. Wikipedia puts it this way:
A Boolean network consists of a discrete set of Boolean variables each of which has a Boolean function (possibly different for each variable) assigned to it which takes inputs from a subset of those variables and output that determines the state of the variable it is assigned to. This set of functions in effect determines a topology (connectivity) on the set of variables, which then become nodes in a network. Usually, the dynamics of the system is taken as a discrete time series where the state of the entire network at time t+1 is determined by evaluating each variable's function on the state of the network at time t.
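John's functional description (and the Wikipedia one) can be transcribed almost literally into code: a node is just a list of input nodes plus a Boolean function of those inputs, and the wiring graph is induced from the input lists rather than given separately. A minimal sketch with invented nodes and rules:

```python
# A Boolean network in the functional style: each node has a list of input
# nodes and a Boolean function of those inputs. Names and rules are invented.
network = {
    "A": (["C"], lambda c: c),
    "B": (["A", "C"], lambda a, c: a and not c),
    "C": (["B"], lambda b: not b),
}

# The wiring graph is induced from the input lists, not given separately:
edges = [(u, v) for v, (inputs, _) in network.items() for u in inputs]

def step(state):
    """Synchronous update: every node reads the time-t state at once."""
    return {v: bool(f(*(state[u] for u in inputs)))
            for v, (inputs, f) in network.items()}
```

This makes the point under discussion concrete: the graph (the `edges` list) carries strictly less information than the network itself, since many different Boolean functions induce the same wiring.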
John Baez said:
Kevin Carlson said:
I bounced off the complicated notation for your updating function at first glance. Maybe you can just say what you mean in words too? There's a graph floating around here and yet nobody said anything like "the state depend son the state of all the neighbors"!
Adittya mentioned a graph, but I don't see why. Maybe the paper talks about a graph? I prefer to think about a boolean network: a set of vertices and, for each vertex, a set of input vertices together with a boolean function telling us the state of that vertex as a function of the states of the vertices in that set. If we start by assigning each vertex a boolean at time zero, we can update the state at each later time using these functions in the hopefully obvious way.
My inspiration for thinking of a Boolean network as "some extra structure" on a monoid labeled graph (which gives us a discrete dynamics of the graph) comes from page 2 of An effective network reduction approach to find the dynamical repertoire of discrete dynamic networks (please see the attached screenshot for the precise portion of the page).
boolnetwork.png
What's the point of having the graph, if "experience shows us" that this is not enough and we need a boolean network as I defined it above? I would throw out the graph. Anyway, I would not worry about these details much until I figured out the "method" which is supposedly the main point of this paper. These are biologists writing, not mathematicians, so I don't expect them to be very precise, but they may (or may not) have an interesting "method".
I understand your point, but it said "not enough", which may still mean the graph is necessary. That is why I thought of keeping both the graph structure and the "boolean network" structure in my updating function here. I may be misunderstanding something!!
John Baez said:
Anyway, I would not worry about these details much until I figured out the "method" which is supposedly the main point of this paper. These are biologists writing, not mathematicians, so I don't expect them to be very precise, but they may (or may not) have an interesting "method".
Thanks!! Yes, I agree with your point.
I am trying to write down my current understanding of stable motifs in a Boolean network as discussed in An effective network reduction approach to find the dynamical repertoire of discrete dynamic networks.
Sorry, I realised I was misunderstanding something very fundamental about Boolean networks. In particular, I realised that the graph structure in a Boolean network is induced from the Boolean functions on the nodes. Before, I was misunderstanding by thinking that "we are starting with a graph structure and then adding the Boolean functions to it". I now realise what @John Baez explained here. So, I decided to delete my yesterday night's post and redefine the notions according to my current realisations.
Boolean network:
A Boolean network consists of a finite set of nodes along with
Graph associated to a Boolean network:
Let be a Boolean network. We define a graph whose
Remark:
Note that this graph does not permit multiple edges between vertices.
Discrete dynamics of a Boolean network:
Let B be a Boolean network. Consider a family of functions, one for each node, whose members are called the updating functions of the nodes. We define the discrete dynamics of B as the n-tuple of these updating functions.
State of the Boolean dynamics:
Let F be the discrete dynamics of the Boolean network B. We define the state of B at time t as the n-tuple of the states of its nodes at time t, and we define the state of the vertex v at time t as the projection of this tuple to its v-th coordinate.
Based on the above formalisms, I will define the notion of stable motifs later today.
There is something slightly strange about that definition, because it could be that one of the functions doesn't depend on one of its arguments. Then the graph will have an edge between the corresponding vertices even though there is no actual influence, if that makes sense. I'm not sure if this will matter for anything or not.
Maybe a better way to express that is: what's the right notion of morphism for Boolean networks? It seems like maybe you'd want such a network to be isomorphic to one where that spurious dependence is removed.
I have thought about Nathaniel's point when I was defining "stock flow diagrams", which have "links" pointing from "stocks" to "flows" together with a function saying how a flow depends on those stocks that have links pointing to it. The function could be independent of one of its arguments! Then one link would be unnecessary. But instead of trying to forbid this, I decided it was better to allow it.
To disallow it is a bit like trying to say a constant function is not really a function. "What? You're telling me 3 is a function of x? That's absurd!" It feels convincing in a certain gut-level way, but I think it's mathematically very inconvenient.
Thanks!!
Below, I am trying to write down what I thought about the issue:
According to pages 3-4 (Expanded network) of the attached published version of An effective network reduction approach to find the dynamical repertoire of discrete dynamic networks (a screenshot of the portion is attached),
An effective network reduction approach to find the dynamical repertoire of discrete dynamic networks.pdf
labeledBoolean.png
I think biologists often prefer to work with the following labeled version of graphs associated to Boolean networks:
Labeled Boolean graph:
Let G be the graph associated to a Boolean network B. For a monoid M, we define an M-labeled Boolean graph of B as an M-labeled graph (as defined in our paper Graphs with polarities) whose underlying graph is G.
Remark:
Note that in the diagram attached, the authors considered a labeled Boolean graph of some Boolean network.
Now, I propose a solution to the issue raised by @Nathaniel Virgo as follows:
Let us work with an M-labeled Boolean graph where one of the elements of M represents "no influence" [we discussed this in Example 3.4 in our paper Graphs with polarities].
Then, I would like to label the unnecessary edges with that element. Would that work?
By the way, I think this issue is distracting from the more urgent goal of figuring out what "reduction approach" is being explained in An effective network reduction approach to find the dynamical repertoire of discrete dynamic networks.
Our earlier paper was setting up a formalism, and for that it's nice to pay careful attention to details. But if one is trying to understand a biology paper one needs to start by trying to understand the main thing they're trying to say, and later worry about details.
(sorry, I didn't mean to distract, the detail just popped out at me when I read the post!)
No problem - I'm just trying to act like Adittya's advisor, since he doesn't seem to mind that, and I'm trying to push him into doing actual biology, since he seems to want that.
John Baez said:
Our earlier paper was setting up a formalism, and for that it's nice to pay careful attention to details. But if one is trying to understand a biology paper one needs to start by trying to understand the main thing they're trying to say, and later worry about details.
Thanks!! Yes, I got your point and fully agree with it!! I will focus on figuring out the "reduction approach" in that paper.
John Baez said:
I'm just trying to act like Adittya's advisor, since he doesn't seem to mind that, and I'm trying to push him into doing actual biology, since he seems to want that.
Thanks very much!! I am very glad and grateful for this!!
It would be interesting to look at commonalities and differences of approach in this field. In a paper along the lines of that Symmetries of Living Systems book I mentioned elsewhere,
the authors explain the steps of their work to understand something as complex as E. coli.
The use of fibrations should make people here feel very at home.
What's a 'TRN'? I'm assuming a 'GRN' is a gene regulatory network.
Hmm, looking at the paper I don't see them defining 'TRN', but nor do I see them using that term except in the one section header that David gave us a screenshot of! So never mind.
There is a lot of interesting stuff in this paper. I see what seems like the bold claim that most of the gene regulatory network of E. coli (for example) is unnecessary to understand the behavior of this bacterium, or at least the most important aspects.
They start with a gene regulatory network (GRN) with 4692 genes and then trim it down to a 'minimal' GRN with just 42 genes. I don't yet understand the algorithm they use, and I also don't know what properties are supposed to be preserved by this massive simplification. But it sounds remarkable. I guess how remarkable it is depends on how much evolution penalizes inefficiency.
It could be that some seemingly unimportant aspects of the bacterium's GRN become important under some special conditions.
I'm glad to see the key idea of 'fibration' for these GRNs seems to be taken from my friend Eugene Lerman and a collaborator: I remember Eugene working on it and it's nice to see someone has picked up on it!
Here's an interesting passage:
We have shown before [4, 10, 11, 24] that the use of fibrations allows for the breakdown of the network into its fundamental synchronized building blocks, the fiber building blocks. Each fiber belongs to a fiber building block, defined as the induced subgraph formed from the nodes in the fiber, and the nodes that send inputs to the fiber (the regulators).
[....]
We find that these fiber building blocks can be precisely characterized by just two numbers n (or φ) and ℓ, defining the |n, ℓ⟩ classes [10].
Paper [10] is
This passage in the paper makes my crackpot alarm ring:
The building blocks are classified into topological classes of input trees characterized by integer branching ratios and fractal golden ratios of Fibonacci sequences representing cycles of information.
Anyone who mentions 'fractal', 'golden ratio' and 'Fibonacci' in the same sentence makes me afraid.
On a different note (but it might be related): for the last few days (after @David Corfield shared the paper on symmetries of living systems), I was trying to understand the math of graph fibrations from the paper Fibrations of graphs by Boldi and Vigna.
The week before last, one of my colleagues at my current institute (ISI Kolkata, Stat-Math Dept.) gave a talk on discrete homotopy of undirected simple graphs, where he was also discussing how "local isomorphism" properties imply cartesian lifting of morphisms in graph fibrations. Although his definition of graph is a bit more restrictive than ours, interestingly, I find the notion of local isomorphisms present in the paper on graph fibrations (for example, see page 25 of Fibrations of graphs) relatable.
Now again, I am continuing from here, towards the network reduction methods discussed in An effective network reduction approach to find the dynamical repertoire of discrete dynamic networks.
Expanded network of a Boolean network:
Let B be a Boolean network. Then, the expanded network of B is the graph obtained from the graph of B via the following two operations:
First, for each node a complementary node is added, representing its negation. The update function of each node is written in disjunctive normal form, where the literals are either the states of the nodes or those states' negations; the negation of the update function is represented in the same form. Then, composite nodes are added for every set of nodes involved in a conjunctive clause (AND-dependent relationship), and for every constituent node of a composite node an edge is drawn from the constituent node to the composite node. A simple example of this operation is attached in the screenshot.
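My (possibly wrong) reading of the two operations can be sketched in code, assuming every update rule is already given in disjunctive normal form. This is only a fragment: in the paper each complementary node also receives the negation of the original rule (with its own clauses and edges), which I omit here. All node names and rules are invented:

```python
# Each update rule in disjunctive normal form: node -> list of AND-clauses,
# where a literal ("X", True) means X and ("X", False) means NOT X.
# Node names and rules are invented for illustration.
rules = {
    "A": [[("B", True), ("C", True)]],     # A* = B AND C
    "B": [[("A", True)], [("C", False)]],  # B* = A OR (NOT C)
}

def expanded_network(rules):
    nodes, edges = set(), set()
    for v, clauses in rules.items():
        nodes.update({v, "~" + v})         # operation 1: complementary node
        for clause in clauses:
            lits = [x if pos else "~" + x for x, pos in clause]
            nodes.update(lits)
            if len(lits) == 1:
                edges.add((lits[0], v))    # single literal: direct edge
            else:                          # operation 2: composite node
                comp = "&".join(sorted(lits))
                nodes.add(comp)
                edges.update((lit, comp) for lit in lits)  # constituent -> composite
                edges.add((comp, v))
    return nodes, edges

nodes, edges = expanded_network(rules)
```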
Stable motifs are certain strongly connected components of the expanded network, which I am going to define next!!
Before defining stable motifs, I will say a few things about what the expanded network gives us!!
According to the authors of that paper, the expanded network buys us the following:
Consequence of (1):
Consequence of (2):
John Baez said:
What's a 'TRN'? I'm assuming a 'GRN' is a gene regulatory network.
"A gene regulatory network (GRN) is often referred to as a transcriptional regulatory network
(TRN), although the former is more general than the latter." (Stewart et al. book, p. 139)
John Baez said:
Anyone who mentions 'fractal', 'golden ratio' and 'Fibonacci' in the same sentence makes me afraid.
I don't think it's too concerning. From the book with Ian Stewart:
Yes, I saw this chart in the paper I was reading. Maybe this is a case of people who haven't noticed that crackpots love to drop references to fractals, Fibonacci numbers and the golden ratio much more than ordinary scientists, and thus haven't learned that a sentence like the one they wrote is alarming rather than intriguing to many of us.
You may have heard of the physicist Paul Davies. He got in trouble talking about black holes and the golden ratio.
I realised I misunderstood the labeled Boolean graph here. I feel the following may be the correct interpretation of labeledBoolean.png: the definition of the updating functions naturally gives a labeled graph when logical operators like NOT are used in their definition. Previously, I misinterpreted it by thinking that "we are incorporating an additional labeling on the graph associated with the Boolean network".
What graph do you draw for OR? Or is that boolean expression not allowed here?
By the way, I have a lot of questions about what you've been explaining - I don't understand it.
For , I would draw
Okay. But that's not what I asked about.
John Baez said:
Okay. But that's not what I asked about.
I understood the problem. I am not able to distinguish between AND and OR by drawing a labeled graph. However, I am able to distinguish them in the expanded graph by introducing the composite node.
IMG_0377.PNG
BC is the composite node here.
John Baez said:
By the way, I have a lot of questions about what you've been explaining - I don't understand it.
I will try my best to answer. Also, sorry for not explaining well. I am trying to improve my explanation.
I think NOT in the Boolean network transforms into a complementary node in the expanded network, and hence a negated literal in the former becomes an ordinary node in the latter. Here, by "the former" I mean the Boolean network and by "the latter" its expanded network.
I'll probably ask my questions tomorrow, after rereading what you wrote.
Okay, I am also learning and writing at the same time. So maybe tomorrow my understanding will be better than what I have today.
I think the dynamics of a Boolean network (i.e. a Boolean network along with its updating functions on nodes) naturally gives us a labeled graph, whose signed edges represent the influences of the state of a neighbour of a vertex on the state of that vertex. I think the graph structure represents a synergistic (cumulative) influence of the states of all the neighbours of a vertex on the state of the vertex via "logical combinations" (especially AND) of the states of the neighbouring vertices at time t, which determine the state of the vertex at time t+1.
In the attached file, I tried to explain it through examples.
I feel the logical operator "NOT" is the reason for using the labeled graph structure on the graph associated to the Boolean network.
The last post is my current understanding of what I think a Boolean network and its associated graph mean. It seems according to this understanding I need to reformulate some of the definitions here. However, before formalising fully, as @John Baez suggested before, first I am trying to figure out the authors' reduction methods discussed in that paper based on the understanding in my "last post". Although I am not fully sure, it seems my last post makes more sense than what I was saying till yesterday night.
I guess if you were allowed two time steps, you could represent something acting as OR, by negative edges from the two input nodes to a third node and a negative edge from that to a fourth node.
Hmm. This seems messy, and I don't feel confident that the authors are doing things optimally, so instead of trying to understand this I'll ask some questions about the "big picture" - basically, why are they doing all this stuff.
Let me state my understanding so far. I'm going to completely avoid using math symbols or formulas, because those are often distracting when you're trying to understand basic ideas.
They start with a boolean network, which is a simple and clear concept, at least in my definition back here.
Then they try to build a graph from that boolean network - I guess a graph with edges labeled by signs. As far as I can tell, this graph is supposed to say whether one node affects another node "positively" or "negatively" (or not at all, if there's no edge). But I don't see how to determine this for an arbitrary boolean network: e.g. I can say
If it's Monday I'll eat a sandwich when I see it's 2 pm; if it's Tuesday I won't eat a sandwich when I see it's 2 pm
Does it being 2 pm affect my sandwich eating positively or negatively? This question has no answer; it's not a good question.
So, I'm already confused about what they're trying to do.
Then they try to build another graph, starting from the first one, called the "expanded network" of the Boolean network. Adittya says it's obtained from the first graph using two operations.
1) First, for each node we introduce a "complementary node" which is a kind of negation of the original one. E.g. if we had a node that means "I'm eating a sandwich", we'd put in a new node meaning "I'm not eating a sandwich". This is simple enough.
2) Then, we introduce a bunch of nodes called "composite nodes", using a procedure that's too complicated for me to understand - since I don't know the goal of this procedure.
Adittya did kindly try to explain the goal of doing 1) and 2):
Consequence of (1):
- (a) It allows us to evaluate the inhibitory effect of a node on the rest of the network.
- (b) By assigning the negation of the original Boolean function to every complementary node, it also explicitly considers how the other nodes can promote the inactivation of a given node.
Consequence of (2): We identify synergistic interactions (via AND) of the neighbouring vertices of a vertex on the vertex.
I still feel very confused. It almost seems that the authors don't like boolean networks, so they are trying to build a graph that expresses most of the information in the original boolean network. But they don't seem to be claiming their expanded graph captures all the information in the original boolean network, do they?
Is there any way to say what information they're trying to capture, and what information they're throwing out? Presumably the information they're trying to capture is supposed to be "more important" in some way.
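John's sandwich example can be phrased as a non-monotone update rule: the effect of the "2 pm" input flips depending on the day, which under one invented encoding is just an XOR, and XOR has no well-defined sign. A small check that makes this precise by computing the sign of each 0-to-1 input flip over all contexts:

```python
from itertools import product

def sign_of_input(f, n, i):
    """Collect the signs of the change in f when input i flips 0 -> 1,
    over all settings of the other n-1 inputs."""
    signs = set()
    for s in product((0, 1), repeat=n):
        if s[i] == 0:
            flipped = list(s)
            flipped[i] = 1
            signs.add(f(*flipped) - f(*s))  # +1 activating, -1 inhibiting, 0 no effect
    return signs

# "Eat a sandwich at 2 pm on Monday but not on Tuesday" behaves like an XOR
# of the two inputs (one invented reading of John's example):
xor = lambda day, two_pm: day ^ two_pm
```

For `xor` the set of signs contains both +1 and -1, so no single sign can label the edge; for a monotone rule like AND only one nonzero sign ever occurs. This seems to be exactly the information a signed graph keeps and a general Boolean network can violate.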
John Baez said:
I'm glad to see the key idea of 'fibration' for these GRNs seems to be taken from my friend Eugene Lerman and a collaborator: I remember Eugene working on it and it's nice to see someone has picked up on it!
Looking around for other applications of fibrations, I see there's the FibLang work of @Fabrizio Romano Genovese, @fosco and @Caterina Puca (here and here). These look to understand natural language and its acquisition via fibrations. They rely on the result that any functor factors through a fibration. There's also work on locating obstructions to a functor being a fibration by Fabrizio and others.
On the other hand, I see that those using graph fibrations in biology are also interested in slight departures, which get called pseudosymmetries, quasifibrations, and pseudo-balanced colorings (sections 13.5.1-13.5.3 of the book).
I wonder if there's some useful cross-over. Grammatical language would resemble the latter's reduced gene networks, the logic of cognition and of the cell.
David Corfield said:
It would be interesting to look at commonalities and differences of approach in this field. In a paper along the lines of that Symmetries of Living Systems book I mentioned elsewhere,
- Luis A. Álvarez-García, Wolfram Liebermeister, Ian Leifer, Hernán A. Makse, Complexity reduction by symmetry: uncovering the minimal regulatory network for logical computation in bacteria,
I'm enjoying this paper, not because it uses fibrations, but because it makes sense to me both biologically and mathematically.
Yes, it's nicely organised. The book has Makse also as one of its authors and covers similar ground, but at much greater length especially on the pure mathematics, as well as several further topics.
John Baez said:
Then they try to build a graph from that boolean network - I guess a graph with edges labeled by signs. As far as I can tell, this graph is supposed to say whether one node affects another node "positively" or "negatively" (or not at all, if there's no edge). But I don't see how to determine this for an arbitrary boolean network: e.g. I can say
Sorry, I was a bit distracted from the afternoon till night. I was attending the post-marriage ceremony of one of my closest friends (who is coincidentally also a mathematician, working in algebraic geometry). I just returned from the ceremony venue.
For me, given a Boolean network , the signed edges of the graph may not say whether one node affects another node "positively" or "negatively". I may be misunderstanding something fundamental, but my interpretation of them is the following:
In other words, the cumulative effect of the states of the elements of (via logical operations) determines the state of .
In a way, logical operations on the states of the source vertices determine the state of the target vertex.
John Baez said:
Hmm. This seems messy, and I don't feel confident that the authors are doing things optimally,
I am also feeling the same!!
John Baez said:
I still feel very confused. It almost seems that the authors don't like boolean networks, so they are trying to build a graph that expresses most of the information in the original boolean network.
I feel following are some features of the expanded network:
I am still trying to understand how they are using their definition of stable motifs in their network reduction method. In their network reduction method, they start by keeping the states of the nodes of the stable motifs fixed. However, I am not able to see why they suddenly define the notion of stable motifs in their expanded network and then fix their states, thus probably justifying the name "stable".
It is also not clear to me
I feel they think their definition of a stable motif is good enough to assume that the states of the nodes of the stable motif will eventually be constant. However, at the moment, I am not able to see this from their definition of stable motif.
I am thinking about these things!!
John Baez said:
But they don't seem to be claiming their expanded graph captures all the information in the original boolean network, do they?
Is there any way to say what information they're trying to capture, and what information they're throwing out? Presumably the information they're trying to capture is supposed to be "more important" in some way.
I feel they are trying to define the notion of stable motifs in their expanded network, which by definition is not possible to define in the original Boolean network. (However, I doubt whether we really conceptually need to define stable motifs their way in the expanded network; it may suffice to define certain usual motifs in the signed graph associated to the Boolean network.) According to the authors, their definition of a stable motif is good enough to assume that the states of the nodes of the stable motif will eventually be constant. However, at the moment, I am not able to see this from their definition of stable motif. They then use this fact to fix the states of the stable motifs as the starting point of their network reduction method.
Hi!! I think I found a formal way to express the definition of Boolean network and its associated signed graph, which expresses the information that I informally discussed here
Below, I am writing down what I thought about it:
Boolean network:
A Boolean network consists of the following:
Let .
We define the update function of the vertex as the function .
Thus, by definition, we have .
I interpret the above as follows:
The logical operations on the states of the neighbouring nodes of determine the state of the node . Here, I would expect that the definition of the function will involve the logical operations NOT and AND. The composition of functions gives us the states of the neighbouring nodes of .
Next, I will construct a -labeled graph from as follows:
Thus, I feel the -labeled graph behaves like a graphical model of the dynamics of the Boolean network (i.e. how the Boolean network changes from the state to the -state).
I'm struggling to understand what you wrote. What's the role of the functions ? Say it in words without symbols, please. Why does a boolean network need to involve functions, one for each vertex, from the integers to boolean functions on the set of vertices that affect that vertex? What do the integers represent here?
My definition of 'boolean network' didn't involve integers at all.
I just have enough information to update the boolean at each vertex, as a function of the booleans at the vertices that affect it. I don't understand why your definition is so much more complicated, and the introduction of integers is the most mysterious part.
John Baez said:
What's the role of the functions ? Say it in words without symbols, please.
The functions tell the states of the neighbouring vertices of at a given time.
John Baez said:
Why does a boolean network need to involve functions, one for each vertex, from the integers to boolean functions on the set of vertices that affect that vertex? What do the integers represent here?
I wanted to express the fact that the function tells us that the state of the vertex is determined by the states of the neighbours of . For this, I needed a function that takes in the time and gives out the time . Since I cannot subtract in , I took . I formalized this by taking the composition , where the function .
John Baez said:
My definition of 'boolean network' didn't involve integers at all.
I just have enough information to update the boolean at each vertex, as a function of the booleans at the vertices that affect it. I don't understand why your definition is so much more complicated, and the introduction of integers is the most mysterious part.
I am sorry!! Maybe there is a lot of redundant information in my definition that I am yet to see. I am trying to make my definition simpler.
I feel my is your Boolean function at the vertex .
To incorporate the "updating information" formally in the setting, as I explained in my previous post, I wrote the other functions, which I guess is making my definition complicated. However, I agree that is the most important information for updating the Boolean at each vertex.
Adittya Chaudhuri said:
John Baez said:
Why does a boolean network need to involve functions, one for each vertex, from the integers to boolean functions on the set of vertices that affect that vertex? What do the integers represent here?
I wanted to express the fact that the function tells us that the state of the vertex is determined by the states of the neighbours of . For this, I needed a function that takes in the time and gives out the time . Since I cannot subtract in , I took . I formalized this by taking the composition , where the function .
Okay, so is time. But I don't think "time" should be part of the definition of a boolean network. Time, and in particular integer-valued-time, is part of one possible semantics for boolean networks.
I wanted to express the fact that the function tells us that the state of the vertex is determined by the states of the neighbours of . For this, I needed a function that takes in the time and gives out the time . Since I cannot subtract in , I took .
If you're trying to compute the next state as a function of the current state, you just need to describe a function that maps states to states. You don't need to mention time at all. (And you definitely don't need to subtract 1, since subtracting one takes us one time step into the past, not the future!)
As I've said before, I think a boolean network is
1) a finite set of nodes,
2) a set of nodes for each node , and
3) a function for each node .
That's all.
A state of a boolean network is a function
From a boolean network it's easy to define a time evolution function
that maps states to states. (I'll leave this as an exercise: here we use the functions .)
Thus, if we have a state , we get a sequence of states which describe how the state evolves in time, and we can define the state at time to be .
This seems much simpler than bringing in the mysterious functions and , and all this stuff about integers, including the function .
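To make the exercise above concrete, here is a minimal Python sketch of this definition. All names here are hypothetical (not taken from any paper we're discussing): a boolean network is a finite set of nodes, a set of input nodes for each node, and an update function for each node; the time evolution map applies all the updates at once.

```python
# A minimal sketch (hypothetical names) of the boolean network definition
# above: a finite set of nodes, a set of input nodes for each node, and an
# update function for each node. A state assigns a boolean (here 0 or 1)
# to each node.

def evolve(nodes, inputs, update, state):
    """One step of the time evolution map T: states -> states.
    Each node's new value depends only on the current values of its inputs."""
    return {v: update[v]({u: state[u] for u in inputs[v]}) for v in nodes}

# Toy example: A copies B, while B negates A.
nodes = {"A", "B"}
inputs = {"A": {"B"}, "B": {"A"}}
update = {
    "A": lambda s: s["B"],
    "B": lambda s: 1 - s["A"],
}

# Iterating T gives the state at time n; this toy network cycles with period 4.
state = {"A": 0, "B": 0}
for _ in range(4):
    state = evolve(nodes, inputs, update, state)
assert state == {"A": 0, "B": 0}  # period-4 cycle: back to the start
```

Note that time never appears in the data of the network itself, only in the iteration of `evolve`, matching the point made above.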
Also - this is a smaller stylistic point - there's no need to index the vertices by numbers . Indexing by numbers is often a mistake. You can use the vertices to name themselves! For example when you write "the vertex " you can just write "the vertex ", and when you write you can just write , and when you write you can write . That simplifies some expressions, and it avoids bringing in these irrelevant numbers , which just distract us from the actual ideas.
John Baez said:
As I've said before, I think a boolean network is
1) a finite set of nodes,
2) a set of nodes for each node , and
3) a function for each node .That's all.
A state of a boolean network is a function
From a boolean network it's easy to define a time evolution function
that maps states to states. (I'll leave this as an exercise: here we use the functions .)
Thus, if we have a state , we get a sequence of states which describe how the state evolves in time, and we can define the state at time to be .
Thanks very much!! I find your ideas very interesting and much more elegant and natural than my approach of incorporating the dynamics of a Boolean network by creating an "unnecessarily complicated definition of a Boolean network".
In particular, I find your idea of generating the time evolution function of a Boolean network from the underlying structure of the Boolean network very beautiful. In other words (as you mentioned as part of the exercise you proposed), every state induces by restriction a family of maps , which gives us a map that takes to .
Thus, . I also liked very much the idea of using the n-fold self-composition of as the evolution of the Boolean network at time ; it is much more elegant than my approach of using the map to talk about time evolution.
John Baez said:
This seems much simpler than bringing in the mysterious functions and , and all this stuff about integers, including the function .
I fully agree. Thanks very much!!
John Baez said:
Also - this is a smaller stylistic point - there's no need to index the vertices by numbers . Indexing by numbers is often a mistake. You can use the vertices to name themselves! For example when you write "the vertex " you can just write "the vertex ", and when you write you can just write , and when you write you can write . That simplifies some expressions, and it avoids bringing in these irrelevant numbers , which just distract us from the actual ideas.
Thanks!! Yes, I agree with your point. I will use these notational conventions from now on.
John Baez said:
As I've said before, I think a boolean network is
1) a finite set of nodes,
2) a set of nodes for each node , and
3) a function for each node .That's all.
Thanks!! Now, the above definition makes full sense to me!!
I think that, associated to any Boolean network (i.e. your definition), we can still construct a -labeled graph as
Although I do not directly feel the need of "constructing this -labeled graph at all". Except, yes, since this is a graph in a graph-theoretic sense, we may understand the Boolean network from the point of view of graph theory, or the homology theory of monoids, or by identifying the motifs in this -labeled graph via Kleisli morphisms.
Also, maybe another interesting aspect of representing a Boolean network as a -labeled graph is the availability of "ACT compositional theorems based on structured cospans" for creating a compositional framework for your definition of Boolean networks.
However, I think it could be interesting to define a double category of Boolean networks and then a double functor from this double category to the double category of -labeled graphs (as we defined in our paper) as a functorial semantics of Boolean networks (your definition)?
So, first I want to check the following:
Does the association give a functor from the appropriate category of Boolean networks to the category ?
I'm glad my ideas seem useful.
Adittya Chaudhuri said:
I think then associated to any Boolean network (i.e your definition), we can still construct a -labeled graph as
- Vertices of are the nodes of .
- From the family of subsets , we get all the directed edges of .
Now, whenever a NOT operator is involved in the definition of , we label the associated edge with ; otherwise we label the edge by . This gives us a directed -labeled graph associated to the Boolean network .
I'm not sure this rule for associating edges with or is well-defined. Suppose is a function of two boolean variables which I will call and given by
(A,B) = (A and B) or (not(A) and not(B))
Do you label the edges with or in this case?
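To see the ambiguity concretely, a quick truth-table check works (a sketch in Python; `f` here stands for the two-variable rule just given, which is XNOR): the effect of flipping A on the output changes sign depending on B, so neither label is well-defined for the edge from A.

```python
# Sketch: the rule above, f(A, B) = (A and B) or (not A and not B), is XNOR.
# Checking how f responds to raising A from 0 to 1 (for each fixed B) shows
# why a single +/- label on the edge from A is not well-defined.

def f(A, B):
    return bool((A and B) or ((not A) and (not B)))

# Signed effect of A on f, at each fixed value of B:
effect = {B: int(f(1, B)) - int(f(0, B)) for B in (0, 1)}
assert effect == {0: -1, 1: 1}  # negative when B = 0, positive when B = 1
```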
I want to use only the "AND and NOT" operators. For the time being I am excluding the OR operator. I know I may sound "very unnatural", but I agree I am confused about how to create the signed graph structure induced from the OR operator.
If the paper we're discussing develops an interesting "network reduction approach", it makes sense to study these questions. If it doesn't, these questions may not be worth studying.
I still do not understand their "network reduction approach". Do you?
John Baez said:
I still do not understand their "network reduction approach". Do you?
It's not yet clear. I am trying to considerably improve my understanding of their method by tomorrow. At least, now we have a "definition of what I think a Boolean network is".
John Baez said:
If the paper we're discussing develops an interesting "network reduction approach", it makes sense to study these questions. If it doesn't, these questions may not be worth studying.
I agree.
They define certain "minimal strongly connected components" of their Expanded networks as Stable motifs. Then, they assumed that the states of the vertices in this component will be fixed eventually. This is the starting point of their network reduction method.
On a different note, I just found this paper Decomposition of Boolean networks: An approach to modularity of biological systems by Claus Kadelka, Reinhard Laubenbacher, David Murrugarra, Alan Veliz-Cuba, and Matthew Wheeler, which studies a decomposition theory of Boolean networks in a non-category-theoretic way. I will read what they have done.
They define certain "minimal strongly connected components" of their Expanded networks as Stable motifs. Then, they assumed that the states of the vertices in this component will be fixed eventually.
Do they assume it or do they prove it? Is it true or not? You can't make something true just by assuming it.
I'm getting more and more suspicious of this paper.
If you decide this paper is junk, maybe you should switch to reading some better ones, like the paper you just mentioned (if it's better), or this one that David pointed out:
which seems pretty good. In fact I think this paper, and related papers by these authors, and the book
provide an easy way to start doing interesting mathematics connected to gene regulatory networks, I thank @David Corfield for pointing us toward these papers!
John Baez said:
They define certain "minimal strongly connected components" of their Expanded networks as Stable motifs. Then, they assumed that the states of the vertices in this component will be fixed eventually.
Do they assume it or do they prove it? Is it true or not? You can't make something true just by assuming it.
I am not able to see the proof in the main body of the paper; in the main body I think they assumed it. Previously, I was mostly focused on the main body of the paper itself and somehow did not look at the Appendix. Now that I have looked at the Appendix, it seems they might have given the proof there. I think I need to read and understand their proofs (page 15 onwards in An effective network reduction approach to find the dynamical repertoire of discrete dynamic networks) before I can conclude whether they have assumed or proved that "the states of the vertices in this component will be fixed eventually".
John Baez said:
If you decide this paper is junk, maybe you should switch to reading some better ones, like the paper you just mentioned (if it's better), or this one that David pointed out:
- Luis A. Álvarez-García, Wolfram Liebermeister, Ian Leifer and Hernán A. Makse, Complexity reduction by symmetry: uncovering the minimal regulatory network for logical computation in bacteria.
which seems pretty good. In fact I think this paper, and related papers by these authors, and the book
- Hernan A. Makse, Paolo Boldi, Francesco Sorrentino and Ian Stewart, Symmetries of Living Systems: Symmetry Fibrations and Synchronization in Biological Networks.
provide an easy way to start doing interesting mathematics connected to gene regulatory networks, I thank David Corfield for pointing us toward these papers!
Yes, I also thank @David Corfield very much for pointing us to these papers. I went through the abstract of the paper Complexity reduction by symmetry: uncovering the minimal regulatory network for logical computation in bacteria. It looks very interesting to me too. Since I was already reading about the mathematics of graph fibrations from the paper Fibrations of graphs by Boldi and Vigna, I find it would be interesting (both from the point of view of usefulness in biology and from the point of view of developing interesting mathematics inspired by biology) to understand how these concepts can be used to do useful things for studying networks in biology, and GRNs in particular.
I also read the abstract of the "paper on decomposition of Boolean networks" that I shared before. At the moment I am finding the paper and the book David suggested more interesting, so I would prefer to read those before I start reading the "decomposition paper".
Also, along with this, I would like to understand the proofs present in the Appendix of the paper An effective network reduction approach to find the dynamical repertoire of discrete dynamic networks to verify whether this paper's claim and methodology are compatible with each other.
I'd be very interested in thinking about this graph fibration approach. Here's a question:
We know that a graph homomorphism is a fibration if and only if the functor induced between the respective free categories is a discrete fibration. We also know that any functor between categories can be factorized through a fibration. What then if we find a graph homomorphism that is close to being graph fibration (as in 13.5 of the book)? We might measure this by inducing a functor on the respective free categories, and then using techniques for measuring departure from being a discrete fibration, as here. Would the factorization of this functor through a fibration correspond to factoring the graph homomorphism through a graph fibration?
Perhaps this is an area to investigate later, but there's bound to be a great range of ideas from CT to bring over to the biology.
David Corfield said:
I'd be very interested in thinking about this graph fibration approach.
Thanks!! I am very glad to know that.
David Corfield said:
We also know that any functor between categories can be factorized through a fibration.
By "fibration" do you mean a [[Grothendieck fibration]], or a fibration in the [[canonical model structure on Cat]], or in some other model structure on ? From the reference you suggested (which measures departure from being a discrete fibration), I think you meant Grothendieck fibrations?
If I assume Grothendieck fibration, then I am not able to see how this statement is true when we are not working in the category of groupoids (which is certainly the case for us, because we are working in the framework of directed graphs, and hence we would get the free category of a graph rather than the free groupoid of a graph). More precisely, fibrations in the [[canonical model structure on groupoids]] are precisely the [[isofibration]]s, which are the same as Grothendieck fibrations (Example 4.2 in [[isofibration]]). However, I think this statement is not true when we are working in (for example, see here).
David Corfield said:
there's bound to be a great range of ideas from CT to bring over to the biology.
I agree !!
David Corfield said:
We might measure this by inducing a functor on the respective free categories, and then using techniques for measuring departure from being a discrete fibration, as here.
Thanks very much!! I find this idea quite interesting!! I will read the paper you shared.
Adittya Chaudhuri said:
By "fibration" do you mean...
It's a reference to the result recorded at [[comprehensive factorization system]], factorizing a functor into a final functor and a discrete fibration.
Thanks for clarifying!! Now, I understand your question:
"Would the factorization of this functor through a fibration correspond to factoring the graph homomorphism through a graph fibration?"
Interesting!! I am thinking on this question.
At the moment, I am not able to see any "answer to your question". I will work with some small examples first.
David Corfield said:
It's a reference to the result recorded at [[comprehensive factorization system]], factorizing a functor into a final functor and a discrete fibration.
I was not aware of this particular factorisation until today. So, thanks very much again for mentioning this factorisation. I would be curious to know what would be a suitable analogue of a final functor "for graphs".
We might see for , a homomorphism of graphs, whether the induced functor between free categories, , factorizes in such a way that the intermediate category is free on some graph.
I’m pretty sure that any discrete fibration over a free category has free domain. A morphism in lies over a composite of indecomposables in , which gives a unique factorization of into morphisms lying over indecomposables, repeatedly applying the discrete fibration property; and a morphism over an indecomposable is actually indecomposable because reflects identities.
Thanks @Kevin Carlson for the explanation !!
@David Corfield , then it seems that for every homomorphism of graphs , the induced functor between free categories factorises (via the comprehensive factorization system) such that the intermediate category is free on some graph.
In a way, I think it also means that the comprehensive factorization system of induces a comprehensive factorization system on the Kleisli category of the monad associated to the free-forgetful adjunction between and .
@John Baez here you mentioned that a Boolean network is a "sort of labeled hypergraph". Inspired by your description, I was trying to think about how the underlying graph structure of a Boolean network influences the states of a particular vertex in the Boolean network . In particular, I was trying to see whether the Boolean "rig" can be used to capture indirect and cumulative influences.
To understand this, I produced a mental image of "such a graph". I am attaching a simple example of such a mental image.
compositeindirectBoolean.PNG
It is also possible that I am misunderstanding something while creating the mental image.
You might be interested in Makse et al.'s ways of representing hypergraphs in the chapter where they extend fibrations to hypergraphs:
Given the "noise" of biological systems perturbing perfect fibrational symmetry, leading to the search for what Makse et al. call "quasifibrations", there might be a useful framing where given a graph one looks to find a graph and homomorphism , which in some sense minimizes the size of while maximizing the degree to which is a fibration.
By the way, I'm not sold on calling fibrations 'symmetry'. I see that those authors want to push for a generalized concept of symmetry more suited to biology than physics: they make a big case for it at the beginning of their book. I agree that it's very interesting to study systems that admit a kind of 'projection', e.g. a fibration, down to some simpler system! But I don't yet want to call this 'symmetry'.
Earlier Stewart and Golubitsky had made a case for groupoid symmetries in biology:
I'm completely sold on calling groupoid actions 'symmetries': there's enough similarity in the technical machinery between groups and groupoids that we can generalize lots of methods from the former to the latter.
If every fibration gives rise to a groupoid, such that can be seen as 'modding out by the groupoid action', then I'll be very happy and I'll see how to understand fibrations as 'symmetry'. But I don't know any result like this.
Mind you, discussing semantic boundaries, like what precisely counts as 'symmetry', is not very interesting to me. Far more interesting is: what can you do with fibrations of graphs in biology? I could say a bit about this sometime.
David wrote:
We know that a graph homomorphism is a fibration if and only if the functor induced between the respective free categories is a discrete fibration.
I believe it, but why do "we know" it? Is this some sort of well-known result?
Just a snatched moment.
That's one way of defining a graph fibration: as a graph homomorphism inducing a fibration on free categories.
Okay, but who does that? The big book gives a more nuts-and-bolts definition of graph fibration (Definition 6.7) as a graph homomorphism such that blah-di-blah-di-blah. Do they give the more high-level definition somewhere?
John Baez said:
If every fibration gives rise to a groupoid, such that can be seen as 'modding out by the groupoid action', then I'll be very happy and I'll see how to understand fibrations as 'symmetry'. But I don't know any result like this.
The only partial result that I can think of:
Let be a fibration of categories equipped with a splitting [[cleavage]] , where is the set of cartesian arrows in with respect to the functor . Now, consider the category ,
Then, the map defines an action of the category on the set in the following sense:
The last two conditions (above) should follow from the fact that is a splitting cleavage.
In other words, it's a rewording of the fact that "contravariant functors correspond to the splitting fibrations". So, the above description defines a contravariant functor
The big book seems to be arguing that a fibration resembles a map where is a group acting on a set, in that the quotient is a kind of 'simplified version' of where any two points related by a symmetry are identified. So it might be interesting to see if anything along those lines can be turned into a theorem.
But I don't really care too much about whether graph fibrations are 'like symmetries'. I think the biological applications are more interesting.
I think given an action of a group on a set , we can define an action groupoid . Then, I think the natural projection map is a fibration of groupoids .
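For the record, here is a hedged LaTeX sketch of this standard construction (the projection here is taken down to the one-object groupoid on the group, which is one natural reading of "the natural projection map"):

```latex
% Sketch, using the standard definitions: the action groupoid X//G of a
% group G acting on a set X, and the projection down to G viewed as the
% one-object groupoid BG.
\[
  \mathrm{Ob}(X /\!/ G) = X, \qquad
  \mathrm{Hom}_{X /\!/ G}(x, y) = \{\, g \in G \mid g \cdot x = y \,\}.
\]
\[
  p \colon X /\!/ G \longrightarrow \mathbf{B}G, \qquad
  p(x) = \ast, \qquad p\bigl(g \colon x \to g \cdot x\bigr) = g.
\]
% Each g in G has, at each object x, the unique lift g : x -> g.x, so p is
% a discrete fibration, and in particular a fibration of groupoids.
```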
John Baez said:
The big book seems to be arguing that a fibration resembles a map where is a group acting on a set, in that the quotient is a kind of 'simplified version' of where any two points related by a symmetry are identified. So it might be interesting to see if anything along those lines can be turned into a theorem.
I see. I got your point!!
John Baez said:
But I don't really care too much about whether graph fibrations are 'like symmetries'. I think the biological applications are more interesting.
I agree.
Now, let's see what happens if we assume David's definition of fibration of graphs , and apply the discussion in
I am assuming the following definition:
A homomorphism of graphs is called a graph fibration if is a discrete fibration.
Since we are in the framework of discrete fibrations, we do not need to worry about the cleavage, as the choice of cartesian lift is uniquely determined by the definition of a discrete fibration. Also, note that discrete fibrations are splitting fibrations by default. Thus we can legitimately apply the above discussion to get a functor .
Adittya Chaudhuri said:
I think given an action of a group on a set , we can define an action groupoid . Then, I think the natural projection map is a fibration of groupoids .
Right, good point! But notice that this projection is doing something completely different from what the authors of that book are discussing: simplifying by identifying points in the same orbit. This is simplifying the action groupoid down to the group itself!
Thanks. I see. I got your point. I think I need to read the definition of fibration in that book in more detail.
I think it's just a map of graphs s.t. the resulting map of free categories is a discrete fibration - but they say it a lot more slowly, since they're not using category theory.
Thanks. I see. But in that case, we may think of the following
However, I agree, at the moment my description is not "expressing" what you said here
What I really want to think about is not this abstract nonsense but the five-step method of simplifying graphs explained in this paper:
Luis A. Álvarez-García, Wolfram Liebermeister, Ian Leifer and Hernán A. Makse, Complexity reduction by symmetry: uncovering the minimal regulatory network for logical computation in bacteria.
They seem to be using graphs (not even signed graphs???) as a way of describing a certain class of discrete possibilistic dynamical systems, but they don't spell this out clearly enough for me.
I got your point. I will read and try to understand their "5-step graph simplification method".
By a discrete possibilistic dynamical system I mean a set together with a map from to its power set , which describes how each state (= element of ) can evolve to any state in the subset .
Of course 'discrete possibilistic dynamical system' is just a silly name for a relation from a set to itself. But there's a nice analogy between probability theory and possibility theory, which this term is supposed to evoke. In probability theory we assign a number between 0 and 1 to each event, which tells us how probable that event is. In possibility theory we just assign either 0 or 1 to each event, which tells us whether that event is possible.
If we replace the power set monad by the Giry monad, we get a discrete probabilistic dynamical system, where each state evolves to a probability distribution on states.
I guess people usually say 'stochastic' instead of 'probabilistic' here, and similarly they say 'nondeterministic' instead of 'possibilistic'.
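A minimal Python sketch of this idea (names are hypothetical): the relation on the state set is packaged as a successor function from states to sets of states, and a directed graph yields one by taking the targets of the outgoing edges of each node.

```python
# A minimal sketch (hypothetical names): a discrete possibilistic dynamical
# system is a map from states to sets of states. A directed graph gives one:
# the possible next states of a node are the targets of its outgoing edges.

def successors(edges, x):
    """All states reachable from x in one time step."""
    return {w for (v, w) in edges if v == x}

edges = {("a", "b"), ("a", "c"), ("b", "a")}
assert successors(edges, "a") == {"b", "c"}
assert successors(edges, "c") == set()  # "c" is a dead end: nothing is possible
```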
Thanks!! I see. I was not aware of these things; I will read up on them. But I got the overall idea from the definitions that you wrote here.
Anyway, the idea is simple: if we have (directed) graph, we say it's possible to go from node v to node w if there's an edge from v to w.
You said something called "possibility theory". Is it a separate branch of Math like Probability theory?
Or did you mean just "directed graphs" ?
When I said directed graphs, I meant directed graphs. When I said possibility theory, I was referring to possibility theory:
... there's a nice analogy between probability theory and possibility theory, which this term is supposed to evoke. In probability theory we assign a number between 0 and 1 to each event, which tells us how probable that event is. In possibility theory we just assign either 0 or 1 to each event, which tells us whether that event is possible.
Sorry, I somehow missed that portion!! Thanks, I got the point now.
John Baez said:
Anyway, the idea is simple: if we have (directed) graph, we say it's possible to go from node v to node w if there's an edge from v to w.
Thanks. Now, I realised what you meant here.
John Baez said:
Anyway, the idea is simple: if we have a (directed) graph, we say it's possible to go from node v to node w if there's an edge from v to w.
Why are we not considering "directed path" instead of a "directed edge" for this reachability condition?
David Corfield said:
You might be interested in Makse et al.'s ways of representing hypergraphs in the chapter where they extend fibrations to hypergraphs:
Yes. Thanks very much!!
Adittya Chaudhuri said:
John Baez said:
Anyway, the idea is simple: if we have a (directed) graph, we say it's possible to go from node v to node w if there's an edge from v to w.
Why are we not considering "directed path" instead of a "directed edge" for this reachability condition?
I was talking about a discrete possibilistic dynamical system: we can go from at time to at time if there's an edge from to . This seems to be what that book is talking about. Their 5 methods of simplifying graphs are supposed to be motivated by simplifying their dynamical systems.
I see. Thanks.
John Baez said:
Okay, but who does that?
The big book has on p. 200:
Okay, thanks! I'm glad they ultimately come clean and use category theory to define graph fibrations. Their presumably 'easier' approach earlier in the book is quite a slog for those of us blessed with knowledge of Grothendieck fibrations.
It's interesting that they only consider graph morphisms , not the more general Kleisli morphisms that Adittya and I studied (which are simply arbitrary functors between the free categories on these graphs). A Kleisli morphism can send an edge of to a formal composite of edges of . Kleisli morphisms are no good if one is considering each edge as taking one time step to traverse, as they sometimes seem to be thinking (e.g. when they're thinking of gene regulatory networks as synchronized processes). But they're fine if one is not worried about time steps. Then you could say a Kleisli morphism between graphs is a fibration iff is a fibration - i.e., it's a fibration iff it is a fibration.
Regarding the issue of symmetry, I recall making considerable use of that Alan Weinstein paper Groupoids: unifying internal and external symmetry for my paper on groupoids, later Ch. 9 of my 2003 book. He considered there "local symmetries" of a tiled floor, so that there are several different kinds of point: outside corners, outside edge points, inside edge points, internal intersections, etc.
I think things are similar here in that one looks for local symmetries between nodes, though perhaps one further step is taken in that we're looking to relate nodes only in terms of isomorphic input trees. The activity of a node only depends on messages sent to it, not on what it sends.
Hence they define a network groupoid as follows:
This enables them to speak in a third way about the situation already described twice via fibrations and via balanced colorings. More on the translation is in
Oh, thanks yet again! Do you know if they prove a theorem relating graph fibrations to these 'network groupoids' of graphs? In what I've read so far, they seem to be hinting at a relation, but I haven't seen them come out and say anything.
No doubt all is revealed in that Stewart 2024 article I just mentioned. Unfortunately for me it's behind a paywall.
Okay, I'll bring my powers to bear on it.
Hi!! Below I am trying to clean up the relation (according to my understanding) between what they consider as fibrations and what I think as fibration and why they are relevant in their context. (By "them", I meant the authors of that "Big book").
Proposition:
The following are equivalent:
(1) is a homomorphism of graphs such that for each vertex in , we have , where is the set of edges in whose target is and is the set of edges in whose target is . (It is their definition of fibrations (attached))
fibrationofgraphs.png
(2) is a homomorphism of graphs such that for each edge in and a vertex , there is a unique edge in such that and (Definition 1 in fibration of graphs)
I feel (2) and (3) will be equivalent for some obvious reasons. However, from the context of applications in biology / the context of that big book, I feel (1) is the most important. Below, I am trying to sketch the proof of the equivalence between (1) and (2) and demonstrate what (1) buys us.
Equivalence between (1) and (2):
(1) ⟹ (2)
Let be an edge in and . Then by (1), . Hence, there exists a unique edge in such that and .
(2) ⟹ (1)
Let be a vertex in . Then, the morphism of graphs induces by restriction a function . Then, by (2), the map is a bijection.
What does (1) give us?
The local isomorphism formulation (1) says that if and are in the same fibre of , then , i.e. their input sets are isomorphic.
However, I am not able to see the following:
"if , then and are in the same fibre".
David Corfield said:
Hence they define a network groupoid as follows:
I am trying to write down my understanding of a network groupoid:
Let be a graph. Then, the network groupoid is a groupoid
whose set of morphisms is
,
law of composition is induced by the composition law of functions.
Note that the vertices of the connected components of have the same input sets and thus can be identified "according to the context of that paper/big book".
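For a plain directed graph (no edge labels), an isomorphism of input sets is just a bijection of in-edges, so two vertices are connected by a morphism of the network groupoid exactly when their in-degrees agree. A toy sketch, under my own minimal encoding:

```python
# Sketch of the network groupoid idea for a plain directed graph:
# two vertices are related iff there is a bijection between their
# input sets -- for a simple graph this is equality of in-degrees.
# My own minimal encoding, not from the thread.
from collections import defaultdict

def input_set_classes(edges, vertices):
    """Group vertices whose input sets are isomorphic (equal in-degree)."""
    indeg = {v: sum(1 for (_, t) in edges if t == v) for v in vertices}
    classes = defaultdict(list)
    for v in vertices:
        classes[indeg[v]].append(v)
    return list(classes.values())

# A path 0 -> 1 -> 2: vertex 0 has an empty input set; 1 and 2 each have one in-edge.
print(input_set_classes([(0, 1), (1, 2)], [0, 1, 2]))  # [[0], [1, 2]]
```

Note that in this example vertices 1 and 2 are identified by the network groupoid even though their input *trees* differ, which is exactly the gap between input-set and input-tree symmetry discussed below.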
John Baez said:
Oh, thanks yet again! Do you know if they prove a theorem relating graph fibrations to these 'network groupoids' of graphs? In what I've read so far, they seem to be hinting at a relation, but I haven't seen them come out and say anything.
I think the following may be true:
Claim:
For any fibration , there is a morphism of groupoids .
I think in the construction of morphism level map, we need to use (1) of
Are you also claiming that your claim is false for graph homomorphisms that are not fibrations?
Say,
I am saying the following:
At the moment, I am not able to see how to construct without using the bijection . Also, I think to prove is a bijection we may need to use the bijection .
I asked a yes or no question, and it would be nice to get a yes or no answer, or else "I don't know". I have trouble understanding what you're trying to say with all these symbols. The great thing about a yes or no question is that it can often be answered with just one word, "yes" or "no", transmitting a lot of information in a single word.
Sorry!! I got your point. Yes, I think we need "fibration".
Okay, good. The result would not be so interesting otherwise!
Thank you. Yes, I got your point.
The big book's preference is to convey this "symmetry" they're interested in via graph fibrations, which we've discussed, and via balanced colorings. A network is colored by coloring its nodes. The coloring is balanced if any two nodes of the same color receive inputs from neighbors with the same colors.
And this is generalised to typed networks, e.g., repressors and activators,
Nodes of the same color have inputs that match each other, as regards both the type of edge and the color of the source of that edge. That is, nodes of the same colors ‘receive’ the same colors from in-neighbors. (p. 17)
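The balanced-coloring condition for typed networks can be phrased as: nodes of the same color have equal multisets of (edge type, source color) pairs over their in-edges. A small sketch with made-up data (my own encoding, not from the book):

```python
# Checking that a coloring is balanced: any two nodes of the same color
# receive the same multiset of (edge type, source color) from in-neighbors.
# Edges are (source, target, type) triples; the example data is invented.
from collections import Counter

def is_balanced(edges, color):
    profile = {}
    for v in color:
        inputs = Counter((etype, color[s]) for (s, t, etype) in edges if t == v)
        if color[v] in profile:
            if profile[color[v]] != inputs:
                return False
        else:
            profile[color[v]] = inputs
    return True

# Two targets b1, b2 of the same color, each activated by an 'A'-colored node.
edges = [("a1", "b1", "activate"), ("a2", "b2", "activate")]
color = {"a1": "A", "a2": "A", "b1": "B", "b2": "B"}
print(is_balanced(edges, color))  # True
```

Adding an extra repressing edge into b1 alone would break the balance, since b1 and b2 would no longer receive matching typed inputs.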
Balanced colorings and fibrations are equivalent -- they're recording isomorphisms between the whole input tree, not just the one-step input set. So more work needs to be done in the case of the network groupoid, since this is defined solely in terms of the input sets. In addition, one looks for coloring subgroupoids
Many results about balanced colorings can be interpreted using coloring subgroupoids,
but the theory is technical. See (Stewart, 2024). (p. 122)
They seem to be suggesting that there's little to gain from the groupoid approach as it requires conditions on subgroupoids that are imitating the definition of a balanced coloring:
It is intriguing that the explanation for the relevance of fibrations to biology is that a common evolutionary mechanism is gene duplication. This corresponds (in many cases) to a preservation of the dynamics, as maintained by the same fibrational symmetry, but then allows later symmetry-breaking as different regulators input the two copies. They show how this process leads to "computational" structures, such as memory and clocks.
David Corfield said:
Balanced colorings and fibrations are equivalent -- they're recording isomorphisms between the whole input tree, not just the one-step input set.
I agree. I find the way the "authors are recording isomorphisms between the whole input tree, not just the one-step input set" really interesting!! Thanks.
For any fibration, if we color vertices of the same fibre by the same color, then by the definition of fibration of graphs (combining (1) and (2) in ), this colouring is a balanced coloring.
David Corfield said:
So more work needs to be done in the case of the network groupoid, since this is defined solely in terms of the input sets. In addition, one looks for coloring subgroupoids
Hmm.. I agree (at least from the definition of a network groupoid), however, I have not read through the technical portion related to colored subgroupoids.
David Corfield said:
They seem to be suggesting that there's little to gain from the groupoid approach as it requires conditions on subgroupoids that are imitating the definition of a balanced coloring:
If my claim is true here, then I may expect that the association of the network groupoid to a graph can be extended to a functor from the category (whose objects are graphs and morphisms are graph fibrations) to the category of groupoids.
However, at the moment I am not sure about the above result's interpretation or implications in terms of "balanced coloring". I will think on this.
David Corfield said:
Intriguing the explanation for the relevance of fibrations for biology in that a common evolutionary mechanism is gene duplication. This corresponds (in many cases) to a preservation of the dynamics as maintained by the same fibrational symmetry, but then allowing later symmetry-breaking as different regulators input the two copies. They show how this process leads to "computational" structures, such as memory and clocks.
In that "big book", has they defined "what they meant by a "broken symmetry" in Mathematical terms? I mean if we consider "graph fibrations" as a kind of "symmetry", then natrually the definition of a "symmetry break" that comes to my mind is a homomorphism of graphs which is not a fibration. I think you already hinted about this here
, and also here , or am I misunderstanding something?I just found there is a chapter called "quasifibration" (Chapter 13.5.2) in that big book. I will read this portion. May be quasifibrations are the right language to describe broken symmetries (although I am not sure).
I just found a paper Quasifibrations of Graphs to Find Symmetries in Biological Networks by Boldi, Leifer and Makse, where I think the notion of quasifibration is introduced.
The quasifibration section is dealing with situations in the noisy biological world where a network is close to being such that there's a fibration from it to a reduced target. I was wondering earlier whether we could leverage our CT knowledge to say something about this situation, with the idea of factorizing a graph morphism into a final functor (or graph equivalent) and discrete fibration.
The broken symmetry idea comes to prominence in Chap. 18: Synthetic Biology Through Broken Symmetries: the Cell as a Computer
We have seen how the functions of biological networks can be pictured as an orchestra of synchronized instruments captured by the fibration symmetry. In this chapter we elaborate on an even stricter and more taxing criterion for functionality of the network. We ask whether minimal biological circuits can perform core logic computational programs. We show this through the mechanism of ‘fibration symmetry breaking’. These circuits can then be used to build computational molecular machineries. This helps in system biology to design biological machines from the bottom up, following a systematic approach from first principles of symmetry and symmetry breaking.
Gene duplication by itself would be pointless
18.3 Gene duplication in evolution mimics lifting in a fibration
Two networks related by a surjective fibration behave identically. Or rather, conditions are such that nodes in the same fibre synchronise.
But a further step brings in symmetry breaking if the copies gain new different inputs. That's the rest of the chapter.
I guess there could be a connection between the two topics when a network has had a symmetry broken and there's no longer a fibration to the base, one could think of it as a quasifibration perhaps.
Adittya Chaudhuri said:
I just found a paper Quasifibrations of Graphs to Find Symmetries in Biological Networks by Boldi, Leifer and Makse, where I think the notion of quasifibration is introduced.
Right, so that's one approach they advocate. Another is
13.5.3 Pseudo-balanced colorings and repair algorithm without knowledge of balanced coloring
This is in
As I say, could be a place to wield some CT-power.
These two approaches aren't mentioned in the context of symmetry-breaking (chap. 18).
David Corfield said:
The quasifibration section is dealing with situations in the noisy biological world where a network is close to being such that there's a fibration from it to a reduced target. I was wondering earlier whether we could leverage our CT knowledge to say something about this situation, with the idea of factorizing a graph morphism into a final functor (or graph equivalent) and discrete fibration.
Thanks. Yes, I agree. I read the definition of quasifibration (Definition V.1 in Quasifibrations of Graphs to Find Symmetries in Biological Networks, page 7). From this definition, I felt they measured the noise in the fibration, i.e. the degree to which a graph homomorphism fails to be a graph fibration, by taking account of
As you suggested before, I also feel that there could be some interesting (at least mathematically) relation between Definition V.1 in Quasifibrations of Graphs to Find Symmetries in Biological Networks and Proposition 2.9 in How Nice is this Functor? Two Squares and Some Homology go a Long Way.
David Corfield said:
Two networks related by a surjective fibration behave identically. Or rather conditions are such so that nodes in the same fibre synchronise.
But a further step brings in symmetry breaking if the copies gain new different inputs. That's the rest of the chapter.
Thanks. Yes, I got your point. I am reading about it from Circuits with broken fibration symmetries perform core logic computations in biological networks
David Corfield said:
I guess there could be a connection between the two topics when a network has had a symmetry broken and there's no longer a fibration to the base, one could think of it as a quasifibration perhaps.
Yes, but I realised "symmetry breaking" and "quasifibrations" are two very different concepts. Previously, I was misunderstanding.
David Corfield said:
Another is
13.5.3 Pseudo-balanced colorings and repair algorithm without knowledge of balanced coloring
This is in
- Leifer, I., Phillips, D., Sorrentino, F., and Makse, H. A. 2022. Symmetry-driven network reconstruction through pseudobalanced coloring optimization. J. Stat. Mech.: Theo. and Exp., 2022, 073403.
As I say, could be a place to wield some CT-power.
These two approaches aren't mentioned in the context of symmetry-breaking (chap. 18).
Thanks. I will read about this other approach. Since fibrations correspond to balanced coloring, I am not surprised if there is an analogue of quasifibrations in terms of colorings: "Pseudo-balanced coloring".
Adittya Chaudhuri said:
However, I am not able to see the following:
"if , then and are in the same fibre".
In this context, I feel Theorem II.1(2) page 3 in Quasifibrations of Graphs to Find Symmetries in Biological Networks is relevant.
@John Baez mentioned Lerman's paper before here . While reading the "steps" in that complexity reduction paper, I am guided to Lerman's paper Modular dynamical systems on networks. The paper looks very interesting!!
Adittya Chaudhuri said:
However, I am not able to see the following:
"if , then and are in the same fibre".
I don't see where you're getting this from. Isomorphism of input sets isn't enough to say that there's any fibration with and in the same fibre. The identity map is a fibration, and could be highly symmetrical, all vertices having isomorphic input trees, but each fibre is a singleton.
There is the notion of a minimal fibration of , where two vertices are in the same fibre if and only if their input trees (not just sets) are isomorphic. The base graph here is unique (up to isomorphism).
David Corfield said:
I don't see where you're getting this from. Isomorphism of input sets isn't enough to say that there's any fibration with and in the same fibre. The identity map is a fibration, and could be highly symmetrical, all vertices having isomorphic input trees, but each fibre is a singleton.
Yes, I agree. I did not mean that. In that particular post, I was hoping for a notion of fibration where such a thing happens. I said about it here.
I wanted to say the following:
If there is a graph where we have an equivalence relation on the set of vertices defined as if there exists a bijection , such that for all . Then, by Theorem II.1(2) page 3 in Quasifibrations of Graphs to Find Symmetries in Biological Networks, there exists a graph and a surjective graph fibration , whose fibres are the said equivalence classes of .
Then, by Theorem 26 of Fibration of Graphs, the (above) constructed can be chosen to be a minimal fibration.
David Corfield said:
There is the notion of a minimal fibration of , where two vertices are in the same fibre if and only if their input trees (not just sets) are isomorphic. The base graph here is unique (up to isomorphism).
Thanks!! I think one direction is true for any fibration, i.e. in any fibration, if two vertices are in the same fibre, then their input trees are isomorphic (Proposition 23 of Fibration of Graphs).
Adittya Chaudhuri said:
If there is a graph where we have an equivalence relation on the set of vertices defined as if there exists a bijection , such that for all .
Surely, it can't be an identity, . No, it's not
It's that the sources are equivalent.
Sorry, yes. I just edited.
David Corfield said:
There is the notion of a minimal fibration of , where two vertices are in the same fibre if and only if their input trees (not just sets) are isomorphic. The base graph here is unique (up to isomorphism).
I think the other direction follows because otherwise the base of the minimal fibration would not be fibration prime: there would be a pair of isomorphic universal total graphs, which would violate Corollary 27 of Fibration of Graphs.
John Baez said:
What I really want to think about is not this abstract nonsense but the five-step method of simplifying graphs explained in this paper:
Luis A. Álvarez-García, Wolfram Liebermeister, Ian Leifer and Hernán A. Makse, Complexity reduction by symmetry: uncovering the minimal regulatory network for logical computation in bacteria.
In Step 1, they reduce a directed graph (although I think a similar method is applicable for signed graphs) by defining a minimal fibration . So, the new reduced graph of in Step 1 is .
Now, I have the following questions:
Regarding (2)
If I am not mistaken about anything fundamental, I observed that the information about feedback loops (and hence the homology of the graph) may change in this reduction process. (I tried to construct a simple example of this change in the attached file.)
feedbackinflost.PNG
Question:
Under what conditions does a minimal fibration preserve the homology monoids of directed graphs? I have a feeling that when the total graph is connected, a minimal fibration may preserve homology (I am not sure)
Question:
Under what conditions does a minimal fibration preserve feed-forward information?
However, maybe their focus of attention is very different from "feedback or feed-forward information" during the reduction process.
Regarding (1)
In the attached sreenshot
complexityreductiondynamics.png
they said that DeVille Lerman's paper's result (attached
surjectivefibrationslemma.png
)
imply induces synchronization on nodes and in if . Then, they said "This guarantees that the dynamics for both the original network and the base network are the same, all the collapsed nodes dynamics are identical to the representative node they were collapsed into."
I am yet to understand how the above implications follow from the DeVille-Lerman paper's result. I will work on it.
For sure, the number of loops will change. But then with the claim of synchronicity each identified loop would be undergoing the same dynamics.
There may be a place for your graph homology at the later stages
Steps IV and V: Small-scale structure inside the SCCs: logic circuits and cycles
The last two steps consist of understanding the small-scale structures within the SCCs: the logic circuits (Step IV) and the bigger cycles connecting them (Step V).
Thanks. I got the point you are making.
I didn't follow the whole discussion, but there's been quite some work in distributed computing related to graph coverings (and fibrations). Roughly speaking, they are about characterizing whether some problem (e.g. electing a leader) can be solved on a process network, assuming that processors have limited knowledge. For instance
Also, Vigna collected some notes on his website about fibration-based techniques in computer science.
Interesting!! Thanks for sharing the references on the applications of fibration of graphs.
Hi!! I read Steps 1-3 of Luis A. Álvarez-García, Wolfram Liebermeister, Ian Leifer and Hernán A. Makse, Complexity reduction by symmetry: uncovering the minimal regulatory network for logical computation in bacteria.
Below, I am trying to write down my current understanding of Step 1.
Step 1: Collapsing a graph into its minimal base by surjective fibrations:
A graph is said to be fibration prime iff every surjective fibration is an isomorphism. A fibration of graphs is called minimal iff is surjective and is fibration prime.
We start with a graph . Now, either by Theorem 26 or by section 4.1 of Fibration of graphs, we can construct a fibration prime graph along with a minimal fibration . The reduced version of is .
Justification: why this is a reasonable notion of reduction:
Thus, I am convinced that by Step 1 we can legitimately reduce a graph to a graph via a surjective fibration which preserves the dynamical systems, as claimed by the authors:
complexityreductiondynamics.png
I will write about Step 2 and Step 3 tomorrow.
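For plain directed graphs, Step 1 can be sketched as an iterated refinement: start with all vertices the same color and repeatedly split colors by input profiles until stable; the final classes give the coarsest balanced coloring, whose quotient is (in this simple setting) the minimal base. This is my own simplified version of the idea behind the Boldi-Vigna construction, not their actual algorithm:

```python
# Collapse a simple directed graph onto its minimal base by iteratively
# refining a coloring by input profiles until it stabilises, then taking
# one base node per final color.  Simplified sketch, my own encoding.

def minimal_base(vertices, edges):
    color = {v: 0 for v in vertices}                      # start all-equal
    while True:
        profile = {v: (color[v], tuple(sorted(color[s] for (s, t) in edges if t == v)))
                   for v in vertices}
        new_ids = {p: i for i, p in enumerate(sorted(set(profile.values())))}
        new_color = {v: new_ids[profile[v]] for v in vertices}
        if new_color == color:
            break
        color = new_color
    base_edges = {(color[s], color[t]) for (s, t) in edges}
    return color, sorted(base_edges)

# A 3-cycle collapses to a single vertex with a loop.
color, base = minimal_base([0, 1, 2], [(0, 1), (1, 2), (2, 0)])
print(base)  # [(0, 0)]
```

By contrast, on the path 0 -> 1 -> 2 the refinement separates all three vertices, since their input trees differ, so nothing collapses.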
This is great stuff! Thanks for working through it and explaining it.
Thanks very much!! I am trying to finish the rest of the steps by tomorrow. Let's see!! I mean, by today... It's 3:17 AM here :)
I realised Lerman and DeVille wrote quite a number of papers in this direction. I will read them. I am really enjoying their treatment of interconnection of open networks (via labeling of vertices of a graph by some objects of an appropriate category like ) in terms of a suitable notion of vector fields on the total phase space, which arises from the graph structure by which those manifolds/objects in that category are interconnected.
Networks of open systems by Lerman
Networks of hybrid open systems by Lerman and Schmidt
I found this interesting paper Symmetry groupoids and patterns of synchrony in coupled cell networks by Stewart et al., which gives an equivalent framework to surjective fibrations.
DeVille and Lerman said that they discovered that "the framework in Stewart's" paper (via balanced equivalence relation) can be realised as surjective fibrations of graphs introduced by Boldi and Vigna.
Adittya Chaudhuri said:
(via balanced equivalence relation)
In the "big book" they end up talking about "balanced colorings" and "equitable partitions" of nodes. Lots of terms for the same idea.
Interesting. There are also applications to neural networks:
Intriguing that such a simple idea from the point of view of CT has potentially profound uses. After all, a graph fibration is just a presheaf on the free category of the base graph.
Hmm, so what is the construction of a minimal fibration of in the presheaf perspective? It's as though we're given a category (here ) and want to know for which can be seen as an element of . Like going in the opposite direction to finding the universal cover of a space.
The (potentially infinite) input trees are part of the universal cover of the graph, so we're in the territory of discrete directed topology. I guess if we come from continuous topology, we're not used to there being a minimal space. When we see the line as a helix sitting above a circle as its (unique) universal cover, we know that there are circles below that our circle can cover. But in the discrete case, things bottom out.
David Corfield said:
Interesting. There are also applications to neural networks:
- Osvaldo Velarde, Lucas Parra, Paolo Boldi, Hernan Makse, The Role of Fibration Symmetries in Geometric Deep Learning
Intriguing that such a simple idea from the point of view of CT has potentially profound uses.
Interesting!! Thanks!! Yes, indeed it is.
David Corfield said:
After all, a graph fibration is just a presheaf on the free category of the base graph.
Yes, in a way I think this "presheaf perspective" of a discrete fibration also tells us that every graph fibration induces a canonical action of the category on the category whose objects are fibres of and morphisms are functions between the fibres (as I discussed before about this perspective here ).
David Corfield said:
Hmm, so what is the construction of a minimal fibration of in the presheaf perspective?
Thanks. Yes, it is a natural question. I am thinking on this.
David Corfield said:
for which can be seen as an element of
That was a loose way of speaking. I mean for which can be seen as arising from the Grothendieck construction for some , i.e., .
David Corfield said:
David Corfield said:
for which can be seen as an element of
That was a loose way of speaking. I mean for which can be seen as arising from the Grothendieck construction for some , i.e., .
Thanks, that's fine. Yes, if I am not misunderstanding anything, then a prescription of such is in the proof of Theorem 2 (2), Page 25 of Fibration of graphs.
More precisely, let be a graph with an equivalence relation on the set of vertices which satisfies the local in-isomorphism property:
image-3.png
I think we can always define such a relation canonically by defining iff there is a bijection such that . Then, by Theorem 2 (2), Page 25 of Fibration of graphs, there exists a graph (whose construction is also given in the proof) and a surjective fibration of graph whose fibres are the said equivalence classes. Thus, is a discrete fibration. Let be the presheaf associated to . Then, by Grothendieck's correspondence between discrete fibrations and presheaves (Theorem 2.1.2 of Categorical notions of fibration), we have .
In fact, more is true:
By Theorem 26 of Fibration of graphs, we can choose to be minimal, and an algorithmic construction of such is also given in section 4.1 of the "Fibration of Graphs" paper by Boldi-Vigna.
Hi!! I am writing down some of the things that I was thinking today:
As I explained here , I am convinced that if we have a graph, then a reasonable way to reduce its complexity (preserving the dynamics of the graph) is to collapse the graph to a graph by a minimal fibration . Recall, by the definition of a minimal fibration, is surjective and every surjective fibration is an isomorphism. Although there are some ways (for example, Theorem 26 or section 4.1 of the "Fibration of Graphs" paper by Boldi-Vigna) which give a prescription of the required minimal fibration for the reduction of , I was wondering about a particular way of doing the same, in particular, by a suitable action of a group on the graph .
Let me explain!!
Consider a graph and a group . Then according to Section 2.3 of the "Fibration of Graphs" paper by Boldi-Vigna, a left action of on is defined as a group homomorphism . Now, if for some , then the action of gives a bijection between . Hence, the equivalence relation induced by the action of on the vertex set satisfies the local in-isomorphism property image-3.png
Hence, by Theorem 2 (2), Page 25 of the "Fibration of Graphs" paper by Boldi-Vigna, we have a surjective fibration , whose fibres are the orbits of . The vertices of the graph are the orbits of , and the edges from an orbit to an orbit are the edges coming to an element of from all the elements of .
Questions:
Answers to above questions may give us some relationships between reduction of complexities of a graph via action of certain groups on the graph.
On a different note, I am not very happy with the definition of a left action of a group on the graph as given in the Section 2.3 of "Fibration of Graphs" paper by Boldi-Vigna.
Let me argue why I think so:
Recall, a morphism of graphs is called a fibration if is a discrete fibration.
In the light of the above definition of a fibration of graphs, it is natural for me to see how an action of a group on the graph induces an action on the category . I think the action is not that interesting because it canonically induces a strict action of the discrete 2-group on . Thus, I would prefer a higher analog:
Definition:
Let be a graph and be a strict 2-group. Then a left action of on is defined as a functor such that
Note that the above is a standard definition of an action of a strict 2-group on a category, whose smooth analogs are standard in the 2-group gauge theory literature.
I am not sure, but it might be interesting to investigate (at least mathematically) what kind of discrete fibrations over "suitable quotient categories" these higher actions may produce, and under what conditions they give a suitable notion of minimal discrete fibrations.
Thanks for working on this, guys! I've been following along but distracted by thinking about hydrogen.
I have some questions about fibrations of graphs and whether we can really think of two graphs and related by a fibration as describing "the same dynamics" (in some sense).
Define the cyclic graph to be the graph with vertices - think of them as elements of the cyclic group - and an edge from each vertex to the next vertex .
Question: The quotient map induces a graph homomorphism . Is this a graph fibration?
Question: Is the graph minimal iff is prime?
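The first question can be checked mechanically for small cases. A toy sketch (my own encoding of cyclic graphs), testing whether reducing vertex labels mod n gives a fibration from C_{mn} to C_n:

```python
# Is the quotient map C_{mn} -> C_n (reduce vertex labels mod n) a graph
# fibration?  Toy check for a few small cases; the encoding is my own.

def cycle(n):
    return [(i, (i + 1) % n) for i in range(n)]

def is_fibration(G_edges, B_edges, f):
    verts = {v for e in G_edges for v in e}
    for v in verts:
        image = sorted(f[s] for (s, t) in G_edges if t == v)   # f on in-edges of v
        target = sorted(s for (s, t) in B_edges if t == f[v])  # in-edges of f(v)
        if image != target:
            return False
    return True

for m, n in [(2, 3), (5, 1), (3, 4)]:
    f = {i: i % n for i in range(m * n)}
    print(m, n, is_fibration(cycle(m * n), cycle(n), f))  # True in each case
```

This agrees with the observation below that the quotient is always a fibration: every vertex of C_{mn} has exactly one in-edge, matching the unique in-edge of its image in C_n.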
Adittya Chaudhuri said:
Thanks, thats fine. Yes, if I am not misunderstanding anything, then a prescription of such is in the proof of the Theorem 2 (2), Page 25 of Fibration of graphs.
Yes, we know that there's an algorithm to find a minimal fibration. My speculation was a more abstract one, beyond the case of graphs, of wondering how to reconstruct a general category as a [[category of elements]]. This seems rather similar to finding out about a space which spaces it covers.
Adittya Chaudhuri said:
I was wondering about a particular way of doing the same, in particular, by a suitable action of a group on the graph .
I have doubts about this approach. The "big book" shows many cases where a graph has lots of fibrational symmetry but little global symmetry. But the group actions you mention are in terms of global symmetries, i.e. elements of
Adittya Chaudhuri said:
a left action of on is defined as a group homomoprhism .
Sure, group actions generate surjective fibrations, but rarely minimal ones. That's the mantra of the book, that we need to generalise from global symmetry to local (input) symmetry.
So the answer to your questions
Adittya Chaudhuri said:
- 1) Is always a minimal fibration ? (I feel it is not true)
- 2) Given a graph , can we always find a group and an action of on such that is a minimal fibration ?
are both 'No'.
By the way, my claimed example of a fibration should be one that does come from a group action, namely the obvious action of on .
John Baez said:
Question: Is the graph $C_n$ minimal iff $n$ is prime?
I think in this case the minimal fibration will be to a single vertex and arrow. All the vertices of your graph have isomorphic input trees.
John Baez said:
Question: Is the graph $C_n$ minimal iff $n$ is prime?
I think not: I think $C_n \to C_1$ is always a fibration!
Okay, our messages crossed in the aether.
This problem thinks 1 is prime. :upside_down:
Your construction does appear in a nice paper I was reading yesterday
Oh, great! They even had the wisdom to use the best notation.
Personally I don't feel going around and around in a loop of size 30 "computes the same thing" or "is biologically isomorphic" to staying in place in $C_1$. This is the simplest of many examples along these lines. So I think we need to be careful about what we claim when we simplify a network to a smaller network using fibrations. For example, the fact that we can dramatically simplify the gene regulatory network of E. coli using fibrations doesn't imply that all the extra complexity of the full E. coli network is irrelevant to its behavior.
I still think fibrations are important.
There's probably a paper that spells this out carefully; I haven't had time to read much. I felt the paper on E. coli went a wee bit overboard.
John Baez said:
Personally I don't feel going around and around in a loop of size 30 "computes the same thing" or "is biologically isomorphic" to staying in place in $C_1$.
Right. So this marks the difference between a general approach to discrete dynamical systems, as in the article I just mentioned, and this fibration program. The latter is sold on the idea of synchronisation, which duly appears in the subtitle of the "big book", and goes back to Stewart and Lerman's work from several years ago.
Clearly, one could have the nodes of the network synchronised, especially if they're started off in the same state. But is there some particular reason for synchrony in biology? I guess we need to read
They seem to be looking at things from the other end.
"how can a predictable pattern of synchrony, which is essential for survival, be achieved in biology?"
We know there is life. This must involve robust (approximate) synchronization. No life without synchrony. In a network this is only possible when there's fibrational symmetry.
Nature uses the trick of lifting a network by gene duplication to a new network with the same dynamics. But then the functions of these copies can diverge as new connections form. But without synchronization persisting over the course of the duplication, there would be very little chance of a biological system maintaining viability.
They're claiming also that empirically we see this synchronization:
20.4 Gene coexpression synchronization analysis from massive dataset
Interesting! I'm skeptical of "gene coexpression synchronization" in anything but a very loose sense of the word "synchronization", or maybe some biological concept of synchronization that's unlike what we normally think of from that word. I'll have to read 20.4.
John Baez said:
Define the cyclic graph $C_n$ to be the graph with vertices $0, 1, \dots, n-1$ - think of them as elements of the cyclic group $\mathbb{Z}/n$ - and an edge from each vertex $i$ to the next vertex $i+1 \bmod n$.
Question: The quotient map $\mathbb{Z}/n \to \mathbb{Z}/m$ (for $m$ dividing $n$) induces a graph homomorphism $C_n \to C_m$. Is this a graph fibration?
Thanks. Yes, I think it is a surjective fibration. In particular, I think it is minimal if and only if as @David Corfield mentioned here
David Corfield said:
Yes, we know that there's an algorithm to find a minimal fibration. My speculation was a more abstract one, beyond the case of graphs, of wondering how to reconstruct a general category as a [[category of elements]]. This seems rather similar to finding out about a space which spaces it covers.
Thanks!! I see. But are you not referring to my construction here?
David Corfield said:
Adittya Chaudhuri said:
I was wondering about a particular way of doing the same, in particular by a suitable action of a group on the graph.
I have doubts about this approach. The "big book" shows many cases where a graph has lots of fibrational symmetry but little global symmetry. But the group actions you mention are in terms of global symmetries, i.e. elements of the automorphism group of the graph.
Adittya Chaudhuri said:
a left action of a group on a graph is defined as a group homomorphism from the group to the automorphism group of the graph.
Sure, group actions generate surjective fibrations, but rarely minimal ones. That's the mantra of the book, that we need to generalise from global symmetry to local (input) symmetry.
So the answer to your questions
Adittya Chaudhuri said:
- 1) Is the quotient map $X \to X/G$ always a minimal fibration? (I feel it is not true)
- 2) Given a graph $X$, can we always find a group $G$ and an action of $G$ on $X$ such that the quotient $X \to X/G$ is a minimal fibration?
are both 'No'.
Interesting!! Thanks!! I realised the flaws in my way of thinking about the relation between global symmetries and minimal fibration symmetries, especially in the world of biology. I agree with your points. For example, a counterexample for question (1) could be , where and acts in the usual way on .
John Baez said:
By the way, my claimed example of a fibration should be one that does come from a group action, namely the obvious action of on .
Yes, I agree. I think you meant the action of on , or, am I misunderstanding anything?
David Corfield said:
- Carranza et al., Categorical foundations of discrete dynamical systems
Thanks!! The paper looks interesting.
John Baez said:
Personally I don't feel going around and around in a loop of size 30 "computes the same thing" or "is biologically isomorphic" to staying in place in $C_1$. This is the simplest of many examples along these lines. So I think we need to be careful about what we claim when we simplify a network to a smaller network using fibrations.
Yes, I fully agree with this point. I also realised "my intuition for minimal fibration is not great", as pointed out by @David Corfield here. So I thought of improving my intuition for minimal fibrations, and thus I tried to see what minimal fibrations look like for some common graphs, like feedback loops and feedforward loops (attached:
Minimalfibration.PNG
)
I realised that although the "length" of both the feedback loop and the feedforward loop is lost during reduction, unlike the feedback loop, one may still recover the whole feedforward loop ((B) in the attached file) from the reduced minimal graph.
I constructed the above minimal fibrations using the proof of Theorem 26 in the paper "Fibration of graphs" by Boldi and Vigna. In particular, the construction uses the equivalence $u \sim v$ iff the input trees of $u$ and $v$ are isomorphic.
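To make this concrete, here is a small Python sketch (my own code, not from the Boldi-Vigna paper) of the standard way to compute these classes by iterated colour refinement on in-neighbourhoods; for a finite graph the refinement stabilises, and the resulting classes are the fibres of the minimal fibration:

```python
def input_tree_classes(vertices, edges):
    """Partition the vertices by (isomorphism classes of) input trees, via
    colour refinement on in-neighbourhoods: start with a single class and
    repeatedly split classes until, within each class, every vertex sees
    the same multiset of in-neighbour colours."""
    colour = {v: 0 for v in vertices}
    while True:
        # signature of v = its old colour + the multiset of in-neighbour colours
        signature = {
            v: (colour[v], tuple(sorted(colour[s] for (s, t) in edges if t == v)))
            for v in vertices
        }
        # relabel distinct signatures with small integers
        relabel = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new_colour = {v: relabel[signature[v]] for v in vertices}
        if new_colour == colour:
            return colour
        colour = new_colour

# a 3-cycle: every vertex has the same (infinite, regular) input tree,
# so all three vertices lie in a single fibre of the minimal fibration
print(input_tree_classes([0, 1, 2], [(0, 1), (1, 2), (2, 0)]))
```

On the feedforward loop A -> B, A -> C, B -> C the three input trees are pairwise non-isomorphic, so the refinement keeps all three vertices in distinct classes.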
John Baez said:
For example, the fact that we can dramatically simplify the gene regulatory network of E. coli using fibrations doesn't imply that all the extra complexity of the full E. coli network is irrelevant to its behavior.
I still think fibrations are important.
There's probably a paper that spells this out carefully; I haven't had time to read much. I felt the paper on E. coli went a wee bit overboard.
I think the paper is Fibration symmetries uncover the building blocks of biological networks by Flaviano Morone, Ian Leifer, Hernán A. Makse.
At the moment my understanding of the importance of fibrational symmetry can be summarized in the following screenshots:
fibr.png
Taken from this paper
I think the importance of fibrations depends on how we model the total phase space of a regulatory network from the individual phase spaces of the nodes. If we model it according to Lerman or Stewart, or in a similar approach, then the notion of minimal fibration may become a natural way of reducing the complexity of the network while preserving "synchrony" (although I am yet to understand the word synchrony in a proper way).
I think the real question may be
"if we model the dynamics of our regulatory network in Lerman or Stewart's way, how much realisitic the results will be from the point of Biology?
David Corfield said:
They seem to be looking at things from the other end.
"how can a predictable pattern of synchrony, which is essential for survival, be achieved in biology?"
We know there is life. This must involve robust (approximate) synchronization. No life without synchrony. In a network this is only possible when there's fibrational symmetry.
Nature uses the trick of lifting a network by gene duplication to a new network with the same dynamics. But then the functions of these copies can diverge as new connections form. But without synchronization persisting over the course of the duplication, there would be very little chance of a biological system maintaining viability.
They're claiming also that empirically we see this synchronization:
20.4 Gene coexpression synchronization analysis from massive dataset
Thanks!! Interesting!! I will read the portion you suggested.
I don't know if it will help, but just a word about synchronization.
In the distributed computing literature, "synchronization" is usually expressed in terms of a scheduler. Intuitively, the scheduler specifies how events occur in a system: e.g., all processes proceed simultaneously, or only one at a time, or something else. There are many ways to formalize this, but the easiest I know is as a set of (legal) executions.
Consider the set of legal executions on a network for a given scheduler. In the presence of a fibration, we can, quite often, lift legal executions on the base network to legal executions on the network above, i.e. we have a map (or a multi-valued map) between the sets of legal executions. But this really depends on the chosen scheduler.
So, from the distributed computing perspective, strictly speaking, the concept of fibration is not enough to capture patterns of synchrony: you need to specify the scheduler too. I don't know how this remark translates to biology, nor to category theory.
ps: there are other formalizations where the scheduler is stochastic. In that case, the executions you can lift from can land in a zero-probability subset. So it can be tricky.
Thanks!! That's interesting!!!
Although I am yet to understand it fully, I think what you called the "set of legal executions of a network" is what DeVille-Lerman, in this paper, described as the groupoid-invariant virtual vector fields of a network (Definition 3.2.9 in DeVille-Lerman's paper). I think what you said about "lifting legal executions on the base network to the legal executions on the network above" in the presence of a fibration is what DeVille-Lerman described as "fibrations of networks send groupoid-invariant virtual vector fields of the base network to groupoid-invariant virtual vector fields of the network above" (Proposition 4.2.1 in the paper), which I think ultimately culminates in their main Theorem 4.3.1.
I think the results "in the biology paper of network reduction" that we were discussing before are based on one of the consequences (Lemma 5.1.1) of Theorem 4.3.1.
I think DeVille-Lerman described the "chosen scheduler" of a graph as the choice of a map (Definition 2.1.4 in the paper) which assigns a phase-space manifold to each node of the network.
Now, what I really find interesting about fibrations of graphs is the fact that even if we do not model the scheduler the way Lerman/Stewart suggested, the notion is still good enough for other things, such as distributed systems, as @Peva Blanchard discussed here. So now I am even more curious to get an answer to the question that I asked yesterday.
Adittya Chaudhuri said:
I think the real question may be
"if we model the dynamics of our regulatory network in Lerman or Stewart's way, how much realisitic the results will be from the point of Biology?
As @David Corfield mentioned here, I think the answer to the above question may lie in Chapter 20 of that "big book".
Adittya Chaudhuri said:
Although I am yet to understand it fully, I think what you called the "set of legal executions of a network" is what DeVille-Lerman, in this paper, described as the groupoid-invariant virtual vector fields of a network (Definition 3.2.9 in DeVille-Lerman's paper).
Interesting, thank you. I just skimmed through the paper. If I try to connect the dots, I am more inclined to say that a legal execution is a solution to the ODEs specified by the data at hand, i.e. the flow of a groupoid-invariant virtual vector field (if I understood correctly). The scheduler is implicitly chosen when the authors choose their "target category" to be a category of continuous-time dynamical systems (every process evolves continuously based on its current state and the states of its in-neighbors, but overall all processes share a common notion of time).
By the way, arguments based on fibrations had limited practical applications for distributed computing (at least as far as I knew back when I was doing research in that field) because "symmetry is usually easy to break in distributed systems" (e.g. with identifiers). It is interesting to see that the big book somehow addresses a similar question for biology, see Chapter 8 "Stability and synchronizability".
This means that, dynamically, if we set the initial condition of the system to respect the fibers—that is, if the nodes in each fiber initially have the same state—then the same clusters remain synchronous for all forward time. However, this property alone is irrelevant to biology, because biological systems must constantly adapt to a changing environment. What matters for biology is stable cluster synchronization; that is, synchronizability. Even when initial conditions are not synchronous, the dynamic trajectory should converge to the cluster state for all initial conditions in some neighborhood of that state.
Peva Blanchard said:
Interesting, thank you. I just skimmed through the paper. If I try to connect the dots, I am more inclined to say that a legal execution is a solution to the ODEs specified by the data at hand, i.e. the flow of a groupoid-invariant virtual vector field (if I understood correctly).
Thanks!! I think what you said is true.
Peva Blanchard said:
Adittya Chaudhuri said:
Although I am yet to understand it fully, I think what you called the "set of legal executions of a network" is what DeVille-Lerman, in this paper, described as the groupoid-invariant virtual vector fields of a network (Definition 3.2.9 in DeVille-Lerman's paper).
The scheduler is implicitly chosen when the authors choose their "target category" to be a category of continuous-time dynamical systems (every process evolves continuously based on its current state and the states of its in-neighbors, but overall all processes share a common notion of time).
Hmm... Even if you consider a nice target category, I think we additionally need to choose the specific function to "specify the scheduler". Or am I misunderstanding?
Peva Blanchard said:
What matters for biology is stable cluster synchronization; that is, synchronizability. Even when initial conditions are not synchronous, the dynamic trajectory should converge to the cluster state for all initial conditions in some neighborhood of that state.
Interesting!! It somehow reminds me of the notion of attractors in Boolean networks.
Interesting that chapter 8 on stability. Stewart even gets to go back to his catastrophe theory interests, dating from the 1970s, when that was all the rage at the University of Warwick:
I guess one shouldn't expect that there would be a neat 'networks for biology' account that covers everything. There needs also to be a study of the biochemical dynamics that keeps the show on the road. But then how separable are these studies? In a more extreme case, one can think about computing without knowing anything about the physics of semiconductors.
Talking of computing, there's the interesting Chap. 18: Synthetic Biology Through Broken Symmetries: the Cell as a Computer
We have seen how the functions of biological networks can be pictured as an orchestra of synchronized instruments captured by the fibration symmetry. In this chapter we elaborate on an even stricter and more taxing criterion for functionality of the network. We ask whether minimal biological circuits can perform core logic computational programs. We show this through the mechanism of ‘fibration symmetry breaking’. These circuits can then be used to build computational molecular machineries. This helps in system biology to design biological machines from the bottom up, following a systematic approach from first principles of symmetry and symmetry breaking.
All very interesting!
I guess one shouldn't expect that there would be a neat 'networks for biology' account that covers everything.
Definitely not! I believe that to understand biology mathematically we need lots of new abstractions: 'networks' are just one of the first that comes to mind. (Also, 'networks' means lots of different things.) I imagine biology will always elude our full grasp, just as even chemistry does. Nobody yet has clearly explained the periodic table! But we can keep understanding more and more.
these circuits can then be used to build computational molecular machineries. This helps in system biology to design biological machines from the bottom up, following a systematic approach from first principles of symmetry and symmetry breaking.
Whenever people start talking about biology in terms of 'machines' I get unhappy, because machines are designed to do what we want, while biology is not 'designed' in that sense, and it doesn't do what someone wants - so the 'machine' metaphor can be misleading and even destructive.
If they're talking here about using ideas from biology to design machines, then they're not necessarily making the mistake of analogizing biological systems with machines. But I'm hoping people will eventually try to learn the ways of biology instead of trying to use ideas from biology to expand the world of machines. (So far, the ever-expanding world of machines is killing the world of biology.)
ARIA, my funder, has another program Nature computes better. Embodied Cognition in Single Celled Organisms sounds fun. I hope there's a good sprinkling of Heidegger and Merleau-Ponty in the outputs.
Very interesting - thanks for pointing that out!
Judging from the proposals listed on this page, I am not very optimistic that any Heideggerian ideas about going beyond nature as ready-to-hand will inform this work. I feel the culture that's funding these grants is not wanting to change our attitude toward the natural world: it's still following the "understand the natural world in order to exploit it" paradigm.
I would like to break out of this, but it's extremely hard to do while doing science: any good scientific ideas tend to get fed into that paradigm. It's sad, but it's possible the only way to avoid that is to step aside and not publish research on biology! I should figure out my attitude before I do anything more on this subject. I want to "understand the natural world in order to help it".
John Baez said:
I want to "understand the natural world in order to help it".
Thanks!! I also want the same. I feel "our social system, our economic system, our ecosystem and our created systems of artefacts" are not interconnected in a way that is helpful for the stability of the whole interconnected system. I want to help in this direction.
Great! The big question is whether and how we can do this - for example while doing mathematical studies as we are doing here.
Thanks. Yes, I also wonder whether I can align my mathematical activities in that direction. At the moment, I do not have many ideas on how to approach this big question.
I would like to think that deeply understanding the world would help us understand what we're doing wrong - why "our social system, our economic system, our ecosystem and our created systems of artefacts are not interconnected in a way that is helpful for the stability of the whole interconnected system".
But what sort of research goes toward this deep understanding, instead of just figuring out to make new kinds of biologically inspired gadgets, etc?
Anyway, it's good to keep this in mind....
John Baez said:
I would like to think that deeply understanding the world would help us understand what we're doing wrong - why "our social system, our economic system, our ecosystem and our created systems of artefacts are not interconnected in a way that is helpful for the stability of the whole interconnected system".
Thanks very much!! I fully agree with this point. I have often thought that if we could somehow mathematically show that our interconnected system is faulty, and then demonstrate the significance of the resulting mathematical framework through biology and through historical events, then maybe someday people would rethink and redesign social/political/economic laws to align with a mathematical framework that ensures stability of the whole interconnected system.
Great. This is why I was (and still actually am) excited about the idea of "bad motifs" in causal loop diagrams: it raises the possibility that there's some invariant concept of what makes a system function "badly" - a concept that is not merely of the sort "X is bad if X causes Y", where Y is some specific consequence we have already decided, for external reasons, counts as "bad", like "the cell becomes cancerous". Even if the bad motifs in the book System Archetypes don't provide the sort of invariant concept of "bad motif" that I'm searching for, they're still an interesting attempt.
I don't think "stability" is quite the right concept of "well functioning system": a system can become stable by having everything die. But one kind of "badly functioning system" we often see these days, is when one component starts to exponentially grow, crushing everything in its way.... and then ultimately overshoots the available resources and collapses. So this is a particular kind of "bad" behavior that's unstable.
I am inclined towards a notion of "equilibrium in a system" rather than the notion of stability.
I mean: " Is there always a notion of equilibrium for every sufficiently interconnected system? " which may indicate a measure of how far is the system from being "badly functional"
John Baez said:
Great. This is why I was (and still actually am) excited about the idea of "bad motifs" in causal loop diagrams: it raises the possibility that there's some invariant concept of what makes a system function "badly" - a concept that is not merely of the sort "X is bad if X causes Y", where Y is some specific consequence we have already decided, for external reasons, counts as "bad", like "the cell becomes cancerous". Even if the bad motifs in the book System Archetypes don't provide the sort of invariant concept of "bad motif" that I'm searching for, they're still an interesting attempt.
Thanks very much!! I am very glad to know that. I also feel similar.
John Baez said:
Anyway, it's good to keep this in mind....
Thanks!! Yes.
John Baez said:
I don't think "stability" is quite the right concept of "well functioning system": a system can become stable by having everything die.
I agree. For example, programmed cell death (apoptosis) is essential for the stability of the whole system representing the organism. It makes me think that the notion of stability/equilibrium is an emergent property of the interconnected system.
Here the death of a component aids the well-being of the organism. And the death of individual organisms appears to be a necessary part of the well-being of the overall ecosystem. (Some people want to be immortal and regard death as a bad thing, but I believe they're deluded.)
Yes, that's very true. I agree. It seems, then, that "stability/equilibrium" is a notion more appropriate for an $n$-level network as $n$ tends to $\infty$.
Or I might say it this way: there are multiple notions of "health" operating at different levels. (I'm still not sure "stability" is the word we want here. As a physicist I know a lot about stability, but unfortunately it may not be the relevant concept here!)
I understand your point. I agree. I do not know much physics, so I am just using the word "stability" in a very naive way.
I find your idea of "there are multiple notions of "health" operating at different levels" super interesting and also very natural.
I also feel there should be some inter-level interconnection between these notions of health.
That sounds good.
Btw, my former student @Joe Moeller has just finished writing two papers on stability from a category-theoretic perspective.
Thanks!! That's amazing!! I will read them to understand more about stability from the point of view of category theory.
The introduction of Joe's paper Categorical Lyapunov theory II: stability of systems may be a good quick way to start. It reviews the classical Lyapunov theorem on stability before generalizing it.
Thanks very much!! I will read the introduction part.
Hmm, will mathematics tell apart what makes for the stability of a diversely populated rock pool from the stability of Rome through the 200 years of the Pax Romana?
I have a question:
According to my understanding, network motifs are crucial in understanding the behaviour of regulatory networks. Network motifs usually make sense when we are working with signed graphs, or more generally monoid-labeled graphs, because we also need to consider the sign patterns. If we think of the "dynamics of a regulatory network" as a particular behaviour of the regulatory network, then it may make sense to ask: "How is the dynamics of the network related to the dynamics of the various motifs that occur in the network?"
Now, I think we can generalise the way Lerman/Stewart defined a dynamical system for a graph to the framework of signed graphs. However, it seems to me that unless we suitably generalise/modify several things, their existing approach may not be sufficient to investigate possible interrelations between the dynamics of recurring patterns in regulatory networks and the dynamics of the whole network.
According to my current understanding, I am expecting some interesting interrelationships between the "interconnected dynamics of various motifs that occur in the network" and "the dynamics of the whole network".
For simplicity, I am first restricting my attention to only basic known motifs like feedback loops, feedforward loops, etc.
Am I misunderstanding something fundamentally?
Adittya Chaudhuri said:
I think we can generalise the way Lerman/Stewart defined a dynamical system for a graph to the framework of signed graphs
The big book does consider typed graphs (Def 3.4). This is picked up in Sec 9.5:
Thanks. I will read that section.
Once we've sorted out synchronisation in biology in terms of fibrations, we can turn to psychotherapy:
- The self is a dynamic, multilayered system that underlies coregulation processes within the therapeutic relationship. In this context, intersubjective synchrony reflects the encounter of self‐dynamics between patient and therapist, fostering alliance and emotional resonance.
- Physiological synchrony (interoexteroceptive) supports moment‐to‐moment affective attunement, whereas behavioural synchrony (exteroproprioceptive) sustains long‐term relational engagement.
- Therapist‐led nonverbal synchrony is particularly associated with stronger therapeutic alliance and better outcomes.
- Presession intrapsychic traits—such as patient well‐being or therapist expertise and attachment—modulate synchrony patterns and therapeutic responsiveness.
- Intersubjective synchrony offers a promising marker for precision psychotherapy, guiding self‐reorganization through tailored relational dynamics.
Mind you, that's not a great distance from the big book:
25 Fibration Theory of the Brain III: the Human Language Network
Some very recent papers authored by Ian Stewart et al.:
Might be something there in the pursuit of ways to consider stability.
Thanks! It'll take me a while to dig into this stuff... I'm still blogging about the hydrogen atom and then I need to dive into writing about applied double category theory! But I definitely want to learn this material. I'll have to create a reading list.
David Corfield said:
Some very recent papers authored by Ian Stewart et al.:
- Homeostasis in Gene Regulatory Networks
- Homeostasis in Input-Output Networks: Structure, Classification and Applications
- Synchronous Propagation of Periodic Signals in Feedforward Networks of Standard Model Neurons
Might be something there in the pursuit of ways to consider stability.
Thanks very much!! They look very interesting. I will read.
I still reckon the theory based around symmetry fibrations and synchronization is a promising path to follow to understand the functions of networks, especially since these fibrations sit so centrally in CT.
But looking to stability of networks, there's more interesting work by Makse (co-author of the big book), this time in a book titled The Science of Influencers and Superspreaders
But that chapter ends by pointing the reader back to the big book:
David Corfield said:
I still reckon the theory based around symmetry fibrations and synchronization is a promising path to follow to understand the functions of networks, especially since these fibrations sit so centrally in CT.
I believe so too, and I find the concept of graph fibration very interesting.
David Corfield said:
But looking to stability of networks, there's more interesting work by Makse (co-author of the big book), this time in a book titled The Science of Influencers and Superspreaders
Thanks!!
Hi!! Today I met and talked with a biologist named Suman Kumar Banik about several things.
I find the following interesting about his work (although I am yet to fully understand what he said):
He is working on the "synergistic effects" of coherent and incoherent feedforward loops from the point of view of fluctuations, noise and Shannon entropy. What I find interesting is that he said we cannot just add the effects of two parallel edges (even when the labeling monoid is, of course, commutative). He said there is something called a "cross-term" which arises because of the "gluing" of such effects, and it is very important in biology to consider these cross-terms. So, he said, the cumulative effect would be the sum of the two individual effects plus a cross-term. He said Uri Alon has explained this in a very implicit way in his book. Currently, he is working on understanding these cross-terms for both coherent and incoherent feedforward loops.
Interesting! But have you guys been trying to just sum up effects over parallel edges like that?
We guys haven't been working much on semantics for monoid-labeled graphs yet! One could easily write a whole paper on that.
But we have talked about morphisms of commutative-monoid-labeled graphs where when you map several edges down to one, you sum their edge labels. We've been calling these 'additive morphisms'.
Adittya Chaudhuri said:
the cumulative effect would be
Are the cross-terms dependent only on the two edge labels? Could we think of this combined effect as a new binary operation on the monoid?
In a direct message I said to Adittya that I'd like to see a concrete example of this cross-term business. I can't understand this sort of thing without examples.
It reminds me of the entropy of a joint distribution: $H(X,Y) = H(X) + H(Y) - I(X;Y)$, where $I(X;Y)$ is the mutual information.
But indeed, it would be interesting to have a concrete example.
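In the meantime, here's a toy numerical check of that entropy identity (the joint distribution is made up by me, just for illustration):

```python
import math

def H(p):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# a toy joint distribution p(x, y) of two correlated binary variables
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

px = [sum(p for (x, y), p in joint.items() if x == x0) for x0 in (0, 1)]
py = [sum(p for (x, y), p in joint.items() if y == y0) for y0 in (0, 1)]

# H(X,Y) = H(X) + H(Y) - I(X;Y), so the mutual information plays the
# role of the "cross-term": the joint effect is not just the sum
I = H(px) + H(py) - H(list(joint.values()))
print(round(I, 4))  # 0.2781
```

With independent variables the cross-term $I(X;Y)$ would vanish and the entropies would simply add.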
John Baez said:
In a direct message I said to Adittya that I'd like to see a concrete example of this cross-term business. I can't understand this sort of thing without examples.
Identifying the sources of noise synergy and redundancy in the gene expression of feed-forward loop motif - this is his paper, and the cross-term he was talking about is in the attached screenshot from the paper (although I am yet to read/understand the concepts).
Crosstermfeedforward.png
David Corfield said:
Adittya Chaudhuri said:
the cumulative effect would be
Are the cross-terms dependent only on the two edge labels? Could we think of this combined effect as a new binary operation on the monoid?
Thanks! Possible!! I have not yet thought in this direction. I think finding such a binary operation will also involve a precise formulation of "what a cross-term is" from the point of view of biology.
Hi!! Below, I am writing down my thoughts about "what I think is an anticipatory system" from the point of view of a discrete network.
Naively generalising what @John Baez defined here
Definition:
A discrete network on a finite set consists of the following:
State of is a function , and the -time evolution of is given by the composition of the function induced from the state functions .
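To make the synchronous update concrete, here is a minimal Python sketch of a discrete network with Boolean state set; the three nodes "A", "B", "C" and their state functions are invented purely for illustration, not taken from any paper.

```python
# Minimal sketch of a discrete network with Boolean state set {0, 1}.
# The nodes and their state functions are invented for illustration.

# in_neighbours[v] lists the nodes whose states v's state function reads.
in_neighbours = {"A": [], "B": ["A"], "C": ["A", "B"]}

# state_functions[v] maps the states of v's neighbours to v's next state.
state_functions = {
    "A": lambda: 1,                      # constant "input" node
    "B": lambda a: a,                    # B copies A
    "C": lambda a, b: int(a and not b),  # C: activated by A, repressed by B
}

def step(state):
    """Synchronous update: every node reads the current global state."""
    return {v: state_functions[v](*(state[u] for u in in_neighbours[v]))
            for v in state}

# n-time evolution = n-fold composition of `step`.
state = {"A": 0, "B": 0, "C": 0}
for _ in range(4):
    state = step(state)
# the trajectory settles into the fixed point {"A": 1, "B": 1, "C": 0}
```
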
Now, I am defining an anticipatory discrete network on a finite set as a set , whose state functions are defined in such a way that the state of is dependent on
I imagine the situation as attached.
anticipatorysystems.PNG
So, the state of is dependent on the states of the blue nodes (which influence ) and the time evolution of the states of the red nodes (which get influenced by ).
In a way, I feel the above is more realistic than the usual discrete dynamical system. I may be mistaken about something.
I am modelling anticipation in the following way:
The state of is dependent not only on the "just previous" states of the nodes that influence , but also on the "evolution so far" of the states of the nodes that can influence.
I am thinking like this because our actions usually depend both on past experiences of the things that influenced us and on some imagined consequences of our present choice of action. Interestingly, these imagined consequences are the "data that we have" about "how our actions have influenced other things in the past".
Has there been any work on something like "anticipatory dynamics" of networks?
On a different note, although I could not properly understand what Rosen meant by "anticipatory systems" here, he indeed conjectured that biological systems are anticipatory in nature. So, instead of going into the details of what he said, I just wanted to create the "simplest" framework where such anticipation may make sense mathematically, and a bit biologically too.
This is what Rosen wrote at the beginning of that book:
On a different note
Although I should have realised it before, I think any discrete network (as defined here) canonically determines a coalgebra of the identity functor . Thus can be thought of as the discrete dynamical system associated to the discrete network . On the other hand, equivalently, also determines a functor such that , where is the one-object category of the additive monoid of natural numbers. Thus, the -time evolution of is . Thus the functor is completely determined by the coalgebra . I think the functor category is equivalent to the category of coalgebras of the identity functor . The authors of the paper Categorical foundations of discrete dynamical systems described discrete dynamical systems in terms of functors . If I am not misunderstanding, then it is a little surprising to me that they have not used the term coalgebra anywhere in that paper.
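The coalgebra point of view can be spelled out in a few lines of Python: an endofunction (a coalgebra of the identity functor on Set) determines an action of the additive monoid of natural numbers by iteration. The particular endofunction below is a made-up example.

```python
# An endofunction f : X -> X (a coalgebra of the identity functor on Set)
# determines a functor from the one-object category of (N, +, 0) to Set:
# the generator arrow goes to f, and n goes to the n-fold composite of f.

def evolve(f, x, n):
    """Apply f n times: the n-time evolution of the state x."""
    for _ in range(n):
        x = f(x)
    return x

f = lambda x: (x * 3) % 7   # made-up endofunction on {0, ..., 6}

# Functoriality: evolving m + n steps = evolving n steps after m steps.
assert evolve(f, 2, 2 + 3) == evolve(f, evolve(f, 2, 2), 3)
```
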
John Baez said:
Btw, my former student Joe Moeller has just finished writing two papers on stability from a category-theoretic perspective.
Now, I think I am slowly realising the -coalgebra perspective on systems. Example 3.4 in Joe Moeller's paper gives precisely the definition of a discrete dynamical system in terms of -coalgebras, which are basically the same as the functors considered in the paper Categorical foundations of discrete dynamical systems. Interesting!! Now, I am getting the feeling that maybe the right language for describing the dynamics of regulatory networks is that of coalgebras. I will read Joe Moeller's paper in detail to understand more about this perspective.
Hi!!
Recently, I have started working on an idea which combines two things in a single framework.
First of all, thanks very much @John Baez for giving me so much helpful feedback on the initial version of the concepts, which we discussed in direct messages.
The motivation behind this idea lies in the fact that "the state of a vertex not only depends on the states of the neighbouring vertices but also on the types of influences those neighbouring vertices have on ". I call the notion a labeled discrete network. Below, I am briefly describing the concept.
Let be a monoid. An -labeled discrete network consists of the following data:
From the above data,
The reason for choosing a labeling monoid over a 'labeling set' is that later I want to study the motifs in the underlying graph of .
I will be very glad to get feedback about my notion of labeled discrete network.
Thanks, everyone, in advance.
As I mentioned above: my main motivation for defining the notion of a labeled discrete network is to incorporate the notion of "types" of influences of the neighbouring vertices into the framework of a usual discrete network, where such influences are usually ignored. Hence, it is natural to expect that if we ignore such types of influences, then my labeled discrete network will become a usual discrete network. Indeed, this is the case: in my definition of labeled discrete network, we just take the trivial monoid .
Furthermore, if we take and , then we recover the definition of a Boolean network as @John Baez defined here .
I'm a bit confused by this:
Adittya Chaudhuri said:
Definition:
A discrete network on a finite set consists of the following:
- a set of states,
- a family of subsets whose elements are called neighbours of ,
- a family of functions called the state functions .
State of is a function , and the -time evolution of is given by the composition of the function induced from the state functions .
Shortly after, you discuss directionality. So is it that for a node , the nodes among the neighbours of , , are nodes that influence , hence the red arrows to , and for the nodes such that , there is a blue arrow from to ?
If so, have we gained anything over starting out with a directed graph?
David Corfield said:
I'm a bit confused by this:
Adittya Chaudhuri said:
Definition:
A discrete network on a finite set consists of the following:
- a set of states,
- a family of subsets whose elements are called neighbours of ,
- a family of functions called the state functions .
State of is a function , and the -time evolution of is given by the composition of the function induced from the state functions . Shortly after, you discuss directionality. So is it that for a node , the nodes among the neighbours of , , are nodes that influence , hence the red arrows to , and for the nodes such that , there is a blue arrow from to ?
Here, consists of the vertices which influence : there is a unique edge from every element of to ; these are represented by the blue arrows in my diagram. The red arrows represent how influences other vertices.
David Corfield said:
I'm a bit confused by this:
Adittya Chaudhuri said:
Definition:
A discrete network on a finite set consists of the following:
- a set of states,
- a family of subsets whose elements are called neighbours of ,
- a family of functions called the state functions .
State of is a function , and the -time evolution of is given by the composition of the function induced from the state functions . If so, have we gained anything over starting out with a directed graph?
I think the directed graph does not have the full information of the state functions .
That is why I would like to think that we can construct a directed graph out of it, rather than initially starting with a directed graph.
Sure, we need to add in states and dynamics. But I don't see why specification of in-neighbourhoods is the best way to describe the topology of the network, especially when you're going on to involve the out-neighbourhoods in the dynamics.
I understand your point, and I agree. First, I wanted to define a simpler notion which naively generalises the Boolean network perspective; then, I wanted to incorporate the "out-neighbourhood" perspective by appropriately redefining the state functions , so that they depend on the time evolution of the vertices in the out-neighbourhood.
I am still working on the idea of incorporating the "out-neighbourhood" in the dynamics. The idea is not yet concrete enough to write a definition.
I am interested in a two-way generalisation of the usual discrete dynamics:
Here, I worked on (1). I am still thinking about (2).
I may be misunderstanding a lot of things, but below I describe the biological motivation that I have in mind for defining a notion of "labeled discrete network", i.e. (1) above.
Step 1:
Say, we start with a set of biomolecules and through experimental and analytical studies we know how they influence each other.
In other words, for a monoid , we can define an influence function
,
where is the unique monoid which extends by adding as the absorbing element in the new monoid .
Here, the monoid structure gives us indirect influences of one biomolecule on another. Also, I put the extra condition that if and only if we find out by experimental or other studies that the biomolecule has no influence on .
Step 2:
We realise that Step 1 alone is not sufficient for understanding the interaction, because each biomolecule has various states depending on its level of concentration, and these states evolve over time. For example, when we work with the Boolean , a state on a biomolecule may mean that the biomolecule is unavailable for playing its role, i.e. influencing others, and the state may mean that the vertex is available for influencing others.
Step 3:
Thus, I am proposing my definition of "Labeled discrete networks"
which carries information of the following. Note that in my definition of a labeled discrete network here, I assumed that the type of influence of a vertex on another does not change with time; only the states of the vertices change with time. Thus, the types of influences are preserved under the dynamics, although they do play a role in such changes of states. If I am not misunderstanding, this assumption seems reasonable, at least in the sense of biology. However, from a social dynamics perspective, it may be better to have a definition which does not assume that the type of influence of one vertex on another is constant in time.
I also want to clarify that the kind of -labeling on the directed graph obtained from a Boolean network that biologists usually consider, such as here
"a screenshot from the published version" as here
IMG_0393.PNG
is not the same as "what I would get as my underlying -labeled graph" that I considered here. "My -labeling" tells influence types, whereas "their -labeling" represents the Boolean logic, and thus they have to restrict themselves to the state set to obtain their -labeling.
I think Kauffman and others considered the "Boolean logic" ones, and showed their significance in studying gene regulation.
I don't know what theorems you plan to prove using your definition, so it's hard to say anything about your definition. When I state definitions, I usually also state a bunch of theorems that I plan to prove using these definitions.
(What I do is look at examples of things, and notice that whenever X is true then Y and Z tend to be true. Then I make X into a definition and conjecture that X implies Y and Z. I usually don't just think of concepts and state them as definitions.)
Thanks!! I understand your point. The motivation for my definition was from
I am now working on developing statements of some mathematical theorems on the basis of the above, so that the framework becomes mathematically reasonable. In doing so, I may change my definition in the future accordingly.
Okay. I think it's better to figure out the theorems before stating the definitions. That is: figure out the patterns that hold, and turn these into theorems (or at least conjectures), and then notice that these theorems are easiest to state if one makes certain definitions.
The physicist John Wheeler said something similar: "never do a calculation until you already know the answer". Of course he was joking somewhat. But he meant that it helps to know where you're going before you start calculating.
Thanks very much!! I agree. I will work in the way you suggested.
I think I figured out something interesting. I am trying to write my ideas below:
My goal is to study the discrete dynamics of graphs with polarities. However, I want to reframe my goal in a slightly different way:
We start with a set , whose elements are called nodes. When we choose two nodes and , either we know that influences in a specific way or we know that has no influence on . Also, nodes can have indirect influences: if influences and influences , then there is an indirect influence of on . Then, I want to have a notion of states of nodes. Also, I want to think that the state of a node depends on the states of those vertices which can influence . This is what I meant by a "graph with polarities equipped with states". I am calling it a monoid-labeled discrete network. What I just said in words is written formally here, except for a missing condition that I am now adding to incorporate the indirect influences:
I want the map to satisfy the condition that if and , then .
So, formally, restating everything:
Let be a monoid. An -labeled discrete network consists of the following data:
Next, given a monoid , we can define a morphism as a pair which satisfies certain natural compatibility conditions with neighbours, influence functions and state functions. I have verified that the composition laws of functions give us , the category of -labeled discrete networks.
Next, I am considering the functor category as my definition of the category of discrete dynamical systems. Then, given a -labeled discrete network , I have constructed a presheaf .
Next, I defined the following:
Definition:
Let be a monoid. We define the discrete dynamics of a -labeled discrete network as the presheaf .
Conjecture 1
For any monoid , there is a functor which takes to .
On the other hand, given a -labeled discrete network , I constructed a -labeled graph .
Next, I defined the following:
Definition:
Let be a monoid. We define the underlying -labeled graph of as the -labeled graph .
Conjecture 2
For any monoid , there is a functor which takes to .
Observe that the -labeled graph contains a lot of important information about , my object of interest.
Now, let us assume is commutative. We consider the slice category . We know from our last paper that given a -labeled graph , we can define a feedback homomorphism .
Conjecture 3:
Given a commutative monoid , there is a functor , which takes a -labeled graph to its feedback.
Combining Conjecture 2 with Conjecture 3, we get
Conjecture 4:
Given a commutative monoid , there is a functor , which takes a -labeled discrete network to the feedback of its underlying -labeled graph .
At the moment, I want to study the functors in Conjecture 1 and Conjecture 4, and later on if everything goes well, I want to see whether I can lift these functors to double functors and study Conjecture 1 and Conjecture 4 in a compositional way via standard ACT-results in structured/decorated cospans.
Next, given a -labeled discrete network , I have constructed a presheaf .
If I understood correctly, the presheaf
Is that correct?
(back to our short discussion earlier about schedulers: if I understood correctly, the above definition is a way to specify a synchronous scheduler)
Peva Blanchard said:
Next, given a -labeled discrete network , I have constructed a presheaf .
If I understood correctly, the presheaf
- maps the unique object of to (set of configurations, i.e. a state for every node)
- maps the unique arrow to the "synchronous update of every node state". I.e., if are the consecutive configurations, then
Is that correct?
Hi! I think you meant to say the generator arrow of the one-object category (which contains countably infinitely many arrows, giving us the time evolution) associated to the additive monoid of the natural numbers . Yes, your description looks correct, assuming you meant as .
Oh yes, right (the generator arrow ). Thanks
Peva Blanchard said:
(back to our short discussion earlier about schedulers: if I understood correctly, the above definition is a way to specify a synchronous scheduler)
Interesting. So, if I am understanding correctly, did you mean that my definition of -labeled discrete network is equipped with a synchronous scheduler, i.e. ?
Yes, indeed.
I see. Thanks. I have read that in biological systems it is often more realistic to consider an asynchronous scheduler (see page 3 here), where the updates of the nodes do not happen synchronously, i.e. at the same time.
How do you usually treat "asynchronous schedulers" in computer science? Are there any standard techniques?
This is a risky rabbit-hole. There are usually not many different ways to specify a synchronous scheduler, but there are many, many different ways to specify an "asynchronous" scheduler, depending on the underlying computation model.
Let me try to sketch a few variants:
If you allow any sequence , you stumble on trivial executions (e.g., no node is ever triggered), or unfair executions (e.g., some subset of nodes are only triggered finitely many times). So you may want to include "fairness" conditions: e.g., every node is triggered infinitely many times, or, every node is triggered uniformly at random, independently for every time step.
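For instance, the "uniformly at random" variant could be sketched like this in Python; the two-node network and its update rules are invented purely for illustration.

```python
import random

# Sketch of an asynchronous scheduler: at each tick, one node chosen
# uniformly at random is updated, while the others keep their state.
# The 2-node negative feedback loop below is invented for illustration.

in_neighbours = {"X": ["Y"], "Y": ["X"]}
rules = {
    "X": lambda y: 1 - y,  # X is repressed by Y
    "Y": lambda x: x,      # Y is activated by X
}

def async_step(state, rng):
    """Update a single uniformly chosen node (fair in expectation)."""
    v = rng.choice(sorted(state))
    new = dict(state)
    new[v] = rules[v](*(state[u] for u in in_neighbours[v]))
    return new

rng = random.Random(0)        # fixed seed, reproducible run
state = {"X": 0, "Y": 0}
for _ in range(20):
    state = async_step(state, rng)
```
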
Thanks!!
I think the data of the presheaf is determined by the family of functions . From your description of a scheduler in your previous post, it seems to me that is a scheduler for the -labeled graph induced by the data of , and .
So, I think whether the scheduler is synchronous or asynchronous, it is supposed to be dependent on the definition of . Right?
On the other hand, I think is just a state-updating scheme over discrete time intervals (induced by the scheduler ) for the -labeled graph induced by the data of , and .
Mmh, it is a bit strange to me because I never tried to formulate the notion of scheduler in category-theoretic terms, so I may be wrong in connecting the dots. (But it is an interesting exercise!).
So, I think whether the scheduler is synchronous or asynchronous, it is supposed to be dependent on the definition of . Right?
I am thinking that a scheduler should "naturally depend on" the local update rules . Maybe it is best captured in the (yet to be proven) fact that is a functor. I could name this functor "the synchronous scheduler functor".
Peva Blanchard said:
Mmh, it is a bit strange to me because I never tried to formulate the notion of scheduler in category-theoretic terms, so I may be wrong in connecting the dots. (But it is an interesting exercise!).
So, I think whether the scheduler is synchronous or asynchronous, it is supposed to be dependent on the definition of . Right?
I am thinking that a scheduler should "naturally depend on" the local update rules . Maybe it is best captured in the (yet to be proven) fact that is a functor. I could name this functor "the synchronous scheduler functor".
Okay. So, the functor contains not only the data of how networks are updated (at the object level); it also contains the data of how the synchrony-preserving transformations of networks are updated (at the morphism level), at least the way I constructed the definition of morphisms. Interesting!!
Although I will think more on it, at the moment I like to think of the functor as a functorial updating scheme for the network (the underlying -labeled graph) equipped with a scheduler .
For me, the synchrony is still more relatable to the . I am thinking of as an analog of the vector fields associated to each node in a network of manifolds as studied by Lerman and DeVille here.
Regarding examples of the monoid-labeled discrete network that I defined here: I already mentioned that any Boolean network is a special case of a -labeled discrete network, when we choose and . However, the main motivation for my definition of -labeled discrete network lies in the study of the dynamics of regulatory networks modeled as rig-labeled graphs. More precisely, a large class of examples of monoid-labeled discrete networks arises from rig-labeled graphs equipped with the data of schedulers, which may make my notion of monoid-labeled discrete network a good candidate for describing the discrete dynamics of regulatory networks.
First, I recall the notion of a rig-labeled graph that @John Baez and I studied in this paper.
Definition:
Given a rig , we define a -labeled graph as pair .
Next, I am defining a morphism of -labeled graphs (which we have not explicitly defined in our paper).
Definition:
Given a rig , a morphism is defined as an additive morphism of -labeled graphs.
Next, I am introducing a notion of -labeled graph equipped with a scheduler:
Definition:
Let be a rig. We define a -labeled graph equipped with a scheduler as a pair , where . We say the -labeled graph is equipped with the scheduler .
This is the definition where I am making an analogy with what Lerman and DeVille defined as a network in their paper. I learnt the word "scheduler" for the first time from @Peva Blanchard. Thanks!! Now, the justification for the word "scheduler" will probably come from the lemma below (which I observed today).
Lemma:
For any -labeled graph with scheduler , there is a -labeled discrete network .
The construction follows by observing that the map
can be defined in the following way:
Let . If the set of paths from to is non-empty, then define as the sum over all such paths of the product of the edge labels along the path; otherwise, if the set of paths is empty, define .
Note that in the definition of , I have used the multiplicative monoid .
Thus (if I am not making any mistakes) we now have a large class of examples of monoid-labeled discrete networks arising from models of regulatory networks.
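As a sanity check of the "sum over paths of products of labels" idea in the simplest case, here is a Python sketch over the Boolean rig ({0,1}, or, and), where the possibly infinite sum stabilises and the cumulative influence is just reachability. The three-node graph is invented for illustration.

```python
# Cumulative influence iota(u, v) = sum over directed paths from u to v
# of the product of the edge labels, computed in the Boolean rig
# ({0, 1}, or, and). There the infinite sum stabilises: iota is
# reachability, computable as a Kleene-style fixed point.
# The 3-node graph (with a feedback loop b <-> c) is invented.

edges = {("a", "b"): 1, ("b", "c"): 1, ("c", "b"): 1}

def cumulative_influence(edges):
    """Close the length-1 influences under composition until stable."""
    iota = dict(edges)                   # paths of length 1
    changed = True
    while changed:
        changed = False
        for (u, w) in list(iota):        # an existing path u -> w ...
            for (w2, v) in edges:        # ... followed by an edge w -> v
                if w == w2 and (u, v) not in iota:
                    iota[(u, v)] = 1     # "or" the new path in
                    changed = True
    return iota

iota = cumulative_influence(edges)
# e.g. iota contains ("a", "c") via the path a -> b -> c, and
# ("b", "b") via the loop b -> c -> b, but no ("c", "a").
```
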
Now, using the additive morphisms, I think (yet to check everything in detail) there is a natural way to define a notion of morphism of -labeled graphs with scheduler. We denote the corresponding category of -labeled graphs with schedulers as
Now, I conjecture the following:
Conjecture:
Given a rig , there is a functor which takes a -labeled graph with scheduler to the associated -labeled discrete network.
Also, it is important to mention that I have restricted my attention to finite rig-labeled graphs, for technical reasons.
Thus, if we combine the conjecture here with Conjecture 1 here, we get a discrete dynamics of regulatory networks (the goal I stated earlier).
I don't want to argue too much about terminology, but, from a distributed computing perspective, it feels a bit strange to name the scheduler. I picture as the local algorithm that will run on the node . And the collection constitutes the distributed algorithm.
Of course, this terminological remark doesn't impact the relevance of the object . I'm trying to understand your lemma, but I don't understand why the function is defined the way it is (using sum of products of labels along paths), and why you think the lemma justifies the name "scheduler".
Peva Blanchard said:
I don't want to argue too much about terminology, but, from a distributed computing perspective, it feels a bit strange to name the scheduler. I picture as the local algorithm that will run on the node . And the collection constitutes the distributed algorithm.
Okay, I see. Thanks. Maybe I am misunderstanding the concept of a "scheduler".
What I wanted to say is that is the extra data we add to that enables us to change the states of all the vertices in over discrete time intervals. If the meaning of "scheduler" does not match what I said, then probably I should not call a scheduler.
Peva Blanchard said:
I'm trying to understand your lemma, but I don't understand why the function is defined the way it is (using sum of products of labels along paths)
I defined it like that because I wanted to represent the cumulative influence of the vertex on the vertex , which comprises both direct influences (directed edges) and indirect influences (directed paths).
However, I think there is a flaw in my argument: even if is assumed to be finite, can be an infinite sum, and thus can be undefined if we do not restrict our attention to some special rigs. Then my lemma will probably hold only for certain special classes of rigs, e.g. quantales, which ensure that is always well-defined.
I am working on this issue: whether to restrict our attention to special classes of rigs, or to redefine my labeled discrete networks by removing the condition and imposing if and only if there is no directed edge from to . Redefining in the above way also seems not a bad option. I have to think more on it.
I think I have figured out something more fundamental which will simplify (probably in a right way) all the things that I said so far about the relationships between labeled graphs and labeled discrete networks. I am working on it. I need a little more time before I can spell it out in a nicer way.
Loosely speaking, my line of thought follows from the (probably correct) observation that for any set , a finite -labeled graph is the same as a function .
Adittya Chaudhuri said:
I defined it like that because I wanted to represent the cumulative influence of the vertex on the vertex (which constitutes both direct influences (directed edges) and indirect influences (directed paths)).
Oh I see. But shouldn't the be compatible with the rig operations in some sense? Because if the state space is a singleton , then any node only depends on the labels of its direct incoming edges.
I think your idea works in the case , and
Maybe we can look at this expression, and make it "free". E.g., is a set of rooted trees, with edges labeled by . Then we interpret as grafting every tree as a child of with edge label , or replacing the existing child if any. (something like that).
Adittya Chaudhuri said:
Loosely speaking, my line of thoughts follows from the (probably correct) observation that for any set , a finite -labeled graph is same as a function .
That sounds wrong: suppose and and all equal the function from to sending all natural numbers to the same element . What finite -labeled graph does that correspond to?
Sorry!! I agree. I wanted to say formal linear combination instead of . However, I think the notion is still not good, because it does not work at the level of morphisms.
Right: an -labeled graph in our sense with vertex set is different from a map because in the latter there's no set of edges. For our kind of -labeled graph there are non-identity morphisms that are the identity on vertices, but not on edges. But for maps , the most obvious sort of morphism is forced to be the identity if it's the identity on vertices.
This is a standard issue: e.g. there's a definition of Petri net where there's a set of arcs from any state to any transition, and another definition where there's just a number of them.
You get to decide which definition you want, depending on what you are doing with it.
Thanks! I agree.
Now, since my goal is to study the dynamics of regulatory networks, I wonder why we can't just start with a pair , where
Then, we define an appropriate notion of morphisms for these. Let denote the associated category.
Then, directly, we can define a functor whose object-level map is induced from the data of ?
More importantly, I no longer feel the necessity of defining a labeled discrete network, as I had defined before, for the purpose of studying the dynamics of regulatory networks.
Here is a rig.
I am more interested in constructing those kinds of which give us a steady-state dynamics, i.e. there exists such that for all we have .
From the point of view of biology, I feel such a thing can be important (often responsible for producing various cellular behaviours), as explained here.
Do you really want to study cells where nothing happens?
Ok!! I should rephrase my problem in terms of "oscillatory dynamics", where there exists such that for all ... something along this direction.
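Since the set of global states of a finite network over a finite state set is finite, every synchronous trajectory eventually enters a fixed point or a cycle, and one can detect which by recording the visited states. A small Python sketch, with a made-up update rule:

```python
# Detect the attractor of a synchronous trajectory: the set of global
# states is finite, so the trajectory must eventually revisit a state,
# after which it cycles. A cycle of length 1 is a fixed point.

def find_attractor(step, state):
    """Return (time the cycle is first entered, cycle length)."""
    seen = {}
    t = 0
    key = tuple(sorted(state.items()))
    while key not in seen:
        seen[key] = t
        state = step(state)
        t += 1
        key = tuple(sorted(state.items()))
    return seen[key], t - seen[key]

# Made-up example: a 2-node negative feedback loop, updated
# synchronously, oscillates through 4 states.
step = lambda s: {"X": 1 - s["Y"], "Y": s["X"]}
entry, length = find_attractor(step, {"X": 0, "Y": 0})
# entry == 0, length == 4: a genuine oscillation, not a fixed point
```
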
I guess it's fine to prove some theorems about simple situations like fixed points or oscillatory solutions, and later prove some more complicated theorems about more realistic situations.
My point is: what kinds of produce certain bounds on the dynamics? If those bounds are interesting from the point of view of biology, then these bounds may correspond to some cellular behaviour.
John Baez said:
I guess it's fine to prove some theorems about simple situations like fixed points or oscillatory solutions, and later prove some more complicated theorems about more realistic situations.
Thanks. Yes, I agree.
Peva Blanchard said:
But shouldn't the be compatible with the rig operations in some sense?
Yes, it is. If you see my current definition here, the idea is that depends on , which in turn depends on the rig operations.
Hi!!
I am seeing my new definition of a rig-labeled graph equipped with local state-change data, i.e. for a rig and a set , a pair as I had considered here, as an analog of a Petri net equipped with rates. In the same way that Petri nets equipped with rates give rise to continuous dynamical systems, as explained in Section 6.4 of Structured Versus Decorated Cospans by @John Baez, Kenny Courser and Christina Vasilakopoulou, my rig-labeled graph equipped with local state-change data gives rise to a discrete dynamical system. In biological terms, a regulatory network equipped with local state-change data gives rise to a discrete dynamical system.
I hope I am not misunderstanding the way I am interpreting my notion of rig-labeled graph equipped with local state-change data considered here.
It may be interesting that, since my definition of depends on the labeling of the underlying rig-labeled graph, it is expected that in the attached file
the discrete dynamics of (a), (b) and (c) will be different, and similarly the dynamics of (d) and (e) will be different, although the underlying directed graph structure is the same for (a), (b) and (c), and the same for (d) and (e).
Right now the rig structure of the label set is not connected at all to the maps that determine the dynamics. That is, you could replace the rig with a mere set, and the dynamics would not change.
You probably have ideas about how the rig structure is relevant, but they are not in the definition.
I thought my dynamics depends on , as defined here, and depends on the rig structure.
The corresponding dynamics is defined as follows:
which takes to
, , where
and hence, my dynamics depends on the rig structure.
is part of my definition here .
Am I misunderstanding anything?
I agree and are separate structures but in the dynamics both of them are used such that depends on .
Okay, I was just looking at this:
I am seeing my new definition of a rig-labeled graph equipped with local state-change data i.e for a rig and a set , a pair
I thought was an arbitrary map of that type.
Yes, I understand!! Sorry, I should have written about in the definition.
I don't see how is defined here:
The corresponding dynamics is defined as follows:
which takes to
, , where
Since , the definition of makes sense.
Okay. I still don't understand it. I think more words and fewer symbols would help me understand what you're trying to do.
computes the cumulative direct influence of the vertex on using the underlying rig-labeled finite graph structure. My dynamics is dependent on . Hence, my dynamics depends on the rig structure.
Here,
It was nice and clear, and then you had to spoil it with a bunch of symbols. :wink:
So: first you use the rig-labeled finite graph to compute a total influence of a vertex on the vertex . I'm guessing you simply sum up the labels of all edges from to . You call this total . If I'm right in my guess, you are only using the addition in the rig here.
Then what do you do with this total?
satisfies the following compatibility condition for any
Now, my dynamics is given by ,
where,
Okay; can you explain that using words, so I can understand what you're trying to do?
(The human mind really does think using sentences to a large extent, so anything that can't be expressed in sentences is not fully understood.)
The -state of is dependent on the states of the neighbours of , as well as on the cumulative direct influences of the neighbouring vertices on .
This is the reason why I said this:
the discrete dynamics of (a), (b) and (c) will be different and similarly, the dynamics of (d) and (e) will be different, although the underlying directed graph structure is the same for (a), (b) and (c) and the same for (d) and (e).
John Baez said:
(The human mind really does think using sentences to a large extent, so anything that can't be expressed in sentences is not fully understood.)
I agree. I am trying.
tells about the state of the
depends on (i.e. cumulative influences) and (states of the neighbouring vertices) because we are computing .
John Baez said:
So: first you use the rig-labeled finite graph to compute a total influence of a vertex on the vertex . I'm guessing you simply sum up the labels of all edges from to . You call this total . If I'm right in my guess, you are only using the addition in the rig here.
Yes.
My state functions are . But any function is basically dependent on the rig structure because the second coordinate is computed from the cumulative influences via the underlying rig structure. I expressed this fact in the following commutative diagram:
I didn't look at your diagrams and formulas, just this sentence:
Adittya Chaudhuri said:
The -state of depends on the states of the neighbours of as well as the cumulative direct influences of the neighbouring vertices on
Okay, that makes sense.
But does it do so in an arbitrary way, or in a way that must obey some limitations? If the next state of can depend on the present states of its neighbors in an arbitrary way, there's no need to include the "cumulative direct influence" in your definition!
For me "states are concentration levels of biochemical compounds", so the concentration level of a vertex depends on the concentration levels of its neighbouring vertices as well as the nature of the cumulative influences of the neighbouring vertices on the vertex.
That is why I think the dynamics of a positive feedforward loop is likely to be different from the dynamics of a negative feedforward loop.
I argued that if the next state of can depend on the present states of its neighbors in an arbitrary way, there's no point in saying it also depends on "cumulative direct influence" you compute from the edge labels. You didn't answer me. But now I don't believe what I said anymore. The edge labels are independent from the states of those neighboring vertices.
For me "states are concentration levels of biochemical compounds"
Yes, I am happy with that example. This assumption is not built into your formalism; it's just an example of your formalism.
Thanks
I wanted to say "the next state of a vertex" is dependent on two things: the nature of the cumulative influences of the neighbours and the previous states of the neighbours.
So, the cumulative influence depends on the rig structure.
That was my point.
Now I am saying that here the rig-labeled graph structure along with generates a discrete dynamics, just as your Petri nets along with rates generate continuous dynamics.
Thus, I was thinking of my setup as "regulatory networks (modelled as rig-labeled graphs) equipped with " generating a discrete dynamics.
Am I understanding correctly about the analogy I made?
The same principle is true for the Lerman-DeVille framework: a graph equipped with "vector fields on the manifolds associated to each node" generates a continuous dynamics.
My inspiration came from Lerman-DeVille framework.
Yes, I think I get it.
I am not sure whether my framework is interesting or not. At the moment, I am not finding anything wrong with it, especially when I think of my states as concentration levels.
Also, today I could prove that "my definition of a morphism of rig-labeled graphs equipped with states" (still written in a very vague way) gives a natural transformation in the functor category .
So, probably, the association I described will give a functor from the category of -labeled graphs with states to .
It'll become more interesting if you prove some nontrivial theorems.
There are a bunch of trivial theorems, some of which take work to state. For example, you can construct a double category of 'open' dynamical networks of your sort. And you can construct a semantics for them: a double functor from them to a double category of open discrete dynamical systems. We've proved this kind of theorem so often I consider them trivial.
There are similar trivial theorems for open Petri nets with rates.
A slightly less trivial theorem for open Petri nets was constructing a 'black-boxing' functor that mapped them to relations that say what happens in a steady state.
A really interesting theorem would say something interesting about the dynamics of some class of dynamical networks of your sort.
For example, in the world of Petri nets with rates ( reaction networks), the theorems about complex balanced equilibria are nontrivial and really interesting.
Thanks very much!!
I'm talking about theorems by Jackson and Horne and Feinberg, which you can find in my book.
Book ?
Yeah, my book on Petri nets.
Thanks very much!! I will download and read the portion.
Also, I was thinking about checking whether the "fibration of underlying rig-labeled graphs" (I have to first define it in a suitable way) can preserve discrete dynamical systems the way it did in the continuous case, as shown by Lerman-DeVille.
Because those Jackson-Horne-Feinberg theorems are nontrivial, they will be hard or impossible to adapt to discrete dynamical networks of your sort.
I'm talking about things like the Deficiency Zero Theorem.
I see. I will read about these. I have not read much about them.
I hope you are talking about this book of yours, right?
John Baez said:
I'm talking about things like the Deficiency Zero Theorem.
Yes, I found!! Section 17 of your book. I will read.
There's a marvelous theory of chemical reaction networks, starting with the Deficiency Zero Theorem, and leading to the Global Attractor Conjecture, which is really beautiful and apparently very hard to prove. My friend Gheorghe Craciun claimed to have proved it, but now he's working on writing up a proof of that and some even harder conjectures, and he's been working on that for about a decade.
Thanks very much!! Very interesting!!
Since a presheaf is the same as a -coalgebra, I was also thinking of using some control theory of systems as -coalgebras developed by Joe Moeller and his collaborators in this paper, the paper you shared with me some days back.
If I were you I'd focus on concrete examples and look for patterns, instead of trying to apply general formalisms. That's how you can discover really new mathematics. If I were you, I'd start reading papers on Boolean networks, which often study concrete examples - but I also believe they have some general theorems. This seems like a good source of 'raw material'.
If you float too high in the world of abstract theory, and don't put your feet down in the study of concrete examples, biologists won't find your work interesting.
I agree!!! Thanks very much!! I got your point. I will read papers on Boolean networks.
John Baez said:
If I were you I'd focus on concrete examples and look for patterns, instead of trying to apply general formalisms. That's how you can discover really new mathematics.
I find this line very very helpful and inspiring!!! Thanks very much!!
John Baez said:
If you float too high in the world of abstract theory, and don't put your feet down in the study of concrete examples, biologists won't find your work interesting.
I agree. I have already experienced a similar situation.
John Baez said:
If I were you, I'd start reading papers on Boolean networks, which often study concrete examples - but I also believe they have some general theorems. This seems like a good source of 'raw material'.
Thanks again!! Yes I fully agree!!
Adittya Chaudhuri said:
For me "states are concentration levels of biochemical compounds", so the concentration level of a vertex depends on the concentration levels of its neighbouring vertices as well as the nature of the cumulative influences of the neighbouring vertices on the vertex.
Ah, I understand better now! Just to be sure, let me unfold the example.
Say we interpret influence as a rate, e.g. a real number (), and a state as a concentration level, say a nonnegative real number (). One example of is
The intent is the following. Say the concentration levels at time are at , and at every in-neighbor of . Then at time , each in-neighbor makes a contribution to that is proportional to its concentration level and its influence on . Adding all these contributions to the current concentration level of yields the formula above.
Does this example make sense?
ps: I think I implicitly assumed that there was a single edge from every in-neighbor to . But it is easy to adapt this example to take multiple edges into account.
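In code, the sketch might look like this (the clipping at zero to keep concentrations nonnegative is an extra assumption of mine, not part of the definition):

```python
# Sketch of the rate-style example: influences are real numbers, states are
# nonnegative concentration levels, and each in-neighbor contributes in
# proportion to its concentration and its influence on the target vertex.

def update(x_v, in_neighbors):
    """in_neighbors: list of (concentration, influence) pairs."""
    total = x_v + sum(x_u * a for (x_u, a) in in_neighbors)
    return max(total, 0.0)  # clip at zero: concentrations stay nonnegative (my assumption)

# A vertex at level 2.0 with an activator (influence +0.5 at level 1.0)
# and an inhibitor (influence -1.0 at level 3.0):
print(update(2.0, [(1.0, 0.5), (3.0, -1.0)]))  # 2.0 + 0.5 - 3.0 = -0.5, clipped to 0.0
```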
I am not sure I understand your example. For me, is a map from to . Also, I am not sure I fully like the idea that "influences are rates". The reason is that if we know that much information, then from a biologist's point of view a continuous dynamical system is preferable because it is more accurate (at least according to what they usually say). In fact, biologists usually go for a discrete dynamical system, e.g. a Boolean network, when they do not have the data necessary for constructing a system of ODEs. That is why I want my framework to be useful when less information/data is available to the biologists.
The underlying rig-labeled graph is obtained from "several years of experimental and other analysis", which says how various biochemical compounds influence each other (like activation, inhibition, necessary stimulation, etc.). On the other hand, the set of states may consist of , representing whether the concentration is at threshold or not, or may be . I prefer this because, according to biologists, finding the exact concentration level of a biomolecule is itself a challenge and usually they do not have the exact data. The same is true for finding out information on rates.
However, it is also possible, I am misunderstanding your example.
I think this is the main difference between chemistry and biology (lack of information). Personally, I do not know (since I am neither a biologist nor a chemist), but I used to hear these words from biologists.
Oh, indeed, in my example . I implicitly assumed that a node updates its state based on the states and influences of its in-neighbors and its own previous state. This is a usual assumption when designing local update rules (see e.g. Conway's game of life).
Is there a specific reason to have a node "unaware" of its own state?
You're right, I probably shouldn't have used the term "rate". My intuition is that the "contribution" of an in-neighbor of is proportional to its concentration level and its influence , hence the formula.
Do you have a toy example, involving e.g. ? Which rig would be used then?
Peva Blanchard said:
Is there a specific reason to have a node "unaware" of its own state?
No! It's very important that the next state of a node depends on its current state. So, thanks for pointing that out!
One easy solution could be to work with [[reflexive graphs]], where every node has a distinguished edge from that node to itself.
Reflexive graphs are better than graphs in a number of ways, e.g. products work better and the terminal reflexive graph is also the walking node (the free reflexive graph on a node).
(Lately I've been saying 'vertex' instead of 'node', but they mean the same thing.)
Peva Blanchard said:
Oh, indeed, in my example . I implicitly assumed that a node updates its state based on the states and influences of its in-neighbors and its own previous state. This is a usual assumption when designing local update rules (see e.g. Conway's game of life).
Is there a specific reason to have a node "unaware" of its own state?
Yes, thanks!! As @John Baez explained, I also think it is absolutely essential to work with only reflexive graphs.
Peva Blanchard said:
You're right, I probably shouldn't have used the term "rate". My intuition is that the "contribution" of an in-neighbor of is proportional to its concentration level and its influence , hence the formula.
Ah!! I see. Thanks. I think your example is nice when we are working in a framework where we are aware of the concentrations and the rates.
I like to reword your statement:
My intuition is that the "contribution" of an in-neighbor of is proportional to its concentration level and its influence
as
My intuition is that the "contribution" of an in-neighbor of depends on the concentration level(state) and its influence
I want to keep this liberty because we may put various different structures (e.g. monoid, rig, etc.) on according to the situation. For example, if we take , (the multiplicative Boolean monoid) and , the Boolean rig, then using Boolean logic (AND, OR, NOT, etc.) we may define . Here, means the concentration is low and means the concentration is high (when we see ).
I agree that in every specific problem, the elements of and should be related in some way. However, I am not specifying the relationship in the definition itself.
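To make the Boolean case a bit more concrete, here is one guessed rule (just an illustration, not the definition I want): a node switches on when at least one of its activators is on and none of its inhibitors is on.

```python
# Hypothetical Boolean update rule on a sign-labeled graph: states in {0, 1},
# edge labels '+' (activation) and '-' (inhibition). The rule "on iff some
# activator is on and no inhibitor is on" is only one possible choice.

def boolean_update(in_edges, states):
    """in_edges: list of (source_vertex, sign) pairs feeding the node."""
    activator_on = any(states[u] == 1 for (u, sign) in in_edges if sign == '+')
    inhibitor_on = any(states[u] == 1 for (u, sign) in in_edges if sign == '-')
    return 1 if activator_on and not inhibitor_on else 0

states = {'x': 1, 'y': 0, 'z': 1}
print(boolean_update([('x', '+'), ('y', '-')], states))  # 1: activator on, inhibitor off
print(boolean_update([('x', '+'), ('z', '-')], states))  # 0: inhibitor z is on
```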
I am trying to come up with a better example.
Peva Blanchard said:
Do you have a toy example, involving e.g. ? Which rig would be used then?
Hi! The rig that was on my mind is the four-element rig that @John Baez and I constructed in Example 6.4 of our paper, where denote, respectively, positive influence, no influence, negative influence and indeterminate influence. For , I was thinking about
However, as of now, I am not able to construct (satisfactorily) the maps that match our intuition for any interesting graph.
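For reference, here is my guess at the operation tables of this four-element rig in code; this is a reconstruction from the intuitions above, and Example 6.4 of the paper is the authority, which may differ in details.

```python
# A guessed implementation of the four-element "sign" rig with elements
# '+', '0', '-', '?' (positive, no, negative, indeterminate influence).
# Reconstruction from context; the paper's Example 6.4 may differ.

def sign_add(a, b):
    """Parallel influences combine additively; conflicts are indeterminate."""
    if a == '0':
        return b
    if b == '0':
        return a
    return a if a == b else '?'

def sign_mul(a, b):
    """Influences compose along a path multiplicatively; '0' absorbs."""
    if a == '0' or b == '0':
        return '0'
    if a == '?' or b == '?':
        return '?'
    return '+' if a == b else '-'

print(sign_mul('-', '-'))   # '+': two inhibitions in series act as activation
print(sign_add('+', '-'))   # '?': conflicting parallel influences
```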
I think your example works fine. Thanks. If we know the rates and concentrations, then your example fits that scenario perfectly. However, I wanted to handle the scenario where we do not have such quantitative data, but instead have qualitative data like positive influence, no influence, negative influence and indeterminate influence, and .
At the moment, I am also doubting whether my framework is suitable for the things that I want to study (i.e. qualitative information of a regulatory network generating discrete dynamics).
I am a bit confused at the moment. I am thinking!
It's very good to be studying concrete examples like this. Those feelings of doubt and confusion are good, because they mean you are confronting real problems.
Thanks very much!!
Maybe we can restate what we expect from and
It seems that:
A simpler case (I guess) is (identifying concentration level with contribution) being an additive monoid, with a monoid action .
Thanks!! Below, I am trying to incorporate your ideas into the definition of mine.
First, I am recalling my definitions:
Definition:
Let be a rig and be a set. Then, we define a -labeled graph with states valued in as a pair , where
Lemma:
defines a functor .
Proof: We define a functor , which takes the unique object to the set and takes to that takes to defined as .
Now, I want to incorporate your ideas to construct a class of examples:
Example:
Let be a rig and a commutative monoid. Suppose is an action of the rig on the commutative monoid (i.e. it is an action with respect to both and , and the actions are compatible with the commutative monoid structure of ). Now, consider a -labeled finite reflexive graph . Let . For each , is given as for all . Now, suppose for each , we define a function by (here, I am using @Peva Blanchard 's idea of contribution). Then, it can be verified that defines a -labeled graph with -valued states.
Is the above discussion making sense?
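As a sanity check of this construction, here is a small instance in code, with the rig of natural numbers acting on the commutative monoid (N, +) by multiplication; all the concrete names are illustrative, not taken from the paper.

```python
# Toy instance of the "rig acting on a commutative monoid" construction:
# R = N (a rig) acts on M = (N, +) by multiplication. The next state of v
# is the monoid sum, over all vertices u, of I(u, v) acting on the state of u.

def next_state(v, vertices, influence, states, act, m_add, m_zero):
    total = m_zero
    for u in vertices:
        total = m_add(total, act(influence(u, v), states[u]))
    return total

vertices = ['a', 'b', 'c']
labels = {('a', 'c'): 2, ('b', 'c'): 3}         # cumulative influences I(u, v)
influence = lambda u, v: labels.get((u, v), 0)  # 0 means "no influence"
states = {'a': 1, 'b': 2, 'c': 5}

print(next_state('c', vertices, influence, states,
                 act=lambda r, x: r * x,
                 m_add=lambda x, y: x + y,
                 m_zero=0))   # 2*1 + 3*2 + 0*5 = 8
```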
However, the kind of example that I really want to construct is the following:
Let be the four-element rig that @John Baez and I constructed in Example 6.4 of our paper. Let , where AT, BT and IT denote above-threshold concentration, below-threshold concentration and indeterminate concentration, respectively. Then, I consider a -labeled finite reflexive graph .
What I am trying:
I am trying to incorporate suitable compatibility between and /incorporate some additional structures/data, so that I can construct biologically meaningful so that is a -labeled graph with -valued states as defined here .
Although I am not sure whether what I am asking is actually feasible or not!!
I am writing down what I am thinking:
Usual approach:
We usually first consider a Boolean network (or a generalised version of one) as defined by @John Baez here, and then this definition allows us to construct a directed graph. Now, the signs of the directed graph () come from the Boolean logic. In most settings, including Kauffman's, I think this kind of approach is taken.
The approach that I was talking for some days:
We start with a labeled graph, and we equip the nodes of the graph with information about how their present states depend on the previous states and the nature of the influences of their neighbouring nodes. From this data, we can construct a discrete dynamical system. I think this approach is not very common. The closest existing works in this direction I could find are
I am now doubting my approach in the following ground:
At the moment, I do not have proper answers to the above two questions.
Adittya Chaudhuri said:
- (1) Why do I think the discrete dynamics generated by a rig-labeled graph equipped with states is relevant for biological systems?
Instead of finding the biology papers that fit the framework, I guess it could be more fruitful to find a few reference biology papers, or better, biologists, and understand what kinds of question they are interested in. (It usually takes a lot of time, however, to get familiar with a new field of knowledge)
The motivation for my framework is that "a monoid-labeled graph model" of a regulatory network is necessary but not sufficient for understanding how molecular interactions give rise to cellular function (that is what interests me the most in biology), because concentration levels of the biochemical compounds change all the time, and unless a biochemical entity reaches a certain level of concentration, it cannot perform what it is supposed to perform (i.e. influencing other biochemical entities in particular ways). Interestingly, I think the dynamics of concentration levels of interconnected biochemical compounds are so nicely engineered by Nature that after a certain amount of time the said dynamics behave in certain patterns, which become the explanation of cellular behaviour. I may be wrong, but this is my current understanding. At the moment it is difficult for me to throw away what I realised unless I get a convincing counterargument.
But at the same time, I found your suggestions reasonable, meaningful and helpful. Thanks!!
Probably, it is better to work on two problems: (1) towards the direction of my realisations, and (2) towards problems whose theme will interest a theoretical biologist and whose methods will interest an applied category theorist.
I should emphasise: I am not claiming that my framework will help in understanding "how molecular interactions give rise to cellular function". I just said "this is the motivation for my framework". It is also very possible that my framework is very inappropriate for the goal that I have.
So, I am very open to changing my framework and exploring other frameworks/problems which can help me progress with what I am interested in, i.e. studying how molecular interactions give rise to cellular functions.
Peva Blanchard said:
Adittya Chaudhuri said:
- (1) Why do I think the discrete dynamics generated by a rig-labeled graph equipped with states is relevant for biological systems?
Instead of finding the biology papers that fit the framework, I guess it could be more fruitful to find a few reference biology papers, or better, biologists, and understand what kinds of question they are interested in. (It usually takes a lot of time, however, to get familiar with a new field of knowledge)
I agree. I feel the question "how molecular interactions give rise to cellular function" is too vague at the moment. I need to first make my question much more precise from the point of view of biology and then translate it into a mathematical question. I really need to learn some good biology if I want to produce mathematics that contributes to the understanding of biological systems. @John Baez had suggested I do this before. Somehow I got distracted again!! Instead of thinking from the point of view of biology, I started trying to "somehow connect the ideas of graphs with polarities to discrete dynamical systems". Thanks for making me realise this.
Good. Since "learning biology" is a potentially huge job, you might start by reading a book and some papers on gene regulatory networks, or whatever specific kind of network you wish to study mathematically. (It's probably too confusing to try to learn about all kinds of biological networks at once!)
In our paper I mentioned this book:
John Baez said:
Good. Since "learning biology" is a potentially huge job, you might start by reading a book and some papers on gene regulatory networks, or whatever specific kind of network you wish to study mathematically. (It's probably too confusing to try to learn about all kinds of biological networks at once!)
Thanks!! I agree. From yesterday, I started learning about transcription networks. I was reading the first chapter of Uri Alon's book "Introduction to Systems Biology". In 1-2 days, I will share my understanding of transcription networks here. In parallel, I will also read some papers about gene regulatory networks as you suggested.
John Baez said:
In our paper I mentioned this book:
- E. H. Davidson, The Regulatory Genome: Gene Regulatory Networks in Development and Evolution, Elsevier, 2006.
Thanks very much!! Yes, I will read this book .
In 1-2 days, I will share my understanding of transcription networks here.
Great!
Thank you!!
Hi!!
For the last 1-2 days, I have mostly focused on two things
Below, I am briefly writing down my understandings:
Regarding 1)
I was trying to understand how a cell processes information coming in the form of biochemical signalling molecules known as ligands and reacts to them. A cell receives extracellular and intracellular signals and reacts to these signals by producing specific proteins and/or RNA molecules which perform certain precise functions. For example, an intracellular signal can tell the cell that "DNA is damaged and it needs a repair". Then, as a response to the said intracellular signal, the cell produces proteins and/or RNA molecules that repair the DNA. On the other hand, an extracellular signal can come from outside the cell, asking the cell to produce certain proteins and/or RNA molecules for performing certain specific tasks. Cells execute these tasks with precision. For example, Epidermal Growth Factor (EGF) is an extracellular signal that asks the cell to produce proteins responsible for cell proliferation. Below, I am trying to give a brief overview of the cellular response to signals, which to me is a particular instance of my question "how molecular interactions give rise to cellular behaviour".
Step 1:
Ligands bind to specific receptors present in the cell. Note that for extracellular signals the receptors usually lie in the cell membrane, but for intracellular signals the receptors usually lie inside the cell (e.g. steroid hormone receptors).
Step 2:
The binding of signalling molecules to the receptors causes changes in the configuration of the receptors, which trigger chains of biochemical events known as signalling pathways. When different signalling pathways interact (crosstalk), we get a network called a signal transduction network. The main function of this network is to relay the signal (often via amplifying/damping) by activating various proteins (e.g. via phosphorylation), producing second messengers (e.g. releasing calcium ions into the cytosol) and finally activating special kinds of proteins called transcription factors (for example, p53 is a transcription factor activating genes that produce proteins responsible for tumour suppression), which are responsible for regulating genes in the DNA to produce specific proteins and/or RNA molecules that will perform the biological task asked for by the signalling molecules.
Step 3:
When a transcription factor gets activated by a signal transduction network, it regulates (often stimulates/represses) the activity of genes which produce either non-coding RNA molecules or messenger RNA containing a copy of the information present in the genes. This process of copying the gene-encoded message into RNA molecules is called transcription. The non-coding RNA molecules have important biological roles on their own, but they do not get converted to proteins. However, messenger RNA molecules (whose function is mostly to copy the gene's instructions) can enter ribosomes for protein synthesis via a process called translation. Often, the produced protein is another transcription factor for some other gene(s); then it regulates other genes which may produce proteins and/or RNA molecules. Again, the produced proteins could be transcription factors for some other genes, and so on. This results in a network called a transcription network (often called a gene regulatory network), which regulates gene expression. The expression of a gene is the amount of protein produced by the gene at a given time.
So now my initial question "how molecular interactions give rise to cellular function" took a more precise form: how do signalling molecules produce gene expression? The answer lies in Steps 1-3.
Regarding 2)
According to Uri Alon, transcription networks can be modelled as directed signed graphs whose edges are labeled by and . An edge means the product of gene is a transcription factor that can stimulate the rate at which the gene is transcribed. Similarly, an edge means the product of gene is a transcription factor that can repress the rate at which the gene is transcribed. This rate depends on both the concentration of the transcription factors (produced by ) and the chemical affinity of the binding of the transcription factor to the promoter of the gene .
Now, the arrows in the transcription network always vary due to mutations in the DNA (which have been experimentally shown to be random events). According to Uri Alon, network motifs are the patterns that survived evolution, that is, random mutations. A way to identify such motifs is by identifying patterns in gene regulatory networks that occur much more often than in randomized networks having the same number of nodes and arrows. For example, negative autoregulation (the walking negative feedback loop), positive autoregulation (the walking positive feedback loop), feedforward loops, etc. are examples of such motifs.
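The motif-detection idea can be sketched in code: count a pattern in the given digraph and compare with random digraphs having the same number of nodes and arrows. (Real analyses use degree-preserving randomizations and z-scores; this is only a toy version with made-up data.)

```python
import itertools
import random

def count_patterns(nodes, edges):
    """Count directed 3-cycles and feedforward loops (FFLs) among node triples."""
    E = set(edges)
    cycles = ffls = 0
    for a, b, c in itertools.permutations(nodes, 3):
        if (a, b) in E and (b, c) in E and (c, a) in E:
            cycles += 1          # a -> b -> c -> a
        if (a, b) in E and (b, c) in E and (a, c) in E:
            ffls += 1            # direct a -> c plus indirect a -> b -> c
    return cycles // 3, ffls     # each 3-cycle is seen once per rotation

def mean_ffls_in_random(nodes, n_arrows, trials=200):
    """Null model: random simple digraphs with the same numbers of nodes and arrows."""
    pairs = [(u, v) for u in nodes for v in nodes if u != v]
    total = 0
    for _ in range(trials):
        total += count_patterns(nodes, random.sample(pairs, n_arrows))[1]
    return total / trials

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (0, 2), (1, 3)]   # contains exactly one FFL, no 3-cycle
print(count_patterns(nodes, edges))        # (0, 1)
```

A pattern would then count as a motif (resp. anti-motif) when its count in the real network is far above (resp. below) the null-model mean.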
Next, the big question: if these patterns have been selected via natural selection, then they must serve some purpose. For example, Uri Alon demonstrated the purpose of negative autoregulation by showing that its dynamics (the rate of change of the concentration of the gene product, governed by the transcription factors and the binding strength of the transcription factor to the promoter of the target gene) is very special (via the Hill equation), and plays a crucial role in "achieving and maintaining steady states of the concentration of the proteins produced by the genes activated by transcription factors".
A question in this regard:
I agree that there are patterns in gene regulatory networks chosen by natural selection. Biologists found some and showed their significance in terms of their individual dynamics (at least in the portion I have read so far). However, it is not clear to me how the dynamics of these motifs would behave when they are placed in sufficient numbers inside a very large gene regulatory network responsible for the regulation of certain genes.
I felt signal transduction networks are more like the kind of networks treated in Chapter 5 of this, and gene regulatory networks are more like -labeled graphs. If we want to understand Steps 1-3 in a single framework, then we may need to consider/develop a more general framework that encompasses both monoid-labeled graphs and signal flow graphs.
The way Uri Alon or Tyson describe cells, it seems to me that cells are just biological machines which take signals as inputs and give gene expressions as outputs. However, according to a recent study (I think ongoing) https://mpinb.mpg.de/en/research-groups/groups/cellular-computations-and-learning/news-ccl-eng/koseska-synergygrant-celearn.html: individual cells possess the ability to learn from their environment. Traditionally, cells have been viewed as passive executors of genetic instructions, responding to environmental signals often in a predetermined manner. This research challenges that paradigm by proposing that cells actively generate internal representations of their complex environments.
I found this perspective of a cell quite interesting!!
All this is fascinating, and I think ultimately it will lead to some great mathematics.
You wrote:
According to Uri Alon, a way to identify such motifs is by identifying patterns in gene regulatory networks that occur much more often than in randomized networks having the same number of nodes and arrows. For example, negative autoregulation (walking negative feedback loop), positive autoregulation (walking positive feedback loop), feedforward loops, etc. are examples of such motifs.
What are some non-motifs? I.e., what are some patterns that occur less often than in randomized networks having the same number of nodes and arrows? (If some occur more often than in randomized networks, others must occur less often.)
John Baez said:
All this is fascinating, and I think ultimately it will lead to some great mathematics.
Thanks very much!!! I also hope so!!
Continuing my remarks:
We can deduce from what Uri Alon wrote that these non-motifs - patterns that occur less often than in randomized networks having the same number of nodes and arrows - either serve no purpose, or are actually harmful.
I would like to study these non-motifs. Here's an easy way to start. If you look at signed graphs with a very small number of nodes and arrows, and find which ones Alon calls 'motifs', then the rest should be 'non-motifs'.
He is interested in motifs, but non-motifs should also be interesting.
Interesting!! Yes, it makes sense. In a way, I think you are asking "Why is natural selection saying non-motifs are harmful/non-useful?"
Right, but first I want to know what they are, and see if I can learn anything from that.
Thanks!! Yes, I got your point. I will try to understand "what Uri Alon means by a pattern which is not a motif". I will start with some very small signed graphs as you suggested.
Yes: just list all signed graphs with n nodes and m edges for small natural numbers n and m, cross off the ones that Alon calls motifs, and make a list of the rest.
Thanks!! Yes, I will do as you suggested.
Hi!!
I think I have an example of a pattern in biology which did not survive evolution. I was trying to understand these patterns in general.
Below, I am writing down my current understanding:
Just as Uri Alon defined a motif in a particular network as a pattern that appears much more frequently than in a random network with the same number of nodes and arrows, he used the term anti-motif for a pattern that appears much less frequently than in a random network with the same number of nodes and arrows. He also argued that this means that natural selection has deliberately rejected such patterns. So, from his definition, it seems that "not a motif" does not imply "anti-motif", but "anti-motif" does imply "not a motif". Also, it seems from the definition itself that the concept of motif/anti-motif is defined with respect to a particular network.
Example of an anti-motif:
The example is the three-vertex feedback loop, or directed 3-cycle. This is a big surprise to me, as I found such patterns very common in systems dynamics diagrams. The network in question is the neuronal network of C. elegans, a tiny worm composed of about 1000 cells, of which 300 are neurons. The 300 neurons are connected by about 5000 synaptic connections. Studies tell us that directed 3-cycles occur much less frequently in the neuronal network of C. elegans than in a random network. More surprisingly, according to Uri Alon, three-node feedforward loops (a direct influence together with an indirect influence) are found in much greater abundance than in a random network. Hence, he argued that although a directed 3-cycle and a three-node feedforward loop consist of the same number of nodes and arrows, natural selection has chosen three-node feedforward loops but rejected directed 3-cycles in the design of the neuronal network of C. elegans. However, he noted that networks from different fields (transcription networks, neuronal networks, social networks, food webs, etc.) usually display different network motifs, because each field has its own functions and its own constraints. Despite this, he said it is very surprising that three-node feedforward loops are motifs in both transcription networks and neuronal networks. He gave a hypothetical explanation of this similarity: neuronal networks and transcription networks are similar in function because both process information. More precisely, neuronal networks process information from sensory neurons to motor neurons, whereas transcription networks process information in the form of signals coming through the receptors of the cell to produce gene expression. Uri Alon said the following:
This similarity in function raises the hypothesis that evolution converged on similar network motifs in both networks to perform important signal-processing tasks.
Uri Alon discussed these in the Page 88-89 of the second edition of his book.
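To make the over/underrepresentation test concrete, here is a toy Python sketch of my own (not Alon's actual mfinder tool): count a three-node pattern (feedforward loop or directed 3-cycle) in a directed graph, then compare against degree-preserving randomizations via a z-score.

```python
import random

def count_pattern(edges, n, pattern):
    """Count 3-node subgraphs: 'ffl' (x->y, y->z, x->z) or
    'cycle' (x->y, y->z, z->x). Nodes are 0..n-1, edges are pairs."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
    count = 0
    for x in range(n):
        for y in adj[x]:
            for z in adj[y]:
                if z == x or z == y or y == x:
                    continue
                if pattern == 'ffl' and z in adj[x]:
                    count += 1
                if pattern == 'cycle' and x in adj[z]:
                    count += 1
    # each directed 3-cycle is found once per starting node, so divide by 3
    return count if pattern == 'ffl' else count // 3

def degree_preserving_shuffle(edges, swaps=1000, seed=None):
    """Randomize by double-edge swaps, keeping every node's in/out degree."""
    rng = random.Random(seed)
    es = set(edges)
    for _ in range(swaps):
        (a, b), (c, d) = rng.sample(sorted(es), 2)
        if len({a, b, c, d}) == 4 and (a, d) not in es and (c, b) not in es:
            es -= {(a, b), (c, d)}
            es |= {(a, d), (c, b)}
    return list(es)

def z_score(edges, n, pattern, samples=50, seed=0):
    """(real count - mean over randomized networks) / std deviation."""
    real = count_pattern(edges, n, pattern)
    rand = [count_pattern(degree_preserving_shuffle(edges, seed=seed + i), n, pattern)
            for i in range(samples)]
    mu = sum(rand) / samples
    sd = (sum((r - mu) ** 2 for r in rand) / samples) ** 0.5
    return (real - mu) / sd if sd > 0 else 0.0
```

On this reading, a motif is a pattern with a large positive z-score and an anti-motif one with a large negative z-score; the choice of null model (here, degree-preserving rewiring) is exactly the modelling choice that the critics of the motif concept question.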
Inspired by @John Baez (and the concept of systems archetypes), I have a hypothesis in this direction:
Evolution also converged on "similar network motifs and anti-motifs" in all the biological networks which perform similar functions. So, if we see some differences in the network motifs or anti-motifs between two biological networks, then we may conclude that the two biological networks perform different biological functions.
It is interesting to me to note that in human-made electronic circuits, directed 3-cycles are motifs (as mentioned in the screenshot taken from page 88 of Uri Alon's book), whereas they are anti-motifs in the neuronal network of C. elegans.
Uri Alon and Shalev Itzkovitz attempted to develop a general mathematical formalism (based on random networks) for network motifs in Subgraphs and network motifs in geometric networks, but their study showed that their model is insufficient to capture the reasoning behind natural selection in biological networks.
I am now getting the feeling that systems archetypes are the wrong patterns in human-made systems (modeled as signed graphs), found by humans, whereas anti-motifs are the wrong patterns in biological systems (modeled as signed graphs), found by natural selection.
Possibly there are many more anti-motifs besides the 3-cycle example. I will search for more such examples.
This is very interesting! I think with more examples, and more thought about the various choices of semantics of signed graphs, one might be able to figure out what makes certain signed graphs into motifs or anti-motifs (in a way that depends on what they're used for).
Thanks very much!! Yes, I completely agree.
I have a feeling that dynamics may not be the only way to find the importance of motifs, or the harmfulness of anti-motifs. There may be something else too, although at the moment I do not have much idea about what else it could be.
I feel their usefulness or harmfulness must somehow depend on what they "mean" - and thus, on some semantics. This semantics may not be dynamics. But as a physicist, I can't help but feel it's dynamics - that is, what the motifs "do".
When you read more of that book, you'll probably learn more about what motifs "do", or "mean". And that will be a major clue.
Thanks!! Yes, I got your point. I will read more of that book, and also will try to find out more examples of anti-motifs in biology.
Hi!!
I just found out that "the concept of motifs/anti-motifs in biological networks" (in the way Uri Alon discussed it) is controversial. For example, please see here (references 70 to 77), which argue against the idea that motifs are chosen by natural selection for some purpose and anti-motifs are rejected by natural selection for some purpose. For example, in On the origin of distribution patterns of motifs in biological networks, Arun S Konagurthu and Arthur M Lesk argued in the Discussion and conclusion section against Uri Alon's hypothesis that "3-cycles are anti-motifs and feedforward loops are motifs". Similarly, in the paper Network motifs: structure does not determine function, the authors argued that there is no characteristic behaviour for a motif, and that with the right choice of parameters and internal structure, very different, indeed even opposite, behaviours may be obtained.
Now, I am very confused!! Whom should I believe: Uri Alon's motifs, or the scientists who criticized the idea of motifs? I am more confused because I am not a biologist by training.
Well, who seems more correct?
This counting of motifs/anti-motifs in a structure reminds me of Ramsey theory.
Kevin Carlson said:
Well, who seems more correct?
At the moment, it is difficult for me to answer your question in a concrete way because it has been only a few days since I started learning about Uri Alon's work on network motifs, and it was only yesterday that I came across those papers which argue against it. However, below I am trying to write down my current understanding of the issue, which may of course evolve in the coming days after learning more about both Uri Alon's work and the work of the scientists who argue against it.
I find the following idea of Uri Alon interesting: defining a motif in a particular network as a pattern that appears much more frequently than in a random network with the same number of nodes and arrows, and defining an anti-motif as a pattern that appears much less frequently, as indicators of natural selection and natural rejection respectively.
Although I have to read them in more detail, I think papers like On the origin of distribution patterns of motifs in biological networks argued that the overrepresentation and underrepresentation of various patterns are actually consequences of the choice of mathematical model (the underlying random-graph model) used in the study of gene regulatory networks, rather than of natural selection or natural rejection.
I am convinced (at least from the portion I have read so far) that the dynamics of certain over-represented patterns like negative autoregulation, positive autoregulation, and three-node feedforward loops can be shown to be interesting (as they serve specific purposes in gene regulation) when we model the dynamics using, for example, the Hill equation.
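For instance, Alon's classic claim that negative autoregulation speeds up the response time can be checked numerically in a few lines. This is only a toy sketch with hypothetical parameters, not the analysis from the book: a gene repressing its own production via a Hill function is compared with simple (constant-production) regulation tuned to the same steady state.

```python
def simulate(f, x0=0.0, dt=0.001, t_end=10.0):
    """Forward-Euler integration of dx/dt = f(x); returns the trajectory."""
    xs, x = [x0], x0
    for _ in range(int(t_end / dt)):
        x += f(x) * dt
        xs.append(x)
    return xs

def rise_time(xs, dt=0.001, frac=0.9):
    """Time to first reach `frac` of the final (steady-state) value."""
    target = frac * xs[-1]
    for i, x in enumerate(xs):
        if x >= target:
            return i * dt
    return None

alpha, beta, K, n = 1.0, 10.0, 1.0, 4  # hypothetical parameters

# negative autoregulation: production repressed by the protein itself
nar = simulate(lambda x: beta / (1 + (x / K) ** n) - alpha * x)

# simple regulation with constant production tuned to the same steady state
x_star = nar[-1]
simple = simulate(lambda x: alpha * x_star - alpha * x)

print(rise_time(nar) < rise_time(simple))  # True: NAR responds faster
```

The NAR circuit starts at full production rate and only throttles down near the steady state, so it reaches 90% of its steady state much sooner than the simply regulated gene, matching the intuition Alon derives from the Hill-equation model.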
However, it is not clear to me whether, if we place these motifs inside a very large network, the special dynamics of these patterns will remain special. (In reality, this is the relevant case, because motifs do not exist in isolation; they usually occur within some big network in biology.) This issue has been raised, for example, in the paper Do motifs reflect evolved function?--No convergent evolution of genetic regulatory network subgraph topologies. Although I am yet to read the paper (it is not accessible to me), from the abstract it seems the authors argue that motifs are not of greater importance to the functioning of the network than arbitrary subgraph patterns.
However, my previous exposure to the systems dynamics literature, which discusses the importance of feedback loops and systems archetypes in causal loop diagrams, was very convincing and created a personal bias towards Uri Alon's theory of motifs and anti-motifs. That said, I have not studied how the dynamics of the various feedback loops in a large causal loop diagram affect the dynamics of the whole network. I think it could be interesting to see whether the dynamics of a whole causal loop diagram has any special relationship with the dynamics of its underlying feedback loops. If the answer is yes, then at the moment I may have a bias towards Uri Alon's work; otherwise, in the other direction. However, I think it is also important to understand the methodology that the scientists arguing against Alon's work are using. It may be that their conclusions are also biased by their choice of mathematical models.
Yes, it's a very good question whether anybody has actually established with any sort of rigor that small feedback loops in a large CLD are actually relevant to the long-term dynamics. I guess you can choose parameters to force them to be relevant but perhaps there's some way to make this question non-trivial.
Peva Blanchard said:
This counting of motifs/anti-motifs in a structure reminds me of Ramsey theory.
Thanks!! This is interesting. Although I have not read much Ramsey theory, from a quick glance I think it also argues against Uri Alon's work on motifs: Ramsey theory says that certain patterns become unavoidable once a structure is large enough, and network sizes in biology are usually very large.
Kevin Carlson said:
Yes, it's a very good question whether anybody has actually established with any sort of rigor that small feedback loops in a large CLD are actually relevant to the long-term dynamics. I guess you can choose parameters to force them to be relevant but perhaps there's some way to make this question non-trivial.
Thanks! Yes, I agree with your points.
Another thing I often find a little awkward in biology is that studies start with very simple examples (from the point of view of biology), and if they discover some patterns in these simple examples, they hypothesize statements for general complex systems. I know that due to computational complexity we do not have many choices in this regard at the moment, but it still feels a little strange to me.
I just found an interesting study: Network connectivity during mergers and growth: optimizing the addition of a module, which argues that the importance of motifs lies in the way these motifs are connected to the entire network.
In particular, the author wrote the following at the end:
In contrast to several studies showing that global dynamics of a system can depend on the structure of subgraph motifs, our results suggest that “how” motifs are connected in the network may be as important as their structure.
I will read more of this paper.
Sorry, I haven't been following closely, but just to remind us that the big book was warning against leaning too heavily on motifs as an indicator of function in biology.
However, statistical abundance by itself does not imply that these circuits are fundamental bricks of biological systems. Fundamental circuits of biological networks can be underrepresented and appear only once in the network, yet they can have overwhelming importance for the functioning of the cell.
Motifs are good for finding basic building blocks that are statistically repeated ubiquitously. However, the functional significance of these network motifs has been debated, and results indicate that the link between motif and function does not exist (Ingram et al., 2006; Payne and Wagner, 2015; Macıa et al., 2009; Ahnert and Fink, 2016). Indeed, except for the FAN motif, other motifs found by Alon and coworkers (Milo et al., 2002) do not lead to synchronization or other clear functional roles. Thus, the mere statistical significance of the network motif may not confer a definitive functionality within the cell machinery. (p. 332)
The chapter (17) then goes on to give statistical analyses of motif abundance. Naturally, fibrations appear to them as much more important.
Adittya Chaudhuri said:
Another thing I often find a little awkward in biology is that studies start with very simple examples (from the point of view of biology), and if they discover some patterns in these simple examples, they hypothesize statements for general complex systems. I know that due to computational complexity we do not have many choices in this regard at the moment, but it still feels a little strange to me.
Science proceeds by being bold and making simple guesses, then later correcting them as needed. It's a good method. Simple laws are often reasonably good approximations to the truth. But it's not like math, where our papers are very likely to be 100% correct, because we can prove things. In science you expect to be wrong a lot of the time.
Motifs are good for finding basic building blocks that are statistically repeated ubiquitously. However, the functional significance of these network motifs has been debated, and results indicate that the link between motif and function does not exist (Ingram et al., 2006; Payne and Wagner, 2015; Macıa et al., 2009; Ahnert and Fink, 2016). Indeed, except for the FAN motif, other motifs found by Alon and coworkers (Milo et al., 2002) do not lead to synchronization or other clear functional roles.
It would be good to read these papers (although I don't have time now). I suspect that "synchronization" is less important than people want it to be. Human machines often require events to happen in a synchronized schedule, requiring some sort of clock. But I suspect biology can't afford to be so precise. Or maybe biologists mean something much less precise when they talk about "synchronization".
Yes, chap. 21 for that:
One of the main advantages of using symmetries to characterize dynamics is that the synchrony patterns that we predict are combinatorial/topological properties of the network, thus largely independent of the explicit form of the dynamical equations. Therefore, we do not need to specify the equations that describe the dynamics of the system—be it a neural system, a genetic network, or any other physical, chemical, or biological system—beyond the general form from Definition 3.10, as long as the ODEs are admissible for the graph. We, therefore, derive all our results for a general type of dynamics, which can then be specialized to particular systems. However, particular systems are characterized by a large number of parameters and most of them are required to be the same for symmetry to exist. Unfortunately, this ‘uniformity condition’ is never strictly realized in biology. How is the coherent synchronization necessary for life to emerge?
We treat this question below. We first elaborate on an ideal model of gene expression that predicts perfect synchrony. Then we show that the conditions for this model are not realized in biology due to the broad parameter space of interactions among genes, proteins, and small molecules. We discuss how to reconcile this result with the observed (quasi-perfect) synchrony in real systems.
John Baez said:
Science proceeds by being bold and making simple guesses, then later correcting them as needed. It's a good method. Simple laws are often reasonably good approximations to the truth. But it's not like math, where our papers are very likely to be 100% correct, because we can prove things. In science you expect to be wrong a lot of the time.
Thanks very much!! I am getting your points. I agree, and I am trying to adopt this mindset.
John Baez said:
Motifs are good for finding basic building blocks that are statistically repeated ubiquitously. However, the functional significance of these network motifs has been debated, and results indicate that the link between motif and function does not exist (Ingram et al., 2006; Payne and Wagner, 2015; Macıa et al., 2009; Ahnert and Fink, 2016). Indeed, except for the FAN motif, other motifs found by Alon and coworkers (Milo et al., 2002) do not lead to synchronization or other clear functional roles.
It would be good to read these papers (although I don't have time now). I suspect that "synchronization" is less important than people want it to be. Human machines often require events to happen in a synchronized schedule, requiring some sort of clock. But I suspect biology can't afford to be so precise. Or maybe biologists mean something much less precise when they talk about "synchronization".
After reading this https://mpinb.mpg.de/en/research-groups/groups/cellular-computations-and-learning/news-ccl-eng/koseska-synergygrant-celearn.html, I was wondering whether "internal representations" of the external environment of cells (whatever that means to biologists) are the reason behind this lack of precision: the cells behave according to their internal representations, which vary, and this in turn makes them less synchronized. For example, while learning a concept, our internal representation of the concept is often wrong at the beginning (which may lead to imperfections, asymmetry, etc.), but slowly it tends to get better, though never perfect.
David Corfield said:
Yes, chap. 21 for that:
One of the main advantages of using symmetries to characterize dynamics is that the synchrony patterns that we predict are combinatorial/topological properties of the network, thus largely independent of the explicit form of the dynamical equations. Therefore, we do not need to specify the equations that describe the dynamics of the system—be it a neural system, a genetic network, or any other physical, chemical, or biological system—beyond the general form from Definition 3.10, as long as the ODEs are admissible for the graph. We, therefore, derive all our results for a general type of dynamics, which can then be specialized to particular systems. However, particular systems are characterized by a large number of parameters and most of them are required to be the same for symmetry to exist. Unfortunately, this ‘uniformity condition’ is never strictly realized in biology. How is the coherent synchronization necessary for life to emerge?
We treat this question below. We first elaborate on an ideal model of gene expression that predicts perfect synchrony. Then we show that the conditions for this model are not realized in biology due to the broad parameter space of interactions among genes, proteins, and small molecules. We discuss how to reconcile this result with the observed (quasi-perfect) synchrony in real systems.
Thanks!! Interesting!! I will read Chapter 21.