You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Patrick raised a good point in another thread that had become a kind of toxic quarantine nightmare; I wanted to respond to it here … in an attempt to rescue it….
@Patrick Nicodemus said:
Amar Hadzihasanovic I think that the appropriate level of scrutiny for applied category theory (in machine learning, in systems modelling, in engineering) is higher than for category theory as applied in algebraic topology, algebraic geometry, programming language theory and so on. Category theory has been extensively applied to great effect in those domains, so we have lots of evidence that it works there. We don't have as much evidence that it works outside of those domains.
I have yet to be persuaded by many accomplishments/promises of applied category theory … I just don't bring up my criticisms constantly because it doesn't seem very collegial to constantly challenge colleagues and say that their work is not meritorious, and I don't want to be divisive. But here is one prominent textbook in applied category theory that uses diagrams in a way that is sometimes purely aesthetic with no mathematical content, which I view as extremely poor scientific communication from someone who should know better. It gives the impression to students that there is some deep profound insight to be gained simply by turning their ideas into diagrams of this form, even if they don't have categorical meaning.
Point 1. Pedagogy.
I just want to raise the point that when you’re teaching / learning something new the simple act of reliably translating is meaningful. Obviously it cannot stop there, and “what is this an example of?” needs rails, but that is how I interpret books like the above book excerpt — “hey, this objects + arrows stuff is intuitively appealing, and if you follow the exercises we’ll show you how to make it precise too!”
Point 2.
Is ACT … not really applied?
On some level I’m in complete agreement. I was looking through some work by VI Arnold this morning that I needed for work, and the sheer density of useful theorems was quite a contrast with ACT. That’s not a fair comparison of course, but it’s relevant for understanding what people like me who make a living working on problems that aren’t “unique up to unique iso” think is “applied”, vs the use in ACT. :)
Possibly a constructive direction is to build some guidelines or best practices or assistance for when someone wants to market their work as applied. Maybe they really just want to do CT and it's just a marketing decision, or maybe they'd genuinely love more subject-matter folks to work with. I think this Zulip can and does serve as an incubator for such discussions… can we do more?
Point 3. Let’s keep a fair score.
I also feel a lot has happened in the past that is incredibly applied / practical that almost doesn't get called “applied category theory” now (perhaps to avoid claiming others' work under a banner they didn't choose?) — the use of the Curry–Howard isomorphism, monads, etc. has transformed how concurrency and multiprocessing work. I remember having to write Monte Carlo simulations in FORTRAN code that handled multiprocessing… oh dear god. And it's still painful in Python, and even C. But Scala? You essentially just write the code you would anyway and then tell the compiler “hey, parallelize this code, wouldja?” And it… it just works, pretty much. It's incredible. Haskellers have similar experiences according to Bartosz Milewski. That's only possible because of category theory, and it's incredibly practical. Twitter famously moved much of its Ruby code to Scala, and the potential for code with this kind of rigorous reliability is why, IMO.
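To make “you essentially just write the code you would anyway” concrete, here is a minimal sketch in Haskell rather than Scala (since Eric notes Haskellers report the same experience); the toy workload and names are mine, but Control.Parallel.Strategies is the standard library for this:

```haskell
import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

-- A toy Monte Carlo-ish workload: an expensive score per seed.
score :: Int -> Double
score n = sum [sin (fromIntegral (n * k)) | k <- [1 .. 100000 :: Int]]

main :: IO ()
main = do
  -- Ordinary code, plus one annotation: evaluate the result list
  -- in chunks of 16 elements, each chunk in parallel.
  let results = map score [1 .. 256] `using` parListChunk 16 rdeepseq
  print (sum results)
```

Compile with `ghc -threaded` and run with `+RTS -N` to use all cores; delete the `using` clause and you get the sequential program back, unchanged.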
Ryan and David’s company is doing such cool things, and in his talk he makes it quite clear they worked through some genuinely deep and tricky challenges to make a painful and error-prone process of DB migration and maintenance far more reliable and well-controlled. The link between DB query syntax and profunctors was especially exciting, and that’s just scratching the surface.
A key thing in all of this is that it's good for the community to do outreach and translate category theory out of the language it's most naturally beautiful in, into them words what people actually use. This happens naturally, in a sense, when CT folks can't just talk to each other, but actually have to interact with people from other fields. So we could try to simulate that. I haven't made it to ACT yet — are there mixers or hackathons specifically targeting this?
I think the field can get there. I hope. And it's important to recognize that what people who work in applied fields mean by “applied” differs from how it is often used in ACT, which causes some fatigue or skepticism, I think.
Anyway, that’s just my view as very much a non-expert non-academic. Thanks for bringing this up Patrick.
Thanks for your response, Eric. I did explicitly include programming language theory as part of my list of fields where category theory has successfully been applied for a long time and has a solid history/track record, so Haskell and Scala are not really what I had in mind here. As you say, of course they are applications of category theory, so it is a somewhat arbitrary distinction. I am in effect defining "applied category theory" to include a certain group of researchers who are relatively closely networked with each other and are experimenting with new and bold applications such as economics. But if others take umbrage with this, I'll use other language.
Apologies if I missed some of the context. I think maybe the tension between “Applied Category Theory” and “applied category theory” is a source of difficulty. That's one of the downsides of using a generic term as a Brand; the upside being: you don't have to define new terms to tell people what you do.
Eric M Downes said:
Apologies if I missed some of the context. I think maybe the tension between “Applied Category Theory” and “applied category theory” is a source of difficulty.
Indeed. ACT did not start this, of course: Applied Mathematics itself is often not an umbrella term for "mathematics that has applications", but a clever name for a specific subfield of differential equations. And yet one should not expect to see any differential equations when learning Applied Model Theory, and if one does, it's not necessarily the model theorists who are to blame.
In the department at Adelaide, there used to be two distinct camps of applied mathematicians: plumbers and gamblers. The former did applied DEs like fluid dynamics or more generally deterministic systems (numerical solutions, Matlab, etc., not 'using Lie algebras and algebraic geometry to study Painlevé transcendents'), the latter did stochastic modelling like epidemiology. They are still the two main groups, though the stochastic people managed to branch into data science, network analysis and so on. But that department is literally only 1/4 pure maths, 1/2 applied, and 1/4 stats. I know in some maths departments it is actually all (or almost all) pure maths.
That many applied mathematicians work in engineering departments tells you more about the specific applications they do, rather than what applied mathematics is.
I love that, “plumbers and gamblers” … I believe that in another 10-20 years there will also be… “shibarists”, “climbers”? (people who play with load-bearing strings) commonly seen in the company of the others. And that does fall into the categories Patrick mentioned, and was mentioned by Madeleine.
IMO those things you mentioned (stochastic methods, diff eq, graph/network theory) are the most easily useful for applied problems because of three things: homomorphisms, stability, and software.
By homomorphisms I mean there are theorems (in the case of probability: convolutions of probability distributions add cumulants, convex combinations average moments, etc.) that allow you to add specific behaviors to a model that the client asks about / that arise, with little thought or effort, without generative effects. (The cumulant fact is spelled out just after these definitions.)
By stability, I mean the ability to translate (the violation of) your assumptions into equivalence classes of qualitative behavior.
By software, I mean: can someone who doesn’t know/remember how to prove the theorems write a simple program that yields answers?
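To spell out the probability example (standard facts, not something from the thread): for independent random variables the cumulant generating function is additive, so every cumulant of a sum is the sum of the cumulants; and a convex mixture of distributions averages moments:

$$K_{X+Y}(t) = \log \mathbb{E}\, e^{t(X+Y)} = K_X(t) + K_Y(t) \quad\Rightarrow\quad \kappa_n(X+Y) = \kappa_n(X)+\kappa_n(Y)$$

$$\mathbb{E}_{pF+(1-p)G}[X^n] = p\,\mathbb{E}_F[X^n] + (1-p)\,\mathbb{E}_G[X^n]$$

This is exactly the “add behaviors with little thought or effort” property: the model composes along the operation.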
So in particular I think working with monoidal categories using string diagrams etc. could one day be just as common. AFAICT it has all but the last, and at the rate Bob Coecke et al. are going that won't be an impediment for long!! :)
(Caveat: Though now I think of it, tensor networks have more or less all those things I mentioned except an abundance of software, which should be no problem to overcome… so maybe I missed something else like “ease and likelihood of learning it when you’re an undergrad” Hmmmm)
As someone with an engineering education, I am excited about category theory. What excites me most currently, though, is not any particular idea for using categories to do a certain kind of modelling. Instead, I find the language of category theory to be helpful for clarifying and organizing mathematical thinking. I hope to better understand how to use the ideas of category theory for this purpose by studying areas where it has been used effectively - such as algebraic topology, and algebraic geometry. This is already a lot of fun! But someday, as a long-term hope, I would like to be one of the people who work to organize and clarify mathematical thinking in the area of medical imaging. And I am hopeful that category theory can provide a lot of helpful patterns and tools for such a project.
I fail to understand the criticism of ologs, even putting aside that this is a criticism of the first page of the second chapter of literally the first book ever written in Applied Category Theory (with a capital 'A'). You can define finitely presented categories by drawing diagrams like that. That's perfectly formal and rigorous and real math, even if it's not very much real math. Obviously this is far from the end of the story!
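For concreteness (my gloss; an assumption about which diagram is meant, since the excerpt itself isn't reproduced here): a box-and-arrow picture with two boxes and one arrow presents the free category on that graph,

$$\mathbf{2}: \quad A \xrightarrow{\;f\;} B$$

with the identity arrows left implicit; imposing commuting-diagram equations on such pictures is exactly what “finitely presented category” means.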
Okay. I agree that this denotes a category sometimes known as 2, the second ordinal number. Sure, that's mathematically contentful. But this is something of an easy out imo, because then the labels don't have to mean anything or have significance. Mathematically the labels don't exist. My criticism is that I don't see how this mathematically models the idea that we put things in boxes. It does look like the diagrams get better through the rest of the book, but this turned me off. I personally will just say I'd be unable to recommend it to any of my friends in the applied sciences with the implication that it will assist them with their research; though perhaps this just reflects that I don't understand the utility of the ideas in the book.
A lot of so-called applied category theory might be called "hopefully eventually applied category theory", and much of Spivak's work has been of that sort - but a lot of it is actually getting applied. I forget if his book on ologs came out before or after @Ryan Wisnesky started the database company Conexus, which uses these ideas. Ryan would know.
If the book came out afterwards, Spivak could have better illustrated the utility of the ideas in his book by talking about the software used by Conexus. But perhaps the book came first and then Spivak and Wisnesky started the company!
Either way, a bunch of the same math is now being used in AlgebraicJulia, a software package developed by a large number of people including but not limited to many at the Topos Institute. I'm working with a team of people to apply David's ideas (and a bunch of other ideas) to agent-based models in epidemiology.
Given all this, I think any book anyone wrote explaining these ideas would quickly go out of date. It might still be useful. But a lot of people seem to instead be writing blog articles, and to keep up with those I highly recommend LocalCharts.
You'll see the idea of ologs has been greatly extended and is being used and further developed in a diversity of ways.
I'm pretty sure the book came first. But, yes, I'd send people in the applied sciences interested in learning about applied category theory to the newer book "Seven Sketches..." these days, still being sure not to imply that this book would assist them in their research in the same way as a good book on, like, matrix factorizations would. But I don't think people are claiming, and if they are they shouldn't be, that ACT currently has books that are just going to directly improve the quality of a typical applied scientist's research. That's a pretty high bar. It's more at the stage now where you probably have to actually collaborate with a category theorist to seriously make use of ACT ideas.
I completely agree. Unfortunately I know people innocent of category theory who have read Category Theory for the Sciences or Seven Sketches hoping they would come out at the other end being able to apply category theory to their work. Alas, it's not that easy.
Whenever I get a chance I tell people that it took me 10 years of work to reach the point of helping people use category theory to create useful software, and it required collaborating with quite a few mathematicians, computer scientists and also - crucially! - some experts in epidemiology who also know a lot of computer science and some category theory. That is, some experts who have a particular task they want to get done, who know enough math and computer science to realistically assess what can be achieved, and are willing to put years of work into it, working with a team of others.
Eventually it will all get easier.
Kevin Carlson said:
I'm pretty sure the book came first. But, yes, I'd send people in the applied sciences interested in learning about applied category theory to the newer book "Seven Sketches..." these days,
I loved seven sketches!
… and I have a gripe. (At least with the free version… haven’t seen the published version) A lot of the one-off applications in the examples are… very thin? They fail to appreciate some important and basic ideas in those fields.
Example*: claiming that chemistry can be treated as a monoidal category, with no further caveats, is just plain wrong. And David could have known this — there is a famously massive crack in the Chemistry Building at Berkeley (viewable from the street even) from an LAH explosion (Lithium Aluminum Hydride). It directly demonstrates non-associativity. Roughly:

(LAH + H2O) + RRCO → :explosion: + RRCO
(given enough of ea. are present)

LAH + (H2O + RRCO) → LAH + (hydrolysis products)
(The latter is a substitution reaction on a ketone)

So when you use LAH you never use water as a solvent, and carefully desiccate any solvent you do use, even in a reaction that would eliminate water. (Like hydrolysis of fats.)
That’s the danger of claiming an application without consulting more folks from those fields.
I am ok with it, I make mistakes like this all the time, but I do suffer for it and it’s not uncommon for people to think I’m an idiot. :) Many scientists who read stuff outside their field are looking for the “first exit off the highway” and will just dismiss you if you make such mistakes or recommend such resources.
Yeah, I think that's an easy complaint to want to make. Possibly the policy should be not to even mention any semi-realistic applications that haven't already been fleshed out, without talking them over in detail with a domain expert or two. That would be a bit constraining though; the examples, even if they may need substantial refinement to really work, can certainly still be inspiring. The audience isn't just suspicious working scientists, after all...It's a tricky question, I think.
I just wish he'd bought some ppl from chemistry a few beers and been like “hey, do you like these two pages?” And then after explaining what a monoid was they'd probably have set him straight. (The problem applies even to reversible reactions, they just aren't as… demonstrative.) So to do that for every field does require a substantial Beer Budget, but hey, at least you make some friends! :)
I think the flavor of the book is all the different cool examples. I think there must be a way to have both.
I started working with David Spivak in 2011 or so, after he wrote the first functorial data migration paper (after he wrote the olog paper with Kent). Conexus incorporated and exited MIT in 2015. I understand that Conexus - and to some extent categoricaldata.net - is often not mentioned, or underplayed, in David's writings to avoid 'commercial bias', which I think is totally fine - it's not David's job to sell our awesome software, it is ours. That being said, we do love being cited and working with people, etc.
In any case, I think Eric may be noticing a phenomenon common to modeling - that all models are wrong, but some are useful. And to a mathematician, it may be unclear when a model is too simplistic to be useful in practice. The very first time I ran CQL's sigma data migration functor, it converted the data {Alice, Bob, Charlie} to the data {1,2,3}, because those two sets are isomorphic and therefore this behavior was mathematically correct. But as a model of data migration, that behavior was simplistic - or even wrong. But David had no way of knowing this, it is something I had to do a postdoc with him to fully communicate and then explore the implications of. So I take the 7 sketches book as a literal invitation to e.g. tell David et al their models are simplistic and work with them to develop more useful ones; and for the entrepreneurial minded, any one of the chapters in his book could even support a business...
I'm not sure whether I'm quite following your example, Eric. So I have maps in my putative monoidal category, simplifying names a little, LAH + H2O → :explosion: and H2O + RRCO → (hydrolysis products). And your point is, what, that in a monoidal category I should be able to tensor LAH on both sides of that second morphism, but in fact if I add LAH on the left I end up with :explosion: again on the right?
Yeah. Specifically

LAH + (H2O + RRCO) → :explosion: + RRCO, not LAH + (hydrolysis products).

Which works best if you consider RRCO a fat, so the RHS is the hydrolysis of the fat. Otherwise it just sits around and nothing reacts and you don't even have a magma :)
(to say what I think chemistry could potentially be modeled by… the space in which reactants of a rxn with a given solvent live looks like an n-simplex over a Moore closure. Given a finite rxn magma M, the closure cl(M) is the smallest containing monoid. The simplex means you form a subalgebra of cl(M) containing only convex combinations. Of course that fails Ryan's usefulness criteria because cl(M) can be exponentially large compared to M.)
(So normalized vectors containing every possible reactant from every binary combination of reactants and products. The extra caveat is that eg even with two reactants you can have side reactions and won’t have 100% yield. So it’s more like but this is getting quickly impractical.)
Eric M Downes said:
Yeah. Specifically
I don't understand this either. I don't see why you would consider that a mix of A and (B and C) and a mix of (A and B) and C are not the same thing. An associator is precisely what you need in order to be able to consider these two kinds of morphisms:

i.e. it gives you the possibility of making either B and C react together first, or making A and B react together first, in a situation where both of these two reactions are possible.
I think the point is that in real life, A and (B and C) causes an explosion, and (A and B) and C does not, and therefore the model should not be associative. This phenomenon also happens in programming languages that allow non-termination, and is why Haskell is not a category.
Ok, but what is the meaning of A and (B and C), and (A and B) and C precisely? Does it express a different order into which you mix the three components together?
Yeah, that's the impression I have, which is also the literal meaning of the parentheses, right?
It can be an ordering but that's not realistic. Multiple things are present at once. So the parentheses in (X + Y) + Z mean “X reacts with Y but not yet Z”, and recall that after a molecule of X reacts there are still plenty of others just like it left.
OK, sure, so an ordering of molecule-by-molecule reactions, as opposed to of mixing the macroscopic substances.
So, it does seem like normal sophomore chemistry, from my memory of this, also assumes this associativity. Nobody requires you to write parentheses on either side of a reaction equation. No?
Ryan Wisnesky said:
this phenomenon also happens in programming languages that allow non-termination, and is why Haskell is not a category
Can you explain this a bit more?
Kevin Carlson said:
So, it does seem like normal sophomore chemistry, from my memory of this, also assumes this associativity. Nobody requires you to write parentheses on either side of a reaction equation. No?
I included the parens for mathematicians.
Sophomore chemistry you might not even write the water, like if it’s a solvent. :/ Sophomore chemistry reaction equations are not quite that well-defined mathematically. The thing you want to be an equals or an implies comes with lots of unwritten contexts.
Like you might not even write the water. I did to show the point.
@Jean-Baptiste Vienney I'm identifying "causing an explosion" with Haskell's "undefined", i.e., a program running forever. In Haskell, you don't have associativity of function composition when undefined is involved, for example, undefined . id != undefined because on the LHS the explosion happens "later", which can be observed by Haskell programs. So to model Haskell correctly, you have to use domain theory and basically, 'keep track of what explodes when', which leads to beautiful topology etc.
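A minimal runnable sketch of Ryan's point (the observation via forcing to weak head normal form is standard; the comments are my wording):

```haskell
import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  -- (undefined . id) reduces to \x -> undefined (id x), a lambda already
  -- in weak head normal form, so forcing it succeeds:
  r1 <- try (evaluate (undefined . id :: Int -> Int)) :: IO (Either SomeException (Int -> Int))
  putStrLn (either (const "blew up") (const "fine") r1)   -- prints "fine"
  -- ...whereas forcing undefined itself explodes immediately:
  r2 <- try (evaluate (undefined :: Int -> Int)) :: IO (Either SomeException (Int -> Int))
  putStrLn (either (const "blew up") (const "fine") r2)   -- prints "blew up"
```

So the identity law f . id = f fails observably once ⊥ is in play, and tracking “what explodes when” is exactly the domain-theory fix Ryan mentions.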
Eric M Downes said:
Sophomore chemistry reaction equations are not quite that well-defined mathematically. The thing you want to be an equals or an implies comes with lots of unwritten contexts.
Sure... I guess my point in bringing this up is that these rough mathematical models are good enough to be a critical way in to chemistry for, say, roughly everyone who's ever learned it. So there would seem to be an argument that saying "Chemistry [i.e. in this case the highly simplified model of chemistry taught in high school] can be given the structure of an SMC" is still potentially quite useful. I guess, to draw out the distinction: is the complaint that chemistry doesn't really form an SMC here all that different from someone saying "space-time is a four-dimensional vector space" and receiving a response complaining about general relativity?
Related: I told someone who knows about money and resources about "resource theories" (symmetric monoidal posetal categories, also described in seven sketches) and he correctly pointed out that there is nothing in the theory accounting for resources decaying over time, the maintenance needed to preserve their value, etc. A fuller categorical economics would need to take these things into account.
I really doubt Fong and Spivak were trying to teach people chemistry in those examples involving chemistry, or present a formalism that would help chemists. I imagine they were trying to teach people monoidal categories based on their high-school knowledge of chemical reactions as things of this form:
A + B + C → D + E
But by the way, there's a whole branch of mathematical chemistry that studies 'reaction networks'. A reaction network conveys the same information as a Petri net. When each transition in a Petri net is equipped with a rate, you can extract a differential equation called the rate equation, and mathematical chemists have proved wonderful theorems about these equations; there are also some deep unproved conjectures about them. On the other hand, a Petri net is a presentation of a free commutative monoidal category. So while it doesn't cover everything one wants in chemistry, there is a pretty nice body of theory relating Petri nets to chemistry on the one hand and commutative monoidal categories on the other. I wrote a book about this.
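For readers who haven't seen it, the rate equation has a compact general form (standard in the reaction network literature; the notation here is mine): for species $$i$$ and transitions $$\tau$$ with rate constant $$r_\tau$$, input multiplicities $$s_i(\tau)$$ and output multiplicities $$t_i(\tau)$$,

$$\frac{dx_i}{dt} = \sum_{\tau} r_\tau \,\bigl(t_i(\tau) - s_i(\tau)\bigr)\, \prod_j x_j^{\,s_j(\tau)}$$

For a single transition A + B → C with rate $$k$$ this gives $$\dot{x}_A = \dot{x}_B = -k x_A x_B$$ and $$\dot{x}_C = k x_A x_B$$, i.e. the law of mass action.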
Slight aside John, but I'm interested that you still think of Petri nets as the things that generate free commutative monoidal categories rather than defaulting to whole-grain nets. Do you do this (a) out of habit (b) to interface better with chemists and chemical literature (c) because there's something you really like mathematically better (d) other?
Mainly (a), (b), (c) and (d).
Certainly chemists studying the rate and master equations are happy with commutative monoidal categories, not caring about the extra data in a symmetric monoidal category.
On the other hand, there's a certain sense in which the "ideal" way of presenting a symmetric monoidal category where you have complete control over whether the symmetry obeys equations like $$\sigma_{x,x} = 1$$ or not is a Σ-net - a concept I have some fondness for since I helped write a paper on it. My coauthors showed whole-grain Petri nets are the full image of prenets in the category of Σ-nets. What that means, roughly, is that they're the Σ-nets where you never have extra equations on the symmetry like $$\sigma_{x,x} = 1$$. So if that's what you want, then that's fine.
The category of whole-grain Petri nets has the apparent advantage of being a presheaf category. People using AlgebraicJulia will be drawn to that. But the morphisms in this presheaf category are too general to be correct for many purposes - as Kock points out, you often want only the 'etale' morphisms.
In short, there's a lot to say about this, but for chemistry either old-school Petri nets or whole-grain Petri nets work fine, and having written a book and some papers about the former, I tend to default to the former.
No, obviously they weren't trying to teach chemistry. (But these kinds of reactions do show up in even sophomore chemistry, though I don't have an example ready.)
I'm just trying to say, if "applied" is in the title this kind of thing becomes an issue, because it means something to the audience it doesn't mean to you/us. That's all.
Oh 100%. (in reaction to a post that is gone :) So how do we communicate that to people is an interesting question. I've had people dismiss that book for reasons like this, who were even willing to learn Haskell just to understand category theory. :)
I wish they wouldn't. It's a wonderful book, but... shrug we do what we can.
What I think is needed here is textbooks targeted for a specific domain. It isn't sufficient to talk about "applied category theory" in the broad sense; one needs "category theory for machine learning", "category theory for economics", "category theory for systems science", "category theory for engineering", etc, with concrete examples in the textbooks that are relevant to the domain at hand. The problem with "Seven Sketches" is that it's a grab-bag of different examples from various different domains, which isn't really useful if your domain isn't covered in one of the examples, and even if the domain is covered in the textbook, the examples from a particular domain aren't as in depth as they can be in a category theory textbook specifically dedicated to that domain.
This points to a larger point: the ACT programme today largely consists of category theorists who are trying to find domains to apply category theory in, with lots of hype that category theory may not live up to. I would consider the applied category theory programme to be successful if most of the people were domain experts using category theory as a tool in their fields, and there was no longer any need for category theorists to act as hype persons for category theory.
Just to note that the thing people are calling high school chemistry is actually something professional chemists think about day to day, in my experience (I've worked with a few). e.g. if you have a proposed reaction mechanism then you write down the reaction steps and then you balance them. When you do that you're not worrying about the order in which you mix the reagents, because they've already been mixed and you're trying to work out what's happening at the molecular level. To do that you're trying to find a formal combination of reactions that matches the observed result. It's basically Petri nets, although chemists don't call it that or think about it in those terms. The relevant + doesn't describe physical mixing, it's just taking the union of formal multisets of the species' names. This is only a small part of what chemists do, of course.
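A textbook instance of what Oscar describes (the ozone decomposition mechanism; standard chemistry, not taken from the thread):

O3 → O2 + O (step 1)
O + O3 → 2 O2 (step 2)
2 O3 → 3 O2 (net: the steps add as multisets and the intermediate O cancels)

In Petri net terms, each step is a transition and the net reaction is their formal composite; no physical order of mixing is involved.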
Eric M Downes said:
Kevin Carlson said:
I'm pretty sure the book came first. But, yes, I'd send people in the applied sciences interested in learning about applied category theory to the newer book "Seven Sketches..." these days,
I loved seven sketches!
… and I have a gripe. (At least with the free version… haven’t seen the published version) A lot of the one-off applications in the examples are… very thin? They fail to appreciate some important and basic ideas in those fields.
Example*: claiming that chemistry can be treated as a monoidal category, with no further caveats, is just plain wrong. And David could have known this — there is a famously massive crack in the Chemistry Building at Berkeley (viewable from the street even) from an LAH explosion (Lithium Aluminum Hydride). It directly demonstrates non-associativity. Roughly:

(LAH + H2O) + RRCO → :explosion: + RRCO
(given enough of ea. are present)

LAH + (H2O + RRCO) → LAH + (hydrolysis products)
(The latter is a substitution reaction on a ketone)
what exactly is the associativity that fails here [nevermind, I found it reading on]? Meaning, how large is the class of compounds that suffer from this problem, is this the only reason associativity can fail, why (at a fundamental physical level) does this happen...?
I think this is a very instructive example that teaches not to dismiss category theory as useless here, but instead propels the need to study even more general structures motivated by chemistry: like monoidal categories, where the tensor is a total operation, but the axioms of associativity (and unitality?) are not universally quantified.
call these "slow-monoidal categories" (name invented on the fly, on the blueprint of "skew-monoidal" :lol:)
As a mathematician, I find exciting that the mathematics we have is "too naive for the real-real world"; but also as a mathematician, I see no interest at all in actually solving the problem of modeling a chemical reaction using a slow-monoidal category. My aim is instead, "motivated by this example in chemistry [bla bla]", to write down a general theory of slow-monoidal categories, do they have a coherence theorem, is there an example in mathematics now that we have opened the Pandora's box, how do string diagram languages change for slow-monoidal cats, what is a slow-actegory...
I think this is also a very pragmatic divide between pure and applied mathematicians: "how lucky I am, the real world gives me an excuse to study modules whose lattice of submodules is totally ordered" versus "how lucky I am, mathematics gives me a tool to understand the real world"
(In short, it is extremely hard for me to understand what other appeal the real world can have, other than give pretexts to study mathematical [=linguistic] objects.)
I think we have to ask what an expression like A + B + C → D + E ordinarily means in chemistry - and then, a separate question, what it would have to mean for such expressions to be morphisms in a monoidal category, so that we can compose and tensor them.
Here's something that's not the correct answer to either of these questions: "if you mix a mole of A, a mole of B and a mole of C, you'll get a mole of D and a mole of E".
I would say that this "non-associativity" is happening at the level of process, not resource. Rather than saying (A + B) + C ≠ A + (B + C), the issue seems to be failure of associativity for the mix operation:

mix(mix(A, B), C) ≠ mix(A, mix(B, C))
why (at a fundamental physical level) does this [non-associativity] happen...?
Not sure how "fundamental" you mean to go. I will err on the side of basic-but-relevant, and we can get into detailed physics if you want.
I can offer the following at the level of thermodynamics and kinetics, which I think is the right level of description. I can elaborate on why.
A mole is a "practical quantity" of a substance (one mole of water is ~ 20 mL); when you pour one mole (~6.022 × 10^23 molecules; our smallest unit of reactant and product) of each of liquid substances A, B, and C into a beaker, fluid dynamics (including mixing) causes the molecules of A, B, C to come in contact with one another, affording them an opportunity to react.
Let us say that you want the reaction A + B → P to occur. Whether and to what extent this occurs depends on:
To address Spencer's hypothesis, all of the above remain true independent of good mixing or not. You can find some very well-mixed reactions that behave non-associatively. Chemists use stir-bars to mix things as well as possible, but certain reactions (like the LAH + H2O I mentioned) happen faster, are very thermodynamically favorable, and the result is a gas plus a bunch of heat. Hot gasses escape the beaker (or break it if confined), and so they drive the reaction forward, as per what I said in (2) and (4).
I have not told you why physical entropy behaves the way it does, but I could try to speak to that, or provide references. I have not addressed the relationship between physical entropy (a property of the system) and the mathematical entropy relating to probabilities over ensembles of systems, but I can do that. I have not discussed at all the measures (like Gibbs free energy) that physical chemists use to determine the kinetics of transition states and the thermodynamics of reactions in open systems, but I can do that. Finally, I have not addressed the "lore" of chemical observations like the Iodine clock, the Haber process, the replication of DNA, etc. that I would use to illustrate the above principles in a more concrete manner; please ask if that would be useful.
Regarding Jean-Baptiste's comments and others, I don't doubt there is some associative system in which all of this will work out. Because every magma embeds into a monoid, and so on. But, the operations of the enveloping associative algebra are non-trivial transformations of the original, in contrast to the simple example from Seven Sketches that motivated this.
Spencer Breiner said:
I would say that this "non-associativity" is happening at the level of process, not resource. Rather than saying (A + B) + C ≠ A + (B + C), the issue seems to be failure of associativity for the mix operation: mix(mix(A, B), C) ≠ mix(A, mix(B, C))
I like your idea, but disagree; please see the above. Happy to discuss more.
As per (**) the footnote is: the relevant thing to care about for a forward reaction in an open beaker/system is some variant of "free energy" $$F$$. In the simplest variant, $$F = \langle E \rangle - TS$$: $$\langle E \rangle$$ corresponds to an "ensemble" average of the energy $$E$$, which relates to the hamiltonian in John's thread on energetic sets, and the entropy $$S$$ with temperature $$T$$, for a system in which only heat but not gas/chemicals are allowed to escape the beaker, so temperature varies. For Gibbs free energy, which chemists use, another term is subtracted and I can talk about that if anyone would find it helpful.
fosco said:
I think this is a very instructive example that teaches not to dismiss category theory as useless here, but instead propels the need to study even more general structures motivated by chemistry: like monoidal categories, where the tensor is a total operation, but the axioms of associativity (and unitality?) are not universally quantified.
Is there any work smushing together linear temporal logic and monoidal categories in this vein? Because a casual peek at Google Scholar suggests not. Meanwhile something like Alternating-time temporal logic with resource bounds seems to be relevant for the first half of this notional smushing.
Eric M Downes said:
Spencer Breiner said:
I would say that this "non-associativity" is happening at the level of process, not resource. Rather than saying (A + B) + C ≠ A + (B + C), the issue seems to be failure of associativity for the mix operation: mix(mix(A, B), C) ≠ mix(A, mix(B, C))
I like your idea, but disagree; please see the above. Happy to discuss more.
I don't really see the complications of mixing as germane here. To me this says there should be an indexed family of mix processes (e.g., with/out stirring).
The point is that resources only interact through processes (i.e., mix), so your non-associativity says "Mix A and B ($$mix_{A,B} \otimes id_C$$), then mix C" ≠ "($$id_A \otimes mix_{B,C}$$) Mix B and C, then mix A"
The chemist cannot control the order of resource interaction in any but the simplest one-step reactions.
What you suggest looks fine mathematically, but it seems this will require you to abandon the equations that chemists write. A + B → C does not presume an order of resource interaction; for certain reactions, A + B is violent enough that the residual amount of A remaining will still react demonstrably, because adding precisely equimolar B is impossible.
If you see a way to eventually return to the application, or scope to a use-case that is still relevant to chemists, then please disregard this concern. But to me, this suggests an inherent non-associativity that should not be swept under the rug: almost no reaction is ever 100% complete. A more direct way is to just assert you are suppressing any "unexpected reactions"; this is more honest, albeit less categorical and less useful. There are synthetic routes that can exclude such reactions, but outside of very simple situations they can only be planned by people with years of lab experience in the "dark art" of chemistry (their term, not mine!) The worst explosions and zero-yields, though, have happened from insufficient care/experience in planning the synthesis route, resulting in unexpected reactions of reactants that, assuming associativity and complete conversion, "should not have been there".
Spencer Breiner said:
I would say that this "non-associativity" is happening at the level of process, not resource. Rather than saying (A + B) + C ≠ A + (B + C), the issue seems to be failure of associativity for the mix operation: mix(mix(A, B), C) ≠ mix(A, mix(B, C))
@Eric M Downes, would it be better if we replaced mix by "mix then wait 30 seconds"?
John Baez said:
I think we have to ask what an expression like A + B + C → D + E ordinarily means in chemistry - and then, a separate question, what it would have to mean for such expressions to be morphisms in a monoidal category, so that we can compose and tensor them.
To me, such a morphism would be a process with output one D and one E, starting from one A, one B, and one C. With this interpretation, + must be associative.
Maybe at the level of morphisms we could decompose things as a mix operation and a $$wait_t$$ operation for every duration $$t$$.
Now I'm not sure what would be the domain of a mix operation starting from $$A + B$$?
What I know is that for the wait operations, we should impose this:
$$wait_t \circ wait_s = wait_{t+s}$$ and
$$wait_0 = id$$.
This is still super confused, I'm just trying ideas.
One more idea: maybe $$A + B$$ should be interpreted as: one $$A$$ and one $$B$$ in separate containers.
Maybe we should use another symbol for $$A$$ and $$B$$ together in a same container.
Something like $$A \boxtimes B$$. Then putting a unit of $$A$$ and a unit of $$B$$ together will be represented by a morphism $$A + B \to A \boxtimes B$$
And we could have probabilistic morphisms for reactions. So morphisms of the type $$A \boxtimes B \to C$$
with an assigned probability $$p$$.
Maybe Markov categories could be useful.
@Eric M Downes, would it be better if we replaced mix by "mix then wait 30 seconds"?
No, I would not have bothered y’all over such a triviality.
Most reactions never reach 100% completion, so some amount of the reactant is still around. This residual can “poison” future reactions unless care is taken.
And there is not a means to clean the reactants at every stage without considerable loss. So chemists must work in the same glassware (eg beaker) for many steps.
Essentially chemists go through elaborate work to make their reactions “associative to a good approximation” (my phrase not theirs) by carefully cleaning and separating before and after the steps known empirically to be an issue.
But we in math want to treat equations as, well, equations! And compose them! And that’s what I’m trying to do.
I think I see a way to represent things in set theory using algebras, that I sketched far above, and maybe I should just write that up, and contrast with the literature John linked to. (Thanks btw!!)
I’ll start a new thread if I manage to do that. Though of course questions still welcome here. Thanks again.
The cheap way to use this “incomplete conversion” to your advantage is to add way more weak acid, say, than you need to, to forcibly convert all of the base. (Assuming here that having that base around would be bad.) But again this is adapting lab practice so that “things work out the way we expect them to” e.g. compositionally rather than the equations actually composing.
I have to work now but will read your thoughts on Markov categories later!
This is touching on a more general issue that arises all over ACT, which is that in fact for the most part no two things in the real world are ever exactly equal to each other. So there's been a deeply felt need discussed since the first ACT meeting for some nice formalism of "approximate categories." There are a few obvious candidates for such a structure but nothing seems to have really taken off. Fleshing out your ideas for what such a structure would need to be to really work for chemistry could be productive as a bottom-up approach to the problem.
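To gesture at one of the "obvious candidates" (my sketch, not anything proposed in the thread): enrich so that each hom-set carries a metric, à la Lawvere metric spaces, and demand the category laws only up to a tolerance:

$$d\bigl((h \circ g) \circ f,\; h \circ (g \circ f)\bigr) \le \varepsilon, \qquad d(f \circ \mathrm{id},\; f) \le \varepsilon$$

For chemistry, $$d$$ could measure something like the difference in yields between the two bracketings of mix, which is where a bottom-up approach might start.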
In the context of medical imaging, I am also quite interested in the fact that it is rare for two things in the real world to be exactly equal to one another! For example, two images of the same thing are rarely equal pixel-by-pixel even if they are "the same" for many purposes. I'm also interested in frameworks that support this: determining if different reconstruction techniques are "in agreement" or "are compatible" in some sense, even if each technique specifies a different probability distribution for what is present.
I'd be quite interested in learning about categorical structures that seek to facilitate handling "approximate agreement".
I've seen term re-writing used for chemistry to these ends, e.g. https://jsystchem.springeropen.com/articles/10.1186/1759-2208-4-4 and https://hal.science/hal-02920023/document and https://www.sciencedirect.com/science/article/pii/S1571066106000375
In the real world, real numbers are in practice finite decimals (i.e. elements of $$\mathbb{Z}[1/10]$$), and functions like exponentials, logarithms, and the trigonometric functions are in practice functions from finite decimals to finite decimals, so most of the real-analytic properties associated with those functions don't actually hold up to equality in practice in the real world, only up to some tolerance, if at all. Yet most mathematicians use the real numbers, rather than try to develop analysis in $$\mathbb{Z}[1/10]$$ to formalise how numbers are actually used in the real world.
So the issue with things not holding up to equality in the real world isn't solely restricted to applied category theory.
Certainly, but this is kind of why the real numbers work so well for applied work, no? The whole subject of analysis is about understanding when your approximations have sufficient quantitative quality. That's a difficult kind of knowledge to export to category theory so far.
Madeleine Birchfield said:
Yet most mathematicians use the real numbers, rather than try to develop analysis in $$\mathbb{Z}[1/10]$$ to formalise how numbers are actually used in the real world.
So the issue with things not holding up to equality in the real world isn't solely restricted to applied category theory.
I disagree: [some] mathematicians would gladly do it; the issue is other people, who would be hard to convince that "you should do differential equations in an adic completion of a localization of the integers": it took them centuries to accept calculus on $$\mathbb{R}$$ as a necessary evil needed to understand if a bridge will fall; they would whine until the XXVIIth century if they had to study XXth century algebra...
Madeleine Birchfield said:
In the real world, real numbers are in practice finite decimals (i.e. elements of $$\mathbb{Z}[1/10]$$), and functions like exponentials, logarithms, and the trigonometric functions are in practice functions from finite decimals to finite decimals, so most of the real-analytic properties associated with those functions don't actually hold up to equality in practice in the real world, only up to some tolerance, if at all. Yet most mathematicians use the real numbers, rather than try to develop analysis in $$\mathbb{Z}[1/10]$$ to formalise how numbers are actually used in the real world.
I think the real numbers do capture how things work in the real world. I don't see why things holding up to a tolerance means the reals aren't the right notion.
Well, $$\mathbb{Z}[1/10]$$ is for computation on paper on the corner of the table. In the "real world" though, computers use floating-point arithmetic, which is not even as neat and clean as $$\mathbb{Z}[1/10]$$. Some people did take the challenge seriously though, e.g. verified compilation of floating-point computations, where the authors formalize (in Coq) a lot of stuff (compiler, target platform, etc.). But most number-crunching workers skip over those details, and are probably right to do so.
Maybe this example illustrates that gauging real-world phenomena is hard, requires a division of labour (or, I'd say, "composition of labour"?) and a lot of "impedance matching": one team uses real numbers for their proofs and theorems, another uses computers to run the stats, and yet another makes sure the last two are talking more or less about the same thing.
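Peva's floating-point point is easy to watch happen; a minimal check in Haskell (any IEEE 754 language behaves the same way):

```haskell
main :: IO ()
main = do
  -- Double addition is not associative: rounding differs per grouping.
  print ((0.1 + 0.2) + 0.3 :: Double)                       -- 0.6000000000000001
  print (0.1 + (0.2 + 0.3) :: Double)                       -- 0.6
  print ((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3 :: Double))  -- False
```

So even the associativity that the reals promise only holds up to a tolerance once you actually compute.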
I guess I wouldn't call what computers do to model real numbers the 'real world'. Computers might use fixed point or floating point numbers, but the real world generally does not. Sure, I agree that it is good for people to prove things about how well these models correspond to the real world, but it feels backwards to take these models as the fundamental notion.
Mmh, I see. I guess the word "real world" has been used with different meanings in the previous messages. In my case, I was thinking about actual people who run experiments in the most concrete meaning. In this setting, just having core models (e.g. my chemical model of the thing in the beaker) is not enough, people also need sensors and computers, which themselves rely on other theories and experiments, and so on.
Peva Blanchard said:
Well, $$\mathbb{Z}[1/10]$$ is for computation on paper on the corner of the table. In the "real world" though, computers use floating-point arithmetic, which is not even as neat and clean as $$\mathbb{Z}[1/10]$$.
Digital computers have only existed for around 80 years or so; before then almost everybody was using paper and pen and slide rules in the real world.
Graham Manuell said:
I think the real numbers do capture how things work in the real world. I don't see why things holding up to a tolerance means the reals aren't the right notion.
The real numbers are an abstract model which simplifies a lot of the real world away to make it simpler to think about numbers, measurements, and space. Its usefulness is precisely because it allows people to sweep all the computational and uncertainty details from the real world under the rug and simply use abstract symbols like $$\pi$$ or $$\sqrt{2}$$ in their place, without having to worry about what rational approximation is being used in practice for $$\pi$$ or $$\sqrt{2}$$ and what equations fail to hold as a result.
Another example: in geometry people treat any line segment drawn on a piece of paper as a continuum (i.e. homeomorphic to the unit interval), but we know from physics that at the microscopic level the continuum description of the drawn line breaks down into discrete atoms; similarly people treat it as a one-dimensional object, but zoom in under a microscope and one can see the two-dimensional width of the line, and if one zooms in from the side one can also see the three-dimensional thickness of the line. The background on which the line is drawn also consists of discrete atoms if one zooms in far enough.
In the real world, measurements of a line segment (or areas and volumes of objects) come with some form of uncertainty, so in order to reflect reality, the measurements should come with some form of confidence interval (with rational endpoints, if they only can measure rationals), but this usually gets swept under the rug in geometry classes as people work with measurements as single real numbers there.
But it's useful to assume that these are continuous one-dimensional objects with exact real measures on a two-dimensional plane because then people don't have to deal with the extremely complicated mathematics of dealing with uncertainties in the 3 dimensional real world using the rational numbers.
My point is that applied category theory sweeping away the complexities of chemical reactions to postulate equalities in categories is nothing out of the ordinary when it comes down to simplifying the real world to create some abstract model which captures some aspect of how things work in the real world.
Graham Manuell said:
I guess I wouldn't call what computers do to model real numbers the 'real world'. Computers might use fixed point or floating point numbers, but the real world generally does not. Sure, I agree that it is good for people to prove things about how well these models correspond to the real world, but it feels backwards to take these models as the fundamental notion.
What exactly is this "real world" for you that computers are not part of? It seems very intuitively clear to me that any kind of numbers, insofar as they're actually going to be used to model specific concrete phenomena (whether physical, social, whatever) are going to factor through some finite approximation of the real numbers.
Once you get mathematicians talking about the ontological and epistemological status of real numbers, you can pretty much say goodbye to whatever you'd been talking about before. :wink:
Yes, I guess we could spin this off into a separate thread if anybody else cares to get into the real numbers stuff, which I actually do. But I'm not sure there was a different subthread ongoing that we're stomping on with this anyway.
I created a new thread for people who want to continue talking about the real numbers
#community: discussion > Ontological and epistemological status of real numbers
I primarily brought up the real numbers as an example that applied category theorists simplifying and hiding the complexities of chemical reactions is nothing new for mathematicians.
Kevin Carlson said:
But I'm not sure there was a different subthread ongoing that we're stomping on with this anyway.
Maybe not ongoing any more. Let me look back and see how we got here. The original topic of "ACT pedagogy and hype" was pretty provocative, capable of supporting endless argument, you'd think. But then people got into a lot of detail about the connection between monoidal categories and chemistry. It started when Eric claimed Fong and Spivak's book was glossing over certain features of real-world chemistry when using chemical reactions to explain monoidal categories. But it became extremely detailed. Then Madeleine brought up the real numbers as another way in which mathematicians idealize situations.
I guess it's fine for conversations to roam this way, but it's a bit amusing: we can't even stick to having a good solid fight over "ACT pedagogy and hype".
I would like to see this fight :) (as an outsider, it is probably a quick way for me to get a map of all the available positions).
So let me put another coin.
Eric mentioned that, in practice, when people mention, say, applied maths, one expects a specific kind of package: theorems, homomorphisms, translation into numbers, software, etc.
I reformulate this package as "everything you need to run experiments". I am being vague about experiments, but let's say the scope is large enough to include: experiments at the LHC, monitoring and predicting your sales, running a sociological survey, clinical trials, or climate change simulation.
It seems to me that these concrete experiments are actually really difficult and complex, in the sense that they usually involve a whole bunch of domains of expertise, technologies, trials and errors. Hence my point about the necessary division of labour, and the different teams.
I don't remember where I got that impression (probably the hype), but I once thought that category theory could be used as a sort of common formal language, to make sure those teams work well together. For instance, you can define your epidemiological model in AlgebraicJulia, and "run a functor" to get PDE's: in a sense, it is a form of collaboration between epidemiologists and PDE experts. Category theorists would be responsible for this kind of "backstage glue work".
So, has this direction (ACT as glue work) been one of the promises of ACT? To what extent is it hype vs. serious? Are there other instances of this glue work?
ps: A bit sadly, glue work is not very rewarding: people tend to forget the glue work in the end. If relevant, I think it deserves a name: the "Tragedy of Applied Category Theory".
I'm headed to bed, but just wanted to provide that IMO 80% of what happens on a day-to-day basis in big corporations and at government agencies is glue work. David Graeber identifies
five major "bullshit jobs" in large firms: "flunkies, goons, duct tapers, box tickers, and taskmasters". Glue work comprises much of what duct tapers and box tickers do, so on the low end that's 40% less bullshit potentially! :) This is true even in relatively high-paying fields -- there's a joke that most of what software engineers do is maintain code that converts one type into another type.
So, even if all ACT ever does is automate glue work (and I do think it's capable of much more), that's still a massive potential cost savings in terms of money and psychological misery. (Obviously there are potential labor issues there, etc.; it's not all roses.) But remaining optimistic for the moment, maybe a huge area for application, along with the "Category Theory for X" books that Madeleine mentioned, is in logistics, planning, management, etc.
An example of people who seem to have got it right wrt automating glue work in government are the Estonians! It might be fun for folks to see if their digital reforms have compositional / categorical aspects.
@Peva Blanchard wrote:
For instance, you can define your epidemiological model in AlgebraicJulia, and "run a functor" to get PDE's: in a sense, it is a form of collaboration between epidemiologists and PDE experts.
By the way, so far you only get ODEs. A user of ModelCollab can draw a stock-flow diagram. This diagram gets sent to the program StockFlow.jl where a functor can convert it to a system of ODEs. These then get solved: the Julia language has very good ODE solving packages. So none of us need to be experts in numerical methods for solving ODEs.
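To make that concrete with the standard example (the SIR model; this translation is the usual one, not quoted from StockFlow.jl's documentation): a stock-flow diagram with stocks $$S, I, R$$ and flows "infection" (rate $$\beta S I$$) and "recovery" (rate $$\gamma I$$) gets converted to

$$\frac{dS}{dt} = -\beta S I, \qquad \frac{dI}{dt} = \beta S I - \gamma I, \qquad \frac{dR}{dt} = \gamma I$$

Each flow contributes one term, subtracted from its source stock and added to its target stock; that bookkeeping is exactly what the functor automates.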
There is already commercial software that lets users draw stock-flow diagrams, which get converted to the ODEs, which then get solved. So if that's all we wanted to do, we wouldn't need to create new software - unless we wanted the software to be free, which we do. But in fact we want to be able to hit our stock-flow diagram with other functors. And we want to be able to take pullbacks within our category of stock-flow diagrams, to build more complicated diagrams from simpler ones. So this is where category theory starts paying off.
In fact, the AlgebraicJulia team is developing a whole 'ecosystem' of (double) categories and (double) functors for system modeling, of which StockFlow.jl is just one part. The real advantage of category theory shows up as one starts building such an ecosystem.
By the way, so far you only get ODEs.
Oh true, sorry I misremembered.
The real advantage of category theory shows up as one starts building such an ecosystem.
I'll probably sound like I'm playing devil's advocate (which I'm not), but what kind of advantage do you have in mind?
(On the top of my head, I'd say: rigorous proofs that translation between domains is correct, good software design.)
I can't list all the advantages. But since I'm writing a paper with Nate and Patty about the advantages for epidemiological modeling, I can easily quote some advantages in that sphere:
(1) Modularity: Models of specific subsystems may be constructed individually by different domain experts, then coupled together using appropriate principles, supporting ongoing collaboration by the parties.
(2) Hierarchy: Multiscale models, such as of spatially distributed human/animal populations, can be constructed in a multi-scale way that mirrors the structure of the system.
(3) Robustness: Models are well-defined mathematical structures, which may be specified, optimized, evolved, and visualized using high-level operations. Simulation code is generated from the specification, reducing programming effort and errors, and accelerating the feedback cycle with stakeholders.
(4) Clarity, accessibility, and transparency: Rigorous yet intuitive diagrammatic languages, such as wiring diagrams, support clear communication of model structure. This allows input and critique from all members of a modelling team, regardless of mathematical experience.
Rigorously proving software correctness would be pretty far down the list, and I don't know anyone involved in AlgebraicJulia who is concerned with that.
Peva Blanchard said:
composition of labour
came to say I love this spin on division of labour, very communitarianistic
I feel like the whole discussion about chemistry came and went without anyone answering the question I was most interested in: does anybody have a roadmap heading towards actually applying category theory in chemistry or chemical engineering?
To me it looks like a classic case of the way I parodied bad ACT several years ago: (1) describe your application domain in category-theoretic terms, then (2) draw the rest of the fucking application
Zanasi has been doing some work with actual chemists. I can't go into details though, I confess I haven't read the papers :P
I hadn't seen them, thanks for pointing them out. I only see one on the arXiv:
Ella Gale, Leo Lobski, Fabio Zanasi
We introduce a mathematical framework for retrosynthetic analysis, an important research method in synthetic chemistry. Our approach represents molecules and their interaction using string diagrams in layered props - a recently introduced categorical model for partial explanations in scientific reasoning. Such a principled approach allows one to model features currently not available in automated retrosynthesis tools, such as chirality, reaction environment and protection-deprotection steps.
The people who have gotten the furthest, in my opinion, are the team including Christoph Flamm, Daniel Merkle and Peter Stadler, who are using double pushout rewriting to design ways to synthesize chemicals. Here's a sample paper of theirs from back in 2016, one of many:
Jakob L. Andersen, Christoph Flamm, Daniel Merkle, Peter F. Stadler
Chemical reaction networks can be automatically generated from graph grammar descriptions, where rewrite rules model reaction patterns. Because a molecule graph is connected and reactions in general involve multiple molecules, the rewriting must be performed on multisets of graphs. We present a general software package for this type of graph rewriting system, which can be used for modelling chemical systems. The package contains a C++ library with algorithms for working with transformation rules in the Double Pushout formalism, e.g., composition of rules and a domain specific language for programming graph language generation. A Python interface makes these features easily accessible. The package also has extensive procedures for automatically visualising not only graphs and rewrite rules, but also Double Pushout diagrams and graph languages in form of directed hypergraphs. The software is available as an open source package, and interactive examples can be found on the accompanying webpage.
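As a toy illustration of the "graph grammar generates the reaction network" idea, here is a plain-Julia sketch where species are opaque symbols and rules rewrite multisets. Real DPO rewriting acts on molecule graphs, so this captures only the network-generation loop; none of the species names or rules below come from their software.

```julia
# A multiset of species, e.g. Multiset(:C1 => 4) means four copies of C1.
const Multiset = Dict{Symbol,Int}

# A rule's left-hand side applies if the state contains enough of each species.
applies(lhs, state) = all(get(state, s, 0) >= n for (s, n) in lhs)

# Fire a rule (a Pair lhs => rhs): remove the reactants, add the products.
function apply(rule, state)
    lhs, rhs = rule
    new = copy(state)
    for (s, n) in lhs
        new[s] -= n
        new[s] == 0 && delete!(new, s)
    end
    for (s, n) in rhs
        new[s] = get(new, s, 0) + n
    end
    return new
end

# Breadth-first expansion: every applicable rule at every reachable state
# contributes one reaction edge (state, rule index, new state) to the network.
function expand(init, rules, steps)
    seen, frontier = Set([init]), [init]
    reactions = Tuple{Multiset,Int,Multiset}[]
    for _ in 1:steps
        next = Multiset[]
        for state in frontier, (i, rule) in enumerate(rules)
            if applies(rule.first, state)
                state2 = apply(rule, state)
                push!(reactions, (state, i, state2))
                state2 in seen || (push!(seen, state2); push!(next, state2))
            end
        end
        frontier = next
    end
    return reactions
end

rules = [Multiset(:C1 => 2) => Multiset(:C2 => 1),            # 2 C1 -> C2
         Multiset(:C1 => 1, :C2 => 1) => Multiset(:C3 => 1)]  # C1 + C2 -> C3
expand(Multiset(:C1 => 4), rules, 3)
```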
In 2020 they put out this ad for postdocs:
Several two-year postdoc positions starting 1 July 2020 are available at the University of Southern Denmark (SDU) for research on an exciting project in algorithmic cheminformatics supported by the Novo Nordisk Foundation: “From Category Theory to Enzyme Design – Unleashing the Potential of Computational Systems Chemistry”. We are seeking highly motivated individuals with a PhD in computer science, computational chemistry/biochemistry, or related areas. The ideal candidate has familiarity with several of the following areas: concurrency theory, graph transformation, algorithmics, algorithm engineering, systems chemistry, systems biology, metabolic engineering, or enzyme design. Solid competences in programming and ease with formal thinking are prerequisites.
The project is based on the novel application of formalisms, algorithms, and computational methods from computer science to questions in systems biology and systems chemistry. We aim to expand these methods and their formal foundations, create efficient algorithms and implementations of them, and use these implementations for research in understanding the catalytic chemistry of enzymes.
Here's how they described the project back then:
The proposed project builds on a new and powerful methodology that strikes a balance between chemical detail and computational efficiency. The approach lies at the intersection of classical chemistry, present-day systems chemistry and biology, computer science, and category theory. It adapts techniques from the analysis of actual (mechanistic) causality in concurrency theory to the chemical and biological setting. Because of this blend of intellectual and technical influences, we name the approach computational systems chemistry (CSC). The term “computational” emphasizes both the deployment of computational tools in the service of practical applications and of theoretical concepts at the foundation of computation in support of reasoning and understanding. The goal of this exploratory project is to provide a proof-of-concept toward the long-term goal of tackling many significant questions in large and combinatorially complex CRNs that could not be addressed by other means. In particular, CSC shows promise for generating new technological ideas through theoretical rigor. This exploratory project is to be considered as initial steps towards establishing this highly promising area through the following specific objectives:
• Integrate and unify algorithmic ideas and best practices from two existing platforms. One platform was conceived, designed, and implemented for organic chemistry by the lead PI and his group in Denmark as well as the chemistry partner from University of Vienna. The other platform draws on the theory of concurrency and was designed and implemented for protein-protein interaction networks supporting cellular signaling and decision-making processes by the partner from Harvard Medical School and his collaborators. The combination is ripe with potential synergies as both platforms are formally rooted in category theory.
• Demonstrate a proof-of-concept (PoC) using a biochemical driving project. The goal of this exploratory project is the analysis and design of enzymes whose catalytic site is viewed as a small (catalytic) reaction network in its own right. Such enzymes can then be used in the design of reaction networks.
• Train the next generation of scientists for CSC: This will enable the transition towards a large-scale implementation of our approaches to tackle key societal challenges, such as the development of personalized medicine, the monitoring of pollution, and the achievement of a more environmentally friendly and sustainable network of industrial synthesis.
We argue that CSC is in a position today similar to where bioinformatics and computational biology were a few decades ago and that it has similarly huge potential. The long-term vision is to unleash that potential.
I do not know how the project has been going since then! I used to attend conferences on chemical reaction network theory, but once I teamed up with people using very similar math for epidemiological modeling, I got too busy for that.
I believe @Wilmer Leal is working on chemical applications of AlgebraicJulia with @James Fairbanks. I wonder how that is going, too!
There is also this, using AlgebraicJulia:
Rebekah Aduddell, James Fairbanks, Amit Kumar, Pablo S. Ocal, Evan Patterson, Brandon T. Shapiro
Regulatory networks depict promoting or inhibiting interactions between molecules in a biochemical system. We introduce a category-theoretic formalism for regulatory networks, using signed graphs to model the networks and signed functors to describe occurrences of one network in another, especially occurrences of network motifs. With this foundation, we establish functorial mappings between regulatory networks and other mathematical models in biochemistry. We construct a functor from reaction networks, modeled as Petri nets with signed links, to regulatory networks, enabling us to precisely define when a reaction network could be a physical mechanism underlying a regulatory network. Turning to quantitative models, we associate a regulatory network with a Lotka-Volterra system of differential equations, defining a functor from the category of signed graphs to a category of parameterized dynamical systems. We extend this result from closed to open systems, demonstrating that Lotka-Volterra dynamics respects not only inclusions and collapsings of regulatory networks, but also the process of building up complex regulatory networks by gluing together simpler pieces. Formally, we use the theory of structured cospans to produce a lax double functor from the double category of open signed graphs to that of open parameterized dynamical systems. Throughout the paper, we ground the categorical formalism in examples inspired by systems biology.
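The signed-graph-to-Lotka-Volterra assignment described in that abstract has a simple formula behind it. Here is a hand-written sketch over adjacency matrices: vertex i gets a variable x_i, and a signed edge j -> i contributes a term signs[i,j] * rates[i,j] * x_i * x_j. This is plain Julia with invented parameters, not the paper's categorical construction.

```julia
using OrdinaryDiffEq

# Toy signed graph on two vertices: a predator-prey loop.
signs = [0 -1;                # predator (2) inhibits prey (1)
         1  0]                # prey (1) promotes predator (2)
rates = [0.0 0.5;             # interaction strengths (made-up values)
         0.3 0.0]
r = [1.0, -0.5]               # intrinsic growth (prey) and decay (predator)

# The Lotka-Volterra vector field associated to the signed graph.
function lv!(dx, x, p, t)
    for i in eachindex(x)
        dx[i] = x[i] * (r[i] + sum(signs[i, j] * rates[i, j] * x[j]
                                   for j in eachindex(x)))
    end
end

sol = solve(ODEProblem(lv!, [1.0, 0.5], (0.0, 50.0)), Tsit5())
```

The functoriality claim in the paper is then that inclusions and gluings of signed graphs induce corresponding relationships between these dynamical systems, which is exactly the bookkeeping one wants when building a big regulatory network out of motifs.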
I believe the killer apps will lie in the field of biochemistry, so category theorists interested in this should team up with biochemists. (Wilmer Leal has a background in biochemistry.)
Wow, very cool, I hadn't heard about any of this
I agree that biochem is definitely a place where ACT will eventually have a killer app. Regular chemistry is also an infinite rabbit hole of complexity. Every time Wilmer tells me a new chemistry fact it blows my mind and reveals a totally new research avenue for ACT.
There are also PDEs for surface chemistry in Decapodes: https://algebraicjulia.github.io/Decapodes.jl/dev/ch/cahn-hilliard/ We have all the pieces you would need to solve coupled 2D reacting flow. Some assembly required.
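For anyone curious what "some assembly required" can look like, here is a hedged stand-in: a hand-rolled method-of-lines solve of a 1D reaction-diffusion equation in plain Julia. Decapodes itself works with discrete exterior calculus on meshes; nothing below uses its API.

```julia
using OrdinaryDiffEq

# Fisher-KPP reaction-diffusion, u_t = D*u_xx + r*u*(1 - u), discretized
# by hand on a uniform 1D grid and handed off to an ODE solver.
n, D, r = 100, 0.01, 1.0
dx = 1.0 / (n - 1)

function rd!(du, u, p, t)
    for i in 2:n-1
        du[i] = D * (u[i+1] - 2u[i] + u[i-1]) / dx^2 + r * u[i] * (1 - u[i])
    end
    du[1] = du[n] = 0.0       # hold the boundary values fixed
end

u0  = [exp(-100 * (x - 0.5)^2) for x in range(0, 1, length = n)]
sol = solve(ODEProblem(rd!, u0, (0.0, 5.0)), Tsit5())
```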
Does @Wilmer Leal (or you) have a game plan for how to create some software that a bunch of chemists will get really excited by and start using?
We’re having success recruiting Chemical Engineering collaborators so that we can get buy-in around adopting existing AlgebraicJulia tools like AlgebraicPetri, Decapodes and Semagrams.
Great! I hope someone occasionally blogs about or otherwise announces such progress. Maybe I've just been missing the news.
I'm curious whether @Eric M Downes or other people who know chemistry can comment on the Zanasi et al. work. I think it's a reasonable first-order summary to say that it's "OK, so chemistry isn't just an SMC; what if it's a whole bunch of SMCs, parameterized by lots of facts about the reaction context?"
And why biochemistry specifically, John? Because biochemical molecules and reactions are really big and complex so compositionality has more to offer?
I believe biochemists feel more strongly that there's a lot of important stuff they're struggling to understand. For example, they are trying to simulate entire cells and also create artificial cells that are truly alive. Part of this is simulating all the chemical reactions in the cell and seeing what about this chemical reaction network makes the cell live... though besides a mere chemical reaction network, we also need to understand the cell membrane, or for eukaryotes the various organelles in the cell.
So there are papers like this:
The EcoCyc database characterizes the known network of Escherichia coli small-molecule metabolism. Here we present a computational analysis of the global properties of that network, which consists of 744 reactions that are catalyzed by 607 enzymes. The reactions are organized into 131 pathways. Of the metabolic enzymes, 100 are multifunctional, and 68 of the reactions are catalyzed by >1 enzyme. The network contains 791 chemical substrates. Other properties considered by the analysis include the distribution of enzyme subunit organization, and the distribution of modulators of enzyme activity and of enzyme cofactors. The dimensions chosen for this analysis can be employed for comparative functional analysis of complete genomes.
And this:
The ultimate microscope, directed at a cell, would reveal the dynamics of all the cell’s components with atomic resolution. In contrast to their real-world counterparts, computational microscopes are currently on the brink of meeting this challenge. In this perspective, we show how an integrative approach can be employed to model an entire cell, the minimal cell, JCVI-syn3A, at full complexity. This step opens the way to interrogate the cell’s spatio-temporal evolution with molecular dynamics simulations, an approach that can be extended to other cell types in the near future.
@Kevin Carlson said:
I'm curious whether Eric M Downes or other people who know chemistry can comment on the Zanasi et al. work. I think it's a reasonable first-order summary to say that it's "OK, so chemistry isn't just an SMC; what if it's a whole bunch of SMCs, parameterized by lots of facts about the reaction context?"
Apologies, I just saw this. I'll take a look and see if I have anything intelligent to say! I take it you mean Gale, Lobski, Zanasi (2023) mentioned by John above.
On that note, I was able to convince my friend Shervin, formerly a chemistry professor at UT, to collaborate on this, though so far all that has resulted is that my stack of Papers to Read has gotten deeper... :) DM me if you'd like to be involved as well; otherwise I'll just circle back if we do make any progress.