Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: theory: applied category theory

Topic: compositional thermodynamics


view this post on Zulip Robin Piedeleu (May 01 2021 at 13:37):

I've been teaching myself basic thermodynamics and have discovered this book review by J. Willems---the famous control theorist---where he sketches a compositional approach to the topic (in Section III). Given that Willems' original approach to control theory has been discussed before in this community, I was wondering if anyone had come across Willems' ideas in this review?

I think it would be profitable to connect Willems' sketch to the work on open systems by some people here (I'm thinking specifically of prior work by @John Baez, @Brendan Fong and @Blake Pollard, but I'm sure there are others interested in related topics). I don't yet have a particular problem in mind---I'm just happy to leave this here for anyone interested to discuss and potentially find out what relevant questions one could ask.

For Willems, a thermodynamic system is a black box with two different types of terminals, namely those where work is exchanged and those where heat is exchanged. Every heat terminal also has an associated temperature. The system's behaviour is given by its set of possible trajectories: $t\in\mathbb{R} \mapsto (W_1(t), \dots, W_n(t), Q_1(t), T_1(t), \dots, Q_m(t), T_m(t))$, where $W_i$ denotes the rate of work of the $i$th work terminal, $Q_j$ the rate of heat exchanged, and $T_j$ the temperature at the $j$th heat terminal. A thermodynamic behaviour is one that respects the first and second law of thermodynamics.

To formalise this, Willems introduces the notion of a storage function for a quantity $s : \mathbb{R} \to \mathbb{R}$ as a function $V : \mathbb{R} \to \mathbb{R}$ that satisfies the dissipation inequality $V(t_1) - V(t_0) \leq \int_{t_0}^{t_1} s(t)\,dt$ for all $t_0 < t_1$. Let's say the storage function is conservative if the inequality is an equality. A system satisfies the first law if there exists a conservative storage function (its energy) for the supply rate $\sum_i Q_i - W$, and it satisfies the second law if there exists a storage function (its entropy) for the supply rate $\sum_i \frac{Q_i}{T_i}$. (Actually, Willems sketches two equivalent formulations of thermodynamic behaviour, starting on p.5 at the bottom of the second column and continuing on p.6).

He then goes on to derive a few constraints on the behaviour of interconnected systems, which I won't summarise here but you should check out if interested!
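
To make the dissipation inequality above concrete, here is a minimal numerical sketch (the toy trajectory and candidate storage functions below are invented for illustration): it checks $V(t_1) - V(t_0) \leq \int_{t_0}^{t_1} s(t)\,dt$ on a sample grid by checking that $V - \int s$ is non-increasing.

```python
import numpy as np

def satisfies_dissipation_inequality(t, s, V, tol=1e-6):
    """Check V(t1) - V(t0) <= integral_{t0}^{t1} s dt for all sample times t0 < t1.

    t: increasing array of times, s: supply rate samples, V: candidate storage samples.
    The tolerance absorbs the error of the trapezoidal quadrature.
    """
    # Cumulative integral S(t_k) = integral_{t_0}^{t_k} s dt (trapezoidal rule).
    S = np.concatenate([[0.0], np.cumsum(0.5 * (s[1:] + s[:-1]) * np.diff(t))])
    # V(t1) - V(t0) <= S(t1) - S(t0) for all t0 < t1  iff  V - S is non-increasing.
    return bool(np.all(np.diff(V - S) <= tol))

t = np.linspace(0.0, 10.0, 1001)
s = np.cos(t)                                                       # an invented supply rate
print(satisfies_dissipation_inequality(t, s, np.sin(t)))            # True: conservative, dV/dt = s
print(satisfies_dissipation_inequality(t, s, np.sin(t) - 0.1 * t))  # True: stores less than supplied
print(satisfies_dissipation_inequality(t, s, np.sin(t) + 0.1 * t))  # False: violates the inequality
```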

view this post on Zulip Nathaniel Virgo (May 01 2021 at 13:49):

There is also some work on "network thermodynamics", e.g. see the paper below, which struck me as something that would fit quite nicely into the stuff about networks and open systems.

Perelson (1975) Network thermodynamics. An overview.
Biophys J. 1975 Jul; 15(7): 667–685.
doi: 10.1016/S0006-3495(75)85847-4

view this post on Zulip Robin Piedeleu (May 01 2021 at 13:53):

Nice, I had this paper opened in one tab but hadn't gotten around to reading it yet! Looks like I should come back to it.

view this post on Zulip Robin Piedeleu (May 01 2021 at 13:57):

Ah, this is a later version, by Perelson only. Looks like a better introduction than the one I had saved. Thanks!

view this post on Zulip John Baez (May 01 2021 at 14:36):

Thanks very much, @Robin Piedeleu! My student @Joe Moeller now has a job where he's getting paid to understand thermodynamics using categories, so he should read this. And I plan to return to compositional thermodynamics myself, especially in connection to biology and ecology. I'd gotten started on it a while ago but then got distracted by trying to develop general formalisms for networks.

view this post on Zulip John Baez (May 01 2021 at 14:36):

I didn't know Willems had written on this matter. He is always worth reading.

view this post on Zulip John Baez (May 01 2021 at 14:52):

Robin Piedeleu said:

Ah, this is a later version, by Perelson only. Looks like a better introduction than the one I had saved. Thanks!

Yes, thanks - I'd only read Oster, Perelson and Katchalsky's paper Network thermodynamics, which has some nice basic stuff about the analogy between thermodynamics and electrical circuits, but doesn't go terribly far with it.

view this post on Zulip Robin Piedeleu (May 01 2021 at 16:46):

John Baez said:

Yes, thanks - I'd only read Oster, Perelson and Katchalsky's paper Network thermodynamics, which has some nice basic stuff about the analogy between thermodynamics and electrical circuits, but doesn't go terribly far with it.

From skimming it just now, it looks like this paper formulates the analogy between circuits and thermodynamics in terms of bond graphs, a topic I think you worked on before with some of your students. Was there ever a fully worked out algebraic theory for them? Of course, they can be encoded into graphical linear algebra with doubled wires, but the algebra of bond graphs itself must be of independent interest.

view this post on Zulip Cole Comfort (May 01 2021 at 16:55):

Robin Piedeleu said:

John Baez said:

Yes, thanks - I'd only read Oster, Perelson and Katchalsky's paper Network thermodynamics, which has some nice basic stuff about the analogy between thermodynamics and electrical circuits, but doesn't go terribly far with it.

From skimming it just now, it looks like this paper formulates the analogy between circuits and thermodynamics in terms of bond graphs, a topic I think you worked on before with some of your students. Was there ever a fully worked out algebraic theory for them? Of course, they can be encoded into graphical linear algebra with doubled wires but the algebra of bond graphs itself must be of independent interest.

I am going to put a paper on the arxiv next week related to this topic with a coauthor. We have worked out universality and we will put out another paper on completeness hopefully later this year.

view this post on Zulip Cole Comfort (May 01 2021 at 16:57):

Not on bond graphs specifically but Lagrangian relations

view this post on Zulip Robin Piedeleu (May 01 2021 at 17:38):

Cole Comfort said:

I am going to put a paper on the arxiv next week related to this topic with a coauthor. I have worked out universality and we will put out another paper on completeness hopefully later this year.

Nice! By universality, do you mean a set of generators $G$ that is enough to obtain a full functor from the free smc on $G$ to the category of Lagrangian relations?

view this post on Zulip Cole Comfort (May 01 2021 at 17:42):

Robin Piedeleu said:

Cole Comfort said:

I am going to put a paper on the arxiv next week related to this topic with a coauthor. I have worked out universality and we will put out another paper on completeness hopefully later this year.

Nice! By universality, do you mean a set of generators $G$ that is enough to obtain a full functor from the free smc on $G$ to the category of Lagrangian relations?

Yes; moreover, there is an interpretation of this free prop into linear relations such that the image of this functor is equivalent to Lagrangian relations. But it is not a presentation as a monoidal theory; we are working on this, but it won't be out soon.

view this post on Zulip Cole Comfort (May 01 2021 at 17:44):

This appeals to the forgetful functor ${\sf LagRel}_k \to {\sf LinRel}_k$, which is symmetric monoidal but not a homomorphism of props, so we don't get a monoidal theory for free.
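
Since Lagrangian relations keep coming up, a small numerical illustration of the definition may help (a sketch under my own conventions, not taken from the papers mentioned): a linear relation is Lagrangian when its graph is a half-dimensional subspace of the source-times-target symplectic space on which the form $\omega \oplus (-\omega')$ vanishes. The example checks this for the "copy the potential, split the current" relation that shows up later in the bond graph discussion.

```python
import numpy as np

# Symplectic form on k^2 with coordinates (phi, I): omega(x, y) = x^T J2 y.
J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])

def lagrangian(B, Omega):
    """B: columns spanning a subspace of a symplectic space with form Omega.
    Lagrangian = isotropic (Omega vanishes on it) and half-dimensional."""
    isotropic = np.allclose(B.T @ Omega @ B, 0.0)
    half_dim = np.linalg.matrix_rank(B) * 2 == Omega.shape[0]
    return isotropic and half_dim

# Relation from (phi, I) to (phi1, I1, phi2, I2): phi1 = phi2 = phi and I = I1 + I2.
# Its graph lives in k^2 (+) k^4 with the form omega (+) (-(omega (+) omega)).
Omega = np.block([
    [J2,               np.zeros((2, 4))],
    [np.zeros((4, 2)), -np.kron(np.eye(2), J2)],
])
# Parametrize the graph by (phi, I1, I2); each column is the image of one parameter.
#             phi  I1   I2
B = np.array([[1.0, 0.0, 0.0],   # phi
              [0.0, 1.0, 1.0],   # I = I1 + I2
              [1.0, 0.0, 0.0],   # phi1 = phi
              [0.0, 1.0, 0.0],   # I1
              [1.0, 0.0, 0.0],   # phi2 = phi
              [0.0, 0.0, 1.0]])  # I2
print(lagrangian(B, Omega))      # True
```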

view this post on Zulip John Baez (May 01 2021 at 23:14):

Robin Piedeleu said:

John Baez said:

Yes, thanks - I'd only read Oster, Perelson and Katchalsky's paper Network thermodynamics, which has some nice basic stuff about the analogy between thermodynamics and electrical circuits, but doesn't go terribly far with it.

From skimming it just now, it looks like this paper formulates the analogy between circuits and thermodynamics in terms of bond graphs, a topic I think you worked on before with some of your students. Was there ever a fully worked out algebraic theory for them? Of course, they can be encoded into graphical linear algebra with doubled wires but the algebra of bond graphs itself must be of independent interest.

My student @Brandon Coya wrote a paper on the symmetric monoidal category of bond graphs:

As Brandon explains, this category has an interesting relation to Frobenius monoids and also Pastro and Street's "weak bimonoids". But there's definitely more left to understand here, including a precise open conjecture regarding the presentation of this symmetric monoidal category.

He polished the presentation a bit more in his thesis, which covers related topics too:

view this post on Zulip Robin Piedeleu (May 02 2021 at 11:11):

Thanks!

One thing I'm still confused about is the reason for the weak bimonoid axioms. Since 0-junctions can be interpreted as copy-sum over two wires, while 1-junctions behave as sum-copy, they should interact as (non-weak) bimonoids like the underlying sum and copy do, right? Where does the weakness come in? I see that there's another interpretation of 1-junctions as the pair-of-pants algebra on two wires (that does indeed form a weak bimonoid when paired with the doubled multiplication of an algebra) but I don't get how the two interpretations are related.

view this post on Zulip John Baez (May 02 2021 at 16:50):

Ugh! It's been a long time since I've thought about this, so I'm having trouble remembering why some things don't work. I see in Brandon's thesis he says various things don't work, but I don't see him coming out and clearly explaining which equations fail and why. At the time he wrote it, this was obvious to him and me. I guess the moral is that when writing you have to clearly explain why things don't work, as well as explaining what does work.

view this post on Zulip John Baez (May 02 2021 at 16:53):

As for your second question, "how the two interpretations are related", I think that Section 1.2 in Brandon's thesis has the best high-level explanation, leading up to some large diagrams relating lots of symmetric monoidal categories - especially the categories of bond graphs, corelations between finite sets, and Lagrangian relations between finite-dimensional symplectic vector spaces.

view this post on Zulip John Baez (May 02 2021 at 16:53):

There are two functors from bond graphs to Lagrangian relations, and a natural transformation between the two.

view this post on Zulip John Baez (May 02 2021 at 16:56):

The point here is that there are two "electrical" interpretations of bond graphs.

view this post on Zulip John Baez (May 02 2021 at 16:59):

These are described in Section 1.1 of the thesis.

view this post on Zulip John Baez (May 02 2021 at 17:00):

The first is to think of an edge in a bond graph, called a bond, as a pair of perfectly conductive wires, like the pair of wires coming out of a toaster or other electrical appliance.

view this post on Zulip John Baez (May 02 2021 at 17:01):

These naturally have two currents $I_1, I_2$ and two potentials $\phi_1, \phi_2$.

This first interpretation gives a functor from the category BondGraph to LagRel, where each bond gets mapped to the identity morphism on a four-dimensional symplectic vector space.

view this post on Zulip John Baez (May 02 2021 at 17:03):

However, in a properly functioning toaster, the two currents are required to be opposite (the current flowing into your toaster equals the current flowing out, since there's not a "short circuit"), and the functioning depends only on the difference of the two potentials (there is nothing connecting the wires of the toaster to the "ground").

view this post on Zulip John Baez (May 02 2021 at 17:06):

So, when designing toasters and other more complicated appliances, engineers only care about a single current $I = I_1 = -I_2$ and a single voltage $V = \phi_1 - \phi_2$.

view this post on Zulip John Baez (May 02 2021 at 17:08):

This second interpretation of bond graphs gives a functor from BondGraph to LagRel where each bond gets mapped to the identity morphism on a two-dimensional symplectic vector space.

view this post on Zulip John Baez (May 02 2021 at 17:09):

There is a natural transformation from the first functor BondGraph $\to$ LagRel to the second one, based on the equations I just wrote down for $I$ and $V$ in terms of $I_1, I_2, \phi_1, \phi_2$.

view this post on Zulip John Baez (May 02 2021 at 17:12):

The story I just told is summarized in this pentagon near the end of Section 1.2:

two functors from BondGraph to LagRel, and a natural transformation between them

view this post on Zulip John Baez (May 02 2021 at 17:14):

This pentagon, and the introduction of the category BondGraph, arise because there's not a square that commutes up to natural transformation with an upwards-pointing arrow from $\mathrm{FinCorel}^\circ$ to $\mathrm{LagRel}^\circ$.

view this post on Zulip John Baez (May 02 2021 at 17:16):

Here the little circle means the full subcategory on sets with an even number of elements (in the case of FinCorel), or on symplectic vector spaces whose dimensions are a multiple of four (in the case of LagRel).

view this post on Zulip John Baez (May 02 2021 at 17:16):

(The subscript $k$ is just the field we're using, so I won't include that.)

view this post on Zulip John Baez (May 02 2021 at 17:18):

I'm writing this largely for my own good, and I think I'm now getting close to remembering the answers to your questions, at least in a rough way:

Since 0-junctions can be interpreted as copy-sum over two wires, while 1-junctions behave as sum-copy, they should interact as (non-weak) bimonoids like the underlying sum and copy do, right? Where does the weakness come in? I see that there's another interpretation of 1-junctions as the pair-of-pants algebra on two wires (that does indeed form a weak bimonoid when paired with the doubled multiplication of an algebra) but I don't get how the two interpretations are related.

view this post on Zulip John Baez (May 02 2021 at 17:19):

When you say "0-junctions can be interpreted as copy-sum over two wires, while 1-junctions behave as sum-copy", you are talking about the functor from $\mathsf{BondGraph}$ to $\mathsf{LagRel}^\circ$.

view this post on Zulip John Baez (May 02 2021 at 17:21):

If this is all we cared about, we might try to define $\mathsf{BondGraph}$ to be $\mathsf{LagRel}^\circ$.

view this post on Zulip John Baez (May 02 2021 at 17:21):

And then it would obey a bunch of nice relations.

view this post on Zulip Cole Comfort (May 02 2021 at 17:22):

John Baez said:

Here the little circle means the full subcategory on sets with an even number of elements (in the case of FinCorel), or on symplectic vector spaces whose dimensions are a multiple of four (in the case of LagRel).

Am I misunderstanding this? I thought that ${\sf LagRel}_k^\circ$ was the (non-full) subcategory of ${\sf LagRel}_k$ generated by the Hopf-Frobenius algebra pair (alternatively, the subcategory of ${\sf LinRel}_k$ generated by the monoid for addition, the comonoid for copying, and their transposes).

view this post on Zulip John Baez (May 02 2021 at 17:22):

Okay, you're probably right.

view this post on Zulip John Baez (May 02 2021 at 17:22):

I haven't thought about this since 2018.

view this post on Zulip John Baez (May 02 2021 at 17:23):

But I think I'm on the right track toward answering @Robin Piedeleu's main question.

view this post on Zulip John Baez (May 02 2021 at 17:24):

So let me keep talking for a minute more....

view this post on Zulip John Baez (May 02 2021 at 17:25):

If we only cared about one of the two electrical interpretations of bond graphs, life would be easy. It's the fact that we want to think about bond graphs as giving Lagrangian relations in two ways - the two electrical interpretations I described above - that makes things tricky.

view this post on Zulip John Baez (May 02 2021 at 17:28):

In short: to correctly capture how electrical engineers actually use bond graphs, instead of just defining BondGraph to be $\mathsf{FinCorel}^\circ$ or just defining it to be $\mathsf{LagRel}^\circ$, we need to define it to be a category that maps to both of these... thus "saving the day": we don't get a square that commutes up to a natural transformation, but we do get a pentagon that commutes up to natural transformation, which captures the two electrical interpretations of bond graphs and the natural transformation from one to the other!

view this post on Zulip John Baez (May 02 2021 at 17:29):

Okay, that's all I want to say right now.

view this post on Zulip John Baez (May 02 2021 at 17:37):

Well, one last thing I'll say is this: I don't know any work on electrical engineering that tries to list all valid relations holding for bond graphs. Also, while they talk about what I'd call the "multiplication" and "comultiplication" morphisms in the symmetric monoidal category of bond graphs, they never talk about the "unit" and "counit". So, constructing a really nice symmetric monoidal category of bond graphs is to some extent a task left to us, the mathematicians. But Brandon and I were trying to do this in a way that reflects how engineers actually use bond graphs.

view this post on Zulip John Baez (May 02 2021 at 18:23):

I don't think we completely got to the bottom of things.

view this post on Zulip Robin Piedeleu (May 02 2021 at 18:48):

Thanks for going through all this! I think I see what the two interpretations are now: the first interprets 1-junctions as sequential composition (aka in series) and 0-junctions as parallel composition, while the second interprets 1-junctions as adding voltages and equating currents, and 0-junctions as adding currents and equating voltages.

I wonder if the first interpretation is specific to modelling electrical circuits with bond graphs, where there's a precise notion of composing components in series and parallel. Does this work for other applications (mechanics, hydraulics etc.)?

Introductory texts that use bond graphs for different energy domains seem to introduce them more generally as relations between abstract efforts and flows, such that total energy is conserved (i.e. $\sum_i e_i f_i = 0$ for $i$ ranging over all the ports of the bond graph). I guess this matches the second interpretation above.

view this post on Zulip Robin Piedeleu (May 02 2021 at 18:48):

Hmm... and another thing that is not clear to me now: why would we want to reconcile these two interpretations/usages in the same prop?

view this post on Zulip Robin Piedeleu (May 02 2021 at 18:51):

It seems like one could simply try to work out their equational theory independently of each other as they correspond to different ways of using bond graphs.

view this post on Zulip John Baez (May 02 2021 at 19:03):

Robin Piedeleu said:

Thanks for going through all this!

Thanks for making me think about this again! I see now that Brandon's thesis didn't quite explain things in a simple enough way for an average person - e.g. his thesis advisor 3 years later - to quickly get the point. Maybe I should write some blog articles about this, or a paper that gives me an excuse to explain this material.

I wonder if the first interpretation is specific to modelling electrical circuits with bond graphs, where there's a precise notion of composing components in series and parallel. Does this work for other applications (mechanics, hydraulics etc.)?

I think you can compose components in series and parallel in many of these applications. What seems special to electrical circuits, now that I think about it, is that electrical engineers have decided to build gadgets in such a way that wires enter and leave these gadgets in pairs, such that the current flowing along one wire is equal and opposite to that in the other wire, and only the difference between the two wires' potentials is supposed to affect the system. Thus, in electrical engineering a "bond" is two wires, and the symplectic vector space for the bond, $k^2 \ni (V,I)$, is constructed from a more fundamental 4-dimensional symplectic vector space for a pair of wires, $k^4 \ni (\phi_1, I_1, \phi_2, I_2)$.

view this post on Zulip John Baez (May 02 2021 at 19:03):

I don't think this practice has analogues in other domains (though I might simply not know them).

view this post on Zulip John Baez (May 02 2021 at 19:11):

Btw, the way in which we get $k^2$ from $k^4$ is an example of "symplectic reduction". It's not a subspace, it's not a quotient space, it's a subquotient. First we take the subspace where $I_1 = -I_2$, but this 3-dimensional subspace $V$ is of course not symplectic: the symplectic structure $\omega$ on $k^4$ is degenerate on $V$. Then we mod out by the subspace $W \subset V$ such that $\omega(w, v)$ vanishes for all $w \in W, v \in V$. The 2-dimensional quotient $V/W$ gets a symplectic structure again. Doing this quotient says "we don't care if you add the same constant to the potentials $\phi_1$ and $\phi_2$; all we care about is the voltage $V = \phi_1 - \phi_2$."
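
Here is a small linear-algebra sketch of that reduction (my own illustration, taking the convention $\omega = d\phi_1 \wedge dI_1 + d\phi_2 \wedge dI_2$ on coordinates $(\phi_1, I_1, \phi_2, I_2)$): take the subspace $I_1 = -I_2$, compute the radical of the restricted form, and check the quotient is 2-dimensional.

```python
import numpy as np
from scipy.linalg import block_diag, null_space

J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
J4 = block_diag(J2, J2)          # omega on k^4, coordinates (phi1, I1, phi2, I2)

# Subspace V: I1 + I2 = 0, spanned by the columns below.
B = np.array([[1.0, 0.0, 0.0],   # phi1
              [0.0, 0.0, 1.0],   # I1
              [0.0, 1.0, 0.0],   # phi2
              [0.0, 0.0, -1.0]]) # I2 = -I1

G = B.T @ J4 @ B                 # omega restricted to V (degenerate)
rad = null_space(G)              # radical of the restricted form, in B-coordinates
W = B @ rad                      # ... as vectors in k^4

print(W.round(3))                # proportional to (1, 0, 1, 0): shift both potentials equally
print(B.shape[1] - rad.shape[1]) # dimension of the reduced space V/W: 2
```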

view this post on Zulip John Baez (May 02 2021 at 19:13):

It's interesting that electrical engineers have decided it's smart to work with gadgets that only have "ports" - places where two wires enter a gadget - and do this symplectic reduction to get a simplified description of the gadget.

view this post on Zulip John Baez (May 02 2021 at 19:19):

If you think about how a cold-water sink or toilet works, you'll see it has something similar to a port: water flows in along one pipe and water flows out along another pipe. But there are some important differences.

view this post on Zulip John Baez (May 02 2021 at 19:19):

When you have hot and cold water flowing in and mixed water flowing out it gets even more complicated.

view this post on Zulip Robin Piedeleu (May 02 2021 at 21:59):

Nice, I remember discussing these different possible representations of electric circuits as linear relations with @Pawel Sobocinski before. The symplectic reduction construction gives me a better idea of what's going on! I suppose it could also apply to hydraulics, where a reference pressure or altitude would serve as the relative zero for the purpose of all measurements of a given system.

I still find it strange that these two interpretations of bond graphs are used simultaneously. In particular, I do not understand how to reconcile the version of the 1-junction as a series connection, which should be non-commutative, and the one that simply adds voltages and equates currents, which should be commutative. One can be encoded in the other, sure, but they seem like quite different beasts. How do engineers know how to choose the right semantics? Maybe if you're working purely with circuits there's one convention but how would you deal with mixed energy domains? I feel like I am missing something.

view this post on Zulip Nathaniel Virgo (May 03 2021 at 01:22):

It occurs to me that adding voltages and equating currents doesn't seem to make sense for electric circuits - I don't know any component that can do that - but it does make sense for chemical reactions.

Here's what I mean: if "current" means "rate at which a given species gets used up" and "voltage" means "chemical potential", then a reaction like $A + B \rightleftharpoons C + D$ will force the currents to be equal up to their signs (so the rate of $A$ used up is equal to the rate of $B$ used up is equal to the rate of $C$ produced, etc.), and if it's a "fast" reaction (i.e. it can be considered to always be in equilibrium) then it forces $\mu_A + \mu_B - \mu_C - \mu_D = 0$.

It seems to me that in general, in this kind of interpretation, that this type of junction should be about being able to freely interconvert between different types of resource, while the other type of junction is about being able to combine different sources/sinks of the same resource. So in electric circuits (the kind where wires don't come in pairs) you usually only have one type of junction, because you only really care about electrons and don't generally interconvert them with other types of thing.

If that doesn't seem to make sense I might be at cross purposes to the discussion, in which case I apologise. I'm super interested in this topic and might start working on something related to it soon, but I know way more about the application domain than the category theory. I'll need to read Brandon Coya's thesis in detail - I didn't know about it before.
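
To spell out the "fast reaction" condition numerically, here is a toy check (all numbers invented) using the ideal-solution form of the chemical potential that comes up later in the thread, $\mu_i = \mu_i^0 + RT \ln c_i$: at a composition satisfying the mass-action equilibrium condition, the potentials balance.

```python
import numpy as np

R, T = 8.314, 298.15                      # gas constant (J/mol/K), temperature (K)

def mu(mu0, c):
    """Ideal-solution chemical potential mu = mu0 + RT ln c (c in reference units)."""
    return mu0 + R * T * np.log(c)

# Hypothetical standard potentials (J/mol) for A + B <-> C + D.
mu0 = {"A": -10_000.0, "B": -5_000.0, "C": -12_000.0, "D": -8_000.0}

# Equilibrium constant from Delta mu0, and one equilibrium composition realizing it.
K = np.exp(-(mu0["C"] + mu0["D"] - mu0["A"] - mu0["B"]) / (R * T))
cA, cB, cC = 1.0, 2.0, 0.5
cD = K * cA * cB / cC                     # chosen so that (cC * cD) / (cA * cB) = K

lhs = mu(mu0["A"], cA) + mu(mu0["B"], cB)
rhs = mu(mu0["C"], cC) + mu(mu0["D"], cD)
print(np.isclose(lhs, rhs))               # True: mu_A + mu_B - mu_C - mu_D = 0 at equilibrium
```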

view this post on Zulip Cole Comfort (May 03 2021 at 01:39):

Nathaniel Virgo said:

It occurs to me that adding voltages and equating currents doesn't seem to make sense for electric circuits - I don't know any component that can do that - but it does make sense for chemical reactions.

Here's what I mean: if "current" means "rate at which a given species gets used up" and "voltage" means "chemical potential", then a reaction like $A + B \rightleftharpoons C + D$ will force the currents to be equal up to their signs (so the rate of $A$ used up is equal to the rate of $B$ used up is equal to the rate of $C$ produced, etc.), and if it's a "fast" reaction (i.e. it can be considered to always be in equilibrium) then it forces $\mu_A + \mu_B - \mu_C - \mu_D = 0$.

It seems to me that in general, in this kind of interpretation, that this type of junction should be about being able to freely interconvert between different types of resource, while the other type of junction is about being able to combine different sources/sinks of the same resource. So in electric circuits (the kind where wires don't come in pairs) you usually only have one type of junction, because you only really care about electrons and don't generally interconvert them with other types of thing.

If that doesn't seem to make sense I might be at cross purposes to the discussion, in which case I apologise. I'm super interested in this topic and might start working on something related to it soon, but I know way more about the application domain than the category theory. I'll need to read Brandon Coya's thesis in detail - I didn't know about it before.

Would there be a notion of Fourier transform that transports the one Frobenius algebra to the other via conjugation?

view this post on Zulip Nathaniel Virgo (May 03 2021 at 02:44):

It should be the Legendre transform, or something derived from it.

Thermodynamics is really all about convex analysis - the conjugate variables are really the slope of a supporting hyperplane of a convex function. That can either be the entropy (as a function of the extensive variables) or the free energy (as a function of the intensive variables), depending on which coordinates you use - the Legendre-Fenchel transform switches between the two.

I feel that there must be convex functions underlying the thermodynamic stuff in this discussion as well - the reason the variables come in pairs should be to do with convex functions in the same way. I conjecture that the symplectic structure can be thought of as arising from that, although I don't know a lot about Lagrangian relations specifically. In that context it makes a lot of sense to me that the Legendre-Fenchel transform would also switch between the two Frobenius monoids.

I was thinking about some very closely related stuff about a year ago and think I have a pretty good idea of how to make that work - I did have a way to make Frobenius monoids out of convex functions and have the Legendre-Fenchel transform switch between them. At the time I wasn't quite able to see the whole picture at once and I ended up doing something else for a bit, but I might go back to it soon and see if I can put all the pieces together.

(Maybe it would make sense to try doing that collaboratively with some of the people here? I'd be up for trying that if people think so!)
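
As a tiny illustration of the Legendre-Fenchel transform being discussed (a numerical sketch on a toy convex function, not tied to any particular thermodynamic potential):

```python
import numpy as np

def legendre_fenchel(x, f, p):
    """Discrete Legendre-Fenchel transform f*(p) = sup_x (p*x - f(x)),
    evaluated on a grid of slopes p (a crude O(n^2) sketch, fine for illustration)."""
    return np.max(p[:, None] * x[None, :] - f[None, :], axis=1)

x = np.linspace(-5.0, 5.0, 2001)
f = 0.5 * x**2                          # a toy convex "energy"; its conjugate is 0.5 * p**2
p = np.linspace(-3.0, 3.0, 301)
fstar = legendre_fenchel(x, f, p)
print(np.allclose(fstar, 0.5 * p**2, atol=1e-3))   # True: x^2/2 is (essentially) self-dual
```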

view this post on Zulip John Baez (May 03 2021 at 03:40):

Nathaniel Virgo said:

It occurs to me that adding voltages and equating currents doesn't seem to make sense for electric circuits - I don't know any component that can do that [...]

We add voltages (not potentials - potential differences) and equate currents when we put two components in series. This is what a "1-junction" does in bond graphs. It takes a while to get used to if you're used to circuit diagrams.

view this post on Zulip John Baez (May 03 2021 at 03:42):

Similarly, we add currents and equate voltages when we put two components in parallel, and this is what a "0-junction" does in bond graphs.

view this post on Zulip Nathaniel Virgo (May 03 2021 at 04:01):

I see. I think I was extrapolating a bit too wildly from what @Robin Piedeleu wrote here:

Robin Piedeleu said:

Thanks for going through all this! I think I see what the two interpretations are now: the first interprets 1-junctions as sequential composition (aka in series) and 0-junctions as parallel composition, while the second interprets 1-junctions as adding voltages and equating currents, and 0-junctions as adding currents and equating voltages.

It sounded like this meant that in the second interpretation the edges in a bond graph are just single wires, with a 0-junction being the kind of junction you get in an electric circuit and a 1-junction being what I described. I believe that what I said makes sense, but is maybe not as relevant to the conversation as I thought.

view this post on Zulip John Baez (May 03 2021 at 04:06):

In both electrical interpretations of a bond graph, the edges ("bonds") represent pairs of wires.

view this post on Zulip John Baez (May 03 2021 at 04:07):

In the first interpretation each bond is assigned two currents $I_1, I_2$ and two potentials $\phi_1, \phi_2$.

view this post on Zulip John Baez (May 03 2021 at 04:08):

In the second interpretation each bond is assigned a single current $I$ and a single voltage $V$.

view this post on Zulip John Baez (May 03 2021 at 04:09):

The second interpretation is related to the first via the equations $I = I_1 = -I_2$ and $V = \phi_1 - \phi_2$.

view this post on Zulip John Baez (May 03 2021 at 04:10):

All this stuff is formalized in Brandon's thesis.

view this post on Zulip John Baez (May 03 2021 at 04:10):

The second interpretation is the one that bond graph theorists focus on. They often call $I$ flow and $V$ effort.

view this post on Zulip Nathaniel Virgo (May 03 2021 at 06:03):

I see, thank you, that helps me to understand the electric circuit analogy in a way I didn't before. I'm only familiar with bond graphs from Perelson's paper, where that isn't fully spelled out.

I could still be confused about this, but now I think the analogy in that paper is

$$
\begin{align*}
\text{single thermodynamic flux} &\leftrightarrow \text{pair of electric currents} \\
\text{single thermodynamic potential} &\leftrightarrow \text{difference between electrical potentials}.
\end{align*}
$$

This is probably obvious to you, but to spell it out for my own benefit if nothing else: Perelson draws this diagram for the reaction network consisting of the reaction $X + Y \rightleftharpoons 2\,X$.

image.png

I think, if I got it right, the interpretation as an electric circuit would be

image.png

where the "1:2" thing on the right is an electrical component (non-existent as far as I know) that acts like a transformer coil but for DC, forcing the current on its right-hand side to be twice that on the left without dissipating any energy. The capacitors and the resistor are nonlinear.

In this electrical analogy, $C_X$, for example, is a capacitor with two wires connected to it, so it has a current $I = I_1 = -I_2$ through it and a voltage $V = \phi_1 - \phi_2$ across it.

But in the chemical interpretation $C_X$ is just the concentration of $X$. The current in or out of it corresponds to the net rate at which $X$ is produced or consumed, and the voltage corresponds to its chemical potential. So I think in the chemical interpretation the 'effort' is really just a single potential and not a difference between two potentials as it is in the electrical interpretation.

Maybe we can call this a third, different interpretation of bond graphs. (But it's a physically different interpretation, instead of a mathematically different interpretation of the same physical system.) I was mixing things up because I was imagining the analogy was thermodynamic fluxes $\leftrightarrow$ electrical currents and thermodynamic potentials $\leftrightarrow$ voltages, which isn't right.

But I think what I said about interconverting resources does make sense in terms of this third interpretation that Perelson sets up. The 0-junction is saying I can add "X" that's produced by the reaction to the "X" that was there already, whereas the 1-junction is saying I can freely interconvert one unit of "X" and one unit of "Y" into one unit of "X+Y", and vice versa. (Then the reaction converts between "X+Y" and "2X" but at the cost of dissipation, and the transducer thing freely converts between "2X" and "X" at the appropriate ratio.)

For sure I will read Brandon Coya's thesis.

view this post on Zulip Robin Piedeleu (May 03 2021 at 08:42):

Thanks, @Nathaniel Virgo, I found it helpful to get the chemical reaction perspective on bond graphs. Your work on the Legendre transform also sounds very exciting and I'd be interested to learn more!

For the moment I'm still trying to understand your circuit translation of the reaction network $X + Y \rightleftharpoons 2X$. Perelson writes that bond graph models of thermodynamic systems require only C, R, 0 and 1 symbols but then uses an additional "TD" symbol. Is this the 1:2 transformer in your corresponding circuit? I am also not sure what you mean when you say that capacitors and resistors are nonlinear. From my limited understanding, capacitors and resistors are passive components whose interpretation is a linear relation (between current and voltage), but maybe you mean linear in some other sense. I thought the nonlinearity came rather from the transformer component. Finally, what is the role of the resistor? Sorry if these are all elementary questions---I am unfamiliar with the application domain and trying to wrap my head around basic things.

view this post on Zulip Nathaniel Virgo (May 03 2021 at 11:38):

@Robin Piedeleu I'm happy to elaborate a bit and will reply in more detail later today or tomorrow but just wanted to note I've made a small correction to the circuit diagram image in my previous post - the parallel connection was connected the wrong way round.

view this post on Zulip Nathaniel Virgo (May 03 2021 at 11:42):

The 1:2 thing that I made up a symbol for is the "$\text{TD}(+2)$" symbol from Perelson's diagram. He refers to it as a "scaling transducer" but doesn't elaborate, so my interpretation is an educated guess, based on what will make it behave like a reaction network. I think the "$\text{TD}(+1)$"'s are no-ops, which is why only one of them appears in the diagram. (I'll explain in more detail later.)

view this post on Zulip Nathaniel Virgo (May 03 2021 at 13:44):

Hi @Robin Piedeleu

Here's the promised additional detail. I'm sorry that the following is (a) a bit of a mouthful and (b) such an informal sketch. The case of general reaction networks is outlined in more detail later on in Perelson's paper, and I should follow up the references there - this post is just explaining the circuit analogy intuitively.

My guess is that a symbol "$\mathrm{TD}(+n)$" means an element defined by the relations

$$
\begin{align*}
I^\text{out} &= n I^\text{in} \\
V^\text{out} &= V^\text{in}/n,
\end{align*}
$$

where I'm using the 'second interpretation' that John Baez described, so $I^\text{in} = I_1^\text{in} = -I_2^\text{in}$ and so on. This hypothetical component is passive and doesn't dissipate any power because $I^\text{in} V^\text{in} = I^\text{out} V^\text{out}$.

If this definition is right then $\text{TD}(+1)$ is a no-op - it behaves just the same as a pair of wires - so that's why they don't appear on my diagram. The $\text{TD}(+2)$ is the "2:1" thing that I made up a symbol for.

I think in Perelson's terminology such a transducer is a special case of a resistive element, so that's why he doesn't count it as an additional symbol besides $R$, $C$, $0$ and $1$. He counts anything as a resistive element as long as it only imposes a relation (linear or nonlinear) between flows and efforts, rather than their time derivatives, so that's quite a bit more general than a usual ideal resistor in an electric circuit.

That's also what I meant by saying the resistors and capacitors in the circuit I drew are nonlinear - I'm using the symbols that are usually used for resistors and capacitors, but they are really nonlinear "resistive elements" and "capacitive elements" with nonlinear 'constitutive relations' rather than linear ones.

When drawn like this,

image.png

a normal resistor would impose the relations

$$
\begin{align*}
I^\text{in} &= I^\text{out} \\
V^\text{in} - V^\text{out} &= I^\text{in} R
\end{align*}
$$

which forces the difference in voltages on its two ports (equal to the voltage across the actual resistor element) to be proportional to the current through it. However, you can also have circuit elements that behave like resistors but are nonlinear, such as a diode

image.png

An ideal diode imposes the relations

$$
\begin{align*}
I^\text{in} &= I^\text{out} \\
I^\text{in} &= I_S \left(e^{\frac{V^\text{in} - V^\text{out}}{V^0}} - 1 \right).
\end{align*}
$$

Here $I_S$ and $V^0$ are constants that depend on the diode. ($V^0$ is usually expressed in terms of physical constants.) This imposes a nonlinear relation between the current and voltage, so it can be thought of as a resistor whose resistance is a function that depends on the current (or voltage) across it, instead of being a constant.

The thing that I drew as a resistor above is meant to be a nonlinear resistor in the same sense. The relation it imposes will depend on what assumptions you want to make about the kinetics of the reaction. Its role is to determine the rate at which the reaction proceeds, as a function of the chemical potentials. It wouldn't be too hard to work out what its constitutive relation should be in the case of mass action kinetics, but I haven't done that yet.

(I realised in writing this that it's not really correct to draw the resistor as a one-port in the way I've drawn it, though. The rate of reaction actually depends on the values of both the chemical potentials, and not just on their difference.)

Similarly for the things I drew as capacitors. A normal (ideal) capacitor imposes the relation

$$V(t) = \frac{Q(t)}{C},$$

where $Q(t)$ is the integral of current over time. But one can also imagine a variation on this where $V$ depends on $Q$ nonlinearly instead of linearly. Similarly to resistive elements, Perelson counts anything as a capacitive element if it imposes a relation between efforts and the integral of flows over time. The things I drew as capacitors are meant to be nonlinear capacitive elements of this kind.

Specifically, in this case the 'charge' on each 'capacitor' is the amount of $X$ or $Y$ that's currently present in the system (so it's the integral of the current), and the 'effort' (the analog of voltage) is the chemical potential. In the case of an ideal solution this is given by $\mu_X = \mu^0_X + RT\log [X]$, where $[X]$ is the concentration of $X$, $\mu^0_X$ is the free energy of formation of $X$ (a constant that depends on the properties of the species) and $RT$ is the gas constant times temperature.
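
Restating the constitutive relations above as code may help; this is only a sketch of one reading of them (function names, units and numerical constants are made up), with a check that the guessed $\mathrm{TD}(+n)$ element is indeed power-preserving.

```python
import numpy as np

R, T = 8.314, 298.15   # gas constant (J/mol/K), temperature (K)

def transducer(n, I_in, V_in):
    """The guessed TD(+n) relations: scale flow by n and effort by 1/n."""
    return n * I_in, V_in / n

def diode_current(V_in, V_out, I_S=1e-12, V0=0.026):
    """The ideal-diode relation quoted above, a nonlinear 'resistive element'."""
    return I_S * (np.exp((V_in - V_out) / V0) - 1.0)

def chemical_effort(q, mu0):
    """'Capacitive element' for a species: effort = mu0 + RT ln(q),
    where the 'charge' q is the amount of the species present (in reference units)."""
    return mu0 + R * T * np.log(q)

# The transducer dissipates no power: I_in * V_in == I_out * V_out.
I_in, V_in = 0.3, 1.2
I_out, V_out = transducer(2, I_in, V_in)
print(np.isclose(I_in * V_in, I_out * V_out))   # True
```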

view this post on Zulip Robin Piedeleu (May 03 2021 at 16:08):

Thank you so much for the detailed answer, @Nathaniel Virgo. I think I was initially wondering where the nonlinearity was coming from (because the mass action kinetics in chemical reaction networks is proportional to products of reactant concentrations). But I see now that if we can encode that in the generalised resistors, there's no problem of expressiveness. It'll probably take me a bit more time to digest everything so I might have more questions later.

view this post on Zulip John Baez (May 03 2021 at 16:10):

I've never thought about trying to systematically draw chemical reaction networks as bond graphs. So, it's taking me a while to digest this too.

Frankly, since "reaction networks" and "Petri nets with rates" can handle the nonlinearity of the chemical rate equations so nicely, and there are good theorems relating them, I've never felt any urge to study yet another formalism for graphically representing these equations.

view this post on Zulip John Baez (May 03 2021 at 16:14):

But I guess if one has decided to describe many kinds of systems using bond graphs, it's irresistible to bring chemical reactions into this formalism.

view this post on Zulip John Baez (May 03 2021 at 16:17):

I think bond graph theorists are to engineers as category theorists are to mathematicians: they really like trying to fit everything they know into a single big framework. F. T. Brown's book Engineering System Dynamics: A Unified Graph-Centered Approach is one of the books I read to study bond graphs. It's 1059 pages long!

view this post on Zulip John Baez (May 03 2021 at 16:19):

Actually I found Jean Thoma's much older, shorter book Introduction to Bond Graphs and their Applications to be more insightful.

view this post on Zulip Cole Comfort (May 03 2021 at 16:24):

From my uninformed point of view it seems bond graphs and Lagrangian relations are two ways to describe (linear) circuits governed by the laws of classical mechanics. Is there any reason why one would chose to use bond graphs over Lagrangian relations or vice-versa?

view this post on Zulip Robin Piedeleu (May 03 2021 at 16:27):

Bond graphs are a syntax to express Lagrangian relations. One alternative is to write down the system's differential equations directly. I thought the idea of a graphical syntax was to highlight the topological connections between the different components, modularity, etc.

view this post on Zulip John Baez (May 03 2021 at 16:27):

Bond graphs are what I'd call "syntax" while Lagrangian relations are what I'd call "semantics".

In other words, people like drawing bond graphs, designing machines by sticking such drawings together to build bigger drawings, etc. Lagrangian relations are what these drawings describe.

view this post on Zulip Robin Piedeleu (May 03 2021 at 16:28):

Ditto :)

view this post on Zulip John Baez (May 03 2021 at 16:28):

Robin's post arrived while I was writing mine. Yes, "the idea of a graphical syntax was to highlight topological connection between the different components, modularity etc."

view this post on Zulip John Baez (May 03 2021 at 16:30):

Engineers apparently find it faster to design things using a graphical syntax than by writing down lots of linear differential equations, and that makes perfect sense to me, especially because their ultimate goal is to build machines, not systems of differential equations - and there's a second syntax-to-semantics functor that maps bond graphs to actual machines!

view this post on Zulip John Baez (May 03 2021 at 16:31):

(At least, there should be such a functor. It would be a bit hard to formalize it, and I don't know if anyone has tried, but it's very important.)

view this post on Zulip Robin Piedeleu (May 03 2021 at 16:32):

I love this last point. Physical reality is the ultimate semantics!

view this post on Zulip John Baez (May 03 2021 at 16:33):

Yes, we shouldn't forget the ultimate point of all this formalism. :upside_down:

There are lots of gnarly aspects, like: you can't actually build an electrical circuit where 5000 wires meet at one node, though the mathematical formalism of circuit diagrams or bond graphs allows it. Physical reality has more complicated constraints than our elegant formalisms.

view this post on Zulip Cole Comfort (May 03 2021 at 16:35):

Hmmm I see, but the bond graphs only express some subset of Lagrangian relations. So it is appropriate when you don't need all of the extra stuff that comes with the general category of Lagrangian relations, I suppose.

view this post on Zulip Robin Piedeleu (May 03 2021 at 16:36):

John Baez said:

Yes, we shouldn't forget the ultimate point of all this formalism. :upside_down:

Yes, that's what I try to keep in mind (even if, coming from a computer science background, a lot of my training has conspired to hide the material reality that makes computing possible).

view this post on Zulip Robin Piedeleu (May 03 2021 at 16:38):

Cole Comfort said:

Hmmm I see, but the bond graphs only express some subset of Lagrangian relations. So it is appropriate when you don't need all of the extra stuff that comes with the general category of Lagrangian relations, I suppose.

So this is what I was wondering about earlier, because of the different possible interpretations of 0 and 1-junctions. Is this the case even for the interpretation of 0 as copy-sum and 1 as sum-copy (or the other way around, I cannot remember)?

view this post on Zulip John Baez (May 03 2021 at 16:38):

Cole Comfort said:

Hmmm I see, but the bond graphs only express some subset of Lagrangian relations. So it is appropriate when you don't need all of the extra stuff that comes with the general category of Lagrangian relations, I suppose.

"Pure" bond graphs only give a few Lagrangian relations; we need things like resistors and capacitors too to get anything useful, but even these only describe some subcategory of Lagrangian relations. And that's fine: we're trying to build things from a limited set of parts.

view this post on Zulip Cole Comfort (May 03 2021 at 16:44):

John Baez said:

Cole Comfort said:

Hmmm I see, but the bond graphs only express some subset of Lagrangian relations. So it is appropriate when you don't need all of the extra stuff that comes with the general category of Lagrangian relations, I suppose.

"Pure" bond graphs don't do much; we need things like resistors and capacitors too, but even these only describe some subcategory of Lagrangian relations. And that's fine: we're trying to build things from a limited set of parts.

If I understand bond graphs correctly, you don't need much more to get everything, which is the result of the paper I am trying to put out this week.

There is actually a very close connection between the ZX-calculus and Lagrangian relations. In particular, affine Lagrangian relations over $\mathbb{F}_p$, for $p$ an odd prime, are equivalent to $p$-dimensional qudit stabilizer circuits (modulo invertible scalars). Now if you look at the subcategory generated by these pure, affine things (in analogy to bond graphs) you get the fragment of the qudit ZX-calculus with no Hadamard gate, and the qudit generalizations of the Z and X $\pi$-phases.

view this post on Zulip Cole Comfort (May 03 2021 at 16:45):

Which is why I am interested in these things.

view this post on Zulip John Baez (May 03 2021 at 16:46):

Robin Piedeleu said:

I still find it strange that these two interpretations of bond graphs are used simultaneously. In particular, I do not understand how to reconcile the version of the 1-junction as a series connection, which should be non-commutative, and the one that simply adds voltages and equates currents, which should be commutative. One can be encoded in the other, sure, but they seem like quite different beasts. How do engineers know how to choose the right semantics? Maybe if you're working purely with circuits there's one convention but how would you deal with mixed energy domains? I feel like I am missing something.

I had to think about this question a while. You're right, if we think of a 1-junction as a morphism in $\mathsf{FinCorel}^\circ$ it's a noncommutative product, and this is because sticking together two or more gadgets in series is inherently noncommutative.

I'd never thought enough about how amazing it is that sticking together resistors, capacitors and inductors in series is commutative. In a sense we're just "lucky" that this is the case. But this example leads to a formalism where the 1-junction is a morphism in $\mathsf{LagRel}^\circ$, and a commutative product.

This is just one of the balancing acts that Brandon was trying to deal with.
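
One way to see that commutativity, at least for linear one-ports in the frequency domain (a standard fact, sketched here purely as an illustration): the series composite is determined by the sum of the impedances, and addition is commutative.

```python
import numpy as np

def impedance(element, value, omega=2 * np.pi * 50.0):
    """Impedance of a linear one-port at angular frequency omega."""
    return {"R": value,                      # resistor
            "L": 1j * omega * value,         # inductor
            "C": 1.0 / (1j * omega * value)  # capacitor
            }[element]

def series(*Zs):
    return sum(Zs)   # putting one-ports in series adds impedances ...

Z_R = impedance("R", 100.0)
Z_C = impedance("C", 1e-6)
print(np.isclose(series(Z_R, Z_C), series(Z_C, Z_R)))   # ... so the order doesn't matter
```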

view this post on Zulip John Baez (May 03 2021 at 16:48):

It might be better to "give up" and define two different categories of bond graphs.

view this post on Zulip Cole Comfort (May 03 2021 at 16:58):

This is definitely off-topic from the original question, but has there been any work done in modelling nonlinear circuits with monoidal categories?
For example, the and/xor gates and wires generate the full subcategory of ${\sf Span}({\sf FinSet})$/${\sf Mat}(\mathbb{N})$ with objects $2^n$. However, there is no obvious way to define resistors, capacitors and so on in this setting as there is in the Lagrangian setting. Like what is the proper categorical semantics for this?

view this post on Zulip Robin Piedeleu (May 03 2021 at 16:59):

John Baez said:

I'd never thought enough about how amazing it is that sticking together resistors, capacitors and inductors in series is commutative. In a sense we're just "lucky" that this is the case.

Yes! I visualise this better using graphical linear algebra: the semantics of resistors and inductors look like this for example
image.png
image.png
I find it easier to get the intuition that composing them in series is a commutative operation from this representation.

view this post on Zulip Robin Piedeleu (May 03 2021 at 17:02):

(The --|x)-- node denotes integration here)

view this post on Zulip Brandon Coya (May 03 2021 at 17:23):

Hey, I've been silently following this conversation. I think John already answered all of the questions, but I'd be happy to try to say more if there are other questions about the bond graph paper.

view this post on Zulip John Baez (May 03 2021 at 18:41):

Cole Comfort said:

This is definitely off-topic from the original question, but has there been any work done in modelling nonlinear circuits with monoidal categories?
For example, the and/xor gates and wires generate the full subcategory of ${\sf Span}({\sf FinSet})$/${\sf Mat}(\mathbb{N})$ with objects $2^n$. However, there is no obvious way to define resistors, capacitors and so on in this setting as there is in the Lagrangian setting. Like what is the proper categorical semantics for this?

Brandon and I studied nonlinear circuits a bit in Props in network theory. We focused on linear circuits, but mentioned that our techniques apply to nonlinear circuits as well. We only illustrated this in the simplest nonlinear case: the affine circuits, which allow current sources and voltage sources as well as linear components. But there's nothing to stop you from throwing in diodes or other nonlinear circuit elements. Our formalism is "plug-and-play": the different circuit elements can be taken as generators of a prop, and you can get models (or "algebras") of this prop by mapping them to relations between current and potential.

view this post on Zulip John Baez (May 03 2021 at 18:45):

Of course it would be good for people to work this out in detail for certain classes of nonlinear circuit elements.

view this post on Zulip Robin Piedeleu (May 03 2021 at 18:53):

Nonlinear circuits can mean different things (since nonlinearity is a non-structure), but as John mentions, throwing in ideal diodes would be a nice first step that (I think?) would limit the nonlinearity to piecewise linearity, for which the equational theory might be manageable... It may even allow one to approximate the small-signal behaviour of more complicated components, like transistors for example.

view this post on Zulip Cole Comfort (May 03 2021 at 18:55):

John Baez said:

Cole Comfort said:

This is definitely off-topic from the original question, but has there been any work done in modelling nonlinear circuits with monoidal categories?
For example, the and/xor gates and wires generate the full subcategory of ${\sf Span}({\sf FinSet})$/${\sf Mat}(\mathbb{N})$ with objects $2^n$. However, there is no obvious way to define resistors, capacitors and so on in this setting as there is in the Lagrangian setting. Like what is the proper categorical semantics for this?

Brandon and I studied nonlinear circuits a bit in Props in network theory. We focused on linear circuits, but mentioned that our techniques apply to nonlinear circuits as well. We only illustrated this in the simplest nonlinear case: the affine circuits, which allow current sources and voltage sources as well as linear components.

Yes, the affine linear and affine Lagrangian relations are somewhat well studied in the literature, for example in the papers you and your coauthors have written on this subject, as well as the paper on graphical affine algebra. I was thinking more about adding generators which act multiplicatively.

But there's nothing to stop you from throwing in diodes or other nonlinear circuit elements. Our formalism is "plug-and-play": the different circuit elements can be taken as generators of a prop.

By this you mean the Circ approach to things? I don't think this would give us a full picture of what is going on, because this is more of a syntactical approach; equally important would be the semantic side. For the classes of circuits you have already studied there is a semantics in terms of affine Lagrangian relations... but I suppose my question is, what is the larger category that would serve as the semantics for circuits which can additionally have multiplicative components? Would this just be something like complex matrices considered as a monoidal category with the bilinear tensor product, as this is how multiplicative stuff is added to stabilizer quantum circuits?

view this post on Zulip John Baez (May 03 2021 at 19:04):

I wrote:

But there's nothing to stop you from throwing in diodes or other nonlinear circuit elements. Our formalism is "plug-and-play": the different circuit elements can be taken as generators of a prop.

Cole replied:

By this you mean the Circ approach to things? I don't think this would give us a full picture of what is going on, because this is more of a syntactical approach to things; equally important would be the semantic side.

I had meanwhile edited my sentence to say this:

Our formalism is "plug-and-play": the different circuit elements can be taken as generators of a prop, and you can get models (or "algebras") of this prop by mapping them to relations between current and potential.

The point is that if these generators obey no relations (except those implicit in the definition of prop), you can map them to whatever you want. Of course if they obey no relations, you lose the ability to reason with them diagrammatically in exciting ways. But I think things like nonlinear diodes actually don't obey any relations (except those implicit in the definition of prop).

view this post on Zulip Robin Piedeleu (May 03 2021 at 19:05):

If it's a semantics you're looking for then I think the Lagrangian formalism could be extended to relations that are more generally power-preserving, i.e. for which $\sum_i e_i f_i = 0$ for $i$ ranging over all the ports, where $e_i$ is the $i$th effort and $f_i$ is the $i$th flow. This also allows for nonlinear components that already appear in the bond graph literature. For example, the RS node has one input and one output with constitutive relation $e_1 f_1 = e_2 f_2$. Apparently, it is useful to model entropy flow (going back to the initial motivation of this thread). From what I understand, it can replace dissipative components, by modelling the energy lost as flowing into an additional thermodynamic domain as entropy.
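
A concrete reading of that RS node (my own sketch of the constitutive relation above, with the second port interpreted as a thermal port whose effort is the temperature and whose flow is the entropy flow rate):

```python
def rs_node(e1, f1, T):
    """Sketch of an RS node: power e1*f1 entering port 1 leaves through a thermal port
    with effort T (temperature) and flow f2 (entropy flow rate), so e1*f1 = e2*f2
    and no power disappears from the model."""
    e2 = T
    f2 = e1 * f1 / T
    return e2, f2

e1, f1, T = 5.0, 0.2, 300.0
e2, f2 = rs_node(e1, f1, T)
print(abs(e1 * f1 - e2 * f2) < 1e-12)   # True: the constitutive relation e1 f1 = e2 f2 holds
```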

view this post on Zulip Cole Comfort (May 03 2021 at 19:09):

John Baez said:

I wrote:

But there's nothing to stop you from throwing in diodes or other nonlinear circuit elements. Our formalism is "plug-and-play": the different circuit elements can be taken as generators of a prop.

Cole replied:

By this you mean the Circ approach to things? I don't think this would give us a full picture of what is going on, because this is more of a syntactical approach to things; equally important would be the semantic side.

I had meanwhile edited my sentence to say this:

Our formalism is "plug-and-play": the different circuit elements can be taken as generators of a prop, and you can get models (or "algebras") of this prop by mapping them to relations between current and potential.

The point is that if these generators obey no relations (except those implicit in the definition of prop), you can map them to whatever you want. Of course if they obey no relations, you lose the ability to reason with them diagrammatically in exciting ways. But I think things like nonlinear diodes actually don't obey any relations (except those implicit in the definition of prop).

This is the opposite of what I wish were true.

view this post on Zulip John Baez (May 03 2021 at 19:11):

Cole wrote:

but I suppose my question is: what is the larger category that would serve as the semantics for circuits which can additionally have multiplicative components?

You could be lazy and use Rel. In some other work I've used "algebraic relations" between vector spaces. Unfortunately smooth relations between vector spaces don't compose in general, but there's a certain class that do.

view this post on Zulip Cole Comfort (May 03 2021 at 19:11):

Robin Piedeleu said:

If it's a semantics you're looking for then I think the Lagrangian formalism could be extended to relations that are more generally power-preserving, i.e. for which $\sum_i e_i f_i = 0$ for $i$ ranging over all the ports, where $e_i$ is the $i$th effort and $f_i$ is the $i$th flow. This also allows for nonlinear components that already appear in the bond graph literature. For example, the RS node has one input and one output with constitutive relation $e_1 f_1 = e_2 f_2$. Apparently, it is useful to model entropy flow (going back to the initial motivation of this thread). From what I understand, it can replace dissipative components, by modelling the energy lost as flowing into an additional thermodynamic domain as entropy.

This kind of reminds me of things like the Toffoli gate in quantum computing. But by applying the CPM construction to this, I assume that we can just as well get a semantics of circuits that merely don't increase the power.

view this post on Zulip John Baez (May 03 2021 at 19:13):

Cole Comfort said:

John Baez said:

But I think things like nonlinear diodes actually don't obey any relations (except those implicit in the definition of prop).

This is the opposite of what I wish were true.

But is it the opposite of what is true? :upside_down:

view this post on Zulip Robin Piedeleu (May 03 2021 at 19:13):

Cole Comfort said:

This kind of reminds me of things like the Toffoli gate in quantum computing. But by applying the CPM construction to this, I assume that we can just as well get a semantics of circuits that merely don't increase the power.

Hmm... to what exactly are you applying the CPM construction here?

view this post on Zulip Robin Piedeleu (May 03 2021 at 19:14):

Cole Comfort said:

John Baez said:

The point is that if these generators obey no relations (except those implicit in the definition of prop), you can map them to whatever you want. Of course if they obey no relations, you lose the ability to reason with them diagrammatically in exciting ways. But I think things like nonlinear diodes actually don't obey any relations (except those implicit in the definition of prop).

This is the opposite of what I wish were true.

My intuition was also that diodes would obey non-trivial equations that capture some interesting semantics. And, in a sense it would be extremely interesting if they obeyed no equations because that would give us an easy completeness result...suspiciously easy.

view this post on Zulip Cole Comfort (May 03 2021 at 19:15):

Robin Piedeleu said:

Cole Comfort said:

This kind of reminds me of things like the Toffoli gate in quantum computing. But by applying the CPM construction to this, I assume that we can just as well get a semantics of circuits that merely don't increase the power.

Hmm... to what exactly are you applying the CPM construction here?

To whatever semantics you are extending Lagrangian relations to? I just assume that if there is a semantics where power is forced to be preserved, you could just as well consider a semantics where you can throw it away (but not get it for free).

view this post on Zulip Robin Piedeleu (May 03 2021 at 19:19):

Cole Comfort said:

To whatever semantics you are extending Lagrangian relations to? I just assume that if there is a semantics where power is forced to be preserved, you could just as well consider a semantics where you can throw it away (but not get it for free).

My intuition was different: I'm not even sure the class of relations forms a category, but I thought that it at least allowed for more expressive components (such as the one I gave as an example) because it would leave room for some nonlinearity.

view this post on Zulip Robin Piedeleu (May 03 2021 at 19:36):

(But it occurs to me I might have misunderstood what you wrote.)

view this post on Zulip John Baez (May 03 2021 at 19:52):

Robin Piedeleu said:

My intuition was also that diodes would obey non-trivial equations that capture some interesting semantics. And, in a sense it would be extremely interesting if they obeyed no equations because that would give us an easy completeness result...suspiciously easy.

Let's talk about diodes for concreteness. A diode imposes a nonlinear relation between current and voltage; in fact current is a function of voltage. But there are different kinds of diodes - people use diodes for different things - and this function can be lots of things. One class of such functions is given by the Shockley diode equation. It's hard for me to imagine that these functions obey a lot of equations that we'd want to incorporate in a generators-and-relations definition of a prop for circuits with diodes.
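For reference, the Shockley equation makes the current an exponential function of the voltage across the diode,

$$I = I_S\left(e^{V/(nV_T)} - 1\right),$$

where $I_S$ is the saturation current, $V_T$ the thermal voltage and $n$ the ideality factor; it is hard to see any useful algebraic identities that graphs of this shape would satisfy on the nose.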

view this post on Zulip John Baez (May 03 2021 at 19:54):

However, people used to use diodes for digital computers, and then it seems they used some simplifying assumptions to get something called diode logic, which is more like Boolean logic.

view this post on Zulip John Baez (May 03 2021 at 19:54):

This could have some nice generators-and-relations description!

view this post on Zulip Robin Piedeleu (May 03 2021 at 19:55):

Agreed, I also do not see how you could capture this sort of behaviour with any manageable set of equations. This is why I was mentioning ideal diodes whose behaviour seems much simpler: [image: current-voltage graph of an ideal diode]
I was thinking about a semantics in terms of piecewise linear relations, which might be better behaved than full-blown nonlinearities of the type you cited.
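For instance, with the sign conventions of that graph, the ideal diode is the single piecewise-linear relation

$$\{(V, I) \in \mathbb{R}^2 : V \leq 0,\ I \geq 0,\ VI = 0\},$$

i.e. the union of the nonpositive $V$-axis and the nonnegative $I$-axis: a complementarity condition rather than the graph of a function.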

view this post on Zulip John Baez (May 03 2021 at 20:05):

Yes, that sort of ideal diode you just graphed seems a lot better - I guess they use something like this in "diode logic"?

view this post on Zulip Cole Comfort (May 03 2021 at 20:06):

Robin Piedeleu said:

Agreed, I also do not see how you could capture this sort of behaviour with any manageable set of equations. This is why I was mentioning ideal diodes whose behaviour seems much simpler: [image: current-voltage graph of an ideal diode]
I was thinking about a semantics in terms of piecewise linear relations, which might be better behaved than full-blown nonlinearities of the type you cited.

What kind of diagram is this?

view this post on Zulip John Baez (May 03 2021 at 20:08):

Hmm? It's a graph of the voltage-current relation for a diode that lets no current through if the voltage is negative, and any positive amount of current if the voltage is zero.

view this post on Zulip John Baez (May 03 2021 at 20:09):

This is a Lagrangian submanifold of the plane with its usual symplectic structure... except at the origin, where it's not smooth!
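As a quick sanity check: with the symplectic form $\omega = dV \wedge dI$ on the plane, being Lagrangian just means being a $1$-dimensional submanifold, since

$$\omega|_L = 0 \quad \text{automatically whenever } \dim L = 1,$$

so away from the origin the ideal-diode relation is Lagrangian for free.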

view this post on Zulip Spencer Breiner (May 03 2021 at 20:18):

John Baez said:

Unfortunately smooth relations between vector spaces don't compose in general, but there's a certain class that do.

Do the relation objects (apices of the spans) need to be vector spaces, or can they be more general (e.g., closed subsets)? Is there an easy counter-example to see what goes wrong?

view this post on Zulip Robin Piedeleu (May 03 2021 at 20:28):

John Baez said:

Yes, that sort of ideal diode you just graphed seems a lot better - I guess they use something like this in "diode logic"?

I'd not heard of diode logic before but from reading the wikipedia page now the answer to your question seems to be yes: "this discussion assumes idealized diodes that conduct in the forward direction with no voltage drop and do not conduct in the reverse direction".

view this post on Zulip Robin Piedeleu (May 03 2021 at 20:30):

Oh no, maybe that's different. Would this behaviour correspond to I=0 when voltage is negative and V=I for positive voltage?

view this post on Zulip Robin Piedeleu (May 03 2021 at 20:33):

Spencer Breiner said:

Do the relation objects (apices of the spans) need to be vector spaces, or can they be more general (e.g., closed subsets)? Is there an easy counter-example to see what goes wrong?

I can see the intuition that composition (i.e. intersection of relations) could create singularities but I don't have an example in mind. Should not be too hard to concoct.

view this post on Zulip John Baez (May 03 2021 at 20:48):

Spencer Breiner said:

John Baez said:

Unfortunately smooth relations between vector spaces don't compose in general, but there's a certain class that do.

Do the relation objects (apices of the spans) need to be vector spaces, or can they be more general (e.g., closed subsets)? Is there an easy counter-example to see what goes wrong?

By a smooth relation from a vector space $V$ to a vector space $W$ I mean a smooth submanifold of $V \times W$.

I don't have a really good example ready for you, but this problem is morally connected to the fact that the category of smooth manifolds and smooth maps doesn't have pullbacks.

view this post on Zulip John Baez (May 03 2021 at 20:51):

As Robin just suggested, it's easy to find two smooth submanifolds of the plane whose intersection is not a smooth manifold. In fact I could draw two smoothly embedded copies of $\mathbb{R}$ in the plane whose intersection is a Cantor set!

(I am very good at drawing. :upside_down:)

view this post on Zulip John Baez (May 03 2021 at 20:53):

Just take the first to be the x axis and take the second to be another smoothly embedded copy of the real line formed by taking the x axis and "pushing it up" to form a bump in each of the middle thirds you remove from [0,1] to get the Cantor set (and also pushing it up outside [0,1], so that the two curves meet exactly in the Cantor set).

view this post on Zulip John Baez (May 03 2021 at 20:54):

This is a fairly "exciting" counterexample, but it's much easier to find two smoothly embedded curves in the plane whose intersection is homeomorphic to [0,1], and thus not a submanifold.
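One explicit such pair, as a sketch: let $h(t) = e^{-1/t}$ for $t > 0$ and $h(t) = 0$ for $t \leq 0$, a standard smooth function, and set

$$f(x) = h(-x) + h(x - 1),$$

which is smooth and vanishes exactly on $[0,1]$. The graph of $f$ and the x axis are then smoothly embedded curves meeting exactly in $[0,1] \times \{0\}$, a manifold with boundary rather than a submanifold of the plane.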

view this post on Zulip John Baez (May 03 2021 at 20:57):

All this means, really, is that requiring relations between smooth manifolds $M$ and $N$ to be smoothly embedded submanifolds $R \subseteq M \times N$ is a bad idea.

view this post on Zulip John Baez (May 03 2021 at 20:58):

It's a bit of a pity, because differential geometry is nice in some ways, but one has to get used to it.

view this post on Zulip Robin Piedeleu (May 03 2021 at 20:58):

More generally, transversality is a condition that guarantees that the intersection of two submanifolds is again a submanifold.
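Concretely: submanifolds $A, B \subseteq M$ are transverse when their tangent spaces span the ambient one at every intersection point,

$$T_p A + T_p B = T_p M \quad \text{for all } p \in A \cap B,$$

and in that case $A \cap B$ is a submanifold of codimension $\operatorname{codim} A + \operatorname{codim} B$. The Cantor-set and $[0,1]$ examples above are exactly failures of transversality.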

view this post on Zulip Cole Comfort (May 03 2021 at 21:00):

Of course, one could never hope to have a reasonable monoidal theory for such things, sadly

view this post on Zulip John Baez (May 03 2021 at 22:06):

Right, the problem is that in the category of manifolds you can't put a transversality condition on relations "taken one at a time" that's necessary and sufficient for any two of these relations to have a well-defined composite.

view this post on Zulip John Baez (May 03 2021 at 22:10):

However, when you have current-voltage relations where the currents are a smooth function of the voltages, I believe the composites are well-defined and again of the same form.

Here I'm not talking about how smooth functions are composable, which is obvious. I'm talking about relations $R \colon \mathbb{R}^{n} \times \mathbb{R}^n \nrightarrow \mathbb{R}^m \times \mathbb{R}^m$ which are of the form

$(\phi, I, \phi', I') \in R$ iff $(I, I') = f(\phi, \phi')$

for some smooth $f \colon \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n \times \mathbb{R}^m$.
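As a minimal worked instance (a sketch of the easiest case, not the general claim): a resistor $R$, viewed as such a relation with $n = m = 1$, has $(I, I') = f(\phi, \phi') = \big(\tfrac{\phi - \phi'}{R}, \tfrac{\phi - \phi'}{R}\big)$. Composing resistors $R_1$ and $R_2$, the current-matching condition determines the intermediate potential, $\phi' = \tfrac{R_2 \phi + R_1 \phi''}{R_1 + R_2}$, and eliminating it gives

$$(I, I'') = \Big(\tfrac{\phi - \phi''}{R_1 + R_2}, \tfrac{\phi - \phi''}{R_1 + R_2}\Big),$$

which is again of the currents-as-a-smooth-function-of-potentials form (the series resistor). The general claim needs the analogous intermediate equation to be smoothly solvable.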

view this post on Zulip John Baez (May 03 2021 at 22:11):

I'm claiming - on the basis of some old memories, which should be checked if anyone cares - that composing two smooth relations of this particular sort gives another smooth relation of this sort.

view this post on Zulip John Baez (May 03 2021 at 22:11):

A lot of electrical components give smooth relations of this particular sort.

view this post on Zulip John Baez (May 03 2021 at 22:13):

An example is a diode obeying Shockley's diode equation.

view this post on Zulip John Baez (May 03 2021 at 22:15):

By the way, @Robin Piedeleu, are you planning to work on something like compositional thermodynamics or electrical circuits or bond graphs, something like that?

view this post on Zulip Spencer Breiner (May 03 2021 at 22:38):

John Baez said:

Spencer Breiner said:

John Baez said:

Unfortunately smooth relations between vector spaces don't compose in general, but there's a certain class that do.

Do the relation objects (apices of the spans) need to be vector spaces, or can they be more general (e.g., closed subsets)? Is there an easy counter-example to see what goes wrong?

I don't have a really good example ready for you, but this problem is morally connected to the fact that the category of smooth manifolds and smooth maps doesn't have pullbacks.

Something like this, I suppose:

$$\begin{CD} \mathbb{R} @>\langle t^2,t^3\rangle>> \mathbb{R}^2 \\ @VV!V @VVy^2-x^3V \\ 1 @>0>> \mathbb{R} \end{CD}$$

Edit: Fixed labels
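To spell out the issue (as I understand it): the set-theoretic pullback is the cuspidal cubic

$$C = \{(x, y) \in \mathbb{R}^2 : y^2 = x^3\},$$

which is not a smooth submanifold of the plane at the origin, and the candidate comparison map $t \mapsto (t^2, t^3)$ is a smooth bijection onto $C$ but not an immersion at $t = 0$.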

view this post on Zulip Spencer Breiner (May 03 2021 at 22:48):

John Baez said:

All this means, really, is that requiring relations between smooth manifolds $M$ and $N$ to be smoothly embedded submanifolds $R \subseteq M \times N$ is a bad idea.

To me (as you say), this suggests that submanifolds are the wrong choice of apex. At a minimum, we should allow manifolds with boundary. Of course, that doesn't address the Cantor set example, but if I remember correctly, the Cantor space still gets some kind of smooth structure, and the inclusion ought to be smooth as well. Is there any notion of a profinite manifold that looks like Euclidean space only on infinitesimal neighborhoods?

The example I gave above is a bit different, as the problem is that the sub(not-quite)manifold has the "wrong" smooth structure. However, it's not obvious to me that this is a big problem (if we admit non-manifolds as apices).

view this post on Zulip John Baez (May 04 2021 at 05:48):

If you want a well-behaved category of "smooth spaces" I recommend diffeological spaces:

We explain them and show they form a complete and cocomplete quasitopos. (We explain what a quasitopos is, too.)

If you want a topos it gets hard to have something with a faithful functor to Set.

view this post on Zulip John Baez (May 04 2021 at 05:49):

Any subset of a diffeological space naturally becomes a diffeological space of its own, so the Cantor set as a subset of the real line is indeed a smooth space in a nontrivial way.

view this post on Zulip Robin Piedeleu (May 04 2021 at 12:00):

John Baez said:

By the way, Robin Piedeleu, are you planning to work on something like compositional thermodynamics or electrical circuits or bond graphs, something like that?

Yes, I first got interested in the thermodynamics and energy cost of computation; this then extended to an interest in compositional modelling of thermodynamical systems more generally. I am not sure that bond graphs are the way to go or if some other formalism would be better suited. I am still very much exploring the area and trying to identify valuable research questions (that would benefit from my own knowledge in category theory and diagrammatic languages in particular).

view this post on Zulip John Baez (May 04 2021 at 14:30):

Great! I hope we keep talking about this stuff.

My ultimate goal with this stuff is to understand nonequilibrium thermodynamics for biology and ecology, as part of understanding what a "thriving ecosystem" is, and developing an approach to economy that seeks a thriving ecosystem rather than mere "growth". This is a long road, which I may never reach the end of, but there's a lot of interesting stuff to do en route.

view this post on Zulip Robin Piedeleu (Jul 26 2021 at 08:56):

I just came back to this topic for the summer and found this interesting paper that formulates thermodynamics in terms of contact geometry and deals with composite systems in this framework. It might be useful if you don't already know about it.

view this post on Zulip Robin Piedeleu (Jul 26 2021 at 08:59):

It should be possible to reverse engineer a category from their description of composite systems in Section 4.

view this post on Zulip Joe Moeller (Jul 26 2021 at 11:45):

That link doesn't work for me.

view this post on Zulip Robin Piedeleu (Jul 26 2021 at 13:45):

Sorry, edited it above. Does it work now? For reference it's Stability of composite thermodynamic systems with interconnection constraints, D. Gromov & P. Caines

view this post on Zulip Joe Moeller (Jul 26 2021 at 13:55):

Yeah, now it works. Thanks!

view this post on Zulip John Baez (Jul 26 2021 at 20:32):

Cool! I didn't know this paper. I'm starting to write a little series of blog articles about contact geometry, thermodynamics and statistical mechanics. So yeah, I think it'd be great to cook up some categories of open thermodynamic systems using contact geometry.

view this post on Zulip John Baez (Jul 26 2021 at 20:35):

By the way, @Robin Piedeleu, what's the most detailed introduction to contact geometry and thermodynamics that you've seen? So far I've just found a bunch of papers that say very basic stuff, nothing that really exploits contact geometry to do something exciting for thermodynamics. (Yeah, the Legendre transform is cool but it's pretty basic.)

It's as if people explained symplectic geometry to the point of writing down Hamilton's equations in terms of a symplectic manifold and quit there.

view this post on Zulip John Baez (Jul 27 2021 at 01:04):

Today I blogged about symplectic and contact geometry in thermodynamics - a review of known stuff:

This is a warmup for discussing their appearance in statistical mechanics.

view this post on Zulip Robin Piedeleu (Jul 27 2021 at 09:25):

John Baez said:

By the way, Robin Piedeleu, what's the most detailed introduction to contact geometry and thermodynamics that you've seen? So far I've just found a bunch of papers that say very basic stuff, nothing that really exploits contact geometry to do something exciting for thermodynamics. (Yeah, the Legendre transform is cool but it's pretty basic.)

I'm just discovering this stuff but all the references I saw (articles of Hermann or those of Mrugala, which you must have seen as well) seem to reformulate basic thermodynamics in the framework of contact geometry, so I'm afraid they do not satisfy your criterion of exploiting contact geometry to do something 'exciting' (although I lack the background to even assess what constitutes an exciting development in this area). And everything I've seen so far has been about phenomenological thermodynamics, not statistical mechanics.

I've been planning to read Loïc Benayoun's thesis (in French), which contains a reasonably detailed introduction to the contact-geometric approach and, apparently, some applications.

In any case, looking forward to reading your blog posts (I've saved the first one for later today)!

view this post on Zulip Spencer Breiner (Jul 28 2021 at 15:16):

The author of that thesis also has this paper in English, for other provincial monolinguals like me.

view this post on Zulip Peiyuan Zhu (Aug 25 2021 at 17:31):

Is there any resource anyone would recommend for studying symplectic and contact geometry?

view this post on Zulip John Baez (Aug 30 2021 at 02:38):

For symplectic geometry I recommend starting with V. I. Arnol'd's book Mathematical Methods of Classical Mechanics and Marsden and Ratiu's book Introduction to Mechanics and Symmetry. For more, I recommend Guillemin and Sternberg's book Symplectic Techniques in Physics and Abraham and Marsden's Foundations of Mechanics.

view this post on Zulip John Baez (Aug 30 2021 at 02:48):

Arnol'd's book also discusses contact geometry.