You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anyone can get one by contacting Matteo Capucci at name dot surname at gmail dot com.
For anything else related to this archive, contact the same person.
I hope more people start saying a bit about what they're doing - this is the stream for that!
Here's what I'm doing now - not including teaching.
I'm giving a talk on Monday March 8, 2021 at the Zurich Theoretical Physics Colloquium, called Theoretical physics in the 21st century, as part of their Sustainability Week. The talk is prepared now, I just need to give it.
I'm preparing a talk for Kirill Krasnov's seminar Octonions and the Standard Model. This is giving me a chance to think about particle physics - fun!
I've got to resubmit Structured versus decorated cospans, which was rejected by Transactions of the AMS. The first referee said the paper was "outstanding", so they called in two more, one of whom said it used too much category theory to be easy to understand (so it should be reorganized), and the other of whom said it wasn't so interesting. I also want to see if we can simplify the proofs using some ideas of @Mike Shulman. But I think we'll resubmit it before doing that - just because it's easy to do.
I want to restart work on the paper "Schur functors" with @Joe Moeller and @Todd Trimble. Our winter quarter ends this week, and the spring quarter starts on March 29th, so I have a window where I can get a lot of writing done.
I need to prepare a talk for Thursday March 18, 2021 for the Topos Institute seminar. It's time to get cracking!
On Wednesday April 7, 2021 I'm giving a talk at PIMS, the Pacific Institute for the Mathematical Sciences, entitled The answer to the ultimate question of life, the universe and everything. This talk is already prepared so it's no work.
I want to prepare talks on the Rosetta Stone paper and A characterization of entropy in terms of information loss for the Topos Institute workshop "Finding the right abstractions: category-theoretic language for human flourishing", May 5, 12, 19.
I was going to email you about point number 5 tomorrow actually, so I’m glad to see it’s on there ;)
Yes, I need to quickly decide if I'm going to stick with my existing talk or do a different one.
Here's my to-do list now:
Prepare a talk on Mathematics in the 21st century for the Topos Institute seminar this Thursday (March 25, 2021). Brendan Fong wanted me to give a broad programmatic talk instead of a technical one, so I decided to really throw caution to the winds and talk about the future of pure mathematics, applied category theory and climate change, and how they all fit together! I still need to polish it up... I have some ideas about creating something like a "nervous system" for the planet.
Prepare two talks for Kirill Krasnov's seminar Octonions and the Standard Model on Monday April 5th and Monday April 12th. The first one, Can we understand the standard model?, is done. In this one I don't talk about the octonions, just the puzzle of what structure is preserved by the Standard Model gauge group. I'm busy struggling to write the second talk, which is about using the exceptional Jordan algebra, and some other Jordan algebras, to make this structure more "conceptual". I've had to do some last-minute research to pull this talk together. Right now I'd love nothing more than to drop everything else and think about this for a few more months!
Simplify "Structured versus decorated cospans" with Kenny Courser and Christina Vasilakopoulou using ideas from Mike Shulman. This will really improve the paper!
Work on "Schur functors" with @Joe Moeller and @Todd Trimble. We're going to meet a bunch next week to make progress on this.
On Wednesday April 7, 2021 I'm giving a talk at PIMS, the Pacific Institute for the Mathematical Sciences, on The answer to the ultimate question of life, the universe and everything.
Prepare talks on the Rosetta Stone paper and A characterization of entropy in terms of information loss for the Topos Institute workshop "Finding the right abstractions: category-theoretic language for human flourishing", May 5, 12, 19.
So, I'm giving it away here (but don't tell anyone, publishers try to keep papers inaccessible to the world).
So, my list of things to do has shrunk even more!
It now looks like this:
Finish "Schur functors" with @Joe Moeller and @Todd Trimble.
Simplify "Structured versus decorated cospans" with @Kenny and @Christina Vasilakopoulou using ideas from Mike Shulman.
John Baez said:
- After some comic blunders where they made some but not all the corrections we asked for, the Journal of Mathematical Physics has succeeded in publishing my paper with David Weisbart and Adam Yassine:
So, I'm giving it away here (but don't tell anyone, publishers try to keep papers inaccessible to the world).
Are there many differences from the arXiv version? :)
Nothing major. It looks very different, though.
Here's my latest list of things to do. Categories with nets, with @Jade Master, @Fabrizio Genovese and @Mike Shulman, was accepted by LICS; we'll finish revising it and submit it soon!
Next:
Finish "Schur functors" with @Joe Moeller and @Todd Trimble. WORKING ON IT.
Simplify Structured versus decorated cospans with @Kenny and @Christina Vasilakopoulou using ideas from Mike Shulman. WORKING ON IT - Kenny figured out how to prove one of the main theorems this new way, and everything else should be easier.
Wednesday May 19, 2021 - There will be a meeting on "Finding the Right Abstractions: Category-theoretic Language for Human Flourishing", organized by David Spivak, Andrew Critch, and Scott Garrabrant. I will go to some of this and give a talk on the Rosetta Stone paper and subsequent work... so I need to make some slides.
Monday May 31, 2021 - I'll explain some category-theoretical tools for compositional system design at a session on Compositional Robotics: Mathematics and Tools at the International Conference on Robotics and Automation (ICRA 2021).
Tuesday June 8, 2021 - At 3 pm I will speak at Categories and Companions 2021, an online symposium for research students of category theory and related disciplines to meet, learn from each other and establish new collaborations. I'll give a 50 minute talk on structured cospans, but I have some slides so I just need to update them.
June 13-17, 2021 - one day in this interval Larry Li, Bill Cannon and I will be running a session on "Non-equilibrium Models in Biology: from Chemical Reaction Networks to Natural Selection" at the annual meeting of the Society for Mathematical Biology.
Wednesday June 23, 2021 - at 10 am I'll give a talk at Google on The answer to the ultimate question of life, the universe and everything.
Thursday July 1, 2021 - I go to Berkeley to visit the Topos Institute.
Okay, here's what's going on. Since I'm retiring at the end of June, I want to finish a bunch of stuff before then, so I can enjoy myself thinking about brand new things.
Monday May 31, 2021 - I'm giving a talk at the 2021 Workshop on Compositional Robotics: Mathematics and Tools. THE HARD PART IS DONE - my slides are prepared and you can see them here.
Finish "Schur functors" with @Joe Moeller and @Todd Trimble. DONE - we just need to check it over a few more times and put it on the arXiv! It's 53 pages, a fairly massive paper on representations of the symmetric groups and how they fit into a wonderful structure which we call a "2-plethory", whose decategorification is the ring of so-called symmetric functions. I've been working on this with Todd since 2007, but the pace picked up a lot when Joe joined in.
Write a short column on "Moduli spaces of polyhedra" for the AMS Notices. This is due June 6th. HALF-DONE, NEED TO FINISH IT NOW.
Simplify Structured versus decorated cospans with @Kenny and @Christina Vasilakopoulou using ideas from Mike Shulman. NEED TO GET BACK TO WORKING ON IT - Kenny figured out how to prove one of the main theorems this new way, and everything else should be easier.
Tuesday June 8, 2021 - At 3 pm I will speak at Categories and Companions 2021, an online symposium for research students of category theory and related disciplines to meet, learn from each other and establish new collaborations. I'll give a 50 minute talk on structured cospans, but I have some slides so I just need to update them.
Monday June 14, 2021 - Larry Li, Bill Cannon and I will be running a session on "Non-equilibrium Models in Biology: from Chemical Reaction Networks to Natural Selection" at the annual meeting of the Society for Mathematical Biology.
Wednesday June 23, 2021 - at 10 am I'll give a talk at Google on The answer to the ultimate question of life, the universe and everything.
Thursday July 1, 2021 - I retire and drive to Berkeley to visit the Topos Institute. :tada:
Oh, wow. Is it public knowledge that you are retiring?
Now it is :)
He's mentioned it before now, on the n-Café at least, I think.
A search on this CT zulip for "retiring" picks up a few times John has mentioned it. The earliest appears to have been in December, when he said he would retire in June.
David Michael Roberts said:
Oh, wow. Is it public knowledge that you are retiring?
Yes, I announced it and said what I plan to do afterwards on the n-Category Cafe.
I've been grappling with this idea (retiring) for a few years. It's both exciting and sort of scary.
John Baez said:
I've been grappling with this idea (retiring) for a few years. It's both exciting and sort of scary.
I hope that your retirement brings you joy! I am obviously barely getting started, but I am kind of dreaming of a day in the distant future when I can retire and focus on exposition and guidance.
Jonathan Sterling said:
John Baez said:
I've been grappling with this idea (retiring) for a few years. It's both exciting and sort of scary.
I hope that your retirement brings you joy! I am obviously barely getting started, but I am kind of dreaming of a day in the distant future when I can retire and focus on exposition and guidance.
Surprisingly I feel the same way (I am also just starting though ;))
@John Baez Oh, I had forgotten about that!
Interesting, @Georgios Bakirtzis and @Jonathan Sterling! When I was really young I thought I would quit doing math after I was 40 and do something else, like writing expository books. I got too interested in math, and also too comfortable having tenure, to make a switch like that so soon. I probably still have research I want to do: I seem to need to do some to stay happy. But now that I'll be able to do whatever I want, there's really no way of being sure what that will be.
I realized I could not write a really good column about moduli spaces of polyhedra by the deadline of June 6th. So, I changed the subject to something I've recently been blogging about, and it got a lot easier! I finished it up pretty quickly:
If anyone can point to places where this is unclear, I'd love to hear about it! To whet your interest: it's easy to figure out the coefficients of the derivative of a polynomial given the coefficients of the polynomial. But how can you figure out the roots of the derivative given the roots of the polynomial? There's an answer that uses physics.
You say
$\frac{p'(z)}{p(z)} = \sum_{k=1}^n \frac{1}{z - z_k}$
and that the roots of $p'$ are the same as the roots of $p'/p$. You then hit this with a hyperplane separation theorem. The proof in Wikipedia goes slightly more directly and observes that if $z$ is a root of $p'$ but not of $p$, then from the above equation you get
$z = \frac{\sum_{k=1}^n |z - z_k|^{-2} z_k}{\sum_{k=1}^n |z - z_k|^{-2}}$
which explicitly (*) gives $z$ as a convex combination of the set $\{z_1, \ldots, z_n\}$. I find that quite satisfying!
(*) I guess it's explicitly of the right form, but the coefficients are sort of implicit!
It's also nice to observe the $n = 2$ case, i.e. a quadratic polynomial with two roots. Obviously, when the roots are real, we are very familiar with the idea of the critical point lying between the two roots.
Anyway, I wasn't aware of this theorem until I saw your Azimuth post. So thanks!
I would say the Wikipedia article's way is algebraically quite satisfying, but using the hyperplane separation theorem is geometrically quite satisfying.
It's definitely nice that you can solve for $z$ explicitly - thanks, Simon!
But I'm just trying to capture the intuition "the electric field of a bunch of positive charges can't vanish unless you're standing somewhere among those positive charges" - because if you're not among them, they are all pushing you away from the whole lot of them (if you're a positively charged particle yourself).
So, I don't think I'll add that formula. But thanks - I hadn't thought of it that way.
(I'll admit I couldn't stand reading the proof on Wikipedia, it looked like "just a bunch of calculations".)
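The convex-combination identity from the Wikipedia proof is easy to check numerically. Here's a minimal sketch (an arbitrarily chosen example polynomial, using NumPy - none of these numbers come from the discussion above) verifying that each root of $p'$ is the convex combination of the roots of $p$ with weights $1/|z - z_k|^2$:

```python
import numpy as np

# Hypothetical example: a cubic with distinct roots in the complex plane.
roots = np.array([0 + 0j, 4 + 0j, 1 + 3j])
p = np.poly(roots)               # coefficients of p, reconstructed from its roots
crit = np.roots(np.polyder(p))   # roots of the derivative p'

for w in crit:
    # Weights from the Gauss-Lucas / Wikipedia argument: 1 / |w - z_k|^2.
    weights = 1.0 / np.abs(w - roots) ** 2
    combo = np.sum(weights * roots) / np.sum(weights)
    # Each critical point equals this convex combination of the roots.
    assert abs(w - combo) < 1e-8

print("each root of p' is a convex combination of the roots of p")
```

Since the roots here are simple, no root of $p'$ is a root of $p$, so the weights are always defined.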
I'm making some progress, finishing stuff up:
Write a short column on The Gauss-Lucas Theorem for the AMS Notices. DONE!
Monday May 31, 2021 - talk at the 2021 Workshop on Compositional Robotics: Mathematics and Tools. DONE!
Finish "Schur functors" with @Joe Moeller and @Todd Trimble. DONE! Soon Joe will upload it to the arXiv.
Simplify Structured versus decorated cospans with @Kenny and @Christina Vasilakopoulou using ideas from Mike Shulman. NEED TO GET BACK TO WORKING ON IT - Kenny figured out how to prove one of the main theorems this new way, and everything else should be easier.
Tuesday June 8, 2021 - At 3 pm I will speak at Categories and Companions 2021, an online symposium for research students of category theory and related disciplines to meet, learn from each other and establish new collaborations. I'll give a 50 minute talk on structured cospans, but I have some slides so I just need to update them.
Monday June 14, 2021 - Larry Li, Bill Cannon and I will be running a session on "Non-equilibrium Models in Biology: from Chemical Reaction Networks to Natural Selection" at the annual meeting of the Society for Mathematical Biology.
Wednesday June 23, 2021 - at 10 am I'll give a talk at Google on The answer to the ultimate question of life, the universe and everything.
Thursday July 1, 2021 - I retire and drive to Berkeley to visit the Topos Institute. :tada:
Here's the Schur functors paper! https://arxiv.org/abs/2106.00190 Congratulations to @John Baez @Joe Moeller and @Todd Trimble
Thanks, David! I'm writing a blog article explaining it.
John Baez said:
Thanks, David! I'm writing a blog article explaining it.
I was learning about Young diagrams just yesterday, seeing how they were used to describe simple cases of Hilbert schemes. I know that this paper explicitly says that you don't need Young diagrams to study Schur functors, but I wonder if there are any nice links back to Hilbert schemes anyway...
Young diagrams are useful for almost everything, since the set of Young diagrams is the free commutative monoid on the underlying set of the free semigroup on one generator.
For example, Young diagrams are great for classifying finite-dimensional semisimple algebras, or finite abelian groups, or partitions of sets, or irreducible representations of the symmetric groups $S_n$, or irreducible representations of the groups $\mathrm{GL}(n)$, or...
I have a feeling that the Hilbert scheme stuff is connected to how Young diagrams classify partitions of sets, since a point in a Hilbert scheme is a multiset of points in some variety.
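Since Young diagrams with $n$ boxes are just partitions of $n$, the "free commutative monoid on the underlying set of the free semigroup on one generator" description is easy to play with in code. A small sketch (nothing here comes from the paper, just the standard recursive enumeration of integer partitions):

```python
# Young diagrams with n boxes = partitions of n, written as weakly
# decreasing tuples of row lengths.
def partitions(n, max_part=None):
    """Yield all partitions of n with parts bounded by max_part."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

# The number of Young diagrams with n boxes is also the number of
# conjugacy classes (and irreducible representations) of S_n.
counts = [sum(1 for _ in partitions(n)) for n in range(8)]
print(counts)  # [1, 1, 2, 3, 5, 7, 11, 15] - the partition numbers p(0)..p(7)
```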
By the way, I blogged about my paper with Todd Trimble and Joe Moeller on the n-Cafe!
If you've read the paper, this won't say much new... but if you haven't, it's probably easiest to just start here.
Tim Hosgood said:
I was learning about Young diagrams just yesterday, seeing how they were used to describe simple cases of Hilbert schemes. I know that this paper explicitly says that you don't need Young diagrams to study Schur functors, but I wonder if there are any nice links back to Hilbert schemes anyway...
Hey, maybe there's a real relation here! Check out this comment by Allen Knutson. I don't really understand it, but he seems to be saying there's a modified version of the ring of symmetric functions defined using that Hilbert scheme you just mentioned.
By the way, the link to Allen Knutson's question in the first paragraph of the blog post points to the wrong page.
John Baez said:
Tim Hosgood said:
I was learning about Young diagrams just yesterday, seeing how they were used to describe simple cases of Hilbert schemes. I know that this paper explicitly says that you don't need Young diagrams to study Schur functors, but I wonder if there are any nice links back to Hilbert schemes anyway...
Hey, maybe there's a real relation here! Check out this comment by Allen Knutson. I don't really understand it, but he seems to be saying there's a modified version of the ring of symmetric functions defined using that Hilbert scheme you just mentioned.
interesting! I look forward to your answer in 2035
Nathanael Arkor said:
By the way, the link to Allen Knutson's question in the first paragraph of the blog post points to the wrong page.
Thanks - fixed.
It's very interesting to read the comments on the original article, making some of the first steps towards what would become this paper!
John Baez said:
Nathanael Arkor said:
By the way, the link to Allen Knutson's question in the first paragraph of the blog post points to the wrong page.
Thanks - fixed.
(Less important, but the first two links are now interchanged.)
Weird!!! Ugh.
Fixed.
More progress:
Tuesday June 8, 2021 - Speak at Categories and Companions 2021. DONE!
Monday June 14, 2021 - Larry Li, Bill Cannon and I run a session on "Non-equilibrium Models in Biology: from Chemical Reaction Networks to Natural Selection" at the annual meeting of the Society for Mathematical Biology. DONE! I'm talking to Hong Qian about thermodynamics and biology now, and I want to spend more time on this stuff.
Wednesday June 23, 2021 - I give a talk at Google on The answer to the ultimate question of life, the universe and everything. DONE!
Finish Structured versus decorated cospans with @Kenny and @Christina Vasilakopoulou. THIS IS THE MAIN THING I NEED TO DO! It looks done, and the new proofs based on an idea of @Mike Shulman make the paper a lot nicer. I just need Kenny and Christina to agree it's ready for me to put it on the arXiv.
Thursday July 1, 2021 - I retire and drive to Berkeley to visit the Topos Institute. :tada:
I'm looking forward to seeing the improved version of Structured versus decorated cospans.
Then click on the link.
We probably carried out your proof strategy in a much more grungy way than you would have. But it still makes all the arguments 10 times more efficient and conceptual than they had been!
Hah! I assumed the link would be to the old version. Silly of me...
As a curious bystander who didn't follow the whole development of the 'structured vs decorated cospan' paper, how were the proofs improved?
Both structured and decorated cospans turn out to be special cases of a construction invented by Mike. Using this we can avoid masses of computations.
The equivalence of the structured and decorated cospan formalisms - under some conditions - also follows nicely from Mike's work... together with Christina and Joe Moeller's work on the monoidal Grothendieck construction, and a nice result on when opfibrations are right adjoints. The latter stuff was already in our paper.
I guess you could say the new proofs more clearly illuminate the relationship between decorated and structured cospans.
The new version is nice! I'm glad it all worked out the way I hoped.
Is it this result? (attached: image.png)
What is the reason for that notation?
In that paper, I was using the term "framed bicategory" for a double category with companions and conjoints, a.k.a. a proarrow equipment, a.k.a. a fibrant double category, etc. etc. So that notation denotes the "framed-bicategory-ification".
I don't use that terminology much any more, so I probably wouldn't use that notation myself any more either.
Oh I see. Thanks.
We're just using that notation so people can refer to Mike's paper Framed bicategories and monoidal fibrations and have our notation sort of match his when they're looking at this result.
John Baez said:
Both structured and decorated cospans turn out to be special cases of a construction invented by Mike. Using this we can avoid masses of computations.
The equivalence of the structured and decorated cospan formalisms - under some conditions - also follows nicely from Mike's work... together with Christina and Joe Moeller's work on the monoidal Grothendieck construction, and a nice result on when opfibrations are right adjoints. The latter stuff was already in our paper.
Is the special construction 'framed bicategories'?
Matteo Capucci (he/him) said:
John Baez said:
Both structured and decorated cospans turn out to be special cases of a construction invented by Mike. Using this we can avoid masses of computations.
The equivalence of the structured and decorated cospan formalisms - under some conditions - also follows nicely from Mike's work... together with Christina and Joe Moeller's work on the monoidal Grothendieck construction, and a nice result on when opfibrations are right adjoints. The latter stuff was already in our paper.
Is the special construction 'framed bicategories'?
It's presumably Lemma 2.3, as Jade points out.
Oh I didn't notice it was a construction
Good to know
It's nice to see formalisms converging... Also Myers and Spivak's approach to dynamical systems turns out to be doubly categorical, though I'm not 100% sure their 'doubly indexed categories of systems' are framed.
It's exciting to me to see this construction being actually used to prove something non-obvious. When I wrote that paper I was primarily motivated by making abstract sense out of, and carefully checking the coherence laws for, a construction that had basically already been done (the May-Sigurdsson bicategory of parametrized spectra). Once I had the abstract construction written down, I found other examples, but until now I haven't seen an example where I was really convinced that the abstract point of view offered a significant simplification (taking that example alone) over just doing stuff by hand. And this one doesn't even just use the construction itself, but also the fact that it's functorial (to deduce an equivalence between double categories of structured/decorated cospans from a simpler-to-construct equivalence of monoidal fibrations), which originally was something I included just because I was a category theorist and thought it was important in principle that everything is a functor, rather than because I had really convincing examples demanding it.
Interesting! So Mike did a bunch of wonderful work and waited for someone to come along and apply it. I bet he didn't expect it would be applied to epidemiology!
Indeed, when I wrote that paper 14 years ago I certainly had no idea that it would be applied to epidemiology!
More progress:
Finish Structured versus decorated cospans with @Kenny and @Christina Vasilakopoulou and submit it for publication. DONE.
Retire and drive to Berkeley to visit the Topos Institute. DONE.
Polish up Schur functors and categorified plethysm with @Todd Trimble and @Joe Moeller a bit more and submit it for publication. WILL FINISH IT SOON.
Write a short paper on 'Saving Fisher's fundamental theorem using Fisher information', based on my blog articles about this. IN PROGRESS.
I'm also starting to talk to @Sophie Libkind , @Owen Lynch , @David Spivak and @Joe Moeller about category theory and thermodynamics. And I'm talking to @Evan Patterson about discrete differential geometry for applications to solving the heat equation, Maxwell's equations and so on.
@John Baez Minor detail: It looks like you dropped a square root in the definition of Fisher speed in the article you linked (part 3)
Okay, thanks - I'll fix that.
More progress:
Write a paper The fundamental theorem of natural selection. DONE.
Give a talk on structured vs decorated cospans at the Topos Institute, Wednesday July 28th. SLIDES WRITTEN. This is their "Berkeley seminar", not their weekly colloquium, but there will be a video. It'll be fun to tell Brendan Fong how we finally straightened everything out. There are a bunch of subtle issues that I usually gloss over, which I'm getting into here.
Polish up Schur functors and categorified plethysm with @Todd Trimble and @Joe Moeller a bit more and submit it for publication. WILL FINISH IT SOON, HONEST.
Blog about thermodynamics, especially how the treatment using contact geometry is connected to statistical mechanics. Someday this should become part of a treatment of open thermodynamic systems using category theory, but it's fine as a stand-alone thing. I've been thinking about it a lot in the last two weeks, and it's pretty cool. Not exactly new, but somehow not as known as it should be.
I want to blog about thermodynamics, especially how the treatment using contact geometry is connected to statistical mechanics. FIRST ARTICLE DONE, WRITING THE SECOND NOW.
I want to polish up Schur functors and categorified plethysm with @Todd Trimble and @Joe Moeller a bit more and submit it for publication. WILL FINISH IT SOON, HONEST.
I feel that epidemiological models might be the "killer app" that the Topos Institute needs to prove the usefulness of its work on applied category theory. Evan Patterson, James Fairbanks, Sophie Libkind, Owen Lynch and others are already developing modeling tools with AlgebraicJulia, working with a number of actual epidemiologists. I would like to help them out. Evan Patterson is starting up a series of private meetings every other week to talk about these things, so I'll go to those starting August 11th.
Owen Lynch is starting a series of meetings on compositional thermodynamics, and I'll go to those too.
Submit "The fundamental theorem of natural selection" to a journal.
I'm getting excited about a bunch of ideas connected to thermodynamics. With people at the Topos Institute and @Joe Moeller I'm connecting thermodynamics to category theory. But on my own I'm trying to understand its connection to information geometry.
As part of this personal quest, last week Monday I blogged about symplectic and contact geometry in thermodynamics - a review of known stuff:
This was a warmup for discussing their appearance in probability theory and statistical mechanics.
In my next blog article I talked about symplectic and contact geometry in probability theory - original research, as far as I can tell so far:
This opened up the question "what is to probability as momentum is to position?", and while I had a mathematical answer, it required some massaging to really get a good answer.
I gave the answer here today - it's a known concept from information theory, called "surprisal":
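For concreteness, here's a tiny sketch of the surprisal idea (my own toy numbers, not from the blog post): the surprisal of an outcome with probability $p$ is $-\log p$, and its expected value over a distribution is the Shannon entropy.

```python
import math

# A toy distribution; the probabilities are arbitrary example values.
p = [0.5, 0.25, 0.25]

# Surprisal of each outcome, in bits: -log2(probability).
surprisal = [-math.log2(q) for q in p]   # [1.0, 2.0, 2.0]

# Expected surprisal = Shannon entropy of the distribution.
entropy = sum(q * s for q, s in zip(p, surprisal))
print(entropy)  # 1.5 bits
```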
I blogged about "statistical manifolds", which are manifolds parametrizing probability distributions, and "Gibbsian" statistical manifolds, where these probability distributions maximize entropy subject to constraints specified by the point on the manifold:
These probability distributions are called Gibbs distributions.
The most interesting thing right now is the formula for the constants in the exponential. These turn out to be analogous to momenta in classical mechanics - in terms of the analogy I've built up.
Today I blogged about how to derive the Gibbs distribution and especially the formula for these constants in the exponential:
For years I've been suppressing my desire to do computations using calculus, differential geometry, etc. In this blog post I finally do some.
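To make the "constants in the exponential" concrete, here's a small numerical sketch (the energy levels and target value are made up for illustration): maximizing entropy subject to a fixed expected energy gives a Gibbs distribution $p_i \propto e^{-\beta E_i}$, and the Lagrange multiplier $\beta$ conjugate to the constraint can be found by bisection.

```python
import numpy as np

# Made-up energy levels and target mean energy, just for illustration.
E = np.array([0.0, 1.0, 2.0, 5.0])
target = 1.0

def gibbs(beta):
    """Gibbs distribution p_i proportional to exp(-beta * E_i)."""
    w = np.exp(-beta * E)
    return w / w.sum()

def mean_energy(beta):
    return float(gibbs(beta) @ E)

# Mean energy is strictly decreasing in beta, so bisect for the
# multiplier beta that achieves the target expected energy.
lo, hi = -10.0, 10.0
for _ in range(200):
    mid = (lo + hi) / 2
    if mean_energy(mid) > target:
        lo = mid
    else:
        hi = mid
beta = (lo + hi) / 2
p = gibbs(beta)
print(beta, p)
```

Among all distributions with this expected energy, this $p$ is the entropy maximizer; that is exactly what makes the constant $\beta$ in the exponential meaningful.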
Meanwhile, I'm working with @Owen Lynch on more category-theoretic aspects of thermodynamics. He has some great ideas.
My wife Lisa and I drove to Washington DC, where she'll be working at the Center for Hellenic Studies. We'll be staying here until about December 15th. My life has a new texture now:
On Mondays I'm meeting with @Owen Lynch, @Joe Moeller, @Spencer Breiner and @Sophie Libkind. We're working on thermodynamics. Some subset of us, definitely including Owen and me, are writing a paper called "Compositional thermostatics". Owen will put an article about this on the Topos Institute blog pretty soon.
On Mondays I'll also be meeting with @Christian Williams, working on his thesis. So far a lot of it is about categories of presheaves on Lawvere theories.
On Wednesdays I'm meeting with @Evan Patterson. Half the time we're talking about his work on formalizing Tonti diagrams using category theory. The other half we're meeting with a group of people working on software for compositional models of epidemiology.
On Thursdays I'm meeting with @Joe Moeller and @Todd Trimble. We're talking about things like lambda-rings, Adams operations, the field with one element, and the Riemann Hypothesis.
John Baez said:
- On my own, I'm working on information geometry. I got some good ideas driving across Kansas and Missouri.
Re: “Information geometry is the study of 'statistical manifolds', which are spaces where each point is a hypothesis about some state of affairs.”
I've been looking at this from a basis in logic, that is, beginning with boolean spaces, intending to extend it to real-valued measures eventually. Here's a snippet from an old project proposal ...
What's up now:
I gave the Perimeter Institute colloquium on Wednesday September 15th, on "The fundamental theorem of natural selection". The video is here and my slides are here.
On Thursday September 23 I'm giving the physics colloquium at the University of British Columbia, on "Classical mechanics versus thermodynamics". I'll explain why the Maxwell relations in thermodynamics look just like Hamilton's equations. Short answer: both subjects involve a Lagrangian submanifold of a symplectic manifold.
I need to work with Tom Leinster on revising my application for a Leverhulme Fellowship to visit Edinburgh.
On October 7th I'll meet with Nate Osgood to talk about applications of category theory to epidemiology. He's an old friend of mine from grad school, who is now leading the COVID19 modelling for the province of Saskatchewan and also helping do it for all of Canada. We are both involved in @Evan Patterson's meetings on compositional models of epidemiology using AlgebraicJulia.
I'm working with @Owen Lynch and @Joe Moeller on a paper called "Compositional thermostatics". We're meeting regularly but the writing is going a bit slowly right now.
I had a scare yesterday where I thought the main result I was going to present in my talk at the University of British Columbia this week was completely wrong! Terrible! But then I realized I was just looking at it in a slightly wrong way. Now it's fixed. You can see the slides here:
Such a clear exposition, John! Just a question: why do we get those subscripts in the partial derivatives on slide 16 (and the next ones)?
More specifically, what's the difference between something like $\left(\frac{\partial f}{\partial x}\right)_y$ and $\frac{\partial f}{\partial x}$? Is the first a projection of the second along the direction in which $y$ remains constant?
I'm glad you liked my slides, Matteo! I'm giving that talk tomorrow.
Matteo Capucci (he/him) said:
More specifically, what's the difference between something like $\left(\frac{\partial f}{\partial x}\right)_y$ and $\frac{\partial f}{\partial x}$?
The first means something, the second does not.
Thermodynamics is confusing because it studies the same manifold with many different coordinate systems. In the case at hand we have four functions on a 2-dimensional manifold, called $T$, $S$, $P$ and $V$. Any pair of these functions can serve as a coordinate system.
Partial derivatives only make sense with respect to a coordinate system: given coordinates $x, y$ on a 2-manifold and a function $f$ on this 2-manifold we write
$\left(\frac{\partial f}{\partial x}\right)_y$
to mean the derivative of $f$ as we change the value of the coordinate $x$ while holding the coordinate $y$ fixed. The coordinates we hold fixed matter just as much as those whose values we change!
So, in thermodynamics something like doesn't mean anything because we haven't specified a coordinate system, and there are lots of options.
When we write we are rapidly saying "I'm using the functions as coordinates, and I'm computing the rate of change of change of as I change the value of while holding fixed".
So, this would be different from $\left(\frac{\partial f}{\partial x}\right)_z$, for example.
But $\frac{\partial f}{\partial x}$ would be ambiguous - we shouldn't write that.
In math we often fix one coordinate system at the start of a discussion, and assume all partial derivatives are taken in that coordinate system. If our coordinates were $(x, y)$, this would allow us to abbreviate $\left(\frac{\partial f}{\partial x}\right)_y$ as $\frac{\partial f}{\partial x}$ without causing confusion: we know ahead of time that $y$ is being held fixed here.
But thermodynamics is all about jumping rapidly between different coordinate systems, so we can't let ourselves use such abbreviations!
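To see concretely how the held-fixed variable changes the answer, here's a small symbolic check (the function $f = xy$ and the alternative coordinate $z = x + y$ are arbitrary choices for illustration):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# A function on the plane, written in the coordinates (x, y):
f = x * y

# (df/dx) holding y fixed: just differentiate in the (x, y) chart.
df_dx_at_const_y = sp.diff(f, x)                 # = y

# Now switch to the coordinates (x, z), where z = x + y, i.e. y = z - x.
f_in_xz = f.subs(y, z - x)                        # = x*(z - x)

# (df/dx) holding z fixed: differentiate in the (x, z) chart...
df_dx_at_const_z = sp.diff(f_in_xz, x)            # = z - 2x

# ...then rewrite in terms of (x, y) again, to compare:
df_dx_at_const_z = sp.expand(df_dx_at_const_z.subs(z, x + y))

print(df_dx_at_const_y)   # y
print(df_dx_at_const_z)   # -x + y
```

Holding $y$ fixed gives $y$; holding $z$ fixed gives $y - x$: two different answers for "the" partial derivative of the same $f$ along the same coordinate $x$.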
John Baez said:
Matteo Capucci (he/him) said:
More specifically, what's the difference between something like $\left(\frac{\partial U}{\partial S}\right)_V$ and $\frac{\partial U}{\partial S}$?
The first means something, the second does not.
Thermodynamics is confusing because it studies the same manifold with many different coordinate systems. In the case at hand we have four functions on a 2-dimensional manifold, called $S$, $T$, $P$ and $V$. Any pair of these functions can serve as a coordinate system.
Partial derivatives only make sense with respect to a coordinate system: given coordinates $(x, y)$ on a 2-manifold and a function $f$ on this 2-manifold we write $\left(\frac{\partial f}{\partial x}\right)_y$
to mean the derivative of $f$ as we change the value of the coordinate $x$ while holding the coordinate $y$ fixed.
I was wondering the same thing when I read this blog post on Azimuth. Glad that I found the answer here!
Fabrizio Genovese said:
John Baez said:
Matteo Capucci (he/him) said:
More specifically, what's the difference between something like $\left(\frac{\partial U}{\partial S}\right)_V$ and $\frac{\partial U}{\partial S}$?
The first means something, the second does not.
Thermodynamics is confusing because it studies the same manifold with many different coordinate systems. In the case at hand we have four functions on a 2-dimensional manifold, called $S$, $T$, $P$ and $V$. Any pair of these functions can serve as a coordinate system.
Partial derivatives only make sense with respect to a coordinate system: given coordinates $(x, y)$ on a 2-manifold and a function $f$ on this 2-manifold we write $\left(\frac{\partial f}{\partial x}\right)_y$
to mean the derivative of $f$ as we change the value of the coordinate $x$ while holding the coordinate $y$ fixed. The coordinates we hold fixed matter just as much as those whose values we change!
So, in thermodynamics something like $\frac{\partial f}{\partial x}$ doesn't mean anything because we haven't specified a coordinate system, and there are lots of options.
When we write $\left(\frac{\partial f}{\partial x}\right)_y$ we are rapidly saying "I'm using the functions $x, y$ as coordinates, and I'm computing the rate of change of $f$ as I change the value of $x$ while holding $y$ fixed".
So, this would be different from $\left(\frac{\partial f}{\partial x}\right)_z$, for example.
All this is explained in painful detail in the excellent course on General Relativity by Frederic Schuller, lectures 5-6, that you can find here: https://www.youtube.com/watch?v=7G4SqIboeig&list=PLFeEvEPtX_0S6vxxiiNPrJbLu9aK1UVC_
He is BY FAR the best teacher I've ever seen in my life. I'd suggest these lectures to everyone, if only as a comparison to improve one's teaching abilities.
John Baez said:
Thermodynamics is confusing because it studies the same manifold with many different coordinate systems. In the case at hand we have four functions on a 2-dimensional manifold, called $S$, $T$, $P$ and $V$. Any pair of these functions can serve as a coordinate system.
Ha, this is something I didn't understand. I thought $S$, $T$, $P$ and $V$ were all coordinates. Of course now that I think about it, this doesn't make sense!
John Baez said:
Partial derivatives only make sense with respect to a coordinate system: given coordinates $(x, y)$ on a 2-manifold and a function $f$ on this 2-manifold we write $\left(\frac{\partial f}{\partial x}\right)_y$
to mean the derivative of $f$ as we change the value of the coordinate $x$ while holding the coordinate $y$ fixed. The coordinates we hold fixed matter just as much as those whose values we change!
I find this puzzling again. First of all, the notation $\left(\frac{\partial f}{\partial x}\right)_y$ is something I've only ever seen used with that meaning in thermodynamics.
When I took differential geometry, the derivative of $f$ with respect to the $i$-th coordinate of a chart would be denoted $\partial_i f$, and understood to be a scalar only defined where said chart is defined.
Let me see if this makes sense (suppose we fixed a chart and we are talking there): since we have 2 dimensions, to specify a derivative I need to give you 2 components (since a derivative is now a vector in a 2-dimensional space). So $\left(\frac{\partial f}{\partial x}\right)_y$ really means $\partial_1 f$ in my notation, where $y$ is implicitly fixed because no $\partial_2$ is appearing.
Is that what's going on?
Just to check: if the manifold were $n$-dimensional, I would write $\partial_1 f$ to denote the derivative along the first coordinate of the chart only.
That's right: the chart gives you a local diffeomorphism with some domain in $\mathbb{R}^n$, which implicitly comes equipped with a choice of coordinates! You see this in action when you switch to polar coordinates, for example.
Matteo Capucci (he/him) said:
Just to check: if the manifold were $n$-dimensional, I would write $\partial_1 f$ to denote the derivative along the first coordinate of the chart only.
Yes, basically the idea is that your coordinate charts are made of specific functions that you also care about inherently. Different charts overlap in which functions make them up, so you can't infer the whole chart from just one of its coordinates - hence you specify the entire chart every time you do something coordinate-dependent.
The notation for partials in $n$ dimensions is $\left(\frac{\partial f}{\partial x_i}\right)_{x_j,\ j \neq i}$ where $1 \leq i \leq n$ and the $x_j$ are $n$ locally linearly independent scalars.
It would probably have been clearer in a sense to put the entire list of coordinates after the bar, but the notation can get noisy enough without adding redundant information for clarity...
I remember Chevalley, C., Fundamental Concepts of Algebra, Academic Press, 1956 had a nice working out of partial differentials in ring and module contexts. It was very helpful in my work on differential logic (differential geometry over GF(2), sort of).
Matteo Capucci (he/him) said:
I find this puzzling again. First of all, the notation $\left(\frac{\partial f}{\partial x}\right)_y$ is something I've only ever seen used with that meaning in thermodynamics.
It's used in math and physics whenever the choice of coordinates isn't clear otherwise.
When I took differential geometry, the derivative of $f$ with respect to the $i$-th coordinate of a chart would be denoted $\partial_i f$, and understood to be a scalar only defined where said chart is defined.
Right, but here they are fixing a coordinate chart at the beginning of the discussion, so the notation for partial derivative doesn't need to tell you which coordinates you're using. It's not a good notation for when you're changing coordinates every ten seconds - for example, when the right side of an equation is using different coordinates than the left side!
Let me see if this makes sense (suppose we fixed a chart and we are talking there): since we have 2 dimensions, to specify a derivative I need to give you 2 components (since a derivative is now a vector in a 2-dimensional space). So $\left(\frac{\partial f}{\partial x}\right)_y$ really means $\partial_1 f$ in my notation, where $y$ is implicitly fixed because no $\partial_2$ is appearing.
Is that what's going on?
Right!
Just to check: if the manifold were $n$-dimensional, I would write $\partial_1 f$ to denote the derivative along the first coordinate of the chart only.
Exactly. So this notation for the partial derivatives tells you that the coordinates being used are $x_1, \dots, x_n$.
So, for example, in the identity
$$\left(\frac{\partial f}{\partial g}\right)_h \left(\frac{\partial g}{\partial h}\right)_f \left(\frac{\partial h}{\partial f}\right)_g = -1$$
we have three functions on a 2-dimensional manifold. In the first term we are using $g$ and $h$ as coordinates; in the second we are using $h$ and $f$ as coordinates; in the third we are using $f$ and $g$ as coordinates.
And this strange identity is true whenever we have 3 smooth functions $f$, $g$, $h$ on a 2-dimensional manifold such that any pair can be used as coordinates!
This is the kind of identity that everyone in thermodynamics knows, that a lot of mathematicians don't know.
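That "triple product rule" can be checked symbolically: each partial can be written as a ratio of Jacobian determinants, $\left(\frac{\partial u}{\partial v}\right)_w = \frac{\partial(u,w)/\partial(x,y)}{\partial(v,w)/\partial(x,y)}$, and then the product of the three factors collapses to $-1$. The three concrete functions below are arbitrary illustrations:

```python
import sympy as sp

x, y = sp.symbols('x y')

def jac(u, v):
    """Jacobian determinant d(u, v)/d(x, y)."""
    return sp.Matrix([[sp.diff(u, x), sp.diff(u, y)],
                      [sp.diff(v, x), sp.diff(v, y)]]).det()

def partial(u, v, w):
    """(du/dv) holding w fixed, as a ratio of Jacobian determinants."""
    return sp.simplify(jac(u, w) / jac(v, w))

# Three functions on the plane; generically any two serve as coordinates.
f, g, h = x + y, x * y, x - 2 * y

product = sp.simplify(partial(f, g, h) * partial(g, h, f) * partial(h, f, g))
print(product)   # -1
```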
Btw, I explained why it's true here:
(I'm having another go at learning thermodynamics, and all the math involved, because I'm trying to connect it to classical mechanics.)
John Baez said:
we have 3 smooth functions $f$, $g$, $h$ on a 2-dimensional manifold such that any pair can be used as coordinates!
I see, part of my confusion was because I didn't get you were using the functions themselves as coordinates. I mean, of course you were, but it didn't typecheck in my head. After all, how can you be sure they can be used as such throughout the manifold?
For a general manifold there are no coordinate frames that can be used throughout the manifold, so this isn't really a problem?
Mmh quite the opposite, no? It means they can't be coordinates everywhere
For this we need to get into a little physics. We are considering a class of systems where the entropy $S$ and volume $V$ can be any positive real number, and they characterize the state of our system, so our manifold is $(0, \infty)^2$. Examples include a container of air, a container of water, etc.
Then we define temperature and pressure in the usual way:
$$T = \left(\frac{\partial U}{\partial S}\right)_V, \qquad P = -\left(\frac{\partial U}{\partial V}\right)_S$$
where $U$ is the internal energy.
Next we restrict attention to some open set where any two of the four functions $S, T, P, V$ can serve as coordinates. We can find such open sets by finding any point where
$$\frac{\partial(f, g)}{\partial(S, V)} \neq 0 \quad \text{for every pair of distinct functions } f, g \text{ among } S, T, P, V
$$
and taking a sufficiently small open neighborhood of this point.
We can also be more ambitious if we know more about what these functions look like. In practice the above inequalities hold almost everywhere: this is the "generic" situation.
Note that $\frac{\partial(S, V)}{\partial(S, V)} = 1 \neq 0$ everywhere, since $S$ and $V$ are coordinates everywhere.
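To make this concrete, here's a small symbolic check with a made-up internal-energy function (purely illustrative, not physically realistic) that the pair $(T, P)$ can serve as coordinates wherever the relevant Jacobian is nonzero:

```python
import sympy as sp

S, V = sp.symbols('S V', positive=True)

# A toy internal-energy function, invented for illustration only:
U = S**2 + S * V

# Temperature and pressure defined the standard way from U(S, V):
T = sp.diff(U, S)        # 2*S + V
P = -sp.diff(U, V)       # -S (a toy, so don't mind the sign)

# Jacobian of (T, P) with respect to the coordinates (S, V):
J = sp.Matrix([[sp.diff(T, S), sp.diff(T, V)],
               [sp.diff(P, S), sp.diff(P, V)]]).det()
print(sp.simplify(J))    # 1, nonzero everywhere, so (T, P) are local coordinates
```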
I wish people asked questions like this on my blog!
I feel like copying these questions (anonymized???) and the answers over to my blog.
I see :thinking: thanks again for explaining
John Baez said:
I wish people asked questions like this on my blog!
Ha, I didn't realize I could have commented there. Next time I'll bring the discussion there!
Thanks! You can use LaTeX there if you do
$latex $
What's up now:
I wrote a blog article "Classical mechanics vs thermodynamics (part 4)" in which I finally answer the question "If thermodynamics is mathematically analogous to classical mechanics, what happens when you quantize it? What's analogous to quantum mechanics?" In the crackpot index, I give 10 points for beginning the description of your theory by saying how long you have been working on it. But I can't resist saying that I've been thinking about this since 2012.
I'm working with @Owen Lynch and @Joe Moeller on a paper called "Compositional thermostatics". And some news: I'm going to be Owen's external advisor for a master's thesis at the University of Utrecht! Wioletta Ruszel will be his internal thesis adviser and first examiner - she does statistical mechanics. Gijs Heuts will be his second examiner - he does homotopy theory and derived algebraic geometry.
I'm working with Tom Leinster on revising my application for a Leverhulme Fellowship to visit Edinburgh.
On November 5th at noon Eastern Time I'll give a 30-45 minute talk on "Visions for the future of Physics" at the Basic Research Community for Physics. So I have to prepare this talk.
On October 7th I'll meet with Nate Osgood to talk about applications of category theory to epidemiology. He's an old friend of mine from grad school, who is now leading the COVID19 modelling for the province of Saskatchewan and also helping do it for all of Canada. We are both involved in @Evan Patterson's meetings on compositional models of epidemiology using AlgebraicJulia.
I wrote some basic introductory stuff on commutative algebra and algebraic geometry:
It may be useful to people who never really studied these subjects.
I also wrote up a little derivation of Stirling's formula:
This says
$$n! \sim \sqrt{2 \pi n}\, \left(\frac{n}{e}\right)^n$$
and someone on Twitter was wondering where the $\pi$ comes from.
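Here's a quick numerical check of where the $\pi$ shows up in Stirling's formula: the ratio $n! / \left(\sqrt{n}\, (n/e)^n\right)$ converges to $\sqrt{2\pi} \approx 2.5066$. (The helper name `stirling_ratio` is made up; `lgamma` is used to avoid overflow for large $n$.)

```python
import math

def stirling_ratio(n):
    """n! / (sqrt(n) * (n/e)^n), computed via logs to avoid overflow."""
    log_ratio = math.lgamma(n + 1) - (n * math.log(n) - n + 0.5 * math.log(n))
    return math.exp(log_ratio)

for n in (10, 100, 1000):
    print(n, stirling_ratio(n))          # approaches sqrt(2*pi) as n grows
print("sqrt(2*pi) =", math.sqrt(2 * math.pi))
```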
Just a point: a local ring has exactly one maximal ideal
Not exactly one prime ideal.
So you have only one closed point - but many non-closed points ‘around’ it
Whoops, thanks. Right, if we take a polynomial ring in $n$ variables over a field and localize at some maximal ideal, we get a local ring that has chains of prime ideals of length $n$.
Fixed!
One reason I like explaining things I'm just learning is that people catch my mistakes.
(Also it forces me to organize my thoughts and spell everything out.)
What's going on now:
Tom Leinster and I finished my application for a Leverhulme Fellowship to visit Edinburgh! :tada: He put a lot of work into it: I sound so impressive now that I can barely recognize myself in the mirror.
I polished up my paper The fundamental theorem of natural selection and submitted it for publication.
Today I'll meet with "the categorical epidemiology gang" and lead a discussion about various more advanced tricks for using Petri nets to create large yet manageable models of the spread of epidemics. They have various pieces of software that use Petri nets for this purpose, but the Petri nets are starting to get big and hard to understand.
Tomorrow I'll meet with Nate Osgood to talk about applications of category theory to epidemiology. He's an old friend of mine from grad school, who is now leading the COVID19 modelling for the province of Saskatchewan and also helping do it for all of Canada. We are both in the "categorical epidemiology gang" along with @Evan Patterson, @Sophie Libkind and others.
I need to deal with the errata in From loop groups to 2-groups that were kindly prepared by @David Michael Roberts. Sign errors, and how to fix them!
On November 5th at noon Eastern Time I'll give a 30-45 minute talk on "Visions for the future of Physics" at the Basic Research Community for Physics. So I have to prepare this talk.
Edinburgh is dangerously close to Glasgow, you might have to endure a visit from the Glasgow gang :stuck_out_tongue: Do you know already when you'll be there?
In fact I plan to endure visiting Glasgow! :stuck_out_tongue_wink:
I asked for money to take a short trip there.
I don't know if I'll get this Leverhulme! But if I do, the plan is to visit for 6 months in 2022-2023 and 6 months in 2023-2024.
So yes, it would be great to meet the Glaswegian category theory gang.
What's going on now:
I wrote a little blog article: Stirling's formula in words. I think it's really cool how Stirling's formula can be stated without mentioning $e$ or $\pi$: those things show up when you translate the words into equations.
I need to deal with the errata in From loop groups to 2-groups that were kindly prepared by @David Michael Roberts. Sign errors, and how to fix them!
I need to help @Owen Lynch and @Joe Moeller finish the paper "Compositional thermostatics".
On November 5th at noon Eastern Time I'll give a 30-45 minute talk on "Visions for the future of physics" at the Basic Research Community for Physics. So I have to prepare this talk.
Nate Osgood and I had a great conversation about category theory applied to modeling in public health, esp. epidemiology, and it looks like we'll be working together on that. I hadn't known he had made thousands of YouTube videos - some on category theory, many more on modeling for public health issues.
Amazing! Can't wait to have you around
I hope it works! I want to visit Edinburgh no matter what, and I could easily be lured to Glasgow.
I wrote two blog articles on an amazing nonlinear partial differential equation, one of the simplest PDEs that exhibits chaos and an "arrow of time": the Kuramoto-Sivashinsky equation
In the first part I pose some conjectures. They're really exciting to me since there are lot of papers on this equation and while they get some really cool results (which I review), they don't get as deep into what the solutions actually look like.
In particular, I conjecture that as time passes, the "stripes" you see above can be born and merge, but never die or split. So, if you think of the solution as a string diagram, we're talking about a monoid object in a monoidal category!
I'm asking people who are good at computing to help me check these conjectures. They are way too hard for me to actually prove, but at least we can become pretty sure they are true.
In the second part I discuss a strange symmetry that these equations have, which had been bothering me.
Namely, invariance under the Galilei group!
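For anyone who wants to poke at these conjectures numerically, here is a minimal sketch of a semi-implicit spectral integrator for the Kuramoto-Sivashinsky equation $u_t = -u u_x - u_{xx} - u_{xxxx}$ on a periodic interval. The grid size, domain length, timestep and random seed are all ad hoc choices, not taken from any of the papers mentioned:

```python
import numpy as np

N, L = 128, 32 * np.pi     # grid points and domain length (ad hoc)
dt, steps = 0.025, 4000    # timestep and number of steps (ad hoc)

k = 2 * np.pi * np.fft.rfftfreq(N, d=L / N)   # angular wavenumbers

rng = np.random.default_rng(0)
u = 0.1 * rng.standard_normal(N)              # small random initial data

lin = k**2 - k**4          # Fourier symbol of the linear part -u_xx - u_xxxx

for _ in range(steps):
    v = np.fft.rfft(u)
    nl = -0.5j * k * np.fft.rfft(u * u)       # -u u_x = -(u^2/2)_x in Fourier
    v = (v + dt * nl) / (1.0 - dt * lin)      # implicit in the stiff linear part
    u = np.fft.irfft(v, N)

print(float(np.abs(u).max()))   # stays O(1): chaotic but bounded "stripes"
```

Plotting `u` over time (e.g. stacking snapshots into an image) shows the stripes being born and merging.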
I've been doing work with Cheyne Weis, a physics grad student at the University of Chicago, on the Kuramoto-Sivashinsky equation.
John Baez said:
Namely, invariance under the Galilei group!
Doesn't that mean the definition of a bump should be Galilean invariant? Maybe in the integral form, you should be looking spots that are local maxima in any Galilean frame? That might help smooth out those gaps ...
Hm, must have had a braino about this, I am not used to thinking about pre-relativistic physical symmetries ... it already is because at constant time the transformation is just x = x + constant ... nonetheless it feels like something of this sort must work, just maybe with not as good of a justification ... maximum along any linear combination of time and space?
Also, the fact that the PDE has smooth-almost-everywhere solutions, and its form, ought to have some influence on what these 'merge' events actually look like close to the maxima ...
James Deikun said:
Hm, must have had a braino about this, I am not used to thinking about pre-relativistic physical symmetries ... it already is because at constant time the transformation is just x = x + constant ... nonetheless it feels like something of this sort must work, just maybe with not as good of a justification ... maximum along any linear combination of time and space?
It's a good point: it's nice for the definition of "bump" to be Galilean invariant, given that the equation is. As you note, this will be true if there's a definition of "bump at time $t$" that depends only on the solution at time $t$, and this definition is translation-invariant.
By the way, this differential equation has a "smoothing" property like that of the heat equation: given initial data that's mildly nice (say in $L^2$), the solution at any later time is infinitely differentiable, in fact analytic.
John Baez said:
It's a good point: it's nice for the definition of "bump" to be Galilean invariant, given that the equation is. As you note, this will be true if there's a definition of "bump at time $t$" that depends only on the solution at time $t$, and this definition is translation-invariant.
Indeed. But this is "if" not "iff" and as Galilean transformations take any affine combination of time and space to another such affine combination, definitions that quantify over all of them are also Galilean invariant ... It's possible that the definition "a maximum along some direction in the xt plane" will let a bump keep its identity all the way to merging, though I wouldn't take much worse than even odds on it ...
Bumps also have an interesting tendency: they move from low places to high ones. Besides being evident from the graphs in part 5, this can be seen from the form of the conserved momentum: in integral form, momentum in an interval is equal to the difference in height of its endpoints. Combined with the fact that bumps get higher when they merge, this kind of explains the lopsided distribution of subtree sizes observed in the blog comments.
In fact, it seems that new bumps, or at least the really robust ones, are born from an instability of deep valleys; waviness in shallow valleys tends to not have time to coalesce into a nice well-defined bump before it gets drawn into or crushed between existing ones. Maybe rather than an absolute "bumps only merge once born" it's more of a thresholdy thing where the probability of an upward disturbance merging versus vanishing some other way has an S-shaped distribution based on its current magnitude.
It strikes me that working on these conjectures might make a good Polymath project.
David Michael Roberts said:
It strikes me that working on these conjectures might make a good Polymath project.
That's true!
Right now I'm working with a bunch of people trying to formulate some conjectures and get some numerical evidence for them. I have no hope of trying to prove them - the analysis becomes very nitty-gritty in a way that I don't enjoy. But I'm hoping that with relatively little work we can write a paper that pushes the study of this equation toward more interesting questions than I've seen discussed so far in the literature.
It took a lot of hard work for people to prove that the Hilbert space of initial data for the Kuramoto-Sivashinsky equation contains a finite-dimensional "inertial manifold" that all solutions approach exponentially fast... and to bound the dimension of this manifold. This is cool... but it's really just the tip of the iceberg: the interesting part is what solutions on this manifold look like.
I just want to let people know this iceberg is there, waiting to be explored!
I can definitely imagine someone like Terry Tao proving some of the conjectures I'm trying to formulate. This equation is much less tricky than Navier-Stokes.
If I wanted to interest a mathematician who doesn't care about PDE, I might say the Kuramoto-Sivashinsky equation exhibits "Galilean-invariant chaos with an arrow of time".
The Perimeter Institute has put two of my talks online, which makes them a bit easier to watch:
Can we understand the Standard Model?
Can we understand the Standard Model using octonions?
Also the University of British Columbia has finally put this talk online:
Classical mechanics vs thermodynamics
Abstract. It came as a shock when I first realized that some of the most famous equations in thermodynamics are just the same as the most famous equations in classical mechanics — with only the names of the variables changed. It turns out that this follows from a deep and not yet completely explored analogy between the two subjects, which I will explain.
By the way, @Owen Lynch and @Joe Moeller and I are almost done with our paper on a category-theoretic approach to thermodynamics!
It's great to be getting back to physics after all these years. With some category theory here and there.
Just for fun, I wrote an article about a fascinating conjecture made by an Iranian woman mathematician in 1982, namely that $\sqrt[n]{p_n}$ is a strictly decreasing function of $n$, where $p_n$ is the $n$th prime.
It's been checked for all primes less than 18 quintillion!
However, it seems most experts believe it's false.
oh, I hadn't seen this last bit. I wrote to my friend Hugo Mariano, one of the authors of the paper you've mentioned, as neither him nor his ex-student Luan Ferreira is on twitter.
Mariano's paper is nice! In the conclusions they note that Firoozbakht's conjecture contradicts some popular heuristics that use probability theory to guess the statistics of prime gaps.
If these heuristics are right, we should eventually see prime gaps so big they violate Firoozbakht's conjecture. But "eventually" may be a long time.
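For the curious: Firoozbakht's conjecture says $p_n^{1/n}$ is strictly decreasing in $n$, where $p_n$ is the $n$th prime. Here's a tiny sanity check for the primes below 10000 (nothing like the 18 quintillion mentioned above!), comparing logarithms to avoid floating-point roots:

```python
import math

def primes_below(limit):
    """Sieve of Eratosthenes, returning all primes < limit."""
    sieve = bytearray([1]) * limit
    sieve[:2] = b'\x00\x00'
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(sieve[i*i::i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = primes_below(10000)   # ps[n] is the (n+1)st prime

# p_n^(1/n) > p_{n+1}^(1/(n+1))  iff  log(p_n)/n > log(p_{n+1})/(n+1):
ok = all(math.log(ps[n]) / (n + 1) > math.log(ps[n + 1]) / (n + 2)
         for n in range(len(ps) - 1))
print(ok)   # True
```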
Tomorrow (Friday November 5th) at noon Eastern Time I'm giving a talk called "Visions for the Future of Physics" at the annual symposium of a small organization called the Basic Research Community for Physics. It'll be zoomed and live-streamed on YouTube. Details are here:
https://basic-research.org/events/annual-brcp-meeting-2021
Quick summary: slow progress in fundamental physics, rapid in condensed matter - but the Anthropocene looms over everything, and nobody should ignore it.
You can see the video of my talk "Visions for the Future of Physics" on YouTube. I made a little advertisement for studying the moduli space of quantum field theories... though I didn't say the word "moduli space".
I did something a bit weird yesterday: I announced two conjectures on homotopy theory on Twitter. I wrote a series of tweets, of which this is the first. You can see the rest by clicking the link.
Each element of πₖ(Sⁿ) is basically a way of wrapping a k-dimensional sphere around an n-dimensional one. I conjecture that there are never exactly 5 ways. Unlike Fermat, I won't pretend I have a truly remarkable proof that this tweet is too small to contain. (1/n) https://twitter.com/johncarlosbaez/status/1460244277817225224/photo/1
- John Carlos Baez (@johncarlosbaez)
Since there's not a lot of theoretical evidence for these conjectures, just empirical, I took this opportunity to explain some very basic stuff about homotopy groups of spheres.
But I've also put the conjectures on MathOverflow.
These conjectures seem very hard to prove true given what people know now. If they're false maybe someone can find a counterexample using existing techniques. Mainly I want to encourage new thoughts on homotopy groups of spheres (a subject way beyond my technical abilities).
We've finished a version of this paper, really a column for the AMS Notices:
Steve made up a great rule for what counts as a "stripe" in a solution of this equation, which allows us to state a really exciting and challenging conjecture. (Yes, I'm making lots of conjectures these days.)
It's 2+ pages long and it's supposed to be fun for mathematicians, so if you have any comments/criticisms/corrections, please let me know!
Later we will write a longer paper.
John Baez said:
Steve made up a great rule for what counts as a "stripe" in a solution of this equation, which allows us to state a really exciting and challenging conjecture. (Yes, I'm making lots of conjectures these days.)
I can't shake from my mind the idea that these stripes could be interpreted as string diagrams... what does this even mean? what kind of category -- I assume affine monoidal would be enough?
Mmh yeah, the category would be the free affine monoidal category on one object (the 'stripe'), call it $\mathcal{S}$. Then cutting at times $t_0 \leq t_1$ defines a morphism in this category, from 'n° stripes at $t_0$' to 'n° stripes at $t_1$'. This sounds like a functor from $(\mathbb{R}, \leq)$ to $\mathcal{S}$.
Funnily enough, I just noticed the image you have in the paper shows a 'counterexample' (or simply a non-generic solution), probably an artifact of the definition of stripe and the atypical dynamics near $t = 0$: image.png
Kind of off topic, but if you look at the output of the rule 110 cellular automaton it also looks kind of like a string diagram
Specifically it looks like a string diagram in a 2-category where the 0-cells are the possible phases for the repeating 'background' pattern, 1-cells are types of glider, and 2-cells are collisions between gliders. I've sometimes idly wondered if there's a way that could be made formal.
Turning these wavy, continuous diagrams into string diagrams somehow feels like tropicalization to me. I wonder if there's any content to that.
Matteo Capucci (he/him) said:
Funnily enough, I just noticed the image you have in the paper shows a 'counterexample' (or simply a non-generic solution), probably an artifact of the definition of stripe and the atypical dynamics near $t = 0$:
Yes, perhaps I should explicitly mention that in case anyone is puzzled! Our conjecture that stripes never die or split applies to generic solutions on the inertial manifold. This is not a non-generic solution; it's a solution that has not yet had time to approach the inertial manifold - the finite-dimensional manifold of "eventual behaviors". We started this solution off with random initial data. All solutions approach the inertial manifold exponentially fast - that's the definition of an inertial manifold. This splitting of a stripe occurs very early on, and you'll notice none happen later on:
Kuramoto-Sivashinsky solution with stripes indicated
I can't shake from my mind the idea that these stripes could be interpreted as string diagrams... what does this even mean? what kind of category -- I assume affine monoidal would be enough?
Yes, this is on my mind too: we seem to be dealing with processes that could happen in the free monoidal category on a monoid object, otherwise known as the [[augmented simplex category]] $\Delta_a$. That is, the category of linearly ordered finite sets.
Somehow this chaotic differential equation describes "random morphisms in the augmented simplex category".
As @Nathaniel Virgo points out, there are also various cellular automata that give pictures that typically look like morphisms in some monoidal category, or 2-morphisms in some 2-category.
Somehow I'm even more excited about it for a rather simple differential equation, because there the system is fundamentally continuous yet it manages to somehow behave - "typically", and "eventually" - as if it were described by a random discrete process.
When we write a longer paper about this, I should make these points! I think they're a bit too much for this column, which is supposed to be short.
But I will point out the stripe-splitting early on. So thanks for all these comments!
@Owen Lynch, @Joe Moeller and I finished a paper on a categorical approach to equilibrium thermostatics. It should appear on the arXiv on Monday. But you can look at it now:
Greg Egan's picture below is on the cover of the December AMS Notices, and the column that goes with it is here!
Egan: the roots of a polynomial, and those of its derivative
My new blog post got picked up by Hacker News, so it's getting lots of views:
It's about chemicals that help make autumn leaves red. They contain rings of carbon with delocalized electrons that resonate at frequencies that let them absorb and emit photons. My post is full of pictures of leaves I've seen here in D.C. this fall. There's a math question buried in here, too....
I also blogged about how electrons get delocalized in benzene rings, leading up to a more mathematical post that I may write soon:
Today I submitted this paper to the AMS Notices, where with luck it'll appear as my regular column:
I also put my last column, The Gauss-Lucas theorem, on the arXiv.
So I'm feeling rather virtuous today. :upside_down:
But mainly I'm working on some math connected to the hydrogen atom and the periodic table, using group representation theory and quantum field theory. And this is going rather slowly, because I keep getting distracted!
Whew, trying to relate the hydrogen atom to the Dirac operator on the 3-sphere, I'm sinking into a morass of factors of 2, sign conventions, and other more conceptual issues in differential geometry. I used to know a lot of this stuff by heart, but I haven't done this kind of math for over a decade!
It's a bit painful to revive these skills, since it can take an hour to decide if an equation needs a factor of $2$ in it. But it's also a bit fun, since it reminds me of the "good old days" - and I'd rather not let these old skills atrophy.
Sign conventions, eh? :wink:
Yeah, I was actually going to mention that after a bit more of this, submitting those corrections you found may start seeming like a pleasant break.
They are, at present, the only actual "chore" I need to do. So I will do them pretty soon.
Before I had various papers with coauthors as excuses.
Thanks to all your hard work, it will be completely painless to submit these corrections if I just trust you and don't think about them. The only "pain" comes from the feeling I should make sure you're right (which I really don't want to do).
I can't remember offhand if I mentioned in the erratum note the hard work of one Kevin Van Helden, who independently verified from go to woe that the HDA 6 definition of weak map of $L_\infty$-algebras agreed with Lada and Markl's (and this is what the corrected version of the Loop groups... paper will have). He sent me his typed notes, but they aren't going to be published; it was an exercise for his own purposes to support a recent paper on the topic.
Yes, I think you mentioned this.
There's a thing in math where you can feel sure someone is right, yet you can't say it in your own voice until you've checked it yourself.
My work with folks at the Topos Institute on compositional epidemiology is going really well - I should start blogging about it or something.
A bunch of people are involved. I got really excited when Nate Osgood joined. He's been doing "computer science for public health" for 30 years, he helps Canada with their coronavirus modeling, and he's been studying category theory for a few years. Plus, he's energetic and really clear-headed! But I guess what really tickles me is that we were best friends in grad school.
In one great conversation last week we figured out which ways of composing stock and flow diagrams (a popular type of model in epidemiology) are most needed.
And the great news is that it's a really simple case of structured cospans! That is, a stock and flow diagram has a set of "stocks", a set of "flows", and some other stuff, but we compose two such diagrams simply by doing a pushout on the set of stocks; the rest goes along for the ride.
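As a toy illustration of that composition rule (the data structures below are invented for this sketch and have nothing to do with the actual AlgebraicJulia code): gluing along shared stock names is a pushout on the set of stocks, and the flows just come along.

```python
def compose(m1, m2, shared):
    """Glue two stock and flow models along the stocks named in `shared`."""
    assert shared <= m1["stocks"] and shared <= m2["stocks"]
    return {"stocks": m1["stocks"] | m2["stocks"],   # union identifies shared names
            "flows": m1["flows"] + m2["flows"]}      # flows go along for the ride

# A toy SIR model and a toy vaccination model sharing the stock "S":
sir = {"stocks": {"S", "I", "R"},
       "flows": [("S", "I", "infection"), ("I", "R", "recovery")]}
vax = {"stocks": {"S", "V"},
       "flows": [("S", "V", "vaccination")]}

combined = compose(sir, vax, shared={"S"})
print(sorted(combined["stocks"]))   # ['I', 'R', 'S', 'V']
print(len(combined["flows"]))       # 3
```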
There are other trickier ways to compose them, which we'd been pondering, but Nate says that most of the time professional epidemiologists compose models in this simple way. But they do it by taking two programs, reading them, and writing a new program more or less from scratch.
So, without a lot of programming we may be able to create a tool that will save the people who model COVID-19, malaria and other diseases a lot of work!
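The pushout composition described above is simple enough to sketch in a few lines of Python. This is a toy illustration only, not the actual AlgebraicJulia implementation; the diagram encoding and the SIR/waning-immunity example are made up for this sketch:

```python
# Toy model: a stock and flow diagram is a set of stocks plus a list of
# flows (source stock, target stock, name). We compose two diagrams by
# identifying stocks with the same name: a pushout over the shared stocks.

def compose(d1, d2):
    shared = d1['stocks'] & d2['stocks']        # the common interface
    return {
        'interface': shared,
        'stocks': d1['stocks'] | d2['stocks'],  # pushout of sets: union glued along the interface
        'flows': d1['flows'] + d2['flows'],     # flows just go along for the ride
    }

# A basic SIR model...
sir = {'stocks': {'S', 'I', 'R'},
       'flows': [('S', 'I', 'infection'), ('I', 'R', 'recovery')]}

# ...composed with a waning-immunity model sharing the stocks R and S.
waning = {'stocks': {'R', 'S'},
          'flows': [('R', 'S', 'waning')]}

sirs = compose(sir, waning)
print(sorted(sirs['stocks']))  # ['I', 'R', 'S']
print(len(sirs['flows']))      # 3
```

The point is just how little is going on: the only gluing happens on stocks, so the composite SIRS model falls out of a union of sets.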
The Topos Institute is going to hire a programmer to help this project, and Nate and Evan Patterson are applying for a grant to teach a course on this methodology.
There are also two other epidemiologists involved in this gang, which is great. Getting actual practitioners involved is crucial when trying to apply math to any subject.
John Baez said:
In one great conversation last week we figured out which ways of composing stock and flow diagrams (a popular type of model in epidemiology) are most needed.
This sounds super interesting. I look forward to learning more :blush:
Coincidentally, I've been working on some stock and flow models recently for financial modeling. It would be cool to find some mathematical synergies :+1:
Companies produce 3 primary financial statements: 1. Balance Sheet, 2. Income Statement, 3. Statement of Cashflows.
Balance sheet accounts are stocks. Income and cashflow items are flows. This is a new model I've been developing for real-time financial reporting and am currently adapting it for distributed ledgers.
Yes, stock and flow models should also work in finance. But I'm trying to help the world: that's why I've chosen epidemiology as a place to apply categories, rather than anything whose main effect is to help rich people get more rich or powerful people get more powerful.
This is why I avoid working on anything to do with AI, machine learning, neural networks, cryptocurrency, quantum computation, etc. (Yes, I worked with DARPA for a while, but I learned my lesson.)
On a wholly different note: I'm trying to write a paper connecting the periodic table of elements to quantum field theory in a strange way. In the process I've been learning basic stuff about the periodic table and blogging about it:
The Hilbert space of bound states of a hydrogen atom decomposes as a direct sum of irreducible representations of SO(3), called 'subshells'. The Madelung rules say how we can approximately understand elements as being built by adding electrons to subshells one at a time: they give the order in which electrons go into these different subshells.
I sketch the representation theory, explain the Madelung rules, apply them to all the elements up to the first batch of transition metals - scandium to zinc - and explain some failures of the Madelung rules.
I also explain why scandium is scandalous.
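For those who like to compute, the Madelung ordering is easy to generate: subshells (n, l) fill in order of increasing n + l, with ties broken by smaller n, and each subshell holds 2(2l + 1) electrons. A quick sketch in Python (the cutoff n ≤ 7 is arbitrary):

```python
# Generate the Madelung filling order of subshells (n, l):
# sort by n + l, breaking ties by smaller n.

letters = 'spdfghi'  # spectroscopic labels for l = 0, 1, 2, ...

subshells = [(n, l) for n in range(1, 8) for l in range(n)]
order = sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

print(' '.join(f'{n}{letters[l]}' for n, l in order[:8]))
# 1s 2s 2p 3s 3p 4s 3d 4p

# Each subshell holds 2(2l + 1) electrons; filling the first 8 subshells
# gives the electron count of a noble gas:
print(sum(2 * (2 * l + 1) for n, l in order[:8]))  # 36, i.e. krypton
```

Note the 4s before 3d: that's exactly the quirk that makes the transition metals, and the scandium story, interesting.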
But I'm trying to help the world: that's why I've chosen epidemiology as a place to apply categories, rather than anything whose main effect is to help rich people get more rich or powerful people get more powerful.
I find this a little offensive. I like to think I also want to help people. Finance is not all evil just like climate scientists are not all good.
(granted, it is impossible to have an opinion and not offend someone :sweat_smile: )
I would just say that there are things we can do in finance to help a lot of people. Many people in the world still do not have access to the financial system.
I spent most of the past 10 years of my life thinking about ways to improve insurance and risk management, because the current model is broken and insurance is important, especially when you have people financially dependent on you.
I know you probably never would, and it's fine, but if you woke up one day and decided to try to help fix the current broken financial system, you could do a whole lot of good for a whole lot of people. That is where I am putting my energy.
Possibly the operative phrase was "main effect". The majority of money in the finance sector is presumably not being spent on things like providing access to microfinance in developing countries (or whatever the current most altruistic scheme is), but rather on things like minimising the tax burden of the ultrawealthy, of companies with profits greater than those of countries, etc.
Yes, I can imagine someone trying to "fix the broken financial system", but I believe mere math is not enough to do that. I think it would take a huge amount of political pushing, because there are very rich and powerful people who live off this brokenness, who would fight against fixing it.
Since I don't have the skill or energy to win such a fight, I feel any math I did connected to finance would be more likely to reinforce the status quo rather than improve it. At the very least this would be a great danger that I'd have to carefully study, and I don't have the confidence I could do it well.
The same goes for math connected to AI, machine learning, neural networks, cryptocurrency, quantum computation, etc. I just don't trust my ability to target an area in a way that would accomplish the sort of effects I want. I feel more likely I'd be just "greasing the wheels of progress": making trends that are already happening happen even faster. And I don't want to spend my time doing that.
Lots of people are already doing that: running after the big snowball that's rolling downhill, and pushing it to go even faster.
On a happier note: I'm going to talk to Todd Trimble and James Dolan today at 4. At least if Dolan is awake - he has an erratic sleeping schedule.
I haven't talked to James in almost 10 years! But he's been talking with Todd every Saturday for a long time, and I thought I'd join in. I feel like getting some more good pure math into my life. All I know is that he's doing something connected to principally polarized abelian varieties and Eisenstein series, maybe using the "doctrinal" approach to algebraic geometry that he's been developing.
John Baez said:
Yes, I can imagine someone trying to "fix the broken financial system", but I believe mere math is not enough to do that. I think would take a huge amount of political pushing, because there are very rich and powerful people who live off this brokenness, who would fight against fixing it.
Since I don't have the skill or energy to win such a fight, I feel any math I did connected to finance would be more likely to reinforce the status quo rather than improve it. At the very least this would be a great danger that I'd have to carefully study, and I don't have the confidence I could do it well.
The same goes for math connected to AI, machine learning, neural networks, cryptocurrency, quantum computation, etc. I just don't trust my ability to target an area in a way that would accomplish the sort of effects I want. I feel more likely I'd be just "greasing the wheels of progress": making trends that are already happening happen even faster. And I don't want to spend my time doing that.
This is absolutely true also with respect to environment tho. A lot of people are profiting enormously from polluting businesses. This doesn't seem to bother you too much tho
For me the best indicator for "how much of an impact I can make here" is fragmentedness. When areas are monopolized by very few individuals/ideas/corporations/practices it becomes very difficult to make an impact. When many people tackle the problem from a multitude of angles that are considered "equally promising" the chances increase
This is absolutely true also with respect to environment tho. A lot of people are profiting enormously from polluting businesses. This doesn't seem to bother you too much tho.
Huh? Why do you think this doesn't bother me too much?
I don't recall ever trying to give a near-complete list of things that bother me.
The little list I gave was a list of a few "hot topics" that I've avoided even though math might make an impact there right now. I tried to explain why I'm avoiding those, though actually it'd take a lot longer to really explain it.
John Baez said:
The little list I gave was a list of a few "hot topics" that I've avoided even though math might make an impact there right now. I tried to explain why I'm avoiding those, though actually it'd take a lot longer to really explain it.
Yeah, what I meant is that the reason you gave seems also to apply to topics you are actively working on, so I thought maybe there is also something else that makes you work on some things and avoid others
I'm actively working on epidemiology; that's the only truly practical thing I'm trying to do.
Other stuff I do could be applied by someone, that's true, but epidemiological modeling is the subject where I'm actually working with a team of people to get something practical done.
More stuff:
All known intelligent systems are collectives. Individual organisms are collectives of cells, which develop, heal, sense, and act. Groups of human and non-human animals use a range of mechanisms to coordinate their behavior across space and time, from flocks and swarms to organizations, institutions, and cultural traditions. Deep learning — the dominant approach to artificial intelligence — gains its power from combining simple units into complex architectures; many contemporary architectures (e.g., GANs) combine multiple learners, and multi-agent settings are a critical research frontier for AI, especially settings that integrate human and artificial agents.
These examples imply that a scientific understanding of intelligence must fundamentally grapple with collective intelligence — in a much broader sense than typical usage of the term would suggest. A variety of mathematical and computational models have been developed to explain and design intelligent behavior in particular collectives. Several mathematical fields source the ideas used to build and understand these models, from dynamical systems, statistical mechanics, network science, and random matrix theory to information theory, optimization, Bayesian statistics, and game theory — even applied category theory, in recent years. It remains unclear, however, whether we have the right mathematical language to provide a unified, abstract account of collective intelligence — or whether such a language is even possible! This workshop will bring together leading experts who study and model collective intelligence, as well as those who seek to understand such models. Its goal is to explore the advantages and disadvantages of existing modeling frameworks; to find collective implementations of models of individual cognition; to expand the systems and settings understood as manifesting collective intelligence; and to grow the community of researchers who study the mathematical foundations of collective intelligence.
I've repeatedly told him I don't know anything about intelligence, but he seems to think category theory could be helpful.
Here's some fun math linking polyhedra, 4d polytopes, finite groups and the quaternions:
Greg Egan made some nice animations!
My undergrad student Aaron Lauda, now a tenured professor at the University of Southern California, has a nice article in the January issue of the AMS Notices:
This issue also has an ad for the American Mathematical Society summer program on applied category theory where @Nina Otter, @Valeria de Paiva and I will be teaching! If you are a grad student or know a grad student who could benefit from this, check it out! It's also available to early-career people inside and outside academia:
Applications are due fairly soon!
Isn’t Aaron at USC?
That's what I meant. I decided to spell it out, and I translated USC into University of Santa Cruz. :stuck_out_tongue_wink:
I'll fix that....
I wonder if the MRC should be advertised on a more general stream as well?
Yes, the school is announced here:
I'm also going to announce it repeatedly on Twitter and some blogs as soon as the vacation ends, and I hope everyone else does too!
I'm working away on my paper "Quantum fields and the periodic table", and I needed a nice careful review of basic facts about the inverse square force law, so I wrote one up on my blog:
I just learned that there's a really simple-to-state open conjecture in graph theory that's stronger than the Four Color Theorem. I think it's great because it suggests the Four Color Theorem is part of a bigger, more conceptual story.
Whoa! I just learned that the Four Color Theorem, so far proved only with help from a massive computer calculation, follows from an even harder conjecture that's still unproved! And this harder conjecture is very nice and easy to state. (1/n) https://twitter.com/johncarlosbaez/status/1478421119736504323/photo/1
- John Carlos Baez (@johncarlosbaez)One of the big surprises when you're first learning about topology is that a lot depends on the dimension of space mod 4. Who would have guessed that 9d spaces are more like 17d spaces than 10d spaces? Just saying this makes me feel like a mad scientist. (1/n) https://twitter.com/johncarlosbaez/status/1479890499804680194/photo/1
- John Carlos Baez (@johncarlosbaez)However, this was so nontechnical that I wasn't able to say the words that were buzzing around in my brain. So I wrote a more technical summary of this on MathOverflow when I realized there was an interesting question lurking here:
As I explain, there's actually stuff you could try that mimics the usual stuff, but involves the dimension mod 3.
Today I tweeted a more technical explanation of why the topology of manifolds depends so much on the dimension mod 4.
Hardcore math tweet: how the topology of manifolds depends on the dimension mod 4. Yesterday I was leading up to an explanation of this using homology theory, but let's use cohomology. DeRham cohomology may be enough - physicists tend to prefer this. https://twitter.com/johncarlosbaez/status/1479901955224788992
- John Carlos Baez (@johncarlosbaez)I'd like to go a lot further, but this was a start.
If I can ask an irreverent question, what is the motivation for writing such things as long sequences of tweets rather than an ordinary blog post? What is the attraction to decomposing n characters into n/280 chunks?
I have about 50,000 Twitter followers, who include lots of people who don't read my blog. That's all.
Oh, and also - for some reason, lots of people will not click a link on Twitter that takes them to a blog article explaining the use of deRham cohomology to study manifolds.
In short: if you want to talk to lots of people, you have to go someplace where there are lots of people.
Also btw, there's no way to get n/280 good tweets by writing n good characters and then chopping them into tweets. The idea of Twitter is that each tweet should be interesting by itself. A haiku is not a tiny fragment of an epic poem.
It's true that explaining deRham cohomology is pushing the limit of what it makes sense to do on Twitter. My easier tweets get more readers and probably do more good. But I think there's also something good about giving a taste of serious math to people who wouldn't otherwise naturally bump into it. Even if they don't understand it, they'll see what math is like - which is often not what they think.
A weird side-effect (for me) of the Twitter thread format is that I don't get much writer's block. I can write a blog post's worth of content with minimal review in under an hour, whereas I tend to get stuck for ages when writing a blog post. I take myself less seriously, and sometimes that's exactly what you need to deliver. Sometimes not: sometimes you need blog posts.
When I write a bunch of tweets on some topic and they seem to amount to something interesting, I'll often fix them up and combine them into a blog article. It requires rewriting since blogs have a less telegraphic tone of voice.
Struggling to finish my paper on quantum fields and the periodic table, I needed to understand the Duflo isomorphism. The description on Wikipedia was very sketchy, so I completely rewrote it.
The basic idea:
In mathematics, the Duflo isomorphism is an isomorphism between the center of the universal enveloping algebra of a finite-dimensional Lie algebra and the invariants of its symmetric algebra. It was introduced by Michel Duflo (1977) and later generalized to arbitrary finite-dimensional Lie algebras by Kontsevich.
Or, if you're a physicist: it's a good way to quantize Casimirs.
It turns out that the Duflo isomorphism involves the number 24 in an interesting way, which winds up being manifested in the hydrogen atom... but that part will be in my paper.
I wrote a few blog articles on chemistry and astrophysics. I guess I'm feeling a strong need to think about physical sciences:
The last one is a rant against certain 'evil' periodic tables that are actually very widespread! And I don't mean the periodic table of n-categories.
I also did a tweet-thread about the signature as an invariant of manifolds whose dimension is a multiple of 4, with examples coming from 4-manifolds:
https://twitter.com/johncarlosbaez/status/1483893626094555136
Hardcore math tweet: I've been explaining how spaces whose dimension is a multiple of 4 are special. Using these ideas you can build a 4d topological manifold that can't be made smooth - starting from the Dynkin diagram of E8! But we won't get that far today. (1/n) https://twitter.com/johncarlosbaez/status/1483893626094555136/photo/1
- John Carlos Baez (@johncarlosbaez)And I did one where I sketched how people use the signature to get an example of a non-smoothable 4-manifold:
https://twitter.com/johncarlosbaez/status/1484582359777230850
Hardcore math tweet: Okay, let's do it! Let's sketch, very sketchily, how to build a 4-dimensional topological manifold that cannot be given a smooth structure. I won't prove anything, just state lots of stuff. We'll use E8. (1/n) https://twitter.com/johncarlosbaez/status/1484582359777230850/photo/1
- John Carlos Baez (@johncarlosbaez)I had to rush a lot in the last one, because there were so many results to cover. So it wasn't very easy to follow, I suspect, but at least it shows an outline of the argument - so you can see how the numbers 2, 4, 8, and 16 play significant roles in topology.
Going up to 8, I believe these powers of two are all connected to Bott periodicity, though I need to think harder about that.
John Baez said:
Gotta love the sassiness here :stuck_out_tongue_wink:
image.png
Thanks! I was just wondering who gets to officially say "we're going to call that color "Perano"".
I guess there are some Perano axioms they follow... :sweat_smile:
:drum:
I blogged about a really fascinating puzzle connecting homotopy theory and algebraic geometry:
@David Michael Roberts has also thought about this.
Also, some interesting twists on the famous old Hardy-Ramanujan story about the taxicab with number 1729:
Hardy said the number on his taxi, 1729, was rather dull. "No, Hardy," said Ramanujan, "it is a very interesting number. It is the smallest number expressible as the sum of two cubes in two different ways." Genius! But he'd thought of it years before, and written this: (1/n) https://twitter.com/johncarlosbaez/status/1487479314945773570/photo/1
- John Carlos Baez (@johncarlosbaez)If anyone knows which page in the "Lost Notebook" this is, I'd like to hear about it.
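Incidentally, Ramanujan's claim about 1729 is easy to verify by brute force; here's a quick check in Python:

```python
# Verify that 1729 is the smallest number expressible as a sum of two
# positive cubes in two different ways (Ramanujan's taxicab number).
from collections import defaultdict

ways = defaultdict(list)
for a in range(1, 20):
    for b in range(a, 20):          # b >= a, so each pair is counted once
        ways[a**3 + b**3].append((a, b))

taxicab = min(n for n, reps in ways.items() if len(reps) >= 2)
print(taxicab, ways[taxicab])       # 1729 [(1, 12), (9, 10)]
```

So 1729 = 1³ + 12³ = 9³ + 10³, and nothing smaller works.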
There's a paper coauthored by Ken Ono that connects 1729 with, I think, mock modular forms, maybe 5-7 years ago. Possibly in PNAS. Ramanujan was seemingly thinking about near-misses to Fermat in the n=3 case: being off by 1 gives two ways to sum two cubes to get the same answer.
And as far as the vector field goes, I thought about it enough to ask the question... :grinning:
Thanks. Maybe you mean The 1729 K3 surface by Ono and Trebat-Leder. It's discussed here:
In 2013, Ono searched through the Ramanujan archive at Cambridge and unearthed a page of formulas that Ramanujan wrote a year after the 1729 conversation between him and Hardy. “From the bottom of one of the boxes in the archive, I pulled out one of Ramanujan’s deathbed notes," Ono recalls. “The page mentioned 1729 along with some notes about it," said Ono.
Ono and his graduate student Sarah Trebat-Leder are publishing a paper about these new insights in the journal Research in Number Theory. The paper will describe how one of Ramanujan’s formulas associated with the taxi-cab number can unearth secrets of elliptic curves. “We were able to tie the record for finding certain elliptic curves with an unexpected number of points, or solutions, without doing any heavy lifting at all," Ono explains. “Ramanujan’s formula, which he wrote on his deathbed in 1919, is that ingenious. It’s as though he left a magic key for the mathematicians of the future," Ono added.
Yes, I think that's it.
I corrected my tweets about Ramanujan and wrote a much more detailed analysis of the incident involving taxi number 1729:
I'm really happy that Jason Starr has largely cracked the problem of explicitly finding a vector field with 24 zeros, all of index 1, on a K3 surface - in order to see that the third stable homotopy group of spheres is ℤ/24. He did it in a comment to this article:
To understand his solution I needed to learn about 'elliptic fibrations' for K3 surfaces - these are maps from K3 surfaces to the Riemann sphere whose fibers are elliptic curves.... except for up to 24 'singular' fibers!
I explained what I learned in a comment.
I still want to get ahold of a beautiful concrete example of a K3 surface with an elliptic fibration, and get ahold of the necessary vector field.
Have a look at some equations in Tables 1 and 2 in Weierstrass Equations for the Elliptic Fibrations of a K3 Surface
I think the elliptic K3 surfaces here are singular, but I haven't read through this paper, or looked back at the previous papers, to check exactly how or where
I'll look at that. Will Sawin gave me a close-to-worked-out-but-not-completely-worked-out example on MathOverflow.
Mrowka says maybe nobody knows a single K3 surface whose hyper-Kaehler structure is completely explicitly worked out! That's pretty sad. We don't need a hyper-Kaehler structure here, an almost quaternionic structure will suffice - but Sawin wasn't able to give a completely explicit one of those either.
If I were not busy with too many other things, including moving to a new career, I'd try to dig into this and see what I could do :-(
You've helped out already!
I'd like to write a new blog article sort of summarizing where we are, and then quit, since this stuff is getting harder for me at this point.
Yeah, it would be a good masters thesis, say, for someone at the intersection of differential topology, algebraic geometry and algebraic topology, to figure out such a vector field.
Since Tom Mrowka and Will Sawin couldn't do it, it might be harder than a typical master's thesis.
Next week I'm going to give a talk at the Institute for Pure and Applied Mathematics at UCLA, where they are having a workshop on the Mathematics of Collective Intelligence. Here's my abstract:
As we move from the paradigm of modeling one single self-contained system at a time to modeling "open systems" which interact with their - perhaps unmodeled - environment, category theory becomes a useful tool. It gives a mathematical language to describe the interface between an open system and its environment, the process of composing open systems along their interfaces, and how the behavior of a composite system relates to the behaviors of its parts. It is far from a silver bullet: at present, every successful application of category theory to open systems takes hard work. But I believe we are starting to see real progress.
David Spivak will be there, and there's a chance David Mumford will actually be there, which would be amazing.
Not to derail the thread, but perhaps by 'Masters thesis' we have slightly different ideas :-) Here people do a research Masters degree with a thesis that's half or so of what a PhD thesis would be. It seems to me that this problem is made up of a lot of standard stuff (plus the little extra secret sauce), and that people on MO are not going to write what amounts to a whole paper just to answer a question, what with all the calculations that would be needed...
But perhaps it's much more than I estimate, and it could take a whole PhD thesis to sort it out.
Owen Lynch is coming up with new foundations for thermodynamics for his master's thesis with me, so I know there's a wide range of how ambitious such theses can be.
The remaining 'hard part' - the math of which is more interesting to me than the question of whether a master's student could do it - is to get a concrete formula for a smooth lift of the vector field on the Riemann sphere to Sawin's K3 surface fibered over it.
Hmm, maybe this isn't so hard.
Either way, the main reason I'm quitting here is that in this approach the problem seems to be devolving into a mess of formulas, when what I'd really want is a beautiful construction that I can understand just using words.
It could be that if someone thinks about this harder the formulas will dissolve into words. Or it could be that a different choice of elliptic fibration will make everything clearer! Unfortunately I don't have much skill yet for cooking up elliptic fibrations.
On Wednesday next week (February 16, 2022) I'm going to give a talk at the Institute for Pure and Applied Mathematics, at UCLA. Jacob Foster is running a program on The Mathematics of Collective Intelligence, and various interesting people will be there, including David Spivak.
Here's a preliminary version of my talk slides:
As we move from the paradigm of modeling one single self-contained system at a time to modeling 'open systems' which interact with their — perhaps unmodeled — environment, category theory becomes a useful tool. It gives a mathematical language to describe the interface between an open system and its environment, the process of composing open systems along their interfaces, and how the behavior of a composite system relates to the behaviors of its parts. It is far from a silver bullet: at present, every successful application of category theory to open systems takes hard work. But I believe we are starting to see real progress.
This is a very elementary 25-minute talk. Since there's not enough time to explain serious category theory to this audience, in the second half I'm trying to explain the kind of collaboration that might be required to apply category theory to "collective intelligence"... and also to hint at some of the tools and resources that are available now.
Since I don't know anything about "collective intelligence", I'm using a different example: compositional models of epidemiology. It's good to know how these arose from work on electrical circuits and then chemistry, to see how the same mathematical ideas can be repurposed for different applications.
It's kind of weird to be talking about epidemiology at a workshop on "collective intelligence", but I don't want to pretend I know anything about collective intelligence, so I'm just talking about how to apply categories to understand open systems (which could easily include networks of intelligent agents).
Yay! My paper with Kenny Courser and Christina Vasilakopoulou was accepted by Compositionality:
The Topos Institute's work on "compositional epidemiology" seems to be going well. Besides having a biweekly seminar with category theorists, computer scientists and epidemiologists - including some people who fit into two of those subsets! - a smaller group of us are working on a piece of software that will use ideas from category theory to make easily composable models of disease.
This smaller group is not precisely defined, but I'd have to say it includes Evan Patterson, Sophie Libkind, Nathaniel Osgood, Xiaoyan Li and me.
There are others, including James Fairbanks, whose work on AlgebraicJulia and CatLab is a prerequisite for this project to have ever happened.
Nathaniel Osgood is having his student Xiaoyan Li write a fairly simple piece of demonstration software in AlgebraicJulia to demonstrate the ideas - simple, but complicated enough to build working models of infectious disease.
In March they plan to hold a hackathon to get more of Osgood's students to test out the software.
Then it looks like Osgood is getting a grant to hold a course on how to use the software this summer!
This grant is from CANMOD, the Canadian Network for Modeling Infectious Diseases.
Some of us will be instructors in this course. So, some people who model the spread of infectious disease will soon be getting a course on the practical use of category-based software!
For anyone who has no idea how categories could be used in modeling infectious disease, and is interested, this article is a decent place to start:
For details, try the videos and the blog article linked to from there!
These describe an old version of the idea, which has substantially changed by now... but a lot of the basic framework is the same.
Here's my talk "Categories: the mathematics of connection":
It's just 25 minutes long. The main interesting part (to me) is a very quick overview of how Brendan Fong's thesis work on electrical circuits eventually grew into the Topos Institute project for using categories to help design software for modeling epidemics!
The slides have references that you can read by clicking on the blue stuff - they're here.
Jim and I have decided to put some conversations on YouTube. They're not very easy to follow unless you know about the math already. Here's one from February 21, 2022:
Topics include:
Neron-Severi groups of abelian varieties:
https://en.wikipedia.org/wiki/Neron-Severi_group
https://en.wikipedia.org/wiki/Abelian_variety
How to describe complex line bundles, or holomorphic line bundles, on a complex torus (e.g. an abelian variety) using Riemann forms:
https://en.wikipedia.org/wiki/Riemann_form
The square tiling honeycomb {4,4,3} and its relation to the abelian surface that's the product of two copies of the elliptic curve given by the complex numbers mod the Gaussian integers:
https://en.wikipedia.org/wiki/Square_tiling_honeycomb
https://en.wikipedia.org/wiki/Gaussian_integer
The hexagonal tiling honeycomb {6,3,3} and its relation to the abelian surface that's the product of two copies of the elliptic curve given by the complex numbers mod the Eisenstein integers:
https://en.wikipedia.org/wiki/Hexagonal_tiling_honeycomb
https://blogs.ams.org/visualinsight/2014/03/15/633-honeycomb/
https://en.wikipedia.org/wiki/Eisenstein_integer
Good news! Evan Patterson and Xiaoyan Li are writing software in AlgebraicJulia using structured cospans, presheaf categories and operads to make a tool to create compositional models of epidemiology. And here's the good news:
In August, they and others in our group will teach a week-long course on this software in Vancouver. This is being paid for by CANMOD, the Canadian Network for Modeling Infectious Diseases. Interestingly, this organization has a lot of mathematicians. So, we may even decide to teach a bit about the underlying category theory - though no such knowledge should be required to use the software.
I think Nate Osgood (Xiaoyan's advisor) wants to talk about two versions of the software: the already existing version that uses Petri nets, and the new version that uses stock and flow diagrams, which are more commonly used by epidemiologists.
Here is Nate's email:
Dear Colleagues,
I am writing to provide final confirmation that our preferred timing for the Compositional Methods for Modeling Infectious Disease event at Simon Fraser University has now been confirmed by CANMOD staff, so we're definitively good to go with this week of August 8, 2022 at Simon Fraser University.
I am now liaising with the CANMOD staff regarding second-order issues, most notably the choice of SFU campus (the main Burnaby campus vs. the downtown campus, the latter located near the ferry terminals and with far more ready subway access). Having lectured or organized events at both campuses, I'm inclined to be open to either, and would suggest whichever is best with respect to amenities such as on-campus lodging for instructors/TAs, catering, video recording support, transportation access for visitors (such as ourselves!), access by other CANMOD-affiliated researchers at SFU, good projection and black/whiteboards and -- of course -- room availability. There are also some financial matters in terms of support for student time (primarily Xiaoyan) to aid in the creation of example models.
I'll keep you posted on this front and welcome any questions or issues you'd like considered, but with the first-order logistical issues out of the way, I think that we can now focus our efforts where they are rightly due: on preparing quality plans and materials for the event.
Thanks so much for your enablement of this event!
Sincerely,
Nathaniel Osgood
I've decided to create a page where I dump videos of my conversations with James Dolan and emails with him, which are so far both mainly about algebraic geometry:
Neither is particularly well-organized, so I won't advertise them a lot, but doing this removes some pressure I'd otherwise feel to write things up. At least the material is "published" now in some minimal sense.
Here's another conversation about math:
Topics include:
Classifying holomorphic line bundles on an abelian variety using their Riemann forms:
https://en.wikipedia.org/wiki/Abelian_variety
https://en.wikipedia.org/wiki/Riemann_form
Endomorphisms of abelian varieties, the Rosati involution, how to describe polarizations on an abelian variety as 'positive' endomorphisms, and the structure of the Neron-Severi groups tensored with the rational numbers as a Jordan algebra:
https://en.wikipedia.org/wiki/Neron-Severi_group
https://en.wikipedia.org/wiki/Rosati_involution
https://www.jmilne.org/math/CourseNotes/av.html
The Kronecker-Weber theorem and Kronecker's Jugendtraum: the relation between elliptic curves with complex multiplication and abelian extensions of imaginary quadratic fields:
https://en.wikipedia.org/wiki/Kronecker-Weber_theorem
https://en.wikipedia.org/wiki/Hilbert%27s_twelfth_problem
https://en.wikipedia.org/wiki/Complex_multiplication
https://en.wikipedia.org/wiki/Abelian_extension
https://en.wikipedia.org/wiki/Quadratic_field
I just finished editing This Week's Finds in Mathematical Physics (Weeks 51-100) and put it on the arXiv. It's 281 pages, mainly about quantum gravity, topological quantum field theory and n-categories, but also lots of other stuff. It has two "series" of posts in it - one about Coxeter groups, Dynkin diagrams and ADE classifications, and one called "The Tale of n-categories".
I would never have gotten this done without tons of help from @Tim Hosgood!
I think it's time for me to organize my thoughts about what I need and want to do next.
1) I need to work with @Kenny and @Christina Vasilakopoulou to deal with the referee's comments on Structured versus decorated cospans and get it published in Compositionality.
That's the most urgent.
2) By May 9th I want to submit a couple of papers or abstracts to Applied Category Theory 2022 - that's the deadline for submissions, for people who want to give talks! (By the way: all you folks should submit talks if you can!)
Nate Osgood, Xiaoyan Li, @Evan Patterson, @Sophie Libkind and I are going to write and submit a paper on our work on compositional epidemiology. The page limit is 14 pages, so we'll just have room to outline the whole strategy. By then Nate should have held a hackathon using the software we're creating, so we'll have some feedback about how well it works so far.
Then there's my paper with @Owen Lynch and @Joe Moeller on Compositional thermostatics. Owen says he wants to give a talk about this. But this paper is already submitted to Journal of Mathematical Physics, so he'll just write an abstract for a talk - 2 pages maximum.
Then there's the paper Structured versus decorated cospans. If either @Kenny or @Christina V wants to give a talk about that at ACT2022, that would be great! But if not, I can do it. Again this would require a 2-page abstract by May 9th.
3) I'm supposed to edit my old paper Rényi entropy and free energy and submit it to Entropy by July 15th. This was rejected by a journal of statistical physics years ago, but since then 77 papers have cited it. The journal Entropy asked me to submit a paper for their special issue Rényi Entropy: Sixty Years Later. They said this paper would be fine. But they said they were going to charge me 1000 Swiss francs to publish it! I said I'd only submit it if they waived that fee... and they said okay.
4) Luckily these are all the papers I need to write! But I'm also about 80% done with a paper "Second quantization for the Kepler problem". So I should work on this whenever I want to have fun doing good old mathematical physics.
Bravo on getting the next volume of TWF out!
There's also the little job of reading through something I sent you :-), but thankfully no writing is needed for that.
Oh, wow - I've procrastinated so long that I succeeded in temporarily forgetting that! :anguished: Let me bump it up the queue:
0) Check and submit corrections to From loop groups to 2-groups.
@Owen Lynch and I have completed a 4-part blog series on our paper with @Joe Moeller , Compositional thermostatics:
Part 1: thermostatic systems and convex sets.
Part 2: composing thermostatic systems.
Part 3: operads and their algebras.
Part 4: the operad for composing thermostatic systems.
Hopefully people who don't understand either operads or thermodynamics can follow this, though it'll be easier if you know about one!
Here's another conversation about math:
Topics include:
The factor of automorphy of an arbitrary holomorphic line bundle over a complex abelian variety:
https://en.wikipedia.org/wiki/Abelian_variety
http://www.bath.ac.uk/~masgks/abvars.pdf
Jordan algebras and the Rosati involution:
https://en.wikipedia.org/wiki/Jordan_algebra
https://en.wikipedia.org/wiki/Rosati_involution
Kronecker's Jugendtraum:
https://en.wikipedia.org/wiki/Hilbert's_twelfth_problem
I wrote a blog post about the classification of line bundles on complex tori:
The basic theorems in this subject are really simple and beautiful, so I've been having a lot of fun learning this stuff.
Here's another conversation about math:
It's more organized than the last one. Topics include:
Hyperbolic Coxeter groups and hyperbolic honeycombs coming from rings of algebraic integers:
https://en.wikipedia.org/wiki/Coxeter-Dynkin_diagram#Hyperbolic_Coxeter_groups
https://en.wikipedia.org/wiki/Ring_of_integers
https://www.cambridge.org/core/journals/canadian-journal-of-mathematics/article/quadratic-integers-and-coxeter-groups/CF262D475903A0104145D1294DA80EF9
Three examples:
1) The integers Z, the group PGL(2,Z), and the closely related Coxeter group {∞,3} and tiling of the hyperbolic plane:
https://en.wikipedia.org/wiki/Modular_group
https://en.wikipedia.org/wiki/Infinite-order_triangular_tiling
2) The Gaussian integers Z[i], the group PGL(2,Z[i]), and the closely related Coxeter group {4,4,3} and honeycomb in hyperbolic 3-space:
https://en.wikipedia.org/wiki/Gaussian_integer
https://en.wikipedia.org/wiki/Square_tiling_honeycomb
3) The Eisenstein integers Z[ω], the group PGL(2,Z[ω]), and the closely related Coxeter group {6,3,3} and honeycomb in hyperbolic 3-space:
https://en.wikipedia.org/wiki/Eisenstein_integer
https://en.wikipedia.org/wiki/Hexagonal_tiling_honeycomb
The classification of holomorphic line bundles on complex tori:
https://golem.ph.utexas.edu/category/2022/03/holomorphic_line_bundles_on_co.html
I blogged about my current obsession - line bundles on complex tori, especially abelian varieties:
The first part is an overview of some famously well-known stuff, while in the second I dip my toe into some original research. It's nothing deep, just trying to explain things I'm reading about in a slightly new way that clarifies some things.
I give 3 (or actually 5) equivalent descriptions of the Neron-Severi group of a complex torus X, which is the subgroup of H²(X,ℤ) that comes from holomorphic line bundles on X.
One key bit of "philosophy" is the equivalence between complex tori and complex vector spaces equipped with lattices. It's used a lot in the books I'm reading but I haven't seen it stated. It goes like this:
There's an equivalence between the category where objects are compact connected complex Lie groups and morphisms are holomorphic group homomorphisms,
and the category where objects are finite-dimensional complex vector spaces equipped with lattices and morphisms are complex-linear maps sending one lattice into the other.
Note that I'm not requiring abelianness of the group because it's automatic: any compact connected complex Lie group is a complex torus, meaning the quotient of a finite-dimensional complex vector space by a lattice. This is the hardest part of the claimed equivalence, but it's well-known.
This equivalence means that studying complex tori is a lot like studying finite-dimensional complex vector spaces! But they're no longer classified by dimension: there are tons of nonisomorphic ones of each dimension > 0.
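Under this equivalence, one of the standard descriptions of the Neron-Severi group (the version from the usual textbooks, stated here from memory, so see the blog post for the precise formulation) is in terms of Hermitian forms on the vector space:

```latex
\mathrm{NS}(X) \;\cong\; \bigl\{\, H \colon V \times V \to \mathbb{C} \ \text{Hermitian} \ \bigm|\ \operatorname{Im} H(\Lambda, \Lambda) \subseteq \mathbb{Z} \,\bigr\},
\qquad X = V/\Lambda .
```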
Some progress: @Kenny, @Christina Vasilakopoulou and I just submitted a revised version of Structured versus decorated cospans to Compositionality. There were 4 referees, each with a lot of comments to deal with. But the paper is better for taking these comments seriously.
My next little job involves a paper called Renyi entropy and free energy. I wrote this paper 11 years ago and never published it. I polished it up a bit yesterday, and now I need to put it into the format required for the journal Entropy, which is doing a special issue on Renyi entropy.
The Renyi entropy of a probability distribution is a generalization of the more familiar Shannon entropy: it showed up when Renyi tried to characterize Shannon entropy using a few axioms and found out that other kinds of entropy also obey those axioms!
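A quick numerical illustration of that last point (plain Python, my own sketch, not code from the paper): the Renyi entropy of order q approaches the Shannon entropy as q → 1.

```python
import math

def renyi_entropy(p, q):
    """Renyi entropy of order q (q > 0, q != 1) of a probability distribution p."""
    return math.log(sum(pi ** q for pi in p)) / (1 - q)

def shannon_entropy(p):
    """Shannon entropy, the q -> 1 limit of the Renyi entropy."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.5, 0.25, 0.125, 0.125]
# As q -> 1 the Renyi entropies approach the Shannon entropy.
for q in [2.0, 1.5, 1.1, 1.01, 1.001]:
    print(q, renyi_entropy(p, q))
print("Shannon:", shannon_entropy(p))
```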
Okay, I submitted my Renyi entropy paper today!
Btw, in regards to the :open_mouth: comment of @Morgan Rogers (he/him) and @Matteo Capucci (he/him) - you can add an extra axiom to pick out Shannon entropy, but it was actually much cooler that Renyi proposed axioms that only pick out a 1-parameter family of entropy functions, of which Shannon entropy is one!
That's cool! Is there an information geometry interpretation of this fact? Is that 1-parameter family a flow of metrics on the manifold of probability measures?
Someday people will understand the connection between Renyi entropy and quantum groups. Quantum groups are Hopf algebras that depend on a parameter q; when q = 1 they reduce to familiar Lie groups. Quantum groups are connected to "q-derivatives". The q-derivative of a function f is
(D_q f)(x) = (f(qx) − f(x)) / ((q − 1)x)
This reduces to the ordinary derivative in the limit q → 1. Kac and Cheung have a book Quantum Calculus showing how you can generalize a huge amount of calculus to q-derivatives.
Renyi entropy reduces to Shannon entropy when q → 1. And my paper shows that Renyi entropy is (essentially) minus the q-derivative of free energy with respect to temperature! It was well-known that ordinary entropy is minus the derivative of free energy with respect to temperature.
So something is happening here but I don't know what it is... to quote Bob Dylan.
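The q-derivative is easy to play with numerically. A minimal sketch (my own illustration, not code from anywhere above):

```python
def q_derivative(f, x, q):
    """(D_q f)(x) = (f(qx) - f(x)) / ((q - 1) x), defined for x != 0 and q != 1."""
    return (f(q * x) - f(x)) / ((q - 1) * x)

cube = lambda x: x ** 3

# D_q x^n = [n]_q x^(n-1), where [n]_q = (q^n - 1)/(q - 1) is a "q-integer".
# At q = 2, x = 1: [3]_2 = (8 - 1)/(2 - 1) = 7, so the q-derivative of x^3 is 7 there.
print(q_derivative(cube, 1.0, 2.0))

# As q -> 1 we recover the ordinary derivative 3x^2 (here 12 at x = 2).
for q in [1.5, 1.1, 1.01, 1.001]:
    print(q, q_derivative(cube, 2.0, q))
```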
Matteo Capucci (he/him) said:
That's cool! Is there an information geometry interpretation of this fact? Is that 1-parameter family a flow of metrics on the manifold of probability measures?
I'm not sure, but read this, because it's very related:
More progress: our revised paper Structured versus decorated cospans was accepted for publication in Compositionality!
John Baez said:
Quantum groups are Hopf algebras that depend on a parameter q; when q = 1 they reduce to familiar Lie groups. Quantum groups are connected to "q-derivatives". The q-derivative of a function f is (D_q f)(x) = (f(qx) − f(x)) / ((q − 1)x).
This reduces to the ordinary derivative in the limit q → 1. Kac and Cheung have a book Quantum Calculus showing how you can generalize a huge amount of calculus to q-derivatives.
Uh, this looks very cool! What's quantum about these groups? What's the dependency on q like? Is it in the sense of deformation quantization (with q = e^ℏ)?
John Baez said:
I'm not sure, but read this, because it's very related:
- Tom Leinster, The Fisher metric will not be deformed, The n-Category Cafe, May 5, 2018.
Great pointer, what a cool result!
What's quantum about these groups? What's the dependency on q like? Is it in the sense of deformation quantization (with q = e^ℏ)?
The most important thing about quantum groups is that they're not groups. They're Hopf algebras. Any group gives a cocommutative Hopf algebra. When you 'quantize' the group you get a non-cocommutative Hopf algebra. More precisely, you get a family of Hopf algebras depending on a parameter q that has the physical meaning of e^ℏ, where ℏ is Planck's constant. If you take ℏ = 0 you get the original group, or more precisely its corresponding cocommutative Hopf algebra.
This works for certain groups, like simple Lie groups.
Quantum groups first showed up in the 1980s when people realized that for certain classical physics problems with a group of symmetries, when you quantize the problem you have to quantize the group too!
And yes, this is deformation quantization.
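For concreteness, here is the textbook example (not quoted from the discussion above): the quantum group U_q(sl₂) is the algebra with generators E, F, K^{±1} and relations

```latex
K E K^{-1} = q^{2} E, \qquad
K F K^{-1} = q^{-2} F, \qquad
[E, F] \;=\; \frac{K - K^{-1}}{q - q^{-1}} ,
```

which at q = 1 (writing K = q^H) degenerate to the usual sl₂ relations [H,E] = 2E, [H,F] = −2F, [E,F] = H.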
Jim Dolan started wondering how some results for line bundles generalize to gerbes and n-gerbes, so I made a bunch of guesses about this and wrote them up, along with explanations of the basic ideas:
If I have enough energy I'll go further in the case of holomorphic gerbes on complex tori.
oh, interesting! i’ve been thinking about holomorphic gerbes and “sheaf gerbes” for the last few years, trying to use them to get a handle on chern classes in deligne cohomology!
i’ll have a read through of your post in the morning and see if there are any references you’re missing out on :)
Thanks! I could be missing out on tons of stuff. I have a list of "conjectures", some of which should be known (or known to be false).
There's a nice holomorphic gerbe in here: https://arxiv.org/abs/math/0601337, though it lives over a holomorphic stack (equivalently, it's an equivariant holomorphic gerbe)
Neat!
Tom Leinster and I applied for a Leverhulme Fellowship for me to visit him in Edinburgh for July-December this year and next. Yesterday we got it!
So, it looks like I'll be doing that. I'm hoping to go to ACT2022 in Strathclyde July 18-22, and later visit Joachim Kock in Copenhagen for a bit.
Today I tweeted about the vacuum Maxwell equations and how the space of solutions has a natural complex structure. This winds up giving the Hilbert space of a single photon its complex structure, but I didn't say that!
It starts here:
https://twitter.com/johncarlosbaez/status/1510276815859838983
Maxwell's equations become simpler in empty space, where there's no electric charge or current. The difference between the electric field and magnetic field disappears! These simpler equations describe how waves of light move through empty space. (1/n) https://twitter.com/johncarlosbaez/status/1510276815859838983/photo/1
- John Carlos Baez (@johncarlosbaez)

This comes after a series of tweets explaining Maxwell's equations... but if you're reading this on a laptop it's probably a lot easier to go here for those:
Another conversation with James Dolan.
The Appell-Humbert theorem classifying holomorphic line bundles on a complex torus:
https://math.dartmouth.edu/~lesen/notes/Appel-Humbert.pdf
and Ben-Bassat's analogous result for holomorphic gerbes:
https://arxiv.org/abs/0811.2746
The "belief method" for describing algebras of monads on the bicategory of locally presentable k-enriched categories, where k is a symmetric monoidal locally presentable category:
https://math.ucr.edu/home/baez/conversations/belief_method.pdf
Hey @John Baez , don't know if this is an appropriate place to ask so feel free to move it elsewhere but I just finished reading Dr. Eugenia Cheng's book X + Y: A Mathematician's Manifesto For Rethinking Gender.
Amongst other interesting ideas she presented, she talked about how you approach your research in a very "congressive" manner to borrow her terminology.
In it she described how you have, for a number of years, been putting together a sort of weekly digest for the public on what you have been studying in the realm of physics, in an attempt to make your physics-based research more accessible as well as to help you understand things more.
Could I hear more about how you have done this and made it a sustainable habit in your life?
I once tried something similar years ago but found myself seeing it more as a chore and difficult to maintain.
I would love to get something going again like this for my students and colleagues.
Thanks and hope you are having a great day!
Interesting! I'm good friends with Eugenia but I haven't read this book of hers so I didn't know she mentioned my series This Week's Finds.
I wrote it from 1993 to 2010, at which point I switched to thinking more about environmental issues for a while.
I still do similar things on a couple of blogs and on Twitter.
Basically explaining what I'm currently interested in has rarely felt like a chore to me. It's mostly other things - the work I "need" to do - that can feel like a chore.
I really enjoy trying to explain things clearly, which means avoiding as much as possible of the notation and terminology that makes it harder to read mathematics (though perhaps easier to write).
Because my explanations are easy to read, a lot of people thank me for them, and that increases my desire to do this sort of thing.
I've had little luck convincing other people to do this, so maybe I'm unusual in some way.
Hey John, thanks for the response!
Thanks for linking that archive!
These look great and something I could do!
Reading through them, I realize my issue may have been that I was also trying to write about things I didn't personally find interesting (i.e. writing to keep my friends updated about my life on top of other avenues of work).
I think that personal motivation piece is key that you mention.
I am going to try this with my lab group and students!
Thanks for the references and talking about your experience some.
P.S. Yes, Eugenia gave you a very warm acknowledgement in the book, describing you as a good mentor figure in her life!
Great! Yes, one key for me is that explaining things I'm currently trying to understand is very helpful to me, and I openly admit it when I'm not an expert so I don't feel too much pressure to be perfect.
Thanks to comments on the n-Cafe I was able to prove three theorems about the classification of holomorphic n-gerbes... well, one of these theorems I knew already, and it's almost a definition, but the other two are better:
The nicest one is the third: a classification of which smooth n-gerbes can be given a holomorphic structure.
On May 24-28 there will be a conference at Chapman University called Grothendieck's Approach to Mathematics organized by @Alexander Kurz. There are no talks listed there yet. When there are, I'll try to notice it and post something about the conference here.
I hope to see some of you there! I've decided to talk about something that I don't know much about, just because I think there's some very easy stuff to explain, which a lot of people haven't seen.
Underlying the Riemann Hypothesis there is a question whose full answer still eludes us: what do the zeros of the Riemann zeta function really mean? As a step toward answering this, André Weil proposed a series of conjectures that include a simplified version of the Riemann Hypothesis in which the meaning of the zeros becomes somewhat easier to understand. Grothendieck and others worked for decades to prove Weil's conjectures, inventing a large chunk of modern algebraic geometry in the process. This quest, still in part unfulfilled, led Grothendieck to dream of "motives": mysterious building blocks that could explain the zeros (and poles) of Weil's analogue of the Riemann zeta function. This talk by a complete amateur will try to sketch some of these ideas in ways that other amateurs can enjoy.
John Baez said:
On March 24-28 there will be a conference at Chapman University […]
That's May 24–28
Fixed. I've made mistakes about May versus March ever since I was a kid.
I finished preparing my talk for the Tutorial on the Categorical Semantics of Entropy, which will happen Wednesday May 11th:
I resubmitted my AMS Notices column on the Kuramoto-Sivashinsky equation, written with Cheyne Weis and Steve Huntsman.
This had gotten some flak from the referee, who seemed to think it should be a detailed review of modern research rather than a fun column.
Somehow based on this the editor decided I should spend more time explaining the "take-away message".
I don't quite understand, but I rewrote the piece to explain the key ideas a bit more clearly:
1) The Kuramoto-Sivashinsky equation is a simple example of a PDE exhibiting Galilean-invariant chaos with an arrow of time.
2) Solutions of this equation have visible patterns with very specific behaviors: there's a way to make this into a precise conjecture, and proving this would be a challenge for someone really good at PDE.
Here's a typical solution of the Kuramoto-Sivashinsky equation:
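For reference (sign conventions for this equation vary, so treat this form as an assumption on my part), the equation and the Galilean symmetry mentioned above:

```latex
u_t + u_{xx} + u_{xxxx} + u\,u_x = 0,
\qquad
u(t,x) \;\longmapsto\; u(t,\, x - ct) + c \quad \text{for any speed } c .
```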
John Baez said:
I finished preparing my talk for the Tutorial on the Categorical Semantics of Entropy, which will happen Wednesday May 11th:
This is very neat :blush:
Thanks! I realized that I never gave a talk that explains this stuff very well. I've given talks, but they weren't very clear.
@Evan Patterson , @Sophie Libkind , @Nathaniel Osgood , @Xiaoyan Li and I are almost done with our paper on compositional modeling of epidemiology using stock and flow diagrams. It's the most exciting paper I've written in quite a while because it's my first on applied category theory that comes with software... and it's software to help epidemiologists design models! The team is a great blend of mathematicians, computer scientists and epidemiologists, with most of us wearing two of those hats. (Not me.) The project could never have succeeded without that combination of expertise.
The paper should hit the arXiv next week.
In July a bunch of us, and @James Fairbanks, will teach a course on how to use this software. (But not me, since with any luck I'll be in Edinburgh.)
Sometime this summer Nate wants to set up a graphical interface for this software, to make it more competitive with existing software and hide all the math - the operads and structured cospans and all that.
I used to create lists to motivate myself to get stuff done when I was feeling overworked. Then I retired and quit needing to do that. Alas, in the last few weeks I've been back to feeling overworked. But I'm making progress... I will survive this. :sweat:
Progress:
1) Revising and resubmitting my AMS Notices column The Kuramoto-Sivashinsky equation. DONE.
2) Submitting an abstract for an ACT2022 talk on the paper with @Kenny and @Christina V about structured versus decorated cospans. DONE.
3) Finishing the paper for ACT2022 on "Compositional modeling with stock and flow diagrams" with @Sophie Libkind, @Xiaoyan Li, @Nathaniel Osgood and @Evan Patterson. ALMOST DONE, EVERYONE SHOULD SIGN OFF BY FRIDAY. I need to submit it by Monday May 9th, and Evan will submit it to the arXiv.
4) At 10 am Wednesday May 11th I'm giving a talk called Shannon entropy from category theory on Zoom, as part of John Terilla's meeting on entropy and category theory. SLIDES ARE DONE, I JUST NEED TO REMEMBER TO GIVE THE TALK.
5) I'm giving a talk at a conference exploring Grothendieck's legacy at Chapman University, organized by @Alexander Kurz. It'll be called "Motivating motives", and it's supposed to be a really gentle quasi-historical introduction to how the quest to understand the Riemann zeta zeros led Weil to his conjectures and Grothendieck to motives. I'll be giving this talk Wednesday May 25th at 9 am. TIME TO START WORK ON THIS, AFTER A LITTLE BREAK.
7) On Sunday May 29 - Saturday June 4, 2022 I'll be going to Buffalo and then upstate New York for a meeting of the AMS Mathematics Research Community (or "MRC") on Applied Category Theory, organized by @Simon Cho, @Daniel Cicala, @Valeria de Paiva, @Nina Otter and me. I'VE STARTED MEETING MY STUDENTS.
8) Sometime soon I want to start revising Schur functors and categorified plethysm with @Joe Moeller and @Todd Trimble, which was accepted subject to revisions by Higher Structures.
8) Makes me feel a little less bad about my diagonal argument paper, also accepted subject to revisions at Higher Structures, which I've been too busy to get back to yet!
Aha. Good luck! We've got a lot of revisions to make, but the referee seemed to understand what we were doing, so that's okay.
David Michael Roberts said:
8) Makes me feel a little less bad about my diagonal argument paper, also accepted subject to revisions at Higher Structures, which I've been too busy to get back to yet!
+1 here (got the report back from Higher Structures in January and still haven't revised :grimacing: )
The tutorial on the categorical semantics of entropy with my talk Entropy from category theory and @Tai-Danae Bradley's talk Entropy and operads is now on YouTube here.
My paper Renyi entropy and free energy was accepted by the journal Entropy.
The funny thing is that two referees liked it while the third seemed to completely misunderstand it... and when I corrected their misunderstandings, they wrote a scathing report featuring sentences like
This reviewer was embarrassed to estimate this paper. The mathematical statements presented here are basic and probably not the first report because even undergraduate students can derive them. Ideas are easy to come up with and have no novelty.
and
If what is shown here is not clearly defined as thermodynamic entropy or potential, it is merely a report of a rudimentary student in mathematics.
Ah the dream -- explaining your work so well that the reader believes they could have done the work!
So, I wrote to the editor quoting these sentences and pointed out that 78 people have cited this paper already. And it worked.
Just one data point... I don't get truly nasty referee reports very often.
John Baez said:
This reviewer was embarrassed to estimate this paper. The mathematical statements presented here are basic and probably not the first report because even undergraduate students can derive them. Ideas are easy to come up with and have no novelty.
This person must be immensely detached from undergrad teaching to think 'even an undergraduate student can derive them'. I mean, probably if they are guided. Just knowing all the maths mentioned in the paper would be impressive for an undergrad, imo.
Ridiculous refereeing...
Matteo Capucci (he/him) said:
Ridiculous refereeing...
This is the part where I remind everyone that:
John Baez said:
- On Thursdays I'm meeting with Joe Moeller and Todd Trimble. We're talking about things like lambda-rings, Adams operations, the field with one element, and the Riemann Hypothesis.
Eight months later, what's the status on that lil proposition?
The main outcome so far was this paper, which basically categorifies the theory of the free lambda-ring on one element, also known as the algebra of symmetric functions:
We've been slowly gearing up for the next phase of the attack, and we have a lot of nice results brewing, connected to Adams operations - but right now we have to deal with the referee's comments on this paper!
I'd been invited to give a 25-minute virtual talk at a SIAM conference - where SIAM is the Society for Industrial and Applied Mathematics. Now I've been told that because I'm a nonmember, registering to give this virtual talk will cost me $515.
If I were a member it would "only" cost $390.
I hope you're refusing to do it under these circumstances?
Yes, I withdrew my talk.
Some other people have too, including the ecologist John Harte at Berkeley, who wrote:
Dear Punit,
Originally I thought that the fee for virtual presentation was $160, but now I see that it is much much higher! I am sorry to have to tell you that I will not pay that. I have no grant to cover it. As others have observed, that is an outrageous amount of money to ask of someone who is only going to give a 20 minute presentation and not attend in person. So please withdraw my talk.
At universities, I get travel expenses and an honorarium for giving a longer talk on the same topic! I understand the reason to charge for actual attendance, but to do so for virtual participation is unacceptable to me.
Perhaps you could pass these and other’s comments on to SIAM so that in the future they set more reasonable fee requirements.
Stay well,
John
I did the same recently, but after some discussion between organizers, the fee was removed. hopefully the same will happen in this case, it's preposterous!
Valeria de Paiva said:
I did the same recently, but after some discussion between organizers, the fee was removed. hopefully the same will happen in this case, it's preposterous!
Even if the fee gets removed they deserve utmost shame for having thought the unthinkable.
Yes. I think it's good that at least 3 of the speakers pulled out, and at least 2 of us cc'ed all the other speakers - as well as the organizer, of course.
I should probably tweet about this to amplify the effect.
Done.
Some happier news:
I've been tweeting about the number ϖ, which is a lot like π, but algebraically independent from π. I don't know why nobody told me about this number!
I just learned that there's a number ϖ that's a lot like π. Just as 2π is the circumference of the unit circle, 2ϖ is the perimeter of a curve called Bernoulli's lemniscate. There's even a whole family of functions resembling trig functions with period 2ϖ! (1/n) https://twitter.com/johncarlosbaez/status/1534925308712787969/photo/1
- John Carlos Baez (@johncarlosbaez)

There are "lemniscate elliptic functions" with period 2ϖ, which act a lot like trig functions. They are really a special case of the Jacobi elliptic functions:
The lemniscate sine and cosine functions obey mutant versions of the usual trig identities! For example: sl² x + cl² x = 1 - sl² x cl² x and they have derivatives sl' x = - (1 + sl² x) cl x cl' x = - (1 + cl² x) sl x It's easier to define their inverses: (3/n) https://twitter.com/johncarlosbaez/status/1534930798117150720/photo/1
- John Carlos Baez (@johncarlosbaez)

This is all a spinoff of some conversations with Jim Dolan about elliptic curves.
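Gauss's link between ϖ and the arithmetic-geometric mean makes the number easy to compute: M(1, √2) = π/ϖ. A small check in Python (my own sketch; the Γ(1/4) formula is a standard identity, included as a cross-check):

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b: iterate the two means to a common limit."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

# Gauss's discovery: M(1, sqrt(2)) = pi / ϖ, so:
varpi = math.pi / agm(1.0, math.sqrt(2.0))
print(varpi)  # the lemniscate constant, about 2.62206

# Standard cross-check: ϖ = Γ(1/4)² / (2 √(2π)).
varpi_gamma = math.gamma(0.25) ** 2 / (2 * math.sqrt(2 * math.pi))
print(varpi_gamma)
```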
John Baez said:
I've been tweeting about the number ϖ, which is a lot like π, but algebraically independent from π. I don't know why nobody told me about this number!
Where is the information about algebraic independence from π? I ask because usually it's very hard to establish that two numbers are algebraically independent.
I said more about this in my tweets. Wikipedia gives two references:
G. V. Choodnovsky: Algebraic independence of constants connected with the functions of analysis, Notices of the AMS 22, 1975, p. A-486
G. V. Chudnovsky, Contributions to The Theory of Transcendental Numbers, American Mathematical Society, 1984, p. 6
I looked up the first, which is free online, but it's just a short talk abstract by "G. Choodnovsky", who is listed as being from "Kiev, USSR".
So, I hope the second one, a book, contains a proof. But I haven't read it!
Huh, still didn't see it in the tweets! But that's okay; I found it in Wikipedia under Arithmetic-Geometric Mean. Clicking on Gauss's constant, I see the "Wallis-like formula"
and I'll bet one could relate this to the Gamma function, e.g., ϖ = Γ(1/4)²/(2√(2π)), along the lines of a recent Cafe comment. ;-)
Well, I took a peek at the Chudnovsky paper, which is not an abstract (Google Books). The paper seems to be a summary of some of the state of the art in transcendence theory, so it doesn't really give proofs, but it does give some ideas and pointers to the literature. It looks like it involves pretty deep elliptic curve theory! I'll give a sample that looks relevant to the bit I asked about (bottom of page 6):
Then he says, "We can see exactly what is, thanks to the Selberg-Chowla formula. We get from this formula and Corollary 2 that for the discriminant of a quadratic imaginary field and ,
and are algebraically independent."
Well, it's a little confusing to me because I thought discriminants of imaginary quadratic fields would be negative, for example the discriminant of ℚ(i) is −4, but if we close our eyes to the sign and consider d = 4, then I suppose the product above is
where
and so by these results, Γ(1/4) and π would be algebraically independent, and therefore Gauss's constant and the lemniscate constant would also be algebraically independent of π. But that's probably as far down this rabbit hole as I'll go.
During the Vietnam war, Grothendieck taught math to the Hanoi University mathematics department staff, out in the countryside. One of them named Hoàng Xuân Sính took notes. She later did a PhD with Grothendieck — by correspondence! She mailed him her hand-written thesis, and later went to Paris to defend her dissertation.
Besides Grothendieck, some other amazing mathematicians were on her thesis committee: Laurent Schwartz, Henri Cartan and Michel Zisman of Gabriel–Zisman fame!
I wrote a series of tweets about her, and then extended these tweets into this longer article:
The video of my talk on motives is on YouTube now:
Motivating motives
Underlying the Riemann Hypothesis there is a question whose full answer still eludes us: what do the zeros of the Riemann zeta function really mean? As a step toward answering this, André Weil proposed a series of conjectures that include a simplified version of the Riemann Hypothesis in which the meaning of the zeros becomes somewhat easier to understand. Grothendieck and others worked for decades to prove Weil's conjectures, inventing a large chunk of modern algebraic geometry in the process. This quest, still in part unfulfilled, led Grothendieck to dream of "motives": mysterious building blocks that could explain the zeros (and poles) of Weil's analogue of the Riemann zeta function. This talk by a complete amateur will try to sketch some of these ideas in ways that other amateurs can enjoy.
You can see the talk slides as webpages here or as a PDF file here.
It's finally here: software that uses category theory to let you build models of dynamical systems! We're going to train epidemiologists to use this to model the spread of disease. My first talk on this will be Wednesday June 29 at 17:00 UTC. You're invited!
Decorated cospans are a general framework for composing open networks and mapping them to dynamical systems. We explain this framework and illustrate it with the example of stock and flow diagrams. These diagrams are widely used in epidemiology to model the dynamics of populations. Although tools already exist for building these diagrams and simulating the systems they describe, we have created a new software package called StockFlow which uses decorated cospans to overcome some limitations of existing software. Our approach cleanly separates the syntax of stock and flow diagrams from the semantics they can be assigned. We have implemented a semantics where stock and flow diagrams are mapped to ordinary differential equations, although others are possible. We illustrate this with code in StockFlow that implements a simplified version of a COVID-19 model used in Canada. This is joint work with Xiaoyan Li, Sophie Libkind, Nathaniel Osgood and Evan Patterson.
You can attend live on Zoom if you click here. You can also watch it live on YouTube, or later recorded, here:
My talk is at a seminar on graph rewriting, so I'll explain how the math applies to graphs before turning to 'stock-flow diagrams', like this one here:
Stock-flow diagrams are used to create models in epidemiology. There's a functor mapping them to dynamical systems.
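To make that concrete, here's a hand-rolled Python sketch - not the actual StockFlow package, and the SIR rates and numbers are made up - of the basic idea: a stock and flow diagram is *data* (stocks plus flows with rate functions) that compiles to an ODE vector field.

```python
# A hand-rolled sketch (not the actual StockFlow package; rates and numbers
# are made up) of the idea: a stock and flow diagram is data - stocks plus
# flows with rate functions - that compiles to an ODE vector field.

def compile_to_ode(stocks, flows):
    """Turn a stock-flow diagram into a vector field f with dx/dt = f(x)."""
    def f(state):
        deriv = {s: 0.0 for s in stocks}
        for source, target, rate in flows:
            r = rate(state)
            deriv[source] -= r
            deriv[target] += r
        return deriv
    return f

# A toy SIR model as a stock-flow diagram.
beta, gamma = 0.3, 0.1
stocks = ["S", "I", "R"]
flows = [
    ("S", "I", lambda x: beta * x["S"] * x["I"] / sum(x.values())),  # infection
    ("I", "R", lambda x: gamma * x["I"]),                            # recovery
]
f = compile_to_ode(stocks, flows)

# Simulate with forward Euler; each flow subtracts from its source and adds
# to its target, so the total population is conserved by construction.
state = {"S": 990.0, "I": 10.0, "R": 0.0}
dt = 0.1
for _ in range(1000):
    d = f(state)
    state = {s: state[s] + dt * d[s] for s in stocks}
print(state)
```

The point of separating `compile_to_ode` from the diagram data is exactly the syntax/semantics split mentioned in the abstract: the same diagram could be mapped to other semantics.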
But the key idea in our work is 'compositional modeling'. This lets different teams build different models and then later assemble them into a larger model. The most popular existing software for stock-flow diagrams does not allow this. Category theory to the rescue!
This work would be impossible without the right team! @Brendan Fong developed decorated cospans and then started the Topos Institute. My coauthors @Evan Patterson and @Sophie Libkind work there, and they know how to program using category theory.
Evan started a seminar on epidemiological modeling - and my old grad school pal @Nathaniel Osgood showed up, along with his grad student @Xiaoyan Li! Nate is a computer scientist who now runs the main COVID model for the government of Canada.
So, all together we have serious expertise in category theory, computer science, and epidemiology. Any two parts alone would not be enough for this project.
And I'm not even listing all the people whose work was essential. For example, @Kenny Courser and @Christina Vasilakopoulou helped modernize the theory of decorated cospans in a way we need here. @James Fairbanks, Evan and others designed AlgebraicJulia, the software environment that our package StockFlow relies on. And so on!
So, it became clear to me that to apply category theory to real-world problems, you need a team.
And we're just getting started!
I am probably not seeing it, but where is the code for these compositional models you show in the pictures? (Are those from a GUI, or constructed by hand, etc.?)
Most of the pictures in my slides are drawn by hand. I was mainly explaining the math, not the software based on that math.
The massive COVID model is a screenshot of a program in AnyLogic, currently the most popular system for creating stock-flow models.
In future talks I want to say more about software, and @Nathaniel Osgood is designing a graphical user interface for StockFlow, so we'll have fun pictures to show - or even better, I'll be able to demonstrate the software in my talk. With luck some version will be done in time for my ACT2022 talk sometime July 18-22.
Interesting, thanks for sharing!
@Owen Lynch got his master's degree at the University of Utrecht! His thesis is called Relational Composition of Physical Systems: A Categorical Approach. I was his de facto advisor while Wioletta Ruszel was his official advisor.
He'll put it on the arXiv eventually. But he wants to improve it a bit first. So:
Does anyone know a useful abelian category that includes the category of smooth vector bundles on a manifold as a full subcategory, much as the category of coherent sheaves includes the category of holomorphic vector bundles on a smooth complex variety?
Now that that's done, here is my current to-do list:
Write my next column for the AMS Notices by July 15th 2022.
Write my talk on modeling in epidemiology for ACT2022, which starts July 18th. This talk is about the paper Compositional modeling with stock and flow diagrams with @Xiaoyan Li, @Sophie Libkind, @Nathaniel Osgood and @Evan Patterson. I've created the slides, but Nathaniel thinks he'll have a graphical user interface working by the time ACT2022 starts. So I'm hoping to add a little demo!
Shooting from the hip, does the category of C^∞(M)-modules work?
That sounds interesting! Since I don't have a crisp characterization of what it means to "work", I'll have to actually try using it for the scheme I have in mind, and see what happens.
For those who are wondering, vector bundles correspond to finitely generated projective C^∞(M)-modules (at least when M is compact, and more generally they at least give modules), so it makes a lot of sense to embed the category of vector bundles faithfully in the category of arbitrary C^∞(M)-modules, or maybe finitely generated ones.
There's a curious book called Smooth Manifolds and Observables that somehow ended up on my shelf, which does a lot of manifold theory from the perspective of C^∞(M).
My book Gauge Fields, Knots and Gravity introduces vector fields and differential forms algebraically, starting from C^∞(M).
I was also gung-ho on noncommutative differential geometry as a youth, and that made me think about doing differential geometry starting from an algebra.
But for some bizarre reason I've never thought much about general C^∞(M)-modules!
Better link for the book in Mike's comment: https://doi.org/10.1007/978-3-030-45650-4
More official, but less informative?
I wrote a 2-page column for the AMS Notices:
I'd appreciate comments! It's written for a general audience of mathematicians, so I wanted to keep the category theory quite easy; I don't really want ways to make it harder or deeper. What I mainly hope is that it makes sense and seems intriguing.
As you can see, some of it is secretly an advertisement for the Yoneda embedding. But the first half is trying to get nonexperts to think a bit more about duality.
Very nice, John! A small typo in reference 4:
D. Pavlovic and D. J. D. ]Hughes
the extra bracket.
another typo: "to be the opposite of the category of commutative rings"
and "Gelfand–Naimark duality says the opposite of the category".
So, Isbell duality is the statement that certain two functors are adjoint? You say that "Isbell [2] noticed a wonderful link", but don't quite say what that link is.
John Baez said:
Stock-flow diagrams are used to create models in epidemiology. There's a functor mapping them to dynamical systems.
But the key idea in our work is 'compositional modeling'. This lets different teams build different models and then later assemble them into a larger model. The most popular existing software for stock-flow diagrams does not allow this. Category theory to the rescue!
This is interesting.
I'm working on something very similar. Is it possible to get a job position to work on a problem like this? Is there a way to connect with these authors? Are they university students or industry workers? Anyone working on computer implementations in particular?
John Baez said:
I wrote a 2-page column for the AMS Notices:
Love it! Here are my two cents on it:
@John Baez This is great, thank you very much!
Nice!
I would reverse the order of the two categories in the sentence:
"For example, Gelfand–Naimark duality says the opposite of [the] category of commutative C∗-algebras is the category of compact Hausdorff spaces", in order to maintain the stated pattern "the opposite of spaces is rings".
Thanks, @Steve Awodey, @Tobias Schmude, @Simon Burton and @Valeria de Paiva for your helpful comments!
Simon Burton said:
So, Isbell duality is the statement that certain two functors are adjoint? You say that "Isbell [2] noticed a wonderful link", but don't quite say what that link is.
I thought I did: everything after this sentence was the link. But I'll come out and claim that Isbell duality is the statement that two functors are adjoint.
By the way, I doubt Isbell said it that clearly. I get the feeling that Isbell's original paper was rather confusing by modern standards: Avery and Leinster point out that it was written just a couple years after adjoint functors were defined, and he only handled a special case.
Peiyuan Zhu said:
I'm working on something very similar. Is it possible to get a job position to work on a problem like this? Is there a way to connect with these authors? Are they university students or industry workers? Anyone working on computer implementations in particular?
It's pretty easy to find out where all these authors work and talk to them about these things.
Steve Awodey said:
Nice!
I would reverse the order of the two categories in the sentence:
"For example, Gelfand–Naimark duality says the opposite of [the] category of commutative C∗-algebras is the category of compact Hausdorff spaces", in order to maintain the stated pattern "the opposite of spaces is rings".
Thanks! I'm a big fan of parallel prose, so I'm shocked that I didn't do it right.
Tobias Schmude said:
John Baez said:
I wrote a 2-page column for the AMS Notices:
Love it! Here are my two cents on it:
- I'd say the "out of it" and "into it" are swapped, right?
Yikes! I got it backward. I must have been really out of it.
(English slang is pretty confusing, but that was a pun.)
It might confuse the reader that Isbell duality is only an adjunction, while the previous examples are equivalences. Maybe the example of not necessarily finite dimensional vector spaces might prepare the reader for that, as that of fdvs was already given?
Wow, that's a great point. Instead of doing it earlier - since I don't want to talk about adjunctions near the start - I will do it right after Isbell duality. This will connect Isbell back to the discussion of vector spaces in a nice way, circling back to the start of the paper. :+1:
Thanks, everyone, for helping improve my paper. Normally I'd acknowledge you in the paper but this is a short column and I don't think they'll let me have Acknowledgements.
Here is the new version:
I've started teaching a "course" on entropy and information on Twitter. I realized that Twitter polls make it easy to give "homework" or "quizzes".
We'll see how it goes. I like the idea because it subverts the usual use of twitter and lets thousands of people easily find and take a course.
https://twitter.com/johncarlosbaez/status/1545058735114006540
Here is the simplest link between probability and information: When you learn that an event of probability p has happened, how much information do you get? We say it's -log p. Use whatever base for the logarithm you want; this is like a choice of units. (1/n) https://twitter.com/johncarlosbaez/status/1545058735114006540/photo/1
- John Carlos Baez (@johncarlosbaez)
This is really a nice idea!
Coming from a field that does quite a bit of "research" on Twitter, I think you may be surprised by how this turns out. Also, breaking up complex topics into small bits of information that fit in a tweet counteracts very nicely the overall shrinkage of our attention span in the age of social media :grinning:
"you may be surprised by how this turns out"
Do you have something in mind - a specific sort of good or bad surprise? Or are you just saying it's unpredictable?
In crypto, overwhelmingly good. People tweet things like "oh, we could use this cryptographic primitive to do this", some other person replies, a giant tweet thread ensues and one year later you have a new protocol coming up based on that thing
Probably it was a bit more frequent a few years ago, now people tend to be a bit more jealous of their ideas because of the economic exploitability. But still...
In general I think you won't get bad surprises. Sure, there's the average tweet troll and stuff like that, but it's nothing new. But I find it at least vaguely possible that you may end up grabbing the curiosity of some very skilled/gifted individuals that lurk on twitter and yet have never been formally exposed to these topics. Which could even result in some kind of contributions
I'm hoping that maybe some people who like to code will jump in later and compute some entropies when we get to problems where it's a bit harder to do it by hand.
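For anyone who does want to jump in with code: here's a tiny Python sketch (my own illustration, not part of the course) of the starting formula from the first tweet - the information of an event of probability p is -log p - together with Shannon entropy as its expected value.

```python
import math

def information(p, base=2):
    """Information gained on learning that an event of probability p
    happened: -log p, in units fixed by the base (base 2 gives bits)."""
    return -math.log(p, base)

def entropy(dist, base=2):
    """Shannon entropy: the expected information of a distribution."""
    return sum(p * information(p, base) for p in dist if p > 0)

print(information(0.5))      # 1.0 - a fair coin flip carries one bit
print(entropy([0.5, 0.5]))   # 1.0
print(entropy([0.25] * 4))   # 2.0 - four equally likely outcomes
```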
My point is: Who knows! For sure this is not the average crowd that learns this stuff. It is worth trying :grinning:
Yes, this sort of stuff is also likely
John Baez said:
I'm hoping that maybe some people who like to code will jump in later and compute some entropies when we get to problems where it's a bit harder to do it by hand.
There's also a non-zero probability that they will code the algorithm in minecraft tho :stuck_out_tongue:
https://www.youtube.com/watch?v=1X21HQphy6I
Here are a bunch more conversations with James Dolan:
2022-04-04:
Ben-Bassat's version of the Néron–Severi group for holomorphic gerbes:
https://arxiv.org/abs/0811.2746
Generalizations of the Jacobian, the Picard group and the Néron–Severi group from holomorphic line bundles to holomorphic n-gerbes:
https://golem.ph.utexas.edu/category/2022/03/holomorphic_gerbes.html
https://golem.ph.utexas.edu/category/2022/04/holomorphic_gerbes_part_2.html
An application of the "belief method" to commutative quantales (which are cocomplete symmetric monoidal posets):
https://math.ucr.edu/home/baez/conversations/belief_method.pdf
2022-04-11:
https://www.youtube.com/4kXlQH1PCMk
Using sheaf cohomology to generalize the Jacobian, the Picard group and the Néron–Severi group from holomorphic line bundles to holomorphic n-gerbes:
https://golem.ph.utexas.edu/category/2022/03/holomorphic_gerbes.html
https://golem.ph.utexas.edu/category/2022/04/holomorphic_gerbes_part_2.html
The belief method applied to the doctrine of symmetric monoidal locally presentable k-linear categories, which gives "algebro-geometric theories":
https://math.ucr.edu/home/baez/conversations/belief_method.pdf
The "theory of an object" is the category of k-linear species with its Cauchy tensor product:
https://ncatlab.org/nlab/show/species
The theory of an object whose exterior cube is the initial object. The representations of GL(2), SL(2) and related groups:
https://en.wikipedia.org/wiki/M%C3%B6bius_transformation
2022-04-18:
Young diagrams and representations of the general linear group GL(N):
https://en.wikipedia.org/wiki/Young_tableau
https://en.wikipedia.org/wiki/General_linear_group
the special linear group SL(N):
https://en.wikipedia.org/wiki/Special_linear_group
and the projective general linear group PGL(N):
https://en.wikipedia.org/wiki/Projective_linear_group
all illustrated in the case N = 2.
2022-05-18:
A basic introduction to motives, a warmup for this talk on YouTube:
https://www.youtube.com/watch?v=jkPOznK2j00
and also here, as a website:
http://math.ucr.edu/home/baez/motives/
For more, see J. S. Milne's expository articles "Motives: Grothendieck's dream"
https://www.jmilne.org/math/xnotes/mot.html
and "The Riemann Hypothesis over finite fields: from Weil to the present day":
https://www.jmilne.org/math/xnotes/pRH.html
2022-05-23:
Counting points on elliptic curves, motives, and supersingular elliptic curves over finite fields, which are those with a quaternion algebra of endomorphisms:
https://en.wikipedia.org/wiki/Supersingular_elliptic_curve
https://en.wikipedia.org/wiki/Quaternion_algebra
Getting abelian varieties from cyclotomic fields, using the example of the 20th cyclotomic field:
https://en.wikipedia.org/wiki/Cyclotomic_field
2022-06-06:
The moduli stack of real elliptic curves. Generalizing Kronecker's Jugendtraum from elliptic curves to abelian varieties:
https://en.wikipedia.org/wiki/Hilbert%27s_twelfth_problem
Manin's "Alterstraum" and elliptic curves with real multiplication:
https://arxiv.org/abs/math/0202109
Heegner numbers:
https://en.wikipedia.org/wiki/Heegner_number
2022-06-13:
The category of pure numerical motives (with rational coefficients) over the field with q elements. The classification of simple objects in this category. The absolute Galois group of the rationals acts on the set of Weil q-numbers, and assuming the Tate Conjecture, simple objects correspond to orbits of this action. This is Proposition 2.6. in Milne's "Motives over finite fields'':
https://www.jmilne.org/math/xnotes/Seattle1.html
I got the definition of Weil q-number wrong. Given a prime power q, a Weil q-number is a complex number z such that:
1) for some integer n, q^n z is an algebraic integer,
2) for some integer m, |gz| = q^{m/2} for every element g in the absolute Galois group of the rationals.
The point, again, is that the absolute Galois group of the rationals acts on the set of Weil q-numbers, and the set of orbits is isomorphic to the set of simple numerical motives over F_q. This is Proposition 2.6 in Milne's "Motives over finite fields'':
https://www.jmilne.org/math/xnotes/Seattle1.html
2022-06-20:
Correcting the definition of Weil q-number, which is explained in Definition 2.5 of Milne's "Motives over finite fields":
https://www.jmilne.org/math/xnotes/Seattle1.html
Toposes in number theory. The topos of species. Dirichlet species:
https://ncatlab.org/johnbaez/show/Dirichlet+species+and+the+Hasse-Weil+zeta+function
Set-valued functors on the groupoid or category of finite fields of characteristic p. The topos of actions of the Galois group.
2022-06-27:
The exponential sheaf sequence of a complex abelian surface X:
https://en.wikipedia.org/wiki/Exponential_sheaf_sequence
and how the corresponding long exact sequence in sheaf cohomology lets us describe the Néron-Severi group
https://en.wikipedia.org/wiki/N%C3%A9ron%E2%80%93Severi_group
as the image of H¹(X,𝒪*) in H²(X,Z), or equivalently the kernel of the map from H²(X,Z) to H²(X,𝒪). When the rank of the Néron-Severi group is maximal, the image of the map from H²(X,Z) to H²(X,𝒪) is a lattice in a complex vector space - but generically it appears to be a dense subgroup.
How the classification of simple objects in the category of pure numerical motives changes when we use the algebraic closure of the rationals as coefficients, rather than the rationals. This is Proposition 2.21 in Milne's "Motives over finite fields":
https://www.jmilne.org/math/xnotes/Seattle1.html
I said tensoring irreps of R amounts to multiplication of real numbers, but it's really addition.
2022-07-04:
The exponential sheaf sequence:
https://en.wikipedia.org/wiki/Exponential_sheaf_sequence
and a detailed study of the map from H²(X,Z) to H²(X,𝒪) when X is an abelian surface. We can work out this map and its image explicitly in examples. Here we do the easiest case, when X is the product of two identical elliptic curves, each being the complex numbers mod the lattice of Gaussian integers:
https://en.wikipedia.org/wiki/Gaussian_integer
We see that in this case the image of the map from H²(X,Z) to H²(X,𝒪) is a lattice in a 2d complex vector space.
A polarization on an abelian variety X = V/L puts a positive definite hermitian form on V, and this lets us describe the Néron-Severi group in terms of self-adjoint operators on V. This clarifies how a polarization puts a Jordan algebra structure on the Néron-Severi group tensored with the reals.
2022-07-11:
Classifying complex abelian surfaces in order of genericity, and the ranks of their Néron-Severi groups:
https://en.wikipedia.org/wiki/Abelian_surface
https://en.wikipedia.org/wiki/N%C3%A9ron%E2%80%93Severi_group
The generic case gives a Néron-Severi group of rank 1, the cartesian product of two distinct elliptic curves gives rank 2, the cartesian square of a generic elliptic curve gives rank 3 and the cartesian square of an elliptic curve with complex multiplication gives rank 4.
To understand this, we should look at the endomorphism ring of the abelian surface and tensor it with the reals, giving the "endomorphism algebra". Upon picking a polarization, this algebra gets a Rosati involution:
https://en.wikipedia.org/wiki/Rosati_involution
which makes the algebra into a star-algebra, allowing us to split endomorphisms into a self-adjoint and skew-adjoint part.
The moduli stack of elliptic curves:
https://en.wikipedia.org/wiki/Moduli_stack_of_elliptic_curves
and the associated Coxeter group. How can we generalize this to higher-dimensional principally polarized abelian varieties? How do modular curves generalize to the higher-dimensional case, giving Siegel modular varieties?
https://en.wikipedia.org/wiki/Siegel_modular_variety
My latest conversation with James Dolan was nice because I finally built up my computational abilities to the point of solving a puzzle we'd chatted about repeatedly. James was puzzled by how the map sending smooth complex line bundles on a complex variety to elements of the vector space H²(X,𝒪) could have an image that's "dust-like" — dense, but not the whole space. I finally did the calculation and showed that this really happens in the example we were considering! We still need to think about the consequences.
2022-07-18:
We work out the map from H²(X,Z) to H²(X,𝒪) when X is the product of two elliptic curves, namely the complex numbers mod the Gaussian integers and the complex numbers mod the Eisenstein integers:
https://en.wikipedia.org/wiki/Gaussian_integer
https://en.wikipedia.org/wiki/Eisenstein_integer
We see that in this case the image of the map from H²(X,Z) to H²(X,𝒪) is an abelian subgroup of rank 4 in a 1-dimensional complex vector space - thus, dense in this vector space.
Six steps of understanding Kronecker's Jugendtraum. First, the patterns of how primes split in abelian extensions of the rational numbers:
https://en.wikipedia.org/wiki/Splitting_of_prime_ideals_in_Galois_extensions
https://en.wikipedia.org/wiki/Abelian_extension
https://en.wikipedia.org/wiki/Quadratic_reciprocity
https://en.wikipedia.org/wiki/Reciprocity_law
See also B. F. Wyman's paper "What is a reciprocity law?":
https://www.jstor.org/stable/2317083
Second, the Kronecker–Weber theorem saying that every abelian extension of the rationals is contained in a cyclotomic field:
https://en.wikipedia.org/wiki/Cyclotomic_field
https://en.wikipedia.org/wiki/Kronecker%E2%80%93Weber_theorem
Third, the correspondence between subfields of the nth cyclotomic field and subgroups of its Galois group:
https://en.wikipedia.org/wiki/Galois_theory
Fourth, relativization: generalizing these ideas to abelian extensions of arbitrary number fields:
https://en.wikipedia.org/wiki/Abelian_extension
Fifth: complications due to nonprincipal ideals. Artin reciprocity:
https://en.wikipedia.org/wiki/Artin_reciprocity_law
Sixth, Kronecker's Jugendtraum:
https://en.wikipedia.org/wiki/Hilbert%27s_twelfth_problem
I'm giving a talk at BLAST sometime August 8-12. Since I don't really work on "Boolean algebras, Lattices, Algebraic and quantum logic, universal algebra, Set theory, set-theoretic or point-free Topology" I had to dream up a topic.
I seem to have become the sort of figure who people think can talk entertainingly to any audience. I guess some people in this situation just talk about whatever they want, even if it's not connected to the topic of the conference?
I decided that since quantales are on topic for this conference, and I know a cute way to get quantales from Petri nets, I should talk about all the things you can get from Petri nets, and how they're related.
I've been wanting to write about this for a while, but unfortunately I haven't yet so this will be sort of rough.
Here's a question for everyone. Suppose you have a partially ordered commutative monoid - that is, a commutative monoid object in the category of posets. Do you know any cool functors you can apply to it, to get interesting things?
I know how to turn it into a quantale. But what else?
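For concreteness, here's a little Python check of what I'd guess is the construction meant here (my assumption - the standard downset quantale): given a partially ordered commutative monoid M, the down-closed subsets of M, ordered by inclusion, form a commutative quantale where joins are unions and the product of two downsets is the down-closure of their pointwise products.

```python
from itertools import chain, combinations

# Toy partially ordered commutative monoid: {0,1,2} with saturating
# addition and the usual order. (My assumption: the "quantale from a
# partially ordered commutative monoid" construction is the downset quantale.)
M = [0, 1, 2]

def mult(x, y):
    return min(x + y, 2)          # saturating addition: commutative, monotone

def down(S):
    """Down-closure of a subset S of M."""
    return frozenset(x for x in M if any(x <= s for s in S))

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Elements of the quantale: the down-closed subsets of M.
Q = {down(S) for S in subsets(M)}

def qmult(X, Y):
    """Quantale multiplication: down-closure of the pointwise products."""
    return down({mult(x, y) for x in X for y in Y})

# The quantale law: multiplication preserves arbitrary joins (here, unions),
# including the empty join.
for X in Q:
    for fam in subsets(sorted(Q, key=lambda s: sorted(s))):
        join = frozenset().union(*fam)
        assert qmult(X, join) == frozenset().union(*(qmult(X, Y) for Y in fam))

print(len(Q), "downsets; join-preservation checked")
```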
Putting it this way, I just thought of taking the nerve and getting a commutative monoid in simplicial sets.
So then you could group complete and that would be a potentially interesting object
Yes! Now I've asked about this on MathOverflow:
I conjecture that you can get all connective Eilenberg-Mac Lane spectra. Tim Campion shows you can only get Eilenberg-Mac Lane spectra, and at first he claimed you could get all of them, but then he retreated and said you could get all 's (which is pretty easy: just use abelian groups as your partially ordered commutative monoids).
John Baez said:
Yes! Now I've asked about this on MathOverflow:
I conjecture that you can get all connective Eilenberg-Mac Lane spectra. Tim Campion shows you can only get Eilenberg-Mac Lane spectra, and at first he claimed you could get all of them, but then he retreated and said you could get all 's (which is pretty easy: just use abelian groups as your partially ordered commutative monoids).
I was thinking that 's are easy, since you can use a discrete abelian group -- are you saying that 's are similarly easy?
Oh, I was just counting wrong. I meant .
Let me try though and see what happens. We form in the usual way as a simplicial set, then extract from that a simplicial complex (can we do that?), then barycentrically subdivide it and form the face poset . I believe the multiplication induces a multiplication since the simplices in are lists of elements in and we can multiply those componentwise without getting confused when is abelian. And I believe this makes into a simplicial abelian group. (If I'm wrong, how do we give its simplicial abelian group structure?) I would like this to make into an abelian group object in the category of posets. Does it work? If so, this is my candidate for a partially ordered commutative monoid whose nerve is a .
I guess there are a bunch of functors here that we'd need to check are product-preserving:
The functor from abelian groups to (nice) simplicial sets.
Some functor from (nice) simplicial sets to simplicial complexes.
Barycentric subdivision from simplicial complexes to simplicial complexes.
The face poset functor from simplicial complexes to posets.
The nerve functor from posets (or categories) to simplicial sets.
That's a lot of things to check, and one of them might go wrong.
Is there any sort of "face poset" functor straight from simplicial sets to posets? That might simplify this construction and make it more likely to work.
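For a finite simplicial complex, the face-poset step at least is easy to make concrete: the chains in the face poset are exactly the simplices of the barycentric subdivision, so the nerve of the face poset recovers Sd. A small Python sketch (my own illustration, using the solid triangle):

```python
from itertools import combinations

# My own illustration (not code from the discussion above): the simplices
# of the barycentric subdivision Sd(K) of a simplicial complex K are the
# chains in the face poset of K, so "nerve of the face poset" recovers Sd(K).

def faces_of(vertices):
    """All nonempty faces of the full simplex on the given vertex set."""
    vs = list(vertices)
    return [frozenset(c) for r in range(1, len(vs) + 1)
            for c in combinations(vs, r)]

def poset_chains(faces):
    """All nonempty chains in the face poset (faces ordered by inclusion):
    these are the simplices of the barycentric subdivision."""
    faces = sorted(faces, key=lambda f: (len(f), sorted(f)))
    out = []
    def extend(ch, rest):
        out.append(tuple(ch))
        for i, f in enumerate(rest):
            if ch[-1] < f:                    # proper subset = strict inclusion
                extend(ch + [f], rest[i + 1:])
    for i, f in enumerate(faces):
        extend([f], faces[i + 1:])
    return out

K = faces_of("abc")                  # the solid triangle: 7 faces
sd = poset_chains(K)
print(len([c for c in sd if len(c) == 1]))   # 7: vertices of Sd, one per face
print(len([c for c in sd if len(c) == 3]))   # 6: the six small triangles
```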
Yay, I got my visa to go to the UK! So I'll be going there on September 1st and staying until January 2nd.
John Baez said:
Yay, I got my visa to go to the UK! So I'll be going there on September 1st and staying until January 2nd.
Felicitations on the long stay, which means lower carbon emissions than short trips!
Yes, that's the idea - go somewhere and live there, not run around. I was actually planning to stay for 6 months, starting in July, but the visa process started too late.
Today was a bit tough, dealing with 3 referees' reports:
Adding some applications could easily double the size of this Isbell duality paper. For me the application is that now I am emboldened to read the Avery–Leinster paper, etc., whereas before I don't think I could stomach the formality. For such a gentle introduction, I am surprised that someone would even refuse to review it.
Yeah, I'm a bit pessimistic about what the referee will say. There are certainly plenty of people who can't stomach pure category theory without applications.
I think there should be a good application to the study of smooth spaces, since smooth spaces can be defined either as presheaves on the category of ℝⁿ's and smooth maps, or as copresheaves. The former approach defines smooth spaces by the maps into them from ℝⁿ's. The latter approach defines smooth spaces by the maps out of them to ℝⁿ's. Isbell duality provides an adjunction between the presheaves and the copresheaves, and Avery/Leinster study the fixed points of this adjunction, which are both presheaves and copresheaves.
But I don't know if anyone has worked this out.
Well, it turns out my column on Isbell duality was accepted after just tiny corrections, so I was being overly nervous.
I'm going to give a bunch of talks this fall:
Friday September 9, 2022 - I'll give an online talk at GEOTOP-A, a series that "concentrates on applications of topology and geometry". The scientific committee is Alicia Dickenstein, Jose-Carlos Gomez-Larranaga, Kathryn Hess, Neza Mramor-Kosta, De Witt Sumners and my contact, Renzo Ricca. Kathryn is an old grad school pal of mine, a homotopy theorist.
Tuesday September 27, 2022 - I'll speak at the Edinburgh Statistical Physics and Complexity Webinar Series at 3 pm. My contacts are Martin Evans and Peter Mottishaw.
Wednesday November 2, 2022 - I will speak at the Africa Mathematics Seminar at 15:00 - 16:00 East African Time, which is probably 13:00 - 14:00 in Edinburgh. My contact is Layla Sorkatti.
Tuesday December 6, 2022 - I'll speak at the Statistics and Data Science Seminar at Queen Mary University of London. My contact is Nina Otter.
Friday December 9, 2022 - I'll speak about compositional modeling in the Mathematics Colloquium at the University of Warwick.
Monday December 19 - Tuesday December 20 - Chris Heunen invited me to speak at SYCO10 in Edinburgh.
SYCO10 in Edinburgh, that's a blessing!
John Baez said:
Well, it turns out my column on Isbell duality was accepted after just tiny corrections, so I was being overly nervous.
This is cool! Toward the end you mention that it is interesting to study functors for which the canonical map is an isomorphism; could we have some examples?
I guess the best example I know is that if P is a poset viewed as a category, the self-dual presheaves form a poset called the Dedekind-MacNeille completion of P. Unsurprisingly, this is the poset of downwards closed subsets A such that the down-set of the up-set of A equals A. But it's something order theorists like.
If we work Ab-enriched and treat a ring R as a one-object Ab-enriched category, the category of self-dual functors is the category of self-dual modules of R.
There are more examples in Section 6 here.
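The Dedekind-MacNeille example can be checked by brute force on small finite posets: the completion consists of exactly the subsets fixed by down ∘ up. A minimal sketch (the function names are my own):

```python
from itertools import combinations

def up(poset, elems, subset):
    """All elements >= every element of subset (upper bounds)."""
    return {x for x in elems if all((a, x) in poset for a in subset)}

def down(poset, elems, subset):
    """All elements <= every element of subset (lower bounds)."""
    return {x for x in elems if all((x, a) in poset for a in subset)}

def dedekind_macneille(elems, leq):
    """Cuts A with down(up(A)) = A, i.e. the down-set of the up-set of A
    equals A. Ordered by inclusion, these form the completion."""
    poset = {(a, b) for a in elems for b in elems if leq(a, b)}
    cuts = set()
    for r in range(len(elems) + 1):
        for combo in combinations(elems, r):
            a = set(combo)
            if down(poset, elems, up(poset, elems, a)) == a:
                cuts.add(frozenset(a))
    return cuts

# A 2-element antichain: the completion adds a bottom and a top,
# giving the 4-element Boolean lattice.
cuts = dedekind_macneille(['a', 'b'], lambda x, y: x == y)
```

Running this on the 2-element antichain produces the four cuts ∅, {a}, {b}, {a,b}, i.e. the diamond lattice.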
I put 20 lectures on geometric representation theory onto my YouTube channel. Half are by me and half are by James Dolan. Each one comes with lecture notes and a blog article.
We gave these lectures back in 2007. You can think of them as "applications of spans of groupoids".
Thanks for posting these lectures! On a side note, it might be helpful to provide a short description of what prerequisite knowledge is assumed, or at least very handy to know. (I got about a half hour into the first lecture, and realized I would need to read up on representation theory a little to follow... it wasn't clear to me from the video what the "dimension" of a representation would be, for example, which was introduced as a functor.)
[By the way, for anyone else in my shoes here, the book "An invitation to representation theory" by Howe seems really nice - lots of examples, exercises, and clearly explained notation.]
It looks like these could be a lot of fun to watch. The whole theme of defining simple structures (e.g. "D-flags" on sets) and then trying to carry these over to other settings (like vector spaces) via a commutative diagram of functors seems really exciting. Ways of transferring ideas between settings like this seem so cool to me!
(Oh, and I now have learned what the dimension of a representation is, I think! If G is a group which acts on a vector space V, then the dimension of V is called the dimension of the representation. Now things are going more smoothly - I was able to follow along with the first lecture!)
Yes, that's the definition of the dimension of a representation.
Since this was a grad seminar on a new "groupoidified" approach to representation theory, I guess I automatically assumed some knowledge of representation theory. But surprisingly little, I bet you'll find. Did I ever use Schur's Lemma, the basic fact about representations? I don't remember.
So, Howe's book may be more than enough in most ways. I haven't read it - do you like it? I knew Howe back when I was at Yale; he's very good at representation theory so his idea of an "invitation" may be rather intense. I don't know.
You might want to bone up a bit on representations of the symmetric groups and how the irreducible representations are classified by Young diagrams. Or maybe not!
This quick and breezy introduction to Young diagrams may be all you need:
John Baez said:
- I was told that the referee will only review my 2-page expository column on Isbell duality after I add some applications. I don't like this - would someone say that for a paper on number theory? But I added an application. I doubt it will convince a skeptic that this beautiful theorem is "worthwhile". I was trying to motivate it by its inherent beauty. I already said it hasn't found enough applications yet, probably because not enough people know about it.
FWIW, the tight completion that we get as the nucleus of the isbell duality is the concept category, i.e. the spectral space of whatever data source the original category captured. this is an application that solves the problem of amplification ("echo chambers") which is built-in into the linear (SVD) approach to concept mining. so this application applies the isbell duality to each of your tweets about it when the IR engine decides who to display it to :)
i gave an overview for logicians around xmas 2021 with these slides:
https://www.dropbox.com/s/fep8h6rmcl6bcdf/0-CCA-talk.pdf?dl=0
(the talk was recorded and it might be on youtube...)
re the question of applicability of number theory: whether the Pythagoreans were right that the physical space is built of numbers or not, cyberspace is definitely built of numbers.
as a grad student, i was in the middle of a corridor full of people working on elliptic curves. rene schoof was counting how many points they have. i couldn't wrap my mind around what could these picard groups possibly ever be used for. what kind of invariants do they present? (i guess i was interpreting klein and noether backwards.) --- 20 years later, elliptic curves became the most applied part of math, at least for a while, running behind most tunnels through cyberspace. schoof's theorem and algorithm were like engineering tools and in a certain community my main claim to fame was that i used to have an office close to his :))
John Baez said:
So, Howe's book may be more than enough in most ways. I haven't read it - do you like it? I knew Howe back when I was at Yale; he's very good at representation theory so his idea of an "invitation" may be rather intense. I don't know.
You might want to bone up a bit on representations of the symmetric groups and how the irreducible representations are classified by Young diagrams. Or maybe not!
I'm wondering if the author might be a different Howe than the one you knew. This book is by "R. Michael Howe", but there appears to also be a "Roger Evans Howe", which might be who you were referring to(?).
At any rate, I've only just started looking at that book today, but I really like it so far. I've tried to look up some basic representation theory concepts before and have generally been stymied by a mass of definitions that feel unmotivated. This book puts in just enough motivation and examples to make it flow for me without too much effort.
Thanks for suggesting the link on Young diagrams!
Yes, Roger Evans Howe is the representation theorist I was referring to!
I got into representation theory through quantum mechanics, which provides a lot of motivation for the subject. So does crystallography. So does finite group theory: there are a bunch of theorems about finite groups that are most easily proved using group representations, and some whose only known proof goes that way!
More conversations with James Dolan:
2022-07-25:
The long exact exponential sheaf sequence for a generic abelian surface:
https://en.wikipedia.org/wiki/Exponential_sheaf_sequence
https://encyclopediaofmath.org/wiki/Abelian_surface
Analogies between Artin reciprocity and the abelian case of children's drawings (dessins d'enfants):
https://en.wikipedia.org/wiki/Artin_reciprocity_law
https://en.wikipedia.org/wiki/Dessin_d%27enfant
A notation for subfields of a cyclotomic field, and an analogous notation for abelian branched covers of the Riemann sphere.
2022-08-01:
The analogy between number fields and 3-manifolds, making primes analogous to knots:
Chao Li and Charmaine Sia, Knots and primes, https://www.julianlyczak.nl/seminar/knots2016-files/knots_and_primes.pdf
Barry Mazur, Thoughts about primes and knots, https://www.youtube.com/watch?v=KTVEFwRbuzU
Masanori Morishita, Analogies between knots and primes, 3-manifolds and number rings, https://arxiv.org/abs/0904.3399
Three Coxeter groups, each with a homomorphism onto the next: the child's drawing Coxeter group, the cartographic group and the modular Coxeter group.
2022-08-15:
Lambda-rings, how the Grothendieck group of a 2-rig becomes a lambda-ring, and the big Witt comonad:
https://en.wikipedia.org/wiki/Grothendieck_group
https://en.wikipedia.org/wiki/Lambda-ring
https://ncatlab.org/nlab/show/Lambda-ring
John Baez, Joe Moeller and Todd Trimble, Schur functors and categorified plethysm, https://arxiv.org/abs/2106.00190
H. Lenstra, Construction of the ring of Witt vectors, http://math.uchicago.edu/~drinfeld/Seminar-2019/Witt_vectors/Lenstra%20on%20Witt%20vectors.pdf
D. O. Tall and G. Wraith, Representable functors and operations on rings, https://maths-people.anu.edu.au/~borger/classes/copenhagen-2016/references/Tall-Wraith.pdf
J. Borger and B. Weiland, Plethystic algebra, https://arxiv.org/abs/math/0407227
Zeta functions and the big Witt ring:
Niranjan Ramachandran, Zeta functions, Grothendieck groups, and the Witt ring, https://www.sciencedirect.com/science/article/pii/S0007449714000852
I put 6 videos of James Dolan explaining [[doctrines]] on my YouTube channel. You can reach them here:
and this page now has another interesting feature: Todd Trimble's notes of his conversations with Dolan about doctrines.
I had a good time at the Topos Institute "blue sky" retreat. I guess "blue sky" is some sort of Silicon Valley jargon for when you collectively dream about your future plans without worrying too much about practical details.
There seemed to be fairly widespread agreement that epidemiological modeling is shaping up to be the "killer app" that the Topos Institute will be able to point to as a success.
One thing weighing heavily on my mind is that last week a bunch of us got this email:
Hi John, Xiaoyan, Sophie, Nathaniel and Evan—
I'm an applied mathematician and statistician working with the California Department of Public Health, primarily focused on covid modeling. From the Topos Institute blog and from John Baez's Twitter feed, I have been intrigued to learn about your category theory based framework for compartmental models.
Would one or several of you be willing to talk about this project, and ways we may be able to leverage it? We’re working on a couple of stock/flow type modeling projects which may benefit from this approach. One project involves modeling co-circulation of covid and flu, which results in an ODE model like the one described in your paper. Another involves tracking immunity categories including factors like vaccination status, previous exposure to variants, waning immunity.
On the one hand this is a tremendous opportunity to do some good and prove the value of our software. On the other hand Nathaniel Osgood, who is the best at epidemiology among us, is already very busy.
I suddenly realized that while I have the most spare time for sudden requests like this, I'm not very useful because I don't have hands-on knowledge of the software or detailed knowledge of epidemiology.
I was talking to Wesley Phoa about this at the blue-sky meeting. He's a former student of Martin Hyland who is now a partner at Capital Group, an investment management firm in LA. He's on the board of the Topos Institute.
He said I should take time to learn the software. "It would take a couple of weeks, but if you could save even a few hundred lives, it would be worth it."
That hit me like a punch in the gut. Save a few hundred lives? That would be a bigger deal than anything I've ever done.
Of course this particular opportunity may not work out... but this has become a serious business.
I hope I'm worrying about this too much. I hope the California Department of Public Health doesn't need us to help them with their modeling - I hope they're just wondering if our software can save them some time, or wanting to test it out.
I'm giving another talk in the UK:
I hadn't known Abramsky had moved from Oxford to University College London! Someone assures me it's true. Is it true?
I've been asked to give a 20-minute introduction to applied category theory at the Joint Mathematical Meeting in Boston on January 6th.
ALL OF APPLIED CATEGORY THEORY IN 20 MINUTES! :astonished:
Applied Category Theory: Where Are We Now?
Abstract. All along, category theory has been inspired by its applications — first to algebraic topology, then algebraic geometry and logic, then computer science, and so on. Thus, the term "applied category theory", far from being an oxymoron as some have claimed, is almost redundant! However, category theory is now being applied to a wide range of new subjects: physics, chemistry, epidemiology, natural language processing, game theory, project management, and many more. This raises new questions about what category theory can do, and where it will go next. We shall try to provide some answers — but more importantly, try to stimulate some productive conversations.
I tried promoting ACT in a 15 minute "pitch" as part of a job interview process. I don't think it went well, but that's on me :-)
I'll try to say something non-obvious... not sure what yet. :upside_down:
John Baez said:
I hadn't known Abramsky has moved from Oxford to University College London! Someone assures me it's true. Is it true?
It is true! Samson moved to UCL last year. As far as I understand, he still lives in Oxford but has been commuting to London at least a couple of times a week.
Looking forward to your talk at UCL!
Thanks!
It's really interesting that Samson Abramsky moved to UCL and Bob Coecke switched to working in industry (partially? entirely?). Is there any remaining activity in what once was the enormous category theory powerhouse in the Oxford computer science department?
Aleks Kissinger and Sam Staton are the surviving faculty in that group. (Of course there are several other category-friendly people in the department, some having joined quite recently such as Bartek Klin.)
Bob has moved to industry but is still Oxford-based and there is quite a bit of back-and-forth between the CS department and Bob's group; one could say the whole thing evolved into a “hybrid” academia/industry group.
Thanks for the information! I'm glad to hear Alex and Sam are still there. I hope they can continue the proud tradition. I know Alex a lot better, and I think he's up to the task.
On Friday Timothy Nguyen and I had a long conversation which he recorded. He'll edit it and put it on his YouTube channel called The Cartesian Cafe.
I had previously seen an episode of The Cartesian Cafe about category theory, with Tai-Danae Bradley.
I like how he's willing to have long technical conversations about math.
I went through a bunch of stuff leading up to the key idea behind the SU(5) grand unified theory: namely, how we get the right hypercharges for all the fermions in the Standard Model if we map SU(3) × SU(2) × U(1) to SU(5) and then take the representation of the latter on the exterior algebra Λ(C^5).
It was pretty fun, because it was a real conversation, not a monologue.
Amar Hadzihasanovic said:
Aleks Kissinger and Sam Staton are the surviving faculty in that group. (Of course there are several other category-friendly people in the department, some having joined quite recently such as Bartek Klin.)
John Barrett is also here, and somewhat active in ACT-adjacent areas. Stefano Gogioso is also a lecturer and still somewhat active in the ACT sphere.
Thanks! I know those guys.
John Baez said:
Oh, wow - I've procrastinated so long that I succeeded in temporarily forgetting that! :anguished: Let me bump it up the queue:
0) Check and submit corrections to From loop groups to 2-groups.
You've got lots of exciting projects, but I was wondering how this one was progressing?
Not at all, sorry. I keep getting busy and forgetting it. Thanks for reminding me.
Not a problem. I didn't want to bother you too much about it!
It's okay to bother me about it until I do it.
Yay! The paper @Kenny, @Christina Vasilakopoulou and I wrote on "Structured versus decorated cospans" has finally been published by the journal Compositionality!
It's nice to see:
There had been a slow-down in actually publishing accepted papers at this journal, for various internal bureaucratic reasons I do not care to explain. Luckily the dam seems to be breaking. But the journal does want some extra help.
Brendan Fong has asked me to look around for mathematicians who are good at LaTeX and reliably able to help a nice journal on applied category theory, with the key word being "reliably". I don't really know.
For years Theory and Applications of Categories relied on the help of Michael Barr.
This is always a struggle for 'diamond open access' journals - free to publish in, free to read.
For years Theory and Applications of Categories relied on the help of Michael Barr.
and on the help of Jürgen Koslowski
I should send my apologies to the person handling mine: I haven't managed to edit my paper that was accepted in Compositionality. I really couldn't decide how much of the extensive feedback from referees was optional, and how much was really being insisted on. I just got a check-up email, the first since it was accepted.
There has been a big problem at Compositionality: papers were assigned issue numbers in order of acceptance, but could only be published in order of issue number. So if one author takes a long time to polish up their paper after acceptance, it holds up the publication of all later papers. This is one reason my paper was accepted in May but only published in November.
However, the paper holding up the publication of mine was not yours, @David Michael Roberts.
I have argued for them to change their system to avoid this problem: assign issue numbers only upon publication. I don't think they've done this yet.
(There were also other reasons for the delay.)
Phew. I had no idea. I really hope they change this.
Timothy Nguyen and I had a nice conversation which he made into a video on The Cartesian Café.
Ever wanted to dive deep into Grand Unified Theories of particle physics? Did you know that the 3 most famous proposals (SU(5), SO(10), Pati-Salam) can be unified into a commutative diagram? Then join me at The Cartesian Cafe with @johncarlosbaez (1/n) https://youtu.be/gjQ9xoW4waQ - Timothy Nguyen (@IAmTimNguyen)
Our episode is an elucidation of various parts of the paper by the same name with John Huerta (@broyd), which was awarded the Levi Conant Prize in 2013. We start off by giving a crash course in matrix groups, (iso)spin, and the zoology of particles in the Standard Model (2/n) - Timothy Nguyen (@IAmTimNguyen)
We then make the observation that within a generation of the Standard Model, there are 32 particles, which is suggestive since 32 = 2^5. This leads the way to our discussion of SU(5) and how it acts on the exterior algebra, ultimately leading to the SU(5) Grand Unification. (3/n) - Timothy Nguyen (@IAmTimNguyen)
We then briefly discuss the SO(10) (aka Spin(10)) and Pati-Salam theories and conclude with the beautiful but not well-known result that all 3 GUTs fit into a commutative square. Unification among unifications! (4/n) - Timothy Nguyen (@IAmTimNguyen)
I'll be giving a series of 11 talks on these topics from This Week's Finds:
With luck they'll be recorded.
Regardless, my plan is to take material that's diffusely spread through This Week's Finds and use it to write papers on these topics.
It looks like my talks will be streamed live and later appear on YouTube. I'll say more as soon as I find out!
Here are some more conversations about math with James Dolan:
2022-08-22:
Children's drawings, or dessins d'enfant: an approach based on the free group generated by 3 involutions. This is a Coxeter group whose even subgroup is the free group on 2 generators (the fundamental group of the thrice punctured Riemann sphere):
https://en.wikipedia.org/wiki/Dessin_d%27enfant
https://en.wikipedia.org/wiki/Coxeter_group
Two examples: one connected to the Gaussian elliptic curve, and another connected to the symmetric group on 3 letters, which produces a branched covering of the Riemann sphere by itself.
2022-08-29:
Zeta functions, lambda-rings and the big Witt ring, continued:
Niranjan Ramachandran, Zeta functions, Grothendieck groups, and the Witt ring, https://www.sciencedirect.com/science/article/pii/S0007449714000852
Abelian children's drawings and the kagome lattice, continued: an approach based on the abelianization of the free group on 2 generators (the fundamental group of the thrice punctured Riemann sphere):
https://en.wikipedia.org/wiki/Dessin_d%27enfant
https://en.wikipedia.org/wiki/Coxeter_group
https://en.wikipedia.org/wiki/Trihexagonal_tiling#Kagome_lattice
2022-09-09:
An approach to children's drawings based on the free group on 2 generators: the fundamental group of the Riemann sphere punctured at 0, 1, and ∞, with the two generators corresponding to loops around 0 and 1. Any action of this group on finite sets gives a covering of the Riemann sphere branched at 0, 1, and ∞:
https://en.wikipedia.org/wiki/Dessin_d%27enfant
The case where one generator acts as the cycle (ABCDEFG) and the other acts as the permutation (AB)(CDE)(FG). The case where one generator acts as the cycle (ABCDE) and the other acts as the permutation (A)(BCDE). Examples of this sort give a certain class of tree-shaped children's drawings, and branched coverings of the Riemann sphere by itself defined by polynomials in one variable.
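For playing with such pairs of permutations, the genus of the resulting branched covering can be computed from cycle counts via the Euler characteristic: 2 - 2g = c(σ₀) + c(σ₁) + c(σ∞) - n, where σ∞ is chosen so the three monodromies compose to the identity. A minimal sketch (the edge-labelling conventions and names are my own):

```python
def cycle_count(perm):
    """Number of cycles of a permutation given as a dict on {0,...,n-1}."""
    seen, count = set(), 0
    for start in perm:
        if start not in seen:
            count += 1
            x = start
            while x not in seen:
                seen.add(x)
                x = perm[x]
    return count

def compose(p, q):
    """p after q: x -> p[q[x]]."""
    return {x: p[q[x]] for x in q}

def genus(sigma0, sigma1):
    """Genus of the covering with monodromies sigma0, sigma1 around 0 and 1.
    sigma_inf is the inverse of sigma0 * sigma1; since a permutation and
    its inverse have the same cycle count, we count cycles of the product."""
    n = len(sigma0)
    chi = (cycle_count(sigma0) + cycle_count(sigma1)
           + cycle_count(compose(sigma0, sigma1)) - n)
    return (2 - chi) // 2

# The star-shaped tree coming from the polynomial z^n: one generator is an
# n-cycle, the other the identity, and the covering has genus 0.
n = 7
star0 = {i: (i + 1) % n for i in range(n)}
star1 = {i: i for i in range(n)}
```

For a tree-shaped drawing the genus always comes out to 0, as in the star example.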
You can find a Zoom link to my This Week's Finds seminars here, and we can discuss the math there too.
There will be 11 seminars, one a week. The first will be on Young diagrams, and it will happen at 3:00 pm UK time Thursday September 22nd.
More conversations with James Dolan:
2022-09-16:
Dessins d'enfants, or children's drawings:
https://en.wikipedia.org/wiki/Dessin_d%27enfant
The Shabat polynomial of a tree-shaped child's drawing, and Chebyshev polynomials:
George Shabat and Alexander Zvonkin, Plane trees and algebraic numbers, https://www.labri.fr/perso/zvonkin/Research/shabzvon.pdf
J. Bétréma and J. A. Zvonkin, Plane trees and Shabat polynomials, https://www.sciencedirect.com/science/article/pii/0012365X9500127I/pdf?md5=5892713484a0fc7f642fe23c38f94d3d&pid=1-s2.0-0012365X9500127I-main.pdf
https://en.wikipedia.org/wiki/Chebyshev_polynomials
An analogy between Kronecker's Jugendtraum and Grothendieck's children's drawings:
https://en.wikipedia.org/wiki/Hilbert%27s_twelfth_problem
https://en.wikipedia.org/wiki/Dessin_d%27enfant
The Grothendieck–Teichmüller group and the absolute Galois group of the rationals:
https://en.wikipedia.org/wiki/Grothendieck-Teichmuller_group
https://en.wikipedia.org/wiki/Absolute_Galois_group
2022-09-16:
The Hurwitz automorphism theorem, the (2,3,7) Coxeter diagram, and children's drawings:
https://math.ucr.edu/home/baez/42.html
https://en.wikipedia.org/wiki/Hurwitz%27s_automorphisms_theorem
https://en.wikipedia.org/wiki/Hurwitz_surface
Young diagrams and the classification of irreps of the symmetric groups Sₙ: an approach using double cosets and Mackey's decomposition theorem:
https://ncatlab.org/nlab/show/double+coset
https://en.wikipedia.org/wiki/Character_theory#Mackey_decomposition
Brian Conrad, Mackey theory and applications, http://math.stanford.edu/~conrad/210BPage/handouts/mackey.pdf
How a Young diagram gives a polynomial that gives the dimension of Y(V) as a function of the dimension of the vector space V, where Y is the Schur functor corresponding to that Young diagram:
http://math.ucr.edu/home/baez/twf_young.pdf
https://ncatlab.org/nlab/show/Schur+functor
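That polynomial can be computed with the hook content formula: dim Y(V) is the product over the cells of the diagram of (n + content)/(hook length), where n = dim V. A minimal sketch (the function names are my own):

```python
from fractions import Fraction

def schur_dim(partition, n):
    """Dimension of the Schur functor for a Young diagram (a partition,
    given as weakly decreasing row lengths) applied to an n-dimensional
    space, via the hook content formula: prod over cells (i, j) of
    (n + j - i) / hook(i, j)."""
    # Conjugate partition: the column lengths of the diagram.
    conj = [sum(1 for r in partition if r > j) for j in range(partition[0])]
    result = Fraction(1)
    for i, row in enumerate(partition):
        for j in range(row):
            hook = (row - j) + (conj[j] - i) - 1
            result *= Fraction(n + j - i, hook)
    assert result.denominator == 1
    return int(result)

# The diagram (2,1) applied to a 3-dimensional space gives the
# 8-dimensional adjoint representation of SL(3); as a polynomial in n
# this Schur functor has dimension n(n+1)(n-1)/3.
```

For example, schur_dim([2, 1], 3) gives 8, and schur_dim([1, 1], n) recovers the binomial coefficient C(n, 2).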
An analogy between Kronecker's Jugendtraum and Grothendieck's children's drawings:
https://en.wikipedia.org/wiki/Hilbert%27s_twelfth_problem
https://en.wikipedia.org/wiki/Dessin_d%27enfant
2022-09-30:
Children's drawings and "designs". Gaussian and Eisenstein children's drawings:
https://en.wikipedia.org/wiki/Gaussian_integer
https://en.wikipedia.org/wiki/Eisenstein_integer
An analogy between Kronecker's Jugendtraum and Grothendieck's children's drawings:
https://en.wikipedia.org/wiki/Hilbert%27s_twelfth_problem
https://en.wikipedia.org/wiki/Dessin_d%27enfant
This paper has now been published, free for everyone to read on the web or as a PDF:
We pose a precise and perhaps quite difficult conjecture about the solutions of this equation, which displays 'Galilean-invariant chaos with an arrow of time'. And we explain what that means, with pictures.
If anyone wants to study this equation I can provide MATLAB software for doing that.
Also, I've been working away writing short papers that explain ideas from representation theory:
If anyone has comments, I'd love to hear them!
Hi, @Simon Burton! I've moved your question and my long answer to #general: events > This Week's Finds seminar since it's a question someone should have asked at my last talk there!
Another conversation with James Dolan:
2022-10-07:
Digging deeper into the connection between Kronecker's Jugendtraum and Grothendieck's children's drawings, especially in the case of Gaussian children's drawings:
https://en.wikipedia.org/wiki/Gaussian_integer
https://en.wikipedia.org/wiki/Hilbert%27s_twelfth_problem
https://en.wikipedia.org/wiki/Dessin_d%27enfant
I think it's neat how Jim is tying together Grothendieck's dessins d'enfant (which is about the absolute Galois group of the rationals) and Kronecker's Jugendtraum (which is about the abelianized absolute Galois groups of quadratic number fields), but it's going to take more work for him to straighten things out and for me to understand what's going on!
I posted something on the n-Category Cafe about how the partition function in statistical mechanics behaves like a form of "cardinality" for objects in a certain slice category:
I also posted something about how, repeatedly applying a certain comonad to the Booleans, we get things that show up naturally in representation theory and elsewhere:
This had been bothering me for a long time because I didn't quite see how to get it to work. The key is the adjunction between commutative monoids and pointed sets.
I decided to stop posting expository math and physics tweets on Twitter.
I may still talk to people I know there (by replying to their tweets) and announce conferences and lectures (since I've got about 65,000 followers, so this is a platform that's hard to replace for announcing things).
I will post things on Mathstodon, including links to my blog articles. You can find me here:
https://mathstodon.xyz/web/@johncarlosbaez
Is there a particular reason for the move? Just curious, in case it's for a reason that you're willing to share. (I'm not on Twitter, but this is a considerable motivator for me to consider joining mathstodon)
I think Musk got twitter?
Elon Musk To Cut Twitter Staff To Single Devoted Hunchback Who Laughs Hysterically At All Of Boss's Genius Tweets https://bit.ly/3sn7AZl - The Onion (@TheOnion)
I see, that's finally happening then. Fair enough.
James Dolan has been slowly working away on the connection between dessins d'enfant and Kronecker's Jugendtraum, but it's starting to make more and more sense.
2022-10-14:
The connection between Kronecker's Jugendtraum and Grothendieck's children's drawings, especially in the case of Gaussian children's drawings:
https://en.wikipedia.org/wiki/Gaussian_integer
https://en.wikipedia.org/wiki/Hilbert%27s_twelfth_problem
https://en.wikipedia.org/wiki/Dessin_d%27enfant
2022-10-21:
The connection between Kronecker's Jugendtraum and Grothendieck's children's drawings, especially in the case of Gaussian children's drawings:
https://en.wikipedia.org/wiki/Gaussian_integer
https://en.wikipedia.org/wiki/Hilbert%27s_twelfth_problem
https://en.wikipedia.org/wiki/Dessin_d%27enfant
2022-10-28:
The moduli stack of elliptic curves versus the moduli stack of acute triangles:
https://ncatlab.org/nlab/show/moduli+stack+of+elliptic+curves
https://en.wikipedia.org/wiki/Moduli_stack_of_elliptic_curves
The connection between Kronecker's Jugendtraum and Grothendieck's children's drawings, especially in the case of Gaussian children's drawings:
https://en.wikipedia.org/wiki/Gaussian_integer
https://en.wikipedia.org/wiki/Hilbert%27s_twelfth_problem
https://en.wikipedia.org/wiki/Dessin_d%27enfant
I've been writing about modes (in music) over on Mathstodon, and I put these posts together into a blog article:
Short version: the 7 modes formed from the major scale have an ordering by 'brightness', and inversion (in the musical sense) reverses this ordering! This came as a revelation to me recently, though it's been known a long time.
The modes are also a torsor for Z/7, but this group action doesn't preserve the brightness ordering.
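Both of these facts can be checked by brute force. A minimal sketch in Python (the encoding of modes as pitch-class sets, and brightness as the sum of scale degrees in semitones, are my own conventions):

```python
# The 7 modes of the major scale, as semitone offsets from the tonic.
MODES = {
    'Lydian':     [0, 2, 4, 6, 7, 9, 11],
    'Ionian':     [0, 2, 4, 5, 7, 9, 11],
    'Mixolydian': [0, 2, 4, 5, 7, 9, 10],
    'Dorian':     [0, 2, 3, 5, 7, 9, 10],
    'Aeolian':    [0, 2, 3, 5, 7, 8, 10],
    'Phrygian':   [0, 1, 3, 5, 7, 8, 10],
    'Locrian':    [0, 1, 3, 5, 6, 8, 10],
}

def brightness(mode):
    """Total sharpness: the sum of the scale degrees in semitones."""
    return sum(mode)

def invert(mode):
    """Musical inversion: reflect each pitch class and re-sort."""
    return sorted((-x) % 12 for x in mode)

def mode_name(pitches):
    """Look up which mode a pitch-class set is."""
    return next(name for name, m in MODES.items() if m == list(pitches))

# Inversion pairs the modes off, reversing brightness:
# Lydian <-> Locrian, Ionian <-> Phrygian, Mixolydian <-> Aeolian,
# and Dorian is its own inverse.
```

Sorting the modes by brightness and inverting each one reverses the list, confirming the claim.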
I'm talking about my work on category theory and epidemiology tomorrow, Wednesday November 2nd, at noon UK time. This talk is a lot less technical than previous ones I've given, which were aimed only at category theorists!
You can see my talk on YouTube here:
You can register to join on Zoom here.
I wrote another article on modes in music:
The main idea here is a commutative cube:
where the 3 colors of arrows correspond to flatting the 3rd, 6th and 7th note in the scale. Starting from the major scale, on top of the cube, we can get down to the bottom, the natural minor scale, in various ways.
It's a good way to organize a lot of information, though it's just part of a 5-dimensional hypercube - most of which is less important in western music.
I'm having lots of fun tooting about this and other things on Mathstodon. It's also fun to go to "Federated" and watch the refugees from Twitter pour in.
I'm slowly building up a list of my favorite mathematicians, physicists etc. on Mastodon.
How long does it usually take for Mathstodon to (dis)approve a membership? I'm betting they're having an influx right now, though.
How different is mathstodon to say a category theory zulip? Is it that it is more open to other fields? (more leaky?) (I am there as @bblfish btw)
Simonas Tutlys said:
How long does it usually take for Mathstodon to (dis)approve a membership? I'm betting they're having an influx right now, though.
By the way, you don't have to be on mathstodon.xyz to engage in conversations there. It's just a server. Many other servers have shorter waiting times, or none at all - such as mastodon.social.
Simonas Tutlys said:
How long does it usually take for Mathstodon to (dis)approve a membership?
It did not take long for me. I just wrote "I am following John Baez" and gave my e-mail address, and I was in within 10 minutes. I was a bit worried during that time because I could not see how they could really find out anything about me from such little information. And there was no way to add more information - no field for links to other social networks, ...
I'm asking so I won't end up accidentally having multiple Mastodon accounts. Guess I'll find another one :)
If they worked semantically then it should be possible to link two such accounts.
Right now Mathstodon is not accepting new memberships, because of the flood of new applications. But that should change soon. Colin the Mathmo writes:
We have temporarily suspended sign-ups ... the server is struggling, and we need to look at the best way to manage the resources.
I think it's still possible for people to issue invitations, but I'm not sure. There's a lot going on, and this is all new to me.
Thank you for your patience.
For more read Colin's feed, and that's also a good place to look for updates.
Mastodon supports OpenID, apparently:
https://github.com/mastodon/mastodon/pull/16221
I foresee lots of new tech stuff getting popular soon :) I'll need to learn it myself - I've tried admining Matrix and Friendica servers, and I understand their pain.
I'll be giving a talk about Noether's theorem on Monday February 6, 2023, as part of this physics seminar:
Dear friends and colleagues,
On the 5th of December, we are due to start up Round II of the Algebra, Particles, and Quantum Theory seminar series. Again we are lucky enough to have an excellent set of speakers:
Denjoe O'Connor - (On exceptional fuzzy spaces and octonions)
Shogo Tanimura - (On superselection)
John Baez - (On getting to the bottom of Noether's theorem)
Aiyalam Balachandran - (On Gauss' Law: A tale)
L Glaser - (On reconstructing manifolds from truncated spectral triples)
John Huerta - (On the algebra of grand unified theories)
Kasia Rejzner - (On quantization, dequantization, and distinguished states)
Lev Vaidman - (On many worlds)
For dates and times, please go to https://furey.space
Talks will run about once every three weeks on Mondays. The times will most often be 18:00 Berlin time, but this varies quite a bit due to the accommodation of time zones.
You can help us out by circulating this announcement in your favourite physics and math departments. In particular, please forward this announcement to your international colleagues outside of Europe and North America. The beauty of zoom is that it gives researchers direct access to each other, largely regardless of distance, regardless of their country's finances, and without excessively polluting the planet.
Finally, if you know of researchers or students wanting to be added to the mailing list, please advise them to email me at nichol@aims.ac.za
Looking forward to seeing you soon,
Nichol Furey
Institut für Physik
Humboldt-Universität zu Berlin
My student John Huerta is also talking about work on grand unified theories!
I finished a short article for the AMS Notices:
In this I explain how the icosidodecahedron, a polyhedron with 30 vertices:
is the shadow of a more symmetrical shape in 6d with 60 vertices, and also the slice of a symmetrical shape in 4d with 120 vertices, which in turn is the shadow of an even more symmetrical shape in 8d with 240 vertices!
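For anyone who wants to play with this: the 30 vertices of the icosidodecahedron have nice coordinates in terms of the golden ratio φ = (1+√5)/2 - the cyclic permutations of (0, 0, ±φ) and (±1/2, ±φ/2, ±φ²/2). A quick sketch (standard coordinates, not code from the article itself):

```python
# The 30 vertices of the icosidodecahedron, with phi the golden ratio:
# the cyclic permutations of (0, 0, ±phi) and (±1/2, ±phi/2, ±phi²/2).
# All 30 lie at distance phi from the origin.
from itertools import product
from math import isclose, sqrt

phi = (1 + sqrt(5)) / 2

def cyclic(v):
    x, y, z = v
    return [(x, y, z), (y, z, x), (z, x, y)]

vertices = set()
for s in (+1, -1):
    vertices.update(cyclic((0.0, 0.0, s * phi)))
for sx, sy, sz in product((+1, -1), repeat=3):
    vertices.update(cyclic((sx * 0.5, sy * phi / 2, sz * phi**2 / 2)))

assert len(vertices) == 30
assert all(isclose(x*x + y*y + z*z, phi**2) for (x, y, z) in vertices)
```

The second assertion checks they all lie on a common sphere, using φ⁴ = 3φ + 2 implicitly: ¼(1 + φ² + φ⁴) = φ².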
@John Baez just another little nudge about the Loop groups to 2-groups paper corrections :-)
Yes! Ugh, I'm so busy right now... :face_exhaling:
Here's what I'm up to:
1) I'm helping finish off a book chapter:
This is much more aimed at epidemiologists than our previous paper, so we push all the stuff about decorated cospans to the background and focus on what our software actually does with them - which is nice, especially now that Nathaniel and Long Pham have produced a version that runs on the web with a graphical interface.
Building models of infectious disease by composing decorated cospans on your web browser!
2) I should be preparing my Leverhulme Lecture called Mysteries of fundamental physics, which is at 6 pm UK time on Tuesday November 29th. By the way, if you want to attend on Zoom you need to register.
3) I should be preparing the last of my This Week's Finds seminars for next Thursday, December 1st. I haven't even decided what to talk about!
4) On Tuesday December 6, 2022 at 11 am I'll speak at the Statistics and Data Science Seminar at Queen Mary University of London. Nina Otter invited me to do this. I was going to talk about maximum entropy reasoning, but my research on this with Tom Leinster has not gone as fast as I wanted.
5) On Thursday December 8, 2022 at 2:30 pm I'll speak at the Logic and Semantics Seminar at University College London. I will speak on decorated cospans and our epidemiology software, with a heavy focus on the category theory.
6) On Friday December 9, 2022 at 4 pm I'll speak about compositional modeling in the Mathematics Colloquium at the University of Warwick. I'll again speak on decorated cospans and our epidemiology software, but this is a colloquium talk so it will be much more focused on what we do than how we do it.
John Baez's UK tour :rock_on: :guitar:
John Baez said:
3) I should be preparing the last of my This Week's Finds seminars for next Thursday, December 1st. I haven't even decided what to talk about!
If you remember, we had a vote in the pub: you're talking about the three-strand braid group. :grinning:
@John Baez That's why I'm not bothering you too much! The epidemiology work is clearly more important!
Thanks! When I get back to Riverside at the start of January I will be much more free, until July. So then I must correct the paper From loop groups to 2-groups.
Simon Willerton said:
John Baez said:
3) I should be preparing the last of my This Week's Finds seminars for next Thursday, December 1st. I haven't even decided what to talk about!
If you remember, we had a vote in the pub: you're talking about the three-strand braid group. :grinning:
Yes. I know you're sort of joking... but I think I need to wrap up the whole series in a way that the physicists in the audience will enjoy. The 3-strand braid group story is summarized here:
but to fully enjoy this you need to know how the modular group is related to elliptic curves and cubic polynomials, and how SL(2,ℝ) is a double cover of the Lorentz group in 2+1-dimensional spacetime. I don't think I can get across all that stuff in one hour.
It actually is connected to physics. But I have an easier topic in mind now.
2022-11-11:
Our topics today:
Computing the torsion in ideal class groups using sheaf cohomology, as explained by Mickaël Montessinos:
https://mathstodon.xyz/@johncarlosbaez/109307609411539931
https://mathstodon.xyz/@montessiel/109307323357254194
The moduli stack of elliptic curves versus the moduli stack of acute triangles (including right triangles):
https://en.wikipedia.org/wiki/Moduli_stack_of_elliptic_curves
https://ncatlab.org/nlab/show/moduli+stack+of+elliptic+curves
The Grothendieck topos of S₃-equivariant sheaves on the equilateral triangle (which is itself the moduli space of acute triangles).
2022-11-18:
This week Simon Willerton joined my conversation with James Dolan, so James gave a pretty self-contained introduction to the 'moduli space of acute triangles' and how it maps onto the more well-known - but less easy to explain! - 'moduli space of elliptic curves'.
The moduli space of acute triangles is just the space of all shapes of acute (and right) triangles with labeled vertices, counting similar triangles as the same.
The moduli space of acute triangles is the region outlined in orange.
Taking the points 0 and 1 on the real line as two vertices of your acute triangle, any point in this region can serve as the third! You get all possible acute (and right) triangles with labeled vertices this way, where similar triangles count as the same.
But if you also count triangles with different vertex labelings as the same, you just need one of the 6 black or yellow regions inside the orange outline! That's because you need to mod out by the action of the symmetric group S₃ permuting the vertex labels.
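Concretely: taking 0 and 1 as two vertices, the third vertex z gives an acute triangle exactly when 0 < Re z < 1 and |z − 1/2| > 1/2. Here's a little sketch that checks acuteness directly from dot products (my own illustration, not code from the conversation):

```python
# Taking 0 and 1 as two vertices, test whether a third vertex z gives an
# acute triangle: all three angles are acute iff at each vertex the two
# edge vectors leaving it have positive dot product.

def is_acute(z: complex, include_right: bool = False) -> bool:
    a, b, c = 0 + 0j, 1 + 0j, z
    def angle_ok(p, q, r):
        u, v = q - p, r - p                   # edges leaving vertex p
        d = u.real * v.real + u.imag * v.imag
        return d >= 0 if include_right else d > 0
    return angle_ok(a, b, c) and angle_ok(b, a, c) and angle_ok(c, a, b)

print(is_acute(0.5 + 1.0j))    # True: inside the moduli space region
print(is_acute(0.5 + 0.2j))    # False: inside the circle |z - 1/2| = 1/2
print(is_acute(1.5 + 1.0j))    # False: outside the strip 0 < Re z < 1
```

Right triangles - the boundary of the region - show up as a zero dot product, which is why the moduli space here includes them as an edge case.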
This picture was originally conceived by Dedekind in 1877, but first drawn by Klein:
For the history, read this:
http://www.neverendingbooks.org/dedekind-or-klein
It's famous because any two triangles of opposite color, sharing an edge, form a copy of the 'moduli space of elliptic curves'!
Actually you need to glue together the edges of this region to get that moduli space, but never mind... this fact, plus what I said before, means there's a strong relation between elliptic curves (which take a degree in math to understand) and acute triangles (which any kid can understand)!
So, there has to be a way to take an acute triangle and get an elliptic curve from it!
And there is. That's what James has been explaining in recent conversations.
You can take an acute triangle, build a tetrahedron with copies of this triangle as all 4 of its faces, and then form a 'branched double cover' of that tetrahedron made from 8 copies of this triangle. This is the elliptic curve!
If anyone wants to draw this or animate it, I could explain it more carefully.
I gave my talks at Queen Mary University, University College London and the University of Warwick, and now I'm back in snowy Edinburgh.
At Queen Mary I got a good comment that I should remember. I explained how a very general model of population dynamics predicts that the square of the speed of the probability distribution of species, as measured by the Fisher metric on probability distributions, equals the variance in fitness.
But then someone asked how this is related to the Cramér–Rao bound!
The Cramér–Rao bound says that for any 1-parameter family of probability distributions p(t), the variance of any unbiased estimator of the parameter is at least as high as the inverse of the squared speed ‖ṗ(t)‖², where the norm here is the Fisher metric!
So both results involve a variance and the speed of a probability distribution with respect to the Fisher metric. But they do so in seemingly very different ways!
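To see the two quantities side by side, here's a toy check in the Bernoulli family (my own illustration, not from the seminar): the Fisher information computed numerically as the expected squared score, and the variance of the sample mean, which saturates the Cramér–Rao bound.

```python
# Cramér-Rao in the simplest setting: the Bernoulli(theta) family.
# Fisher information I(theta) = E[(d/dtheta log p(X; theta))^2] is the
# squared "speed" in the Fisher metric; any unbiased estimator built from
# n samples has variance >= 1/(n * I(theta)).
import math

def bernoulli_p(x: int, theta: float) -> float:
    return theta if x == 1 else 1.0 - theta

def fisher_information(theta: float, h: float = 1e-6) -> float:
    # expected squared score, with the score by central finite differences
    total = 0.0
    for x in (0, 1):
        score = (math.log(bernoulli_p(x, theta + h))
                 - math.log(bernoulli_p(x, theta - h))) / (2 * h)
        total += bernoulli_p(x, theta) * score ** 2
    return total

theta, n = 0.3, 50
bound = 1.0 / (n * fisher_information(theta))   # Cramér-Rao lower bound
mean_var = theta * (1 - theta) / n              # variance of the sample mean
assert mean_var >= bound - 1e-9   # the sample mean saturates the bound
print(bound, mean_var)            # both about 0.0042
```

For Bernoulli, I(θ) = 1/(θ(1−θ)), so the bound is exactly θ(1−θ)/n, which is why the sample mean is efficient here.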
Now I want to finish off this paper:
Also, I need to help the chemistry group of the applied category theory Mathematical Research Community with their work on invariants of open Petri nets.
But here's something that distracted me this morning:
Formulas for prime numbers are fun, because there's an urban legend out there that there's no formula that always spits out primes. Actually there are plenty of such formulas!
For example, there are numbers 𝐴 such that for all 𝑛 = 1,2,3,..., if you round 𝐴^(3^𝑛) down to the nearest integer, you get a prime! And if the Riemann Hypothesis is true, the smallest such number is Mills' constant, approximately 1.30637788.
I explained how the Riemann Hypothesis gets into the game on Mathstodon.
Later I asked on MathOverflow whether there's a number 𝐵 such that ⌊𝐵^(2^𝑛)⌋ is prime for all 𝑛 ≥ 1.
Terry Tao said probably not, but our knowledge of primes is not good enough to prove this yet!
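For the curious, here's a sketch of a Mills-style formula in action: ⌊A^(3^n)⌋ is prime for every n ≥ 1, where A is Mills' constant. The digits below are the standard published value, and the primality test is naive trial division (my own sketch, not from the discussion):

```python
# Mills-style prime formula: with A = Mills' constant (digits below are
# the standard published value), floor(A**(3**n)) is prime for all n >= 1.
# The Riemann Hypothesis enters only in proving this A is the smallest.
from decimal import Decimal, getcontext

getcontext().prec = 40
A = Decimal("1.30637788386308069046861449260")  # Mills' constant, truncated

def is_prime(m: int) -> bool:
    # naive trial division - fine for these small Mills primes
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

mills_primes = [int(A ** (3 ** n)) for n in range(1, 5)]
print(mills_primes)   # [2, 11, 1361, 2521008887]
assert all(is_prime(p) for p in mills_primes)
```

Each Mills prime is roughly the cube of the previous one (2521008887 = 1361³ + 6), which is why the exponent 3 works: known prime-gap bounds guarantee a prime close enough above each cube.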
The number of interesting people on Mathstodon has been growing rapidly since the end of October, thanks to Musk's antics on Twitter. Lately I've been having a lot of fun there, and putting in a lot of work to invigorate the conversations there. Like most things, it's more fun when I put more work into it. And I think it's important to develop a good discussion place for mathematics.
People like projects they can work on. On December 14th I asked: if you divide a square into 4 similar rectangles, what proportions can these rectangles have? A bunch of people joined in and showed there were 11 options. On December 15th I summarized our work on Mastodon. Later I wrote a more detailed report on my blog, since it's hard to find things diffusely spread among many Mastodon posts:
On December 20th, motivated by @Morgan Rogers (he/him) here, I asked: how many elements does the free idempotent rig on 3 generators have? A bunch of people sprang into action and soon three had independently confirmed (using computers) that the answer is 284. I summarized their work on my blog:
On December 23rd I tried something different: studying a new purported proof of the Four Color Theorem, which was only 6 pages long. The first mistake I discussed turned out to be just a typo. On December 24th I dug into the paper's treatment of the generating function for rooted planar graphs, which was merely inefficient, not wrong. But various other people, especially @Noam Zeilberger and @Simon Pepin Lehalleur, discovered serious mistakes, and looking into it I found more.
By the end of that day, I was prepared to say the proof could not be salvaged. I tweeted a link to our findings, since I still have many more followers on Twitter than Mathstodon, and this is the sort of news that should get out.
A short proof of the Four Color Theorem! :tada: Unfortunately it's wrong and cannot be salvaged.:cry: You can see why in the comments starting here: https://mathstodon.xyz/@noamzoam/109567981846531700 I describe one big mistake, while @noamzoam and @plain_simon describe two even bigger ones. https://twitter.com/johncarlosbaez/status/1606719407257899008/photo/1
- John Carlos Baez (@johncarlosbaez)
On December 26th I wrote on Mathstodon about a connection between
and
Then I polished this up and blogged about it:
It made sense to write about this on Boxing Day. :upside_down:
Yesterday I was interviewed by Siobhan Roberts of the New York Times about that puzzle I posed on Mathstodon, on dividing a square into similar rectangles. I tried to keep emphasizing that Mathstodon is a good place for collaboratively messing around with math.
With luck her editors will approve an article on this - it's not certain.
Now on Mathstodon we are tackling this mystery:
but
I hope you see those two numbers differ only in the last two digits!
My friend Robin Houston - formerly a category theorist - noticed a discussion of this mystery and reported it on Mathstodon.
I then found a much better discussion of it here.
Then someone on Mathstodon named Sean O made a vast amount of progress in reducing it to the study of the "cosine Borwein integral"
which is known to equal 𝜋/2 for small 𝑁 and then get smaller as 𝑁 increases.
Sean O seems to have shown that
Robin Houston has used Mathematica to check that
is really just a tiny bit smaller than 𝜋/2, agreeing with 𝜋/2 in its first 41 digits.
So, the mystery is largely solved: we just need a more conceptual understanding of why this integral, which equals 𝜋/2 for small 𝑁 and then starts dropping, drops only a tiny amount, roughly on the order of 10⁻⁴². This seems like a doable task, given how much is known about this sort of integral.
There's a lovely 3blue1brown video explaining the Borwein integrals' peculiar behaviour
Yes, that's great. It came after Greg Egan's animated gif explaining the same thing, but I'm not sure it was influenced by it.
I referred to that video in my tale today:
... where I'm not talking about those Borwein integrals, but rather the related but deeper fact that
while
We figured out a lot of stuff about this on Mathstodon, and then discovered Borwein already had written about it! Still, it was lots of fun cracking this puzzle.
Another fun piece of "anti-category theory" - math that's fun in ways that feel like the opposite of category theory (though this is probably misleading).
Suppose you sum the random series
± 1 ± ½ ± ⅓ ± ¼ ± ⅕ ± ⅙ ± ...
where you flip independent fair coins to choose each sign. The series converges with probability 1.
Graph the probability density of the results.
If you calculate the value of this function numerically, at 2 you seem to get the number 1/8. That should be a clue as to what's going on - right?
Sorry, it's closer to
0.124999999999999999999999999999999999999999764...
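If you want to see the 1/8 emerge for yourself, here's a rough Monte Carlo sketch (my own, with a truncated series and a crude histogram window, so only the first couple of digits are visible):

```python
# Monte Carlo sketch of the probability density at x = 2 of the random
# series ±1 ± 1/2 ± 1/3 ± ... with independent fair-coin signs.
# We truncate at N terms and estimate the density with a window of
# half-width 0.1 around 2; the true value is 0.124999999999... .
import numpy as np

rng = np.random.default_rng(0)
N, trials, half_width = 400, 20000, 0.1
coeffs = 1.0 / np.arange(1, N + 1)

signs = rng.integers(0, 2, size=(trials, N), dtype=np.int8) * 2 - 1
sums = signs @ coeffs   # one truncated random series per row

hits = np.count_nonzero(np.abs(sums - 2.0) < half_width)
density_at_2 = hits / (trials * 2 * half_width)
print(density_at_2)
```

Truncating at 400 terms only smears the density by a standard deviation of about 0.05 (the tail variance is Σ 1/n² over n > 400), so this crude estimate still lands near 0.125.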
But I'm not just studying these crazy near misses. Now that I'm back in California, I'm getting to work on the "ten-fold way"- the fact that in condensed matter physics there are ten basic kinds of matter. It turns out to be a property of semisimple categories enriched over real super-vector spaces!
John Baez said:
But I'm not just studying these crazy near misses. Now that I'm back in California, I'm getting to work on the "ten-fold way"- the fact that in condensed matter physics there are ten basic kinds of matter. It turns out to be a property of semisimple categories enriched over real super-vector spaces!
Looks super exciting!
What are the ten basic kinds of matter?
It's a classification based on symmetries.
Some substances have time reversal symmetry: they would look the same, even on the atomic level, if you made a movie of them and ran it backwards. Some don't - these are rarer, like certain superconductors made of yttrium barium copper oxide! Time reversal symmetry is described by an antiunitary operator 𝑇 that squares either to 1 or to -1: please take my word for this, it's a quantum thing. So, we get 3 choices, which we can call 𝑇² = 1, 𝑇² = -1, or no time reversal symmetry.
Similarly, some substances have charge conjugation symmetry, meaning a symmetry where we switch particles and holes: places where a particle is missing. The 'particles' here can be rather abstract things, like 'phonons' - little vibrations of sound in a substance, which act like particles - or 'spinons' - little vibrations in the lined-up spins of electrons. Basically any way that something can wave can, thanks to quantum mechanics, act like a particle. And sometimes we can switch particles and holes, and a substance will act the same way!
Like time reversal symmetry, charge conjugation symmetry is described by an antiunitary operator 𝐶 that can square to 1 or -1. So again we get 3 choices, which we can call 𝐶² = 1, 𝐶² = -1, or no charge conjugation symmetry.
So far we have 3 × 3 = 9 kinds of matter. What is the tenth kind?
Some kinds of matter have neither time reversal nor charge conjugation symmetry, but they're symmetrical under the combination of time reversal and charge conjugation! You switch particles and holes and run the movie backwards, and things look the same! This symmetry is called 𝑆, and because it's unitary we can always multiply it by a phase to assume without loss of generality that 𝑆² = 1.
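As pure bookkeeping, the count works out like this (the Cartan/Altland–Zirnbauer labels are the standard names for the ten classes; the tabulation itself is my addition):

```python
# Counting the tenfold way: T^2 and C^2 each take a value in {+1, -1, 0},
# where 0 means "no such symmetry".  That gives 3 x 3 = 9 classes; the
# tenth has neither T nor C but does have the combined symmetry S = CT.
# The labels are the standard Cartan / Altland-Zirnbauer names.
AZ_CLASSES = {
    ( 0,  0, 0): "A",    ( 0,  0, 1): "AIII",   # S alone only in AIII
    (+1,  0, 0): "AI",   (+1, +1, 0): "BDI",
    ( 0, +1, 0): "D",    (-1, +1, 0): "DIII",
    (-1,  0, 0): "AII",  (-1, -1, 0): "CII",
    ( 0, -1, 0): "C",    (+1, -1, 0): "CI",
}
# Keys are (T^2, C^2, "S present with no T or C").  When T or C exists,
# S is determined by them, so the third slot matters only for A vs AIII.
assert len(AZ_CLASSES) == 10
print(sorted(AZ_CLASSES.values()))
```

The three 'purely even' classes AI, A, AII correspond to the real numbers, complex numbers and quaternions in Wall's classification.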
This analysis is the easiest to understand (at least for a physicist), but there's a deeper approach using Bott periodicity, which I explain here.
More conversations with James Dolan:
2023-01-05:
The tenfold way:
https://math.ucr.edu/home/baez/tenfold.html
Two dessins d'enfant that are probably in the same orbit of the Grothendieck-Teichmüller group, built starting from the Gaussian integers:
https://en.wikipedia.org/wiki/Gaussian_integer
https://en.wikipedia.org/wiki/Dessin_d'enfant
https://en.wikipedia.org/wiki/Grothendieck-Teichmuller_group
Thomas Willwacher's course notes on the Grothendieck-Teichmüller group:
https://people.math.ethz.ch/~wilthoma/docs/grt.pdf
2023-01-12:
Dolan has been using Mathematica to study two dessins d'enfant with the same 'vertex census' that are probably in the same orbit of the Grothendieck-Teichmüller group, built starting from the Gaussian integers:
https://en.wikipedia.org/wiki/Gaussian_integer
https://en.wikipedia.org/wiki/Dessin_d'enfant
https://en.wikipedia.org/wiki/Grothendieck-Teichmuller_group
I continued explaining the tenfold way:
https://math.ucr.edu/home/baez/tenfold.html
Woo-hoo! We're finally done with a big paper that explains the category-theoretic underpinnings of our web-based software that lets you build epidemiological models:
It's long but it assumes no knowledge of categories yet still explains how to build models by composing 'open' models described as cospans, and how to refine or (using the jargon) 'stratify' models using pullbacks.
The first 30 pages explain how everything works; then there's a 30-page appendix that contains Julia code.
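The actual implementation is in Julia, but the set-level idea behind stratification-by-pullback fits in a few lines (a toy sketch with hypothetical state names, not the real API):

```python
# Set-level pullback: given typings f: A -> T and g: B -> T over a common
# "type system" T, the pullback A x_T B = {(a, b) : f(a) == g(b)} pairs
# each state of one model with the compatible states of the other.
# Stratifying a disease model by age works this way: each compatible
# (disease state, age group) pair becomes a state of the stratified model.

def pullback(f: dict, g: dict) -> set:
    return {(a, b) for a in f for b in g if f[a] == g[b]}

# Hypothetical toy example: both models are typed over a one-point type,
# so the pullback is the full product of their states.
sir_typing = {"S": "person", "I": "person", "R": "person"}
age_typing = {"young": "person", "old": "person"}

stratified = pullback(sir_typing, age_typing)
print(sorted(stratified))   # 6 states: ("I", "old"), ("I", "young"), ...
```

With a richer type system the pullback keeps only the compatible pairs, which is how stratification avoids producing meaningless combined states.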
I'm curious if the economists use similar kinds of models, does anyone know? It makes sense to me to treat the flow of money just like the flow of a disease, heh. (This is all reminding me of Hari Seldon's Psychohistory ...)
These modeling techniques, generally called System Dynamics, were developed by John Sterman in his book Business Dynamics. So if you call that subject economics, then the answer to your question is yes.
(I'm not being snarky, I'm just not sure if this is called economics!)
Actually I was wrong: they were developed much earlier, in the 1950s, by Forrester in his book Industrial Dynamics.
I tend to mix up these books! But Sterman's Business Dynamics came much later, after 2000.
This week I've been working away on the tenfold way. I like doing my research as blog articles. I've described two different ways to relate Bott periodicity to symmetric spaces. The relation between them is somewhat mysterious, but when it comes together it'll be a nice piece of category theory - or at least connected to category theory.
Next Monday I'm giving a talk about the tenfold way at the seminar Algebra, Particles, and Quantum Theory. It's at 10 am Pacific Time, or 18:00 UTC. To attend, you need to register here.
You can see my slides already here.
Abstract. The importance of the tenfold way in physics was only recognized in this century. Simply put, it implies that there are ten fundamentally different kinds of matter. But it goes back to 1964, when the topologist C. T. C. Wall classified the associative real super division algebras and found ten of them. The three 'purely even' examples were already familiar: the real numbers, complex numbers and quaternions. The rest become important when we classify representations of groups or supergroups on ℤ₂-graded Hilbert spaces. We explain this classification, its connection to Clifford algebras, and some of its implications.
And another conversation with Jim:
2023-01-19:
Bott periodicity:
8 categories of representations of Clifford algebras, related by adjoint functors:
https://golem.ph.utexas.edu/category/2023/01/the_tenfold_way_part_6_1.html
A construction of symmetric spaces from Clifford algebras:
https://golem.ph.utexas.edu/category/2023/01/the_tenfold_way_part_6.html
A second construction of symmetric spaces from Clifford algebras, giving infinite loop spaces, as explained in Milnor's book Morse Theory:
https://golem.ph.utexas.edu/category/2023/01/the_tenfold_way_part_8.html
Using software to study two dessins d'enfant that are probably in the same orbit of the Grothendieck-Teichmüller group, built starting from the Gaussian integers:
https://en.wikipedia.org/wiki/Gaussian_integer
https://en.wikipedia.org/wiki/Dessin_d'enfant
https://en.wikipedia.org/wiki/Grothendieck-Teichmuller_group
Right now I'm trying to get people to apply to run workshops, schools, research groups etc. in Edinburgh, as part of the Mathematics for Humanity project at the ICMS. If anybody knows people who should organize such things, let me know!
In particular, I'm trying to get Nate Osgood, Evan Patterson and maybe William Waites (an epidemiologist / computer scientist who works in Strathclyde) to organize a research group on category theory and epidemiological modeling.
Nate Osgood is quite excited about applying category theory to agent-based models, and suggested that we do a "Manhattan project" on those, though he immediately realized that was a bad comparison.
Nate's choice of phrasing is pretty unfortunate there... :grimacing:
'Mathematics for Humanity... whether they like it or not'
Here is a YouTube video of the talk on the tenfold way that I gave in Nichol Furey's seminar Algebra, Particles, and Quantum Theory:
Abstract. The importance of the tenfold way in physics was only recognized in this century. Simply put, it implies that there are ten fundamentally different kinds of matter. But it goes back to 1964, when the topologist C. T. C. Wall classified the associative real super division algebras and found ten of them. The three 'purely even' examples were already familiar: the real numbers, complex numbers and quaternions. The rest become important when we classify representations of groups on super Hilbert spaces. We explain this classification, its connection to Clifford algebras, and some of its implications for quantum physics.
Here's an article about a math puzzle a bunch of us solved on Mathstodon:
It does a pretty good job of capturing how a lot of people worked on this and made good progress on a puzzle I came up with.
The people here mentioned in this article include @Jules Hedges and Stefano Gogioso. Oh, hmm, Stefano isn't here. He should be!
Neat
Amazing! I enjoyed thinking about this when the discussion was happening. It was surprisingly difficult to formalize arguments constraining the number of possibilities; I wonder how conclusive the arguments for the possibilities ended up being.
Stefano tried to write up a human-readable proof that there are only 11 possible proportions of rectangle when you divide a square into 4 similar rectangles. I haven't read the code of the people who tackled this sort of problem using programs - and I never will - so I can't vouch for it.
This paper got published today:
And an editor said he'd accept this paper for publication (though he'd be asking for some changes):
So, two victories for my former grad student Joe!
Also, another conversation with Jim:
2023-01-26:
Rough thoughts on a "diamond cutting" construction of Jordan algebras and symmetric spaces from 3-graded Lie algebras:
https://math.ucr.edu/home/baez/week193.html
A category-theoretic examination of the construction of symmetric spaces from Clifford algebras in Milnor's book Morse Theory:
https://golem.ph.utexas.edu/category/2023/01/the_tenfold_way_part_8.html
Symmetric spaces as involutory quandles:
https://ncatlab.org/nlab/show/symmetric+space
Using software to study two dessins d'enfant that are in the same orbit of the Grothendieck-Teichmüller group, built starting from the Gaussian integers:
https://en.wikipedia.org/wiki/Gaussian_integer
https://en.wikipedia.org/wiki/Dessin_d'enfant
https://en.wikipedia.org/wiki/Grothendieck-Teichmuller_group
I uploaded this article to the arXiv today:
These notes are made to go along with YouTube videos of three talks on this subject. They also talk a bit about the category whose objects are finite direct sums of Young diagrams. This is the free 2-rig on one generator!
In other news, I'm beginning to grapple with the sign errors that David Roberts caught (and, thank god, fixed) in From loop groups to 2-groups, an old paper where Urs Schreiber, Danny Stevenson, Alissa Crans and I cooked up a model of string 2-groups from central extensions of loop groups. So far the grappling is mostly at the emotional level.
I made tons of edits in these and put corrected versions on the arXiv:
Someone named Fridrich Valach caught tons of mistakes. I'm now working on Weeks 101-150 in my spare time... this is 342 pages long. :sweating: After that they get even longer.
Another conversation with James Dolan:
2023-02-09
This was a discussion of my talk on the tenfold way. You can see a video of this talk here:
You can see the slides here:
https://math.ucr.edu/home/baez/tenfold/tenfold_web.pdf
You can see a longer discussion of the tenfold way here:
https://math.ucr.edu/home/baez/tenfold.html
I'm going to give a public lecture in Santa Fe. I like the unusual requirements:
On behalf of the Santa Fe Institute and our president, David Krakauer, and chair of the Community Lecture Committee, Jessica Flack, I am pleased that you have agreed to deliver a talk in Santa Fe as part of our community lecture series.
As you know, you will present a lecture titled “Visions for the Future of Physics” on Tuesday, October 18th at 7:30pm at the Lensic Theater. Our usual format for these evening lectures is as follows:
• An introduction and context setting by an SFI scientist.
• Five-minute personal narrative that outlines your story and how you got into your field (to give the audience – especially our younger viewers – some personal context on who you are and how you got to where you are now).
• Forty-minute talk on a topic of your choice (remember this is for a lay audience so try to avoid too many technical terms or long mathematical proofs).
• Followed by a 10-15 minute conclusion where the speaker presents a reckless idea (one that is either a bit out-there, forward looking, or one that is not fully developed). The reckless idea format can vary from speaker to speaker but should leave the audience excited for the future of your creative inquiry.
SFI would very much like our visitors to interact with the residential community by means of scientific collaborations as well as more informally at lunch and tea, and thus we expect visitors to spend the majority of their time on site. Please provide us as soon as possible with your provisional dates. If there are faculty members with whom you would like to meet during your visit, just let me know. We will send you an agenda in the days before your lecture.
We may request a few brief interviews with local news media. This extends the community outreach benefit of the lecture and helps ensure a full auditorium. These interviews typically include a 20 to 30 minute phone interview with local radio or newspaper reporters.
In addition, we will request slides and background materials a week or two in advance of the lecture. The lecture may be broadcast live on our YouTube page and our communications team may be live on Twitter during the event.
Except it won't be October 18th - it's supposed to be Tuesday May 23rd. I'd better remind them!
Are you going to speak about the future of our planet as you did one time? I think it was a brillant idea!
Thanks! This talk will be somewhat similar to a talk Visions for the Future of Physics that I gave in 2021 (you can see slides and a video of that if you want). I talked about how so-called "fundamental physics" is stuck, but other areas are not - and how climate change will impact research in physics. But that was for an audience of physicists, while this is not, so this will be different!
Also, this part will be interesting:
Followed by a 10-15 minute conclusion where the speaker presents a reckless idea (one that is either a bit out-there, forward looking, or one that is not fully developed). The reckless idea format can vary from speaker to speaker but should leave the audience excited for the future of your creative inquiry.
I look forward to see this!
Thanks! Since the "reckless idea" should be "exciting" rather than gloomy, I'll talk about some very optimistic vision of how physics can help us build a world in harmony with the laws of ecology. This is actually in keeping with the theme of the Santa Fe Institute, which is about complex systems.
John Baez said:
I made tons of edits in these and put corrected versions on the arXiv:
it is fantastic to have these bundled together. the fact that i put it in the same folder with feynman lectures probably tells more than any set of courteous comments that i could make. i know people are busy, but maybe in the long run it could be organized to build an index.
for me at least, the main reason to read this is writer's voice. (the main reason to not read a commissioned handbook or britannica is that they iron out the voices of the people who wrote it.) --- this is not a matter of beauty or comfort, but of the "flow" of a theme, like in music... understanding does not come from piling up correct statements, but from weaving the ideas as they associate with one another. (that is why we make sentences. where a human grandmother says "world is full of dangers", a meerkat grandmother says "hawk, dog, snake, poisonous weed".)
so to be honest, i would prefer if you don't insert corrections but maybe add them somewhere. most errors are not random but carry useful information. error-free text and "authoritative sources" are the building blocks of religion, not science...
when i teach, i actually not only leave in the mistakes that i once made, but often add the new ones deemed interesting. at the beginning i announce that students get points for submitting correct answers or for finding mistakes in the questions and in the lecture notes. (i started doing this with the graduate classes in security protocols: you can ace the exam by proving that a protocol is secure or by finding an attack. including the exam protocol...)
everyone finds mistakes in the wikipedia pages and for things that i know about, they are the most valuable part: what are the rough edges of this or that idea?...
Thanks, Dusko! So far I'm only correcting typos, making the format of references more consistent, adding references to published papers and fixing broken links.
I've been correcting mistakes in the content of This Week's Finds ever since they were written, often thanks to emails pointing out mistakes, so I'm not looking for any more of those now (though they must exist).
If anyone spots any, they should let me know!
I'm going to delete my Twitter account, but David Madore (aka Gro-Tsen) has kindly helped me upload all 39,357 of my old tweets to the Internet Archive. They are now accessible here.
In practice it's easier to read most of my better tweets on my diary, but this is nice for people who like history to be preserved - as David does!
I was told it's a better idea to lock and make one's account private, as it means the handle can't be picked up by someone else later. But I won't presume to second guess your plan
What does 'locking and making private' mean?
I guess I'll try to find out.
I don't want Musk to have control over my tweets. If I could delete all 39,357 tweets, then lock the account that might be good.
When you have a private account, only the people that follow you can see what you like, tweet, or follow on Twitter.
That's not quite what I want. I see that you can get your last 32,000 tweets deleted for free pretty easily - or at least you once could; Twitter seems to be changing its API so I'm not sure.
I gave a talk on the tenfold way at the Rocky Mountain Mathematical Physics Seminar, and you can watch it here:
For the last 2 weeks I've been polishing up issues 101-150 of This Week's Finds, and I just put them on the arXiv! You can get an even nicer version here.
They are 348 pages long, so it feels great to be done.
One of the main themes of these issues was explaining homotopy theory. They contain short explanations of these topics:
• A. Presheaf categories.
• B. The category of simplices, Δ.
• C. Simplicial sets.
• D. Simplicial objects.
• E. Geometric realization.
• F. Singular simplicial set.
• G. Chain complexes.
• H. The chain complex of a simplicial abelian group.
• I. Singular homology.
• J. The nerve of a category.
• K. The classifying space of a category.
• L. Δ as the free monoidal category on a monoid object.
• M. Simplicial objects from adjunctions.
• N. The loop space of a topological space.
• O. The group completion of a topological monoid.
and you can reach them all from the introduction.
One annoying thing about learning math is that I now move in circles where it feels like all this stuff is considered obvious.
When I was first learning it, and writing it up in This Week's Finds, I didn't feel everyone knew this stuff. Now I feel everyone does. :upside_down:
So I have to force myself to remember that even among the mathematicians I know, not all of them know all this stuff well... so it's worth explaining clearly, even for them. And then there's the larger world out there, which still exists.
A surprise: I thought Ian Henderson had shown 1341 numbers appear as allowed proportions of rectangles when you subdivide a square into 7 similar rectangles. But then I got an email from someone, David Gerbet, claiming he got 1342!
And it turns out he's right:
• Dividing a square into 7 similar rectangles.
Ian Henderson found and fixed the bug in his program.
Oh, awesome!
Btw, I just decided it's time to make fixing those sign mistakes my top priority.
Just to help myself see the light at the end of the tunnel of sign errors, let me list more things I want to do:
Bigger projects:
Conversations with James Dolan:
2023-02-16:
Ideal class groups of algebraic number fields:
https://en.wikipedia.org/wiki/Algebraic_number_field
https://en.wikipedia.org/wiki/Ideal_class_group
The case of the 20th cyclotomic field illustrates the Kronecker-Weber theorem, which says that every extension of ℚ with an abelian Galois group is contained in some cyclotomic field:
https://en.wikipedia.org/wiki/Cyclotomic_field
https://en.wikipedia.org/wiki/Kronecker%E2%80%93Weber_theorem
The Galois group of the 20th cyclotomic field is GL(1, ℤ/20), generated by the Frobenius automorphism:
https://en.wikipedia.org/wiki/Frobenius_endomorphism
This group GL(1, ℤ/20) is isomorphic to ℤ/2 × ℤ/4, with the two factors corresponding to the primes 2 and 5.
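As a quick sanity check (my aside, not part of the conversation notes), one can verify numerically that the unit group (ℤ/20)ˣ = GL(1, ℤ/20) has exactly the element orders of ℤ/2 × ℤ/4. A minimal sketch in Python:

```python
from math import gcd

# Units mod 20 = GL(1, Z/20), the Galois group of the 20th cyclotomic field.
units = [a for a in range(20) if gcd(a, 20) == 1]

def order(a, n=20):
    """Multiplicative order of a modulo n."""
    k, x = 1, a % n
    while x != 1:
        x, k = (x * a) % n, k + 1
    return k

orders = sorted(order(a) for a in units)
# Z/2 x Z/4 has exactly this multiset of element orders, while Z/8 has
# elements of order 8 and (Z/2)^3 has no element of order 4.
print(orders)
```

Since the group is abelian of order 8, the multiset of element orders pins down the isomorphism type.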
The seven subfields of the 20th cyclotomic field correspond to the seven subgroups of its Galois group, and we can draw the lattice of these subfields, and indicate which steps up this lattice involve ramification at 2, 5, or the real prime:
https://en.wikipedia.org/wiki/Ramification_(mathematics)
The Hilbert class field of an algebraic number field F is its maximal unramified abelian extension:
https://en.wikipedia.org/wiki/Hilbert_class_field
and its Galois group over F is the ideal class group of F:
https://en.wikipedia.org/wiki/Ideal_class_group
A false but interesting conjecture: trying to express an unramified abelian extension E of a number field as being directly "built out of" the ideals in the corresponding subgroup S of the ideal class group, where E and S correspond to each other via the "covariant Galois correspondence":
https://math.ucr.edu/home/baez/conversations/class_fields.pdf
Some applications of the affine algebraic group scheme of nth roots of unity.
2023-02-23:
A short description of Grothendieck-Teichmüller theory developed from Willwacher's notes:
Thomas Willwacher, The Grothendieck-Teichmüller group, https://people.math.ethz.ch/~wilthoma/docs/grt.pdf
How the composite of normal covers gives a short exact sequence.
How this yields a map from the absolute Galois group of the rationals to the group of automorphisms of the profinite completion of the free group on 2 generators.
https://en.wikipedia.org/wiki/Absolute_Galois_group
https://ncatlab.org/nlab/show/profinite+completion+of+a+group
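A compact way to display the structure just described (my summary of the standard setup, with X = the projective line minus {0, 1, ∞}; see Willwacher's notes for details):

```latex
% Etale homotopy exact sequence for X = \mathbb{P}^1_{\mathbb{Q}} \smallsetminus \{0,1,\infty\}:
1 \longrightarrow \pi_1^{\text{\'et}}\bigl(X_{\overline{\mathbb{Q}}}\bigr)
  \longrightarrow \pi_1^{\text{\'et}}(X_{\mathbb{Q}})
  \longrightarrow \operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})
  \longrightarrow 1,
\qquad
\pi_1^{\text{\'et}}\bigl(X_{\overline{\mathbb{Q}}}\bigr) \cong \widehat{F}_2 .
% Conjugation then induces the outer action
% \operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q}) \longrightarrow \operatorname{Out}(\widehat{F}_2).
```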
2023-03-02:
Basics of Grothendieck-Teichmüller theory:
Thomas Willwacher, The Grothendieck-Teichmüller group, https://people.math.ethz.ch/~wilthoma/docs/grt.pdf
In a composite of two normal coverings, the group of deck transformations of the bottom stage acts on the group of deck transformations of the top stage. In Grothendieck-Teichmüller theory, these groups of deck transformations are Galois groups, and the bottom stage is 'arithmetic' while the top stage is 'geometric'. An example involving the two Gaussian designs James Dolan has been intensively studying.
Digression into the relation between degree and genus of an algebraic curve:
https://en.wikipedia.org/wiki/Algebraic_curve#Curves_of_genus_greater_than_one
and the gonality of an algebraic curve:
https://en.wikipedia.org/wiki/Gonality_of_an_algebraic_curve
2023-03-09:
Studying extensions of fields geometrically.
A principal G-bundle over a space X gives a functor F from the category Rep(G) of representations of G to the category Vect(X) of vector bundles over X. This functor F can be seen as a model of the theory whose syntactic category is Rep(G).
We can generalize this idea while making it more precise by thinking of groups as giving commutative Hopf algebras and principal bundles as torsors. We can then define a new general concept of 'torsor' of a commutative Hopf algebra, as follows.
Choose a field k and consider the doctrine of symmetric monoidal cocomplete k-linear categories. For any commutative Hopf algebra H, the category of comodules of H is a theory in this doctrine. We then define a 'torsor' of H to be a model of this theory.
We can further generalize this by working with commutative Hopf algebroids over k, which give a certain notion of 'stack' simultaneously generalizing the concept of 'space' associated to a commutative k-algebra (namely an affine scheme over k) and the notion of 'group' associated to a commutative Hopf algebra over k (namely an affine group scheme over k).
Returning to the Hopf algebra case, let's do an example. Given a finite group G we get a commutative Hopf algebra H = k^G so we can talk about a torsor of H, defined as above. We call this a 'G-torsor over k'.
Unraveling the definitions, a G-torsor over k is a symmetric monoidal colimit-preserving k-linear functor F: Rep(G) → C where C is some symmetric monoidal cocomplete k-linear category. We can even describe this more explicitly.
Bringing it down to earth, take C = Vect and G = Z/2. We can understand quadratic extensions of the rationals as Z/2-torsors over k, and check that such things are indeed classified by square-free integers.
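Here's my attempt (not part of the conversation summary) to sketch why ℤ/2-torsors over ℚ in this sense match quadratic extensions, assuming the standard description of Rep(ℤ/2) over ℚ:

```latex
% Rep_{\mathbb{Q}}(\mathbb{Z}/2) is generated under colimits by the trivial
% representation 1 and the sign representation \sigma, with \sigma \otimes \sigma \cong 1.
% A symmetric monoidal cocontinuous \mathbb{Q}-linear functor
F \colon \operatorname{Rep}(\mathbb{Z}/2) \longrightarrow \mathrm{Vect}
% is determined by the line L = F(\sigma) together with an isomorphism
L \otimes L \;\xrightarrow{\;\sim\;}\; \mathbb{Q},
% i.e. by a scalar d \in \mathbb{Q}^{\times} up to squares.  The commutative
% algebra \mathbb{Q} \oplus L is then \mathbb{Q}[\sqrt{d}], and squarefree
% integers are exactly the representatives of \mathbb{Q}^{\times}/(\mathbb{Q}^{\times})^{2}.
```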
This was the most category-theoretic conversation I've had with Jim in a long time. But I had to write down this summary to really understand it.
I finished writing my next column for the AMS Notices. This is based on a half-written paper from 2007:
But it's shorter than the originally planned paper!
The basic idea is to look at all the roots of all polynomials whose coefficients are 1 or −1. You get an amazing picture, and if you zoom in you see some interesting fractals like this:
We have a guess for how to understand these fractals, but we haven't proved it, so we just explain it using some pictures.... and hope some reader will prove some theorems based on our guess!
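For readers who want to play with this themselves, here's a rough stdlib-only sketch of mine (not from the column) that approximates the roots of all monic degree-6 polynomials whose remaining coefficients are ±1, using the Durand-Kerner iteration; all such roots are known to lie in the annulus 1/2 < |z| < 2:

```python
from itertools import product

def poly_eval(coeffs, x):
    """Evaluate a polynomial given as [a_d, ..., a_0] at x (Horner's rule)."""
    v = 0j
    for c in coeffs:
        v = v * x + c
    return v

def prod_others(zs, i):
    """Product of (zs[i] - zs[j]) over all j != i."""
    v = 1 + 0j
    for j, w in enumerate(zs):
        if j != i:
            v *= zs[i] - w
    return v

def durand_kerner(coeffs, iters=500):
    """Approximate all complex roots of a monic polynomial [1, ..., a_0]."""
    d = len(coeffs) - 1
    zs = [(0.4 + 0.9j) ** k for k in range(d)]  # standard spiral of starting points
    for _ in range(iters):
        zs = [z - poly_eval(coeffs, z) / prod_others(zs, i)
              for i, z in enumerate(zs)]
    return zs

# All monic degree-6 polynomials with the other six coefficients in {+1, -1}.
results = []
for signs in product((1, -1), repeat=6):
    coeffs = [1, *signs]
    results.append((coeffs, durand_kerner(coeffs)))

roots = [z for _, zs in results for z in zs]
max_resid = max(abs(poly_eval(c, z)) for c, zs in results for z in zs)
print(len(roots), max_resid)
```

Plotting these roots (for higher degrees) produces the fractal pictures discussed here.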
On another note, @David Michael Roberts will be glad that I got the go-ahead from all 3 of my coauthors to correct our paper From loop groups to 2-groups and submit an erratum. But they don't want to get involved in this process.
Hi @John Baez , I was trawling through your website and Azimuth and I was trying to find an answer to my question of, "What are you up to now in tackling the issue of climate change?" I couldn't find what you are working on right now regarding climate change -- at least in the past couple months. I was more motivated by curiosity here as to how you are tying category theory to problems within climate change. Would you mind sharing? If you have time and inclination of course -- Thanks!
To see what's up, read this.
Ah I am blind! :man_facepalming:
Thank you!
No problem! As you'll see, the Azimuth Project has drifted from its original goal, but it's finally starting to be successful in practical ways - and some of these ways may loop back around to helping deal with climate change.
Conversations with James Dolan:
2023-03-16
Studying extensions of fields geometrically. Review of what was done on March 9, 2023.
Definition of group object G and group action in an arbitrary cartesian category. The definition of G-torsor can almost be done at this level of generality, but we need a way to rule out things such as the action of a group on the empty set. Digression on the 'group with no elements':
https://golem.ph.utexas.edu/category/2020/08/the_group_with_no_elements.html
Studying abelian extensions of fields using torsors. The case of quadratic extensions of the rationals was relatively easy because ℤ/2 'splits' over the rational numbers: that is, all irreducible representations of ℤ/2 on rational vector spaces are 1-dimensional. An attempt to extend the program to ℤ/3, which does not split over the rationals.
2023-03-23
Studying abelian extensions of fields using torsors. The case of quadratic extensions of the rationals was relatively easy because ℤ/2 'splits' over the rational numbers: that is, all irreducible representations of ℤ/2 on rational vector spaces are 1-dimensional. To extend the program to ℤ/3, which does not split over the rationals, we conduct some preliminary investigations of subfields of the 63rd cyclotomic field, which correspond to subgroups of GL(1, ℤ/63) = ℤ/6 × ℤ/6. There are 30 such subgroups, and we construct a chart of 12 of them mimicking the method used on February 16th, 2023 for the 20th cyclotomic field.
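An aside of mine, not from the conversation: the count of 30 subgroups can be checked by brute force, since every subgroup of ℤ/6 × ℤ/6 is generated by at most two elements. A sketch in Python:

```python
# Count the subgroups of Z/6 x Z/6 by taking closures of all pairs of elements.
G = [(a, b) for a in range(6) for b in range(6)]

def closure(gens):
    """Subgroup of Z/6 x Z/6 generated by gens (closure under addition
    suffices in a finite group, since inverses are positive multiples)."""
    S = {(0, 0)} | set(gens)
    while True:
        T = {((a + c) % 6, (b + d) % 6) for (a, b) in S for (c, d) in S}
        if T <= S:
            return frozenset(S)
        S |= T

# Every subgroup here needs at most 2 generators, since Z/6 x Z/6 is the
# product of (Z/2)^2 and (Z/3)^2 and each of those does.
subgroups = {closure((g, h)) for g in G for h in G}
print(len(subgroups))
```

This agrees with the count by hand: (ℤ/2)² has 5 subgroups, (ℤ/3)² has 6, and 5 × 6 = 30.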
For more on this whole series of conversations, go here:
https://math.ucr.edu/home/baez/conversations/
2023-03-30:
Galois descent: given a Galois extension F' of a field F, how do we classify algebras over F in terms of algebras over F'? They correspond to 'weak fixed points' of the action of the Galois group G = Gal(F'|F) on the category of algebras over F. These weak fixed points are in turn classified by the first cohomology groups H¹(G, Aut(A)), where A runs over all isomorphism classes of algebras over F'. For more see:
Joshua Ruiter, Galois descent https://users.math.msu.edu/users/ruiterj2/math/Documents/Notes%20and%20talks/Galois%20descent.pdf
Philippe Gille and Tamás Szamuely, Central Simple Algebras and Galois Cohomology, https://www.math.ens.psl.eu/~benoist/refs/Gille-Szamuely.pdf
Qiaochu Yuan, Stating Galois descent, https://qchu.wordpress.com/2015/11/16/stating-galois-descent/
John Baez, Group cohomology and homotopy fixed points, https://golem.ph.utexas.edu/category/2020/04/crossed_homomorphisms_part_2.html
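In one line (my paraphrase of the statement in the references above):

```latex
% Galois descent: for a finite Galois extension F'/F with G = Gal(F'/F),
% and an algebra A over F', the isomorphism classes of algebras B over F
% with B \otimes_F F' \cong A (the "twisted forms" of A) satisfy
\{\text{twisted forms of } A\}/\cong
\;\;\longleftrightarrow\;\;
H^1\bigl(G,\, \operatorname{Aut}_{F'}(A)\bigr).
```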
Studying extensions of fields geometrically. Studying ℤ/3 torsors over the rationals following the program begun on March 9, 2023, and using this to study subfields of the 63rd cyclotomic field, continuing our work on March 23, 2023.
In case someone knows the basics of Galois theory and wants to understand how it's used to classify algebraic gadgets like associative algebras, Lie algebras etc., in this most recent episode :up: I give a quick self-contained treatment of this and how it's connected to the cohomology of groups.
Some good news: Grothendieck's student Hoàng Xuân Sính is turning 90 this year! And the university she founded, Thang Long University, is having its 35th anniversary! They’re putting out a book in her honor, and they’ve asked me to write a few words about her thesis - because I've worked on that subject and also written about her life.
This will be fun, since her work on 2-groups (or "Gr-categories") is now being used in theoretical condensed matter physics and quantum field theory.
I just wrote a very elementary introduction to Grothendieck's idea of "motives":
If anyone finds parts of it unclear, or discovers mistakes, I'd like to hear about it! This paper is due very soon (like, for example, now) for the proceedings of the Grothendieck Conference at Chapman University.
The last section has a bit of category theory, but mainly I'm trying to explain the idea of how you can chop up a variety into "motives", which are abstract things of a linear-algebraic or quantum-mechanical flavor.
very nice paper! wow, can i learn this? riemann would love to know that he is available in dvi.
brave conference name: Grothendieck, a Multifarious Giant
i can totally imagine GrothenGiant thumping along the freeway, pulling a gas station out of the ground with tanks and all, and flinging it onto the campus, multifariously :)))
Yes, it does remind me of Pantagruel or the Jolly Green Giant, now that you mention it.
I had fun looking at the "Motivating Motives" paper for a bit! I learned a little bit about a few things I've never looked at before, and feel somewhat more inclined to learn about related things in the future. In particular, I really enjoyed the example of the "correction term" for the number of solutions of the polynomial equation over finite fields. It gives a sense of something deep and mysterious going on, which seems very much in the spirit of the subject.
I was able to (roughly) follow the paper until the talk of the "pieces" of a torus on page 7. I assume that my lack of background knowledge is kicking in at this point, but just in case you find it helpful, here are some particular things that felt like they contributed to my confusion:
I assume this is likely all very clear to someone with enough background, though!
The paper doesn't define "elliptic curve", but seems to use some (unknown to me) definition. I'm happy to say that a polynomial equation defines an elliptic curve if its complex solutions form a torus with one point removed. But I don't know what an elliptic curve is from this.
I can clarify this. This was supposed to be the definition of elliptic curve! In other words: suppose a polynomial equation has complex solutions that form a torus with one point removed. Then its complex solutions, together with one extra point, are called an elliptic curve. That's what an elliptic curve is.
This is not the 'best' definition of an elliptic curve - I would not use this in a course on elliptic curves - but it has the advantage of being quick and easy to understand.
So, your nervousness about this suggests that I wasn't clear enough here:
Crucially, it gives an example of an "elliptic curve". Instead of defining elliptic curves, let me simply state a key fact about them: if a polynomial equation in two variables defines an elliptic curve, its space of complex solutions is a torus with one point removed.
I guess when I wrote this I was nervous that experts would notice the problems if I explicitly used this as a definition of elliptic curve.
If my audience only included people who didn't know about elliptic curves, I would have said this was the definition of elliptic curve... and it would be good enough.
But it's bad if someone who doesn't know about elliptic curves is nervous about what I wrote. The mathematicians I talk to most are very used to absorbing expository text that's packed with definitions they don't know; they file away the facts they learn and maybe look up the definition later, focusing on getting the 'big picture'. But not everyone wants to read stuff that has words in it that aren't defined.
So I'll tweak this a bit.
The talk of pieces of elliptic curves is then difficult for me from the start, because I'm not exactly sure what's being discussed. Are we discussing subsets of a solution set for a polynomial equation over a specified finite field? Or possibly subsets of the solution set of a polynomial equation over the complex numbers?
Okay. When discussing the 4 pieces I was talking about the version over the complex numbers, because I was showing you a picture of a torus.
The weird thing about motives is that we use ideas from the complex numbers to motivate ideas for finite fields. When we dig deep enough into what Grothendieck did, we learn why this makes sense. But it's not supposed to be obvious: on the contrary, it's profound.
So again, I need to tweak my wording a bit.
Another related point of confusion: "Now, it may seem reasonable that the piece of dimension 2 contributes q points to the elliptic curve." I don't know what the piece of dimension 2 is.
It's the torus with the point and the two circles removed. But yeah, that's a piece of the complex solutions of our polynomial equations. Here what I'm doing is vaguely hoping that something similar happens for the solutions over the finite field 𝔽_q. Since the piece I'm pointing to on the torus is isomorphic to ℂ, I'm hoping that there's an analogous subset of the solutions over 𝔽_q that is isomorphic to 𝔽_q. If there were, it would have q elements.
So, what I failed to make clear is that all this is just vague, loosey-goosey reasoning by analogy.
At least at this stage! Later when you do some calculations it starts to make sense. But notice: it only makes a very limited amount of sense, because some of the "pieces" of the solutions over 𝔽_q need to have a complex number of points! :face_with_spiral_eyes:
So, the marvel of the Weil Conjectures is that they claim that this analogy really does work. These conjectures have been proved. But the marvel of motives is that - if we ever understand them thoroughly - they will provide a deep explanation of the analogy.
Motives are things that can have a "complex number of points".
So your problem was not at all a lack of background. Your problem was that I was not writing clearly enough - e.g., clearly explaining that a lot of the reasoning I'm describing is very rough, leading ultimately to a huge mystery.
Your feedback was very helpful - thanks! You are EXACTLY the sort of person I'm trying to talk to in this article, at least up to section 5, where I bring in some category theory.
Ahhh ok, that's making more sense now. :smile: Thank you for clarifying!
And I'm glad my feedback was helpful!
Great! I don't want you, or people like you, to feel "oh, I might understand this if I knew more".... at least not in sections 1-4.
Part of the point is that the whole subject is profoundly mysterious!
Okay, @David Egolf - I've improved the paper based on your suggestions, and credited you:
John Baez said:
Okay, David Egolf - I've improved the paper based on your suggestions, and credited you:
Awesome!
Another (small) thing was confusing me a bit, by the way. On page 4 the paper talks about finite fields 𝔽_p where p = qⁿ, with q being some prime. Then on page 6, it talks about a prime power q in the statement of Hasse's Theorem. Is one of these possibly stated backwards?
(It's a small typographical thing, I think, but it took me a little while to figure out!)
That looks like a typo, to me.
I can't imagine anyone writing q for a prime then p for a power of it. The use of p and q=p^n is very ingrained now when talking about finite fields
Yeah, just a slip - I need to mind my p's and q's. Thanks, I'll fix it.
As David Roberts said, p always means a prime, and q always means a prime power.
Just a typo: at the beginning of 1.4 Motives: "Starting from can construct a category , called the category of “pure motives”..." would be better with a ", one" or ", we" before the "can" I guess!
Thanks! I meant to say "we" - a sneaky trick for getting the reader to feel involved. :upside_down:
Another typo too: Screenshot-2023-04-16-at-5.11.03-PM.png
(Probably, you were a bit sleepy or very excited when starting to finally talk about motives :sweat_smile: )
Thanks! That typo is my wife's fault: she interrupted me in the middle of that sentence.
Later I decided to put that "vector space" stuff a bit further down, where I explain how the "number of points" in a motive is really the (super)trace of a power of the Frobenius.
I find it interesting that the category of pure motives possesses all the structure necessary to define Schur functors! And also, I was wondering if the trace in the formula has something to do with traced monoidal categories.
But I have a simpler question: what is this Fⁿ? It is not defined before it appears in the formula (you talk only about F).
A superscript n is a common symbol for raising an endomorphism to the nth power, i.e. composing it with itself n times. I didn't think this needed to be explained, but my philosophy is that "the reader is always right".
I don't know if some other notation would be any better....
Hmm okay, I thought it was another kind of index
Maybe you can just write below the formula that F is composed with itself n times; superscripts are used to designate various things, and that's often confusing to me...
Or if you have a means to write the n more to the right than the F, maybe that would be sufficient to make it clear that it is an nth power
Jean-Baptiste Vienney said:
I find it interesting that the category of pure motives possesses all the structure necessary to define Schur functors!
Yes! I discovered yesterday that people have written a lot about Schur functors applied to motives: they turn out to be very important! For example it turns out to be hard (yet interesting) to show that certain motives are "finite dimensional", meaning that some exterior or symmetric power of them vanishes for large enough n.
If you ever get interested in seeing Schur functors applied to motives, try Chapter 5 of this book:
I wish I'd discovered this book much sooner, since it's really good.
And also, I was wondering if the trace in the formula has something to do with traced monoidal categories.
Yes, I believe the category of pure motives is traced. And the Standard Conjectures, together I guess with the Hodge conjecture, imply that it is a Tannakian category, which means in particular it is compact closed - which implies traced.
Jean-Baptiste Vienney said:
Maybe you can just write below the formula that F is composed with itself n times; superscripts are used to designate various things, and that's often confusing to me...
Okay, maybe I'll say this. It's sort of hidden in what I said:
This explains the exponentially growing yet also perhaps oscillating terms in the Riemann Hypothesis for varieties over finite fields (see Theorem 3).
since the trace of the nth power of an operator tends to be the sum of a bunch of terms like αⁿ, which is what we've got in Theorem 3.
But I'm not trying to make people work hard to understand what I'm saying!
Wow, it's fascinating that these simple ideas make sense while I don't know anything about motives. Math seems to be just very logical and organized!
I was walking in the street 30 minutes ago thinking: "I'm sure it must be just a formal and useless construction to apply Schur functors to these complicated things named motives." But I was wrong :sweat_smile:
Yes, it's very nice how this stuff works. This is why Grothendieck liked motives so much: they turn certain portions of algebraic geometry into something very much like linear algebra.
Okay, I think I've solved your problem in a fairly graceful way. Now I say this:
First, we break the motive of X into a direct sum of motives Mᵢ as above. Each of these comes equipped with a special morphism
F: Mᵢ → Mᵢ
called the "Frobenius". The number of points of X over 𝔽_{qⁿ} can then be expressed in terms of the nth power of the Frobenius as follows:
|X(𝔽_{qⁿ})| = Σᵢ tr(Fⁿ: Mᵢ → Mᵢ)
where the trace is defined by carrying ideas from linear algebra over to the category of pure motives. This explains the exponentially growing yet also perhaps oscillating terms in the Riemann Hypothesis for varieties over finite fields (see Theorem 3).
One advantage of explicitly saying "nth power of the Frobenius" is that people who know enough number theory will like that phrase - they'll instantly see there's some connection between this and the field 𝔽_{qⁿ}.
(The Frobenius starts out life as an automorphism of the algebraic closure of 𝔽_q, and the fixed points of the nth power of the Frobenius form the subfield 𝔽_{qⁿ}.)
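Here's a concrete numerical illustration of this point-counting formula (my own toy example, not from the paper, using an arbitrarily chosen curve): for an elliptic curve, the counts over 𝔽_p and 𝔽_{p²} are both controlled by powers of the Frobenius eigenvalues α, β, the roots of T² − aT + p.

```python
# Toy check: point counts of y^2 = x^3 + x + 1 over F_5 and F_25 are
# controlled by powers of the Frobenius eigenvalues (Weil's theorem).
p = 5

# Count points over F_p by brute force, including the point at infinity.
N1 = 1 + sum(1 for x in range(p) for y in range(p)
             if (y * y - (x**3 + x + 1)) % p == 0)

# Trace of Frobenius a satisfies N1 = p + 1 - a.
a = p + 1 - N1

def frob_power_sum(n):
    """alpha^n + beta^n for the roots of T^2 - a T + p, via the
    recurrence s_n = a*s_{n-1} - p*s_{n-2}, with s_0 = 2, s_1 = a."""
    s_prev, s_cur = 2, a
    for _ in range(n - 1):
        s_prev, s_cur = s_cur, a * s_cur - p * s_prev
    return s_cur

# Predicted count over F_{p^2}:
N2_pred = p**2 + 1 - frob_power_sum(2)

# Independent check: build F_25 as F_5[t]/(t^2 - 3), since 3 is a
# non-square mod 5; the pair (u, v) means u + v*t.
def mul(x, y):
    (u, v), (r, t) = x, y
    return ((u * r + 3 * v * t) % p, (u * t + v * r) % p)

def add(x, y):
    return ((x[0] + y[0]) % p, (x[1] + y[1]) % p)

F25 = [(u, v) for u in range(p) for v in range(p)]

N2_count = 1  # point at infinity
for x in F25:
    rhs = add(add(mul(mul(x, x), x), x), (1, 0))  # x^3 + x + 1
    N2_count += sum(1 for y in F25 if mul(y, y) == rhs)

print(N1, a, N2_pred, N2_count)
```

The brute-force count over 𝔽_25 matches the prediction from the trace of the square of the Frobenius, which is the pattern the formula in the paper generalizes.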
John Baez said:
Yes, it's very nice how this stuff works. This is why Grothendieck liked motives so much: they turn certain portions of algebraic geometry into something very much like linear algebra.
There is a somewhat similar phenomenon with the calculus of functors: there is a kind of exponential modality of linear logic, even if it's more of a homotopical thing. David Corfield was talking about that here
Interesting, 'A Functorial Excursion Between Algebraic Geometry and Linear Logic', by @pamellies, https://www.irif.fr/~mellies/papers/Mellies20submitted.pdf: "the guiding idea here is that linear logic should be seen as the logic of generalised vector bundles, in the same way as Martin-Löf type theory ...
So I get a bit angry when people tell me that I must not try to understand various maths with linear logic - David Corfield (@DavidCorfield8)
Thanks to Paul-Andre Mellies I learned that Drinfeld had renamed star-autonomous categories "Grothendieck-Verdier categories", because they are very important in algebraic geometry! So there's a cool connection between linear logic and algebraic geometry, which I would like to explore now that I'm starting to learn a bit of algebraic geometry.
That's a good idea! I'm trying to understand the connections not with the star-autonomous part of linear logic but between the exponential modality and Schur functors, as well as other families of functors, and I've already learned a lot of things while doing that (just the symmetric powers in characteristic 0 are already giving me a lot of work to do!).
I think that every time a part of math is understood in some logical language (examples: toposes, model theory, dependent type theory, homotopy type theory), that's very good progress, and because linear algebra is central in math, linear logic should be very important for math too! But it's difficult to make people understand that you should use different languages when all of mathematics has been based on set theory + classical logic for at least a century. (Even if people seem to forget that they are using these each time they define any set or make any proof by contradiction, because they never had a course explicitly on "set theory" or "classical logic" - they just learned to use sets and classical logic without paying attention.)
John Baez said:
Thanks to Paul-Andre Mellies I learned that Drinfeld had renamed star-autonomous categories "Grothendieck-Verdier categories", because they are very important in algebraic geometry! So there's a cool connection between linear logic and algebraic geometry, which I would like to explore now that I'm starting to learn a bit of algebraic geometry.
What did you use to understand algebraic geometry? Everybody agrees that most of the books are difficult to understand. For instance, the definition of a scheme is huge. So what would be the easiest path to the prerequisites, say for reading "Lectures on the Theory of Pure Motives"?
Maybe I should try "The rising sea: Foundations of Algebraic Geometry" by Ravi Vakil. It seems to be good.
It probably won't help you much, but you might like my article on how I learned to love algebraic geometry.
The reason it might not help you much is that my own road to algebraic geometry was fairly idiosyncratic, based on my interest in geometric quantization, as explained in the article.
Yep, that's what I understood. It was useful. That's why I'm going to make another post to ask some questions on how I would want to approach these things...
Great. There are lots of roads to algebraic geometry! For example I can imagine you being sufficiently interested in Schur functors, Young diagrams and similar things that you start reading Fulton and Harris' book Representation Theory — a First Course to learn about how they show up in representation theory, and getting pulled into algebraic geometry that way (since it talks a lot about algebraic geometry in a nice elementary way).
Hmm yes, so I was trying to make a post but my super long message didn't show up correctly and I deleted everything so I'm gonna reexplain briefly here.
As for books, I don't know Ravi Vakil's book, but I like the title, which is taken from Grothendieck's work. I've heard this is good:
and I think these are good:
Igor R. Shafarevich, Basic Algebraic Geometry, two volumes, third edition, Springer, 2013.
David Eisenbud and Joseph Harris, The Geometry of Schemes, Springer, 2006.
So yes, I like symmetric powers and purely categorical things that I can express using sequent calculus or another kind of syntax without any set theory. I know that a polynomial map on a vector space V is the same thing as a linear map out of the symmetric algebra Sym(V). So I could define a polynomial map on an object V in any symmetric monoidal category with symmetric powers as a morphism out of Sym(V). Now the problem is that the set of zeroes of a polynomial map is not a vector space, so I can't translate the notion of zero set into any symmetric monoidal category and define it with a coequalizer or some other (co)limit or other construction. If I've read correctly, it's an affine algebraic variety, but I don't know how to make this stuff categorical, so I'm not very satisfied... I guess there aren't really existing solutions to my problem...
Maybe by starting with the category of commutative rings instead, we could find something better, I don't know...
It's not very helpful I think.
If I can categorify the idea of set of zeroes of a polynomial map (or polynomial) after that I would be excited by translating facts from algebraic geometry in this categorical theory but I'm far from having this categorical theory :upside_down:
It's probably good to use this "the set of zeros of a polynomial map is an affine algebraic variety" idea (which is correct) as an excuse to learn about the Nullstellensatz, which is often considered the fundamental theorem of algebraic geometry, as well as finitely generated algebras and affine schemes. Even if they aren't exactly the tools you need for your particular problem, they're all important ways of relating polynomial algebras to vector spaces. It's all very categorical if you do it all very categorically, but it's good to just learn about it the way ordinary folks do, too.
If you want a categorical approach to these ideas you can often find it on the nLab.
In other words: you can take the problem you mentioned and use it as an excuse to learn some algebraic geometry, without hoping that what you learn will instantly help you with your problem.
Hmm yes, there is a lot to learn to put some algebra on this set and find axioms and make this categorical, and the Nullstellensatz seems to be the first step, so at least I have a question in my head to be excited now and I understand that it is not simple...
I have another idea too. I learned that the category of affine schemes over a ring R is CommAlg(R)^op, right? So maybe there is something to do with starting with a symmetric monoidal category C, taking a commutative monoid in it, then looking at the opposite of the category of algebras over this monoid, and seeing if you can do things in this generality. Then it would be purely categorical algebraic geometry :).
So, yep I start to be excited by understanding things and I think I'm going to read a bit on the classical stuff this summer.
Jean-Baptiste Vienney said:
I have another idea too. I learned that the category of affine schemes over a ring R is CommAlg(R)^op, right?
Right!
So maybe there is something to do with starting with a symmetric monoidal category C, taking a commutative monoid in it, then looking at the opposite of the category of algebras over this monoid, and seeing if you can do things in this generality. Then it would be purely categorical algebraic geometry :).
This is the approach to algebraic geometry outlined by Toen and Vaquie here, unfortunately for me in French:
Let us fix a symmetric monoidal category (C, ⊗, 1), which we assume to be complete, cocomplete and closed (i.e. it possesses internal Homs relative to the monoidal structure ⊗).
In particular there is a notion of commutative (associative and unital) monoid in C, and these form a category which we denote Comm(C). One formally defines the category of affine schemes relative to C as Aff_C := Comm(C)^op
Oh excellent! Maybe I read this one day and that's why I'm thinking of this.
So, the Canadian guys I'm working with, like my supervisor Rick Blute and @JS PL (he/him), know that is a tangent category for a commutative ring . Maybe we can generalize this fact to the setting of Toen and Vaquie!
It's going to take 50 pages of commutative diagrams to verify it, but that seems like something worth trying.
Oh, now I'm not sure that is a tangent category; I'm not sure if it works with algebras, but maybe. So thanks, that probably gives me a new question to work on.
Hopefully Toen and Vaquie say something about zeroes...
But anyway, this discussion probably gave me a new question to work on, so it was very useful.
And I think someone translated this paper into English.
Hmm it's maybe only some sections of the paper but there is that: Under Spec Z
Do you have an idea if there is a characterization of the affine schemes which are the set of zeroes of a polynomial?
Jean-Baptiste Vienney said:
But anyway, this discussion probably gave me a new question to work on, so it was very useful.
Well, no, we already know that the category of commutative monoids in any symmetric monoidal category gives a tangent category, but anyway it's always very interesting to talk with you.
Yes, if you are taking the coefficients to lie in an algebraically closed field for example:
quasiprojective integral separated finite type scheme over an algebraically closed field (i.e. it has a map to for alg. closed)
That might be quasi-projective instead of affine, but it's close.
And that's more the common zero locus of a finite collection of polynomials, not just a single one.
Hmm, maybe you are asking for a condition on to know it's basically the variety given by the zero set of a single polynomial?
This (edit: that motives are set-like things that help count corrections to numbers of points) is an interesting observation. I know nothing about motives and never progressed past the point of briefly absorbing the definition of a scheme before promptly forgetting it, but I know a fair amount of practical linear algebra. In that vein, one tool that I've found profoundly capable in many applied settings is magnitude (or rather weightings, whose sum is magnitude). Here too, for a finite metric space, one has a notion of "effective number of points", and in practical computations that number can be, and often is, negative: basically, the effective size of a point (its weighting component) is big and positive for an "outlier", slightly smaller and negative "just behind" an outlier, and roughly unity everywhere else (in fact there are oscillatory corrections, IIRC). I am not an analyst, but I am given to understand via work of Mark Meckes that a weighting is secretly a Bessel potential, which forms a lovely and surprising connection between potential theory/PDE and enriched category theory. Anyway, as you know, magnitude has a categorification that works in very general settings. Your description of motives makes me wonder if perhaps there is some actual or "potential" connection between magnitude and motives.
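Since weightings came up, here is a minimal numerical sketch under the standard definition (my own illustration, with arbitrarily chosen points): for a finite metric space {x₁,…,xₙ} one forms the similarity matrix Z with Z[i,j] = exp(−d(xᵢ,xⱼ)); a weighting is a vector w solving Zw = (1,…,1), and the magnitude is Σᵢ wᵢ.

```python
import numpy as np

def weighting(points):
    """Weighting of a finite metric space given as a list of reals,
    with metric d(x, y) = |x - y|.  Z[i, j] = exp(-d(x_i, x_j)); the
    weighting w solves Z w = (1, ..., 1)."""
    x = np.asarray(points, dtype=float)
    Z = np.exp(-np.abs(x[:, None] - x[None, :]))
    return np.linalg.solve(Z, np.ones(len(x)))

pts = [0.0, 1.0, 4.0]
w = weighting(pts)
magnitude = w.sum()   # the "effective number of points"
# For subsets of the real line, magnitude lies strictly between 1 and n;
# negative weighting components appear only in less tame spaces.
```

Rescaling the space by t and recomputing gives the magnitude function t ↦ |tX| whose behavior the passage above alludes to.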
David Michael Roberts said:
Yes, if you are taking the coefficients to lie in an algebraically closed field for example:
quasiprojective integral separated finite type scheme over an algebraically closed field (i.e. it has a map to for alg. closed)
That might instead be quasi-projective, instead of affine, but it's close
Thanks, that's definitely what I was looking for.
Jean-Baptiste Vienney said:
Hmm it's maybe only some sections of the paper but there is that: Under Spec Z
it's only the basic sections of the paper, but i also tried to write some good accompanying words, and give some background definitions and motivation — i think if you just want to understand how they do this idea of "take a commutative monoid and look at the opposite category of algebras" that's all in there :-)
Luckily, I understand French better than English, but your accompanying text looks very useful!
Okay, here's a list of things I've done or should be doing:
1) I just submitted my paper Motivating motives to the arXiv and to the Grothendieck Conference proceedings, so now it's time for something new.
2) The next thing I need to do by a deadline is prepare this talk for Cohl Furey's physics seminar by May 15th:
Symmetric Spaces and the Tenfold Way
Abstract. The tenfold way has many manifestations. It began as a tenfold classification of states of matter based on their behavior under time reversal and charge conjugation. Mathematically, it relies on the fact that there are ten super division algebras and ten kinds of Clifford algebras, where two Clifford algebras are of the same kind if they have equivalent categories of representations. But Cartan also showed that there are ten infinite families of compact symmetric spaces! After explaining symmetric spaces, we describe two ways to get compact symmetric spaces from Clifford algebras, which give different correspondences between these two manifestations of the tenfold way.
So, I have to get back to thinking about the tenfold way.
3) Jessica Flack has invited me to give a public lecture for the Santa Fe Institute. So, I'll go to Santa Fe from Friday May 19 to Wednesday May 24, and by then I need to prepare this talk:
Visions for the Future of Physics
Abstract. The 20th century was, arguably, the century of physics. While there was immense progress on so-called “fundamental physics” — the basic laws governing matter, space, and time — fundamental physics has slowed to a crawl since 1980, despite an immense amount of work. But as John Baez will explain in this SFI Community Lecture, there is exciting progress in other branches of physics: for example, using the fundamental physics we already know to design surprising new forms of matter. Like all other sciences in the 21st century, physics must also embrace the challenges of the Anthropocene: the era in which humanity is a dominant influence on the Earth’s climate and biosphere.
4) By June 1st I need to submit a proposal for a 6-week work session in Edinburgh on using category theory to design software for agent-based models in epidemiology. I'm doing this with Nathaniel Osgood, who knows more about agent-based models, so I'm hoping he'll do most of the work on the proposal... but I think I need to get it started.
5) However, I have enough time now to finish checking all the corrections David Roberts made to my paper From loop groups to 2-groups, and submit a new version to the arXiv, and an erratum to Homology, Homotopy and Applications. So I'll start doing this today.
I'm making a lot of progress on my talk "Symmetric spaces and the tenfold way", which requires some new last-minute research.
In the process I wrote short explainers of some stuff I already understand - you can read these on Mathstodon:
Then someone wanted to understand more of the physics, so I wrote this:
I've decided to submit a talk abstract to ACT2023 and take my chances. I can only give a virtual talk, and just one week before the deadline, the organizers still haven't announced a policy on virtual talks!
I've decided to submit a talk abstract to ACT2022 and take my chances.
You might be a little late for that :smile:
But the advantage is that at least ACT2022 has decided on its policies.
For ACT2023, I'll submit a talk abstract on my paper A categorical framework for modeling with stock and flow diagrams with Xiaoyan Li, Sophie Libkind, Nathaniel D. Osgood and Eric Redekopp. I want to talk about:
Also, by the end of May, Nathaniel Osgood and I need to submit a proposal to Mathematics for Humanity for a small 6-week meeting to design software to build agent-based models using category theory.
Proposals for ACT2023 are due tomorrow! Just for fun here is a draft of the one I'll be submitting, a 2-page talk abstract:
It's not nearly as well-organized and beautiful as the abstract that @Sophie Libkind and her team (@Benjamin Merlin Bumpus (he/him), @Jordy Lopez Garcia, @Layla Sorkatti and @Sam Tenka) have prepared for their talk on "Additive invariants of open Petri nets". But I'm hoping the work is interesting in itself.
I morphed that 2-page abstract into a much more fun-to-read introduction to software that uses category theory to make it easier to collaboratively model epidemics:
Conversation with James Dolan:
2023-04-06:
For a finite group G, we defined G-torsors over a number field k in our March 9, 2023 conversation. Most abstractly these are 2-rig maps F: Rep(G) → Vect, where representations and Vect are defined using finite-dimensional vector spaces over k. We also gave some other ways of thinking about these. Today we study the case G = ℤ/3. In this case F maps the group algebra k[G] to some cubic extension K of k - and this is the whole point, since we are using them to study cubic extensions of k. The idea is that a cubic extension of k can be seen as a 'twisted version' of the group algebra k[ℤ/3].
The theory works most simply when k contains (or is) the Eisenstein field: the rationals with a primitive cube root of 1 adjoined. Then ℤ/3 splits over k, so one can show that a ℤ/3-torsor over k amounts to a line object L in the category of k-vector spaces together with a trivialization of its tensor cube:
Even more concretely, we get a ℤ/3-torsor over k from a cube root of a nonzero element A ∈ k. Then two choices of A, say A and A', give isomorphic ℤ/3-torsors iff A' = B³A for some nonzero B ∈ k.
When k is the Eisenstein field, any cubic extension K is a 6th-degree extension of ℚ, and in fact (we claim) a Galois extension. The Galois group of K over ℚ can thus be either ℤ/6, which is abelian, or S₃, which is not. We thus consider this challenge: given A ≠ 0 in the Eisenstein field k, determine whether the Galois group of K over ℚ is abelian or not.
Next, what if k does not contain the Eisenstein field? Then we can redo our previous discussion using a formal 'Eisenstein object' replacing the tensor unit in Vect, and an 'Eisenstein line object' replacing L:
For more on this whole series of conversations, go here:
https://math.ucr.edu/home/baez/conversations/
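A finite toy model of the cube-root classification above (my own illustration, not from the conversation): the field F₇ also contains primitive cube roots of unity, since 2³ ≡ 1 (mod 7), so by the same Kummer-theory reasoning its ℤ/3-torsors correspond to nonzero elements modulo cubes, with A ~ A' iff A' = B³A.

```python
# Nonzero elements of F_7 modulo cubes: a finite stand-in for the
# Eisenstein-field story (F_7 contains primitive cube roots of unity,
# e.g. 2^3 = 8 = 1 mod 7, so Kummer theory applies).
p = 7
units = set(range(1, p))
cubes = {pow(b, 3, p) for b in units}       # the subgroup of cubes: {1, 6}

# Partition the units into classes A ~ A' iff A' = B^3 * A.
classes = []
remaining = set(units)
while remaining:
    a = min(remaining)
    cls = {(c * a) % p for c in cubes}
    classes.append(sorted(cls))
    remaining -= cls

# classes == [[1, 6], [2, 5], [3, 4]]: three classes, i.e. three
# Z/3-torsors in this toy model.
```

The index of the cubes being 3 is the finite shadow of k*/(k*)³ classifying the torsors.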
What James is doing, in case this is unclear, is trying to redo class field theory up to and including Artin reciprocity, but using a lot more category theory and a lot less of everything else.
Conversation with James Dolan:
2023-04-15
ℤ/3-torsors over ℚ are interesting because ℤ/3 doesn't split over ℚ. ℤ/3 does split over the Eisenstein field ℚ(ω), where ω is a primitive cube root of unity. For this reason we've seen ℤ/3-torsors over ℚ(ω) correspond to numbers x ∈ ℚ(ω) up to multiplication by nonzero cubes in ℚ(ω). We have a map sending ℤ/3-torsors over ℚ to ℤ/3-torsors over ℚ(ω). Which ℤ/3-torsors over ℚ(ω) do we get this way? Those coming from x such that x² = x* up to a cube factor in ℚ(ω).
Fixing a mistake from last time: most cubic extensions of ℚ(ω) are not Galois over ℚ. Analysis of what's really going on.
Analogy between what we're doing and exponentiating by a 'tiny object', like the 'infinitesimal object' D used in synthetic differential geometry:
https://ncatlab.org/nlab/show/infinitesimal+object
1) I finally finished fixing and carefully checking a huge pile of sign errors that @David Michael Roberts caught in my paper From loop groups to 2-groups.
I'll put this new version on the arXiv, and then polish up the erratum that David kindly prepared for me, and submit that to the journal where this paper was published: Homology, Homotopy and Applications.
2) @Dan Christensen, Sam Derbyshire and I finished making corrections to our short paper The beauty of roots, which will appear as a column in the AMS Notices.
I sent that in to the journal editor, Erica Flapan.
3) I finished making slides for my talk Symmetric spaces and the tenfold way, which I'm giving on Monday.
4) I wrote a blog article on Categories and epidemiology for the Topos Institute blog, which @Tim Hosgood kindly formatted and put on that blog.
Hmm. Those slides are rather different! Did you always mean to take out the stuff on categories? The slides linked here: https://johncarlosbaez.wordpress.com/2023/05/10/symmetric-spaces-and-the-tenfold-way/ are superseded?
Maybe you're looking at the slides of my first talk, "The tenfold way"? Let me adjust the link so it points more clearly to the second talk of this 2-part series, in case that's the problem.
Grrr, I can't get the link here to do the right thing:
http://math.ucr.edu/home/baez/tenfold/index.html#symmetric
But anyway, I think you were looking at the wrong talk slides: there are 2 talks on that page, each with their own slides.
OK, that was my problem: I trusted the link anchor was pointing me to the newer slides. I didn't stop to check if the ones I looked at mentioned symmetric spaces in the title, which would have been a giveaway...
It doesn't help that both those talks feature a similar-looking big green disk. :upside_down: But there's different stuff in those two big green disks.
Continuing on...
5) I'm still making slides for my public lecture "Visions of the Future of Physics", which I'm giving in Santa Fe. I'm visiting the Santa Fe Institute next week, trying to meet people interested in using network theory, categories, or our software for compositional modeling to help solve real-world problems.
6) Also, I'm working with @Todd Trimble and @Joe Moeller to fix up our paper Schur functors and categorified plethysm based on the referee's comments. It'll be great to get all this editing done.
1) Here's the video of my talk Symmetric spaces and the tenfold way, which I gave on May 16th in the seminar Algebras, Particles and Quantum Theory. I sort of regret not talking more about the category theory, but this is visible near the end of the slides.
Abstract. The tenfold way has many manifestations. It began as a tenfold classification of states of matter based on their behavior under time reversal and charge conjugation. Mathematically, it relies on the fact that there are ten super division algebras and ten kinds of Clifford algebras, where two Clifford algebras are of the same kind if they have equivalent super-categories of super-representations. But Cartan also showed that there are ten infinite families of compact symmetric spaces! After explaining symmetric spaces, we show how they arise naturally from forgetful functors between categories of representations of Clifford algebras.
2) I am done making slides for my talk The future of physics, which I'm giving on Tuesday May 23rd as a Santa Fe Institute Community Lecture. Videos should show up eventually.
Abstract. The 20th century was, arguably, the century of fundamental physics. While we saw immense progress on discovering the basic laws governing matter, space, and time, this has slowed to a crawl since 1980, despite an immense amount of work. But as I will explain, there is exciting progress in other branches of physics: for example, using the fundamental physics we already know to design surprising new forms of matter. Like all other sciences in the 21st century, physics must also embrace the challenges of the Anthropocene: the era in which humanity is a dominant influence on the Earth’s climate and biosphere.
3) I am also done making slides for my talk Algorithmic thermodynamics, which I'm giving on Thursday May 25th in WOST 4, a workshop on stochastic thermodynamics.
Abstract. Algorithmic entropy, as introduced by Kolmogorov, Chaitin and others, can be seen as a special case of entropy as studied in statistical mechanics. This viewpoint lets us apply ideas from thermodynamics to algorithmic information theory. For example, if we take the log runtime and length of a program as observables analogous to the energy and volume of a container of gas, the conjugate variables of these observables are quantities which we may call the algorithmic temperature and algorithmic pressure, and an analogue of the fundamental thermodynamic relation holds. However, the resulting subject of "algorithmic thermodynamics" remains largely unexplored.
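To spell out the analogy the abstract states (the notation below is my paraphrase, not necessarily the conventions of the talk): with E the log runtime and V the length of a program, and S the entropy of a Gibbs ensemble of programs, the analogue of the fundamental relation reads

```latex
% Analogy from the abstract (notation mine): E = log runtime,
% V = program length, S = entropy of a Gibbs ensemble of programs.
\[
  dE = T\,dS - p\,dV ,
\]
% where the conjugate variables are the algorithmic temperature and
% algorithmic pressure:
\[
  T = \left(\frac{\partial E}{\partial S}\right)_{V},
  \qquad
  p = -\left(\frac{\partial E}{\partial V}\right)_{S}.
\]
```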
John Baez said:
2) I am done making slides for my talk The future of physics, which I'm giving on Tuesday May 23rd as a Santa Fe Institute Community Lecture. Videos should show up eventually.
Abstract. The 20th century was, arguably, the century of fundamental physics. While we saw immense progress on discovering the basic laws governing matter, space, and time, this has slowed to a crawl since 1980, despite an immense amount of work. But as I will explain, there is exciting progress in other branches of physics: for example, using the fundamental physics we already know to design surprising new forms of matter. Like all other sciences in the 21st century, physics must also embrace the challenges of the Anthropocene: the era in which humanity is a dominant influence on the Earth’s climate and biosphere.
The link to the slides doesn't seem to work?
Whoops, a typo! I fixed it. The link to the slides is here. Thanks for catching that.
Some parts will make more sense when I give the talk out loud. For example, the last slide shows a butterfly, but I'll explain how this butterfly's colorful wings use a photonic crystal patterned like a gyroid! I wrote about this earlier here:
John Baez said:
3) I am also done making slides for my talk Algorithmic thermodynamics, which I'm giving on Thursday May 25th in WOST 4, a workshop on stochastic thermodynamics.
This looks very neat! I never heard of algorithmic thermodynamics before. Reading your slides, I was a bit confused at this point (attached image: b7251f38-0006-453a-92e6-0154c2cd9a11.jpg).
Why does L land in natural numbers?
It doesn't! :grimacing:
But it is computable in a suitable sense.
I met some people at the Santa Fe Institute who might help me apply category theory to environmental issues. The most interesting was Steen Rasmussen. He was a student at MIT when Forrester was still there doing System Dynamics - the formalism that my epidemiology gang has turned into category-theory-aided software!
Later, Rasmussen got interested in Donella Meadows's work on The Limits to Growth and helped build, or maybe just messed around with, the World3 model that book used to model the biosphere and world economy! He wants to have a meeting on this subject and maybe do an updated version of this model. I told him that our software would make that a lot easier.
The Santa Fe Institute website describes him thus:
Steen Rasmussen’s scientific focus is to understand the creative forces in nature by using self-organization to engineer minimal living and intelligent processes. In his early career, he worked 20 years at Los Alamos National Laboratory, USA (1988-2007, Alien of Extraordinary Abilities) contributing to a variety of interdisciplinary research programs and projects. He was part of the core group establishing the Artificial Life field in the late 80s, and in the 90s he co-developed the Transportation Simulation System (TRANSIMS), implemented by the US Department of Transportation, and integrated simulation frameworks for urban security systems. He also developed web-based disaster mitigation tools, which were deployed in the Cerro Grande Wildfire, May 2000, where 20,000 people were evacuated, and he was part of the original Los Alamos team on Critical Infrastructure Protection, implemented by the US Department of Homeland Security after September 11, 2001.
In 2004 he co-founded the European Center for Living Technology (ECLT) Venice, Italy, and late 2007 he returned to Denmark as Director of the Center for Fundamental Living Technology (FLinT) at SDU. In 2009 he founded the Initiative for Science Society and Policy (ISSP), which together with SFI and the ECLT allows Rasmussen to explore how living and intelligent technologies change society and what it means to be human. He has consulted on science and technology issues for the European Commission, Danish Parliament, German Bundestag, US Congress, as well as businesses and NGOs. He was part of the Danish Government’s Data Ethics Expert Group in 2018.
Nonetheless he was a very approachable and friendly guy. I just happened to share my office with him and his dog.
This would be an extremely exciting project, I'm very curious to know how stock flow diagrams + decorated cospans + AlgebraicJulia would perform in practice against whatever technology they'd use if they were doing it themselves nowadays
The World3 model was written in an archaic language called DYNAMO, and I believe this may even be true for the later updated versions.
But I believe that the real virtue of compositionality is not your holy grail - speed-up in computation - but instead, speed-up in having teams of people create and modify models, and increased flexibility in what they can do with those models.
Once the model is created, we hit it with a functor that turns it into a system of coupled ODEs, which gets solved with a Julia package. I hope that's pretty fast.
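The "functor to ODEs" step can be sketched in miniature (in Python rather than AlgebraicJulia, with a made-up SIR model, so every name here is illustrative): a stock and flow diagram is just stocks plus flows, each flow having a source, a target, and a rate law, and the semantics sends the diagram to a vector field.

```python
# Illustrative sketch (not AlgebraicJulia): a stock & flow diagram as
# plain data, and the "semantics" sending it to an ODE vector field,
# integrated here with simple Euler steps.
def vector_field(stocks, flows):
    """flows: list of (source, target, rate_fn); None = outside world."""
    def f(state):
        d = {s: 0.0 for s in stocks}
        for src, tgt, rate in flows:
            r = rate(state)
            if src is not None:
                d[src] -= r      # flow drains its source stock...
            if tgt is not None:
                d[tgt] += r      # ...and fills its target stock
        return d
    return f

# An SIR epidemic as a stock & flow diagram (made-up parameters):
stocks = ["S", "I", "R"]
beta, gamma = 0.3, 0.1
flows = [
    ("S", "I", lambda x: beta * x["S"] * x["I"] / 1000.0),  # infection
    ("I", "R", lambda x: gamma * x["I"]),                   # recovery
]
f = vector_field(stocks, flows)

state = {"S": 999.0, "I": 1.0, "R": 0.0}
for _ in range(1000):                    # Euler integration, dt = 0.1
    dx = f(state)
    state = {s: state[s] + 0.1 * dx[s] for s in stocks}
```

Because every flow that leaves one stock enters another, total population is conserved automatically, which is one small example of the structural guarantees this style of modeling buys.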
Yes, that fully matches our experience with open games. Philipp and I understood several years ago that developer time is far more expensive than compute time for the vast majority of problems. I made that a central point in my hiring talk for my current job, for example.
When I said "perform in practice" I was referring to the entire cycle from idea to final report, not just executing the simulations
Okay, great. I think I'm still remembering a younger, less wise version of you who was more focused on compute time! Sorry!
Here's my public lecture at the Santa Fe Institute:
The 20th century was, arguably, the century of fundamental physics. While we saw immense progress on discovering the basic laws governing matter, space, and time, this has slowed to a crawl since 1980, despite an immense amount of work. But as I will explain, there is exciting progress in other branches of physics: for example, using the fundamental physics we already know to design surprising new forms of matter. Like all other sciences in the 21st century, physics must also embrace the challenges of the Anthropocene: the era in which humanity is a dominant influence on the Earth’s climate and biosphere.
Today I need to upload my application to the ICMS "Mathematics for Humanity" program. It's called "New mathematics and software for agent-based models", and we've got a great team lined up:
• Nathaniel D. Osgood: computer scientist and public health modeler; contributed dozens of peer-reviewed publications on agent-based models (ABMs) in health; provincial director of COVID-19 modeling in Canada; supervised creation of ABMs used for COVID-19 decision-making across Canada and Australia. Contributed to development of StockFlow.jl using AlgebraicJulia as a framework for “stock and flow models” in epidemiology.
• John C. Baez: mathematical physicist and applied category theorist; helped develop the “decorated cospan” framework widely used in AlgebraicJulia, and helped apply this to stock and flow models in epidemiology.
• William Waites: computer scientist and member of the Digital Health and Wellness Research Group at Strathclyde; developed compositional and rule-based models of disease transmission.
• Xiaoyan Li: Doctoral student in Computer Science with extensive peer-reviewed contributions in health applications of dynamic modeling; lead developer of the StockFlow.jl package using AlgebraicJulia for stock and flow health models.
• Sean Wu: Epidemiology (PhD) and public health (MPH) modeler; senior scientist at Merck; creator of an ABM package in R and a prototype ABM implementation in AlgebraicJulia.
• Evan Patterson: applied category theorist specializing in scientific computing, software systems and data science; helped design the AlgebraicJulia framework and software using this framework for both Petri net and stock and flow models of epidemiology.
• Kristopher Brown: research software engineer; led software development for ABMs and rewrite rules in AlgebraicJulia.
Between now and May 2024 I plan to learn about agent-based models and work with Nate Osgood and other team members to come up with category-theoretic ways to work with them. Luckily this team I'm working with knows a lot about them! Nate says that currently the subject of agent-based models is the "wild west": a chaotic pile of ad hoc techniques.
And here are the slides, which have lots of useful links in brown.
My talk Software for Compositional Modeling in Epidemiology got accepted for ACT2024. Alas, I won't be able to actually go to the conference since I'll be in Scotland July-January, but I'll give a Zoom talk.
More conversations with James Dolan:
2023-04-27
Two appearances of a 3 × 3 square in the study of cubic extensions of number fields, which may or may not be related.
An infinite-dimensional vector space in number theory. Let ω be a primitive cube root of unity, so ℤ[ω] is the ring of Eisenstein integers and ℚ(ω) is the corresponding field, the 'Eisenstein field'.
https://en.wikipedia.org/wiki/Eisenstein_integer
Then the abelian group GL(1,ℚ(ω)) is the direct sum of countably many copies of ℤ, one for each Eisenstein prime mod units, and one copy of ℤ/6, consisting of the units in the Eisenstein integers, which are the sixth roots of unity. If we tensor this abelian group GL(1,ℚ(ω)) with the field with 3 elements (thought of as another abelian group), we get an infinite-dimensional vector space over the field with 3 elements. This has one basis element for each Eisenstein prime mod units, together with one for the units. How does Eisenstein conjugation act on this vector space? Since this operator squares to 1, we can split this vector space as a direct sum of the +1 eigenspace and the -1 eigenspace.
The work of Albert, Brauer, Hasse and Noether relating the Brauer group of a number field k to the second cohomology of its Galois group:
https://en.wikipedia.org/wiki/Albert-Brauer-Hasse-Noether_theorem
https://en.wikipedia.org/wiki/Brauer_group
Peter Roquette, The Brauer-Hasse-Noether theorem in historical perspective, https://www.mathi.uni-heidelberg.de/~roquette/brhano.pdf
This relies on building central simple algebras over k as 'twisted' versions of the group algebra of the absolute Galois group of k. The 'twisting' is done using a 2-cocycle on the Galois group, which Schreier called a 'factor system':
https://en.wikipedia.org/wiki/Factor_system
When this group is cyclic, these twisted algebras are 'cyclic algebras':
https://en.wikipedia.org/wiki/Brauer_group#Cyclic_algebras
For more on this whole series of conversations, go here:
https://math.ucr.edu/home/baez/conversations/
My student Chris Rogers has a student who got his PhD! This is my first grandstudent:
Chris writes:
congratulations! You are officially an academic grandfather (see attached screenshot). Thought I'd share the news that my first Ph.D. student, Alex Milham, successfully defended their thesis last month. Here's the journal article version of Alex's thesis:
https://arxiv.org/abs/2205.13099
Here's the quick summary to save you from clicking:
Alex studied the -categories that come from applying the "nerve functor" to pronilpotent algebras over a field of arbitrary characteristic. In particular, in char zero, Alex proved that the nerve of such an -algebra A is homotopy equivalent to the integration of the -algebra you get from A by taking "commutator brackets". I found that result to be quite satisfying.
Congratulations!
Thanks! And thank Chris.
I've been writing a series of posts about 'modes' in music, and I'm getting a bit more serious about it:
What happened to the Azimuth forum? Is it gone? There's all these great Lectures from 2018 that I hope are preserved somewhere...
It's gone, for reasons I explained in March:
I have the lectures on applied category theory and this is yet another damn book I should write. If anyone here has the ability to take stuff like this:
and put it on a stable website where people could read the math, that would be a huge help!
You'll see that these files use "Markdown", a common markup language, with LaTeX as follows:
This makes it easy to describe the partial order on \\(\mathcal{E}(X)\\): we say a partition \\(P\\) is finer than a partition \\(Q\\), or \\(Q\\) is coarser than \\(P\\), or simply \\(P \le Q\\), when
\[ x \sim_P y \textrm{ implies } x \sim_Q y \]
for all \\(x,y \in X\\).
If they used HTML instead of Markdown I could rather easily put these 77 lectures on my website, but I don't know how to get Markdown to work on my website (because I'm an ignoramus).
In case you really want to do this, perhaps pandoc might be of use. It's a tool to automagically convert between text formats. You'd still need to proofread everything, though...
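One concrete wrinkle, sketched below under the assumption that the lectures all use the `\\(...\\)` and `\[...\]` delimiters shown above: pandoc and the static site generators expect standard `$...$` / `$$...$$` math, so a small preprocessing pass helps (the sample string here is made up).

```python
import re

def normalize_math(markdown: str) -> str:
    r"""Rewrite forum-style math delimiters for pandoc / Jekyll / Hugo:
    \\( ... \\)  ->  $ ... $     (inline math)
    \[ ... \]    ->  $$ ... $$   (display math)
    """
    markdown = re.sub(r'\\\\\((.*?)\\\\\)', r'$\1$', markdown, flags=re.S)
    markdown = re.sub(r'\\\[(.*?)\\\]', r'$$\1$$', markdown, flags=re.S)
    return markdown

sample = r'we say \\(P \le Q\\), when \[ x \sim_P y \]'
converted = normalize_math(sample)
# converted == r'we say $P \le Q$, when $$ x \sim_P y $$'
```

After that, something like `pandoc lecture.md -o lecture.html --mathjax` (file names made up) should render the math in a browser.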
Because stuff gets screwed up?
John Baez said:
Because stuff gets screwed up?
yes, there isn't a nice 1-1 correspondence between all possible formats. I do think that you could use standard Markdown static website generators for this, though, like Hugo or Jekyll, without resorting to pandoc.
Thanks!
Right now it looks like @Simon Burton will be helping me with this. I'll relay these comments to him.
John Baez said:
Because stuff gets screwed up?
99% gets translated without trouble usually, and then there's a pesky 1% to fix
@Simon Burton is making good progress on this, apparently.
My thoughts on the tenfold way have forced me to learn a lot more about things like Azumaya algebras, the Brauer 3-group, and Grothendieck's approach to Galois theory. They turn out to be related to topological quantum field theory and maybe also the relation between classical logic and quantum logic! To keep from getting overwhelmed I've decided to start writing blog articles where I explain how all the parts fit together. Here's the first:
There's a very nice humble question here: how can you identify the essential image of the "free vector space" functor from Set to Vect? That is, how can you find set theory within linear algebra?
Hrm? Can't I just identify it as "everything", or do you mean something better by "identify"?
Oh, or you mean the image on morphisms.
Out of my waters here, but without a chosen field 'over' which the vector space is settled, I think we need to imagine the set as the field somehow, which seems unnatural.
Kevin Arlin said:
Oh, or you mean the image on morphisms.
Yes, that's the interesting part. My article explains it.
Simonas Tutlys said:
Out of my waters here, but without a chosen field 'over' which the vector space is settled, I think we need to imagine the set as the field somehow; seems unnatural.
I'm not 100% sure what you mean by this, but yes: we should choose a field before we talk about "the category of vector spaces".
My article explains that the answer to the question "how can we identify Set sitting inside Vect?" depends a lot on whether the field is algebraically closed or not. I do the algebraically closed case. The general case gets us into Galois theory in an interesting way.
Going through the article now, sorry - tried to make input without reading it :)
No problem!
We fixed up this paper according to the referee's comments and today I uploaded the new version to the journal's website, the arXiv, and also here:
Well done!
That seems like it was a big job.
Thanks! Yes, the letter to the referee saying what we did was 8 pages long!
I'm supposed to be writing about the thesis of Grothendieck's student Hoàng Xuân Sính, who is having her 90th birthday. Her thesis introduced the 'Sinh invariant' of a [[2-group]], which comes from the associator. The Sinh invariant is trivial iff the 2-group is equivalent to one that's both strict and skeletal.
I just discovered that in 1982 she wrote a paper about 'restrained Picard categories'.
A 'Picard category' is what I'd call a symmetric 2-group, and I just learned that she calls it 'restrained' if the self-braiding c_{x,x}: x ⊗ x → x ⊗ x of any object x
is the identity. She proves that any restrained Picard category comes from a 2-term chain complex of abelian groups.
I wish I'd heard about this about 30 years ago! Her paper didn't make much of a splash, maybe because it was published in Acta Mathematica Vietnamica.
That's cool! I didn't know Sinh proved that either. We should record it on the nLab...
I did, in two places!
Dear Dr. JOHN C. BAEZ,
Best wishes from the Journal of Electrical Electronics Engineering.
I came across your article entitled “SCHUR FUNCTORS AND CATEGORIFIED PLETHYSM”.
We believe that the mentioned article would be a good fit for the issue of the journal and thus on behalf of the editorial team, I would like to invite you to submit your work for publication in this issue of the journal.
Our editorial board members, who are very supportive of the quality review process, assist us throughout the review process.
You can submit your paper as an attachment to this email.
Please do not hesitate to contact me.
Anticipating your reply.
Kind Regards,
Thomas Charles
Managing Editor
Wow, what a great opportunity to educate the, ahem, electrical electronics community about categorification!
They're on Beall's list of predatory publishers
https://beallslist.net/#update
OPAST
I'm starting to work on agent-based models, and I need to get a handle on them using category theory. My first thoughts are here:
Probably you want to give a look at Categorical Cybernetics and Open Games as well. It has been ongoing for almost 8 years now and there are definitely overlaps with agent-based models.
The links are "obvious" but we never succeeded in making progress on them... A thing we tried a few times is to have Petri nets where the nondeterministic choice between multiple possible transitions is made by a game-theoretic agent, and you could probably ask the same question for stock-flow networks. But I never figured out a nice clean way to say that.
It's easy to confidently say something like "agent based models are just game theoretic models without the equilibrium conditions", but reality is more complicated...
Jules Hedges said:
It's easy to confidently say something like "agent based models are just game theoretic models without the equilibrium conditions", but reality is more complicated...
Are you talking about difficulties in "solving" the models, in the sense of saying something interesting about the dynamics they generate, or are there already difficulties in writing down a specification categorically?
Fabrizio Genovese said:
Probably you want to give a look at Categorical Cybernetics and Open Games as well. It has been ongoing for almost 8 years now and there are definitely overlaps with agent-based models.
I'm less convinced there's a substantial overlap. Agent-based modeling is 'microscopic' modeling of a system, whereas games usually deal with 'mesoscopic' models. The overlap is in mean-field games, where game-theoretic agents are assumed to be many and identical. Open games haven't ventured into that territory so far, though.
Thermodynamic limit of open games is something quite interesting to ponder, however...
@John Baez might be interested in the work of William Waites though, who's an MSP member doing agent-based models for epidemiology with categorical-adjacent methods
Thanks everyone! Luckily William Waites is part of the team working on this agent-based model project. And we're both in Edinburgh now! I'm working here until January 9, 2024.
By the way, @davidad (David Dalrymple) had some interesting suggestions about this agent-based model business here.
Agent-based models are widely used in epidemiology and elsewhere. As I explained in my article there are software packages that let people write and run these models, and we need to write software that does most of what those packages do - but do it better. I don't see how to use open games as a tool to develop this software, and since @Jules Hedges is not optimistic I think I'll proceed down a different route.
And now for something completely different:
I'm writing a paper about the thesis of Grothendieck's student Hoàng Xuân Sính. It's her 90th birthday and I'm contributing to a volume in her honor. She did her thesis on Gr-categories, now called [[2-groups]] - and she wrote it in Vietnam back when the US was dropping 20,000 tons of bombs on Hanoi.
I'm going to send her an email asking her some questions:
Dear Madame Hoàng Xuân Sính -
I am writing a paper about your PhD thesis for a volume celebrating your 90th birthday edited by Ha Huy Khoai. I have worked on Gr-categories myself, and studied your thesis, so I think I can do a decent job of explaining your thesis. But I have some questions whose answers I cannot find:
1) I read that you took notes for Grothendieck's lectures in 1967. How did you become his graduate student?
2) What were his lectures about?
3) How did you and he choose the topics of your thesis?
4) I read that you did your PhD thesis by correspondence. How often would you exchange letters?
I am sure many mathematicians are curious about these questions, or any other details you might be willing to provide.
Best wishes,
John Baez
However, I got my wife and two friends to translate it into French. This proved surprisingly difficult, but it's too late for any suggestions for improvements:
Chère Madame Hoàng Xuân Sính,
Je vous écris au nom de mon mari, le professeur John Baez, qui ne parle pas français. Il écrit un article au sujet de votre thèse de doctorat pour un volume célébrant votre 90e anniversaire, édité par Ha Huy Khoai.
Professeur Baez lui-même a écrit beaucoup sur les Gr-catégories, et a étudié votre thèse dans ce contexte. Donc, il se considère assez bien placé pour expliquer votre thèse. Néanmoins, il a plusieurs questions dont il n'a pas pu trouver les réponses soi-même. Ils sont:
Il a lu que vous étiez le rapporteur des cours de Grothendieck en 1967, et que vous êtes ensuite devenu son étudiant de doctorat. Comment êtes-vous devenu son étudiant de doctorat?
De quels sujets s'aggisaient ses conférences ?
Les sujets de votre thèse, comment les avez vous deux choisi?
Il comprend que vous avez redigé la thèse de doctorat par correspondance. Les lettres avec Grothendieck, combien de temps a passé plus ou moins entre chacque échanges?
Sans doute, plusieurs mathématiciens seront curieux a l'égard de ces questions et d'autres informations que vous pourriez fournir.
Puis-je ajouter, écrivant de ma propre voix comme professeur féminine de plus de 70 ans: comment avez-vous surmonté les difficultés que vous aviez peut-être rencontrées?
Veuillez recevoir, chère Madame, nos salutations distinguées,
John Baez (via Lisa Raphals)
(I know Lisa added some extra stuff.)
Cool, this was a really good move.
Thanks! I hope she replies.
I have a question for anyone who understands French and/or French mathematics works. Grothendieck wrote a summary of Hoàng Xuân Sính's thesis which you can read here. It starts by saying
Esquisse d'une theorie des Gr-categories
par Mme Hoang Xuan Sinh
(Note presentee par M. Henri Cartan)
Is this saying that this summary was written by Cartan, or written by Grothendieck for Cartan?
Cartan was one of the people on Sinh's thesis committee.
"présenté par" is not commonly used to mean written by, but it might be an older/scholarly usage. It could also mean that Cartan gave the talk based on this note, but the note was written by Sinh (the "par ..." just below the title could refer to the author).
The first paragraph also seems to say that the author is Sinh.
It's about Sinh's thesis, but from the content I think it can only have been written by Grothendieck.
So, either he wrote this for Cartan to read, or for Cartan to give a talk based on, or....?
The first paragraph translates to:
We give a summary of some results on Gr-categories, which are the subject of a detailed work the author aims to present for their PhD thesis.
If you are convinced Grothendieck wrote it, I would guess Cartan gave a talk based on the note.
I would love someone to help me translate this passage, which seems quite Grothendieckian in its vision:
De tels developpments obligeraient sans doute a tirer au clair les relations tres etroites qu'on present entre les notions de n-categorie (et de -categorie) d'une part, cell d'ensemble simplicial (ou d'espace topologique) d'autre part, tout comme la relation entre n-Gr-categories (et -Gr-categories) et groupes simpliciaux (ou groupes topologiques), enfine entre n-categories Picard strictes et complexes de chaines tronques a la dimension n. Il y auria a s'attendre a un fusion pues ou moins complete entre ces trois visions: algebre homologique, algebre homotopique, algebre categorique, dans une vision commune (qui pourrait bine prendre le nom d'algebre homologique non commutative, etant entendu que ce qui porte actuellment ce nom n'est que amorce tres ???? de ce qui doit venir...)
There's a word that's very hard to read near the end - this is on page 6 of the summary.
Ralph Sarkis said:
The first paragraph translates to:
We give a summary of some results on Gr-categories, which are the subject of a detailed work the author aims to present for their PhD thesis.
Could 'the author' in that context mean the author of the 'detailed work', as opposed to the author of the document at hand?
That author has to be Sinh, not the author of this note! It's Sinh who wrote the PhD thesis on Gr-categories.
Google Translate produces this translation of the passage I copied down above:
Such developments would doubtless oblige us to clarify the very close relations that we present between the notions of n-category (and of ∞-category) on the one hand, that of simplicial set (or topological space) on the other hand, just like the relation between n-Gr-categories (and ∞-Gr-categories) and simplicial groups (or topological groups), finally between strict Picard n-categories and complexes of truncated chains of dimension n. One would have to expect a weak or less complete fusion between these three visions: homological algebra, homotopical algebra, categorical algebra, in a common vision (which could well take the name of noncommutative homological algebra, it being understood that this which currently bears this name is only the very beginning of what must come...)
Nobody but Grothendieck could have written that, at that time!
I'm suspicious of the phrase "just like".
I would change this "just like" into "as well as"
John Baez said:
There's a word that's very hard to read near the end - this is on page 6 of the summary.
It's "très partielle" I think
Dylan Braithwaite said:
Could 'the author' in that context mean the author of the 'detailed work', as opposed to the author of the document at hand?
Possible, but my uneducated guess is that it refers to both.
"pues ou moins" should be "plus ou moins" = "more or less [complete]"
Oh, good. That was another thing that sounded really fishy in translation. (The typewriting is overlaid with so many corrections that it's hard to read, esp. since I don't know French.)
Improved version:
Such developments would doubtless oblige us to clarify the very close relations that we present between the notions of n-category (and of ∞-category) on the one hand, that of simplicial set (or topological space) on the other hand, as well as the relation between n-Gr-categories (and ∞-Gr-categories) and simplicial groups (or topological groups), and finally between strict Picard n-categories and complexes of truncated chains of dimension n. One would have to expect a more or less complete fusion between these three visions: homological algebra, homotopical algebra, and categorical algebra, in a common vision (which could well take the name of noncommutative homological algebra, it being understood that what currently bears this name is only the very partial beginning of what must come...)
(complexes de chaines) (tronques a la dimension n) = chain complexes truncated at dimension n
I once read a whole book (translated from French) about "categories of closed models" (= closed model categories).
Heh. I should have noticed that problem! I knew what was meant. Better:
Such developments would doubtless oblige us to clarify the very close relations that we present between the notions of n-category (and of ∞-category) on the one hand, that of simplicial set (or topological space) on the other hand, as well as the relation between n-Gr-categories (and ∞-Gr-categories) and simplicial groups (or topological groups), and finally between strict Picard n-categories and chain complexes truncated at dimension n. One would have to expect a more or less complete fusion between these three visions: homological algebra, homotopical algebra, and categorical algebra, in a common vision (which could well take the name of noncommutative homological algebra, it being understood that what currently bears this name is only the very partial beginning of what must come...)
The stuff about strict Picard n-categories is an allusion to some work which I haven't actually seen in her thesis (yet), but appears in her 1982 paper Categories de Picard restreintes. Does 'restreintes' mean 'strict' here?
A 'Picard category' is what I'd call a symmetric 2-group, and she calls it 'restrained' (or 'strict', or whatever the right translation for this word is) if the self-braiding c_{x,x} of any object x
is the identity. She proves that any such 'restrained' Picard category comes from a 2-term chain complex of abelian groups.
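In one direction the correspondence is a standard construction (a folklore sketch in my own notation, not a quote from her paper): a 2-term chain complex d: A_1 → A_0 of abelian groups yields a symmetric 2-group with trivial self-braiding.

```latex
% Hypothetical notation: d \colon A_1 \to A_0 a 2-term chain complex
% of abelian groups. Build a symmetric 2-group with
\[
  \mathrm{Ob} = A_0, \qquad
  \mathrm{Hom}(x,y) = \{\, a \in A_1 \mid x + d(a) = y \,\}, \qquad
  x \otimes y = x + y, \quad a \otimes b = a + b.
\]
% With these formulas the braiding can be taken to be the identity,
% so in particular every self-braiding c_{x,x} is trivial:
% this is the 'restrained' case.
```

Her theorem is the converse: every restrained Picard category is equivalent to one of this form.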
I think if we dropped this assumption on the self-braiding we'd get some interesting sort of 'super' analogue of a 2-term chain complex, since we'd still have c_{x,x}^2 = 1.
However, since c_{x,x} would be an element in the abelian group of 1-chains, I think there could be many different square roots of 1.
Wow, she answered very promptly. Here's the answer fed through Google Translate. "Mrs. John Baez" is her name for my wife Lisa Raphals.
Dear Mrs. John Baez,
Thank you very much for writing to me in French. I will answer you question by question. You will see that what is happening was very simple.
- I asked Professor Grothendieck to be my thesis supervisor and he accepted.
- He gave us a course on the basics of algebraic geometry.
- It was Grothendieck who gave the thesis subject and the work plan. And I was just developing the plan.
- If I remember correctly, he wrote to me twice and I wrote to him three times. The first time he wrote to me was to give me the subject of the thesis and the work plan; the second is to tell me to drop the problem of inverting objects if I can't do it. As for me, I think I wrote to him three times: the first was to tell Grothendieck that I couldn't invert objects because of the non-strict commutativity; the second is to tell him that I succeeded in inverting objects; and the third is to tell him that I have finished the job. The letters had to be very short because we were in time of war, eight months for a letter to arrive at its destination between France and Vietnam. When I finished my work in writing, I sent it to my brother, who lives in France, and he brought it to Grothendieck.
The difficulties I had encountered were firstly the absence of the library and of colleagues with whom I could exchange ideas and there were also other difficulties of the post-war years.
Thank you very much, Dear Mrs. John Baez, and please convey to Professor John Baez my warmest greetings and great gratitude.
I wonder if "the letters had to be very short" really means there had to be very few letters - I don't see why they'd have to be short. In French she wrote "Les lettres devaient être très brèves car nous étions en temps de guerre".
Lol, doing a PhD by communicating with your supervisor 2 times in total is really based.
Yeah - and later she started her own university!
John Baez said:
I wonder if "the letters had to be very short" really means there had to be very few letters - I don't see why they'd have to be short. In French she wrote "Les lettres devaient être très brèves car nous étions en temps de guerre".
To me, it clearly means that the letters had to be short and not that there had to be very few letters. Maybe because during the war the letters were read in a context of censorship?
Hmm, maybe, but she doesn't actually say that. She says "eight months for a letter to arrive at its destination between France and Vietnam."
But maybe it's supposed to be obvious.
Whew, typing Vietnamese in LaTeX takes a lot of work!
John Baez said:
I wonder if "the letters had to be very short" really means there had to be very few letters - I don't see why they'd have to be short. In French she wrote "Les lettres devaient être très brèves car nous étions en temps de guerre".
Why not ask her? :-)
I could, but the Vietnamese government is far from democratic even now, so asking her this may not be the wisest followup question.
If I bother her at all, it would be to ask more about her math.
But maybe it's supposed to be obvious.
Well, sending letters is expensive (especially in wartime), and more sheets of paper means more expense.
And also the postage would be expensive. By the way, her thesis was hand-written, 238 pages, including some huge commutative diagrams.
Yay! My paper with @Todd Trimble and @Joe Moeller on Schur functors and categorified plethysm was accepted for publication in Higher Structures. We first submitted it on August 24, 2021.... just in case any students out there are wondering how long these things take.
Now Nate Osgood and I are responding to referees' comments on our proposal to design agent-based models at the International Centre for Mathematical Sciences here in Edinburgh. Sometimes it feels like your whole life is spent talking to referees.
I want to meet with William Waites to work on this project. He works on epidemiology and "mathematically structured programming" at Strathclyde. He spends a lot of time at the Edinburgh Hacklab, quite near me. But he sent me a funny email saying
I tend to work from there fairly often (but not today, I'm on the boat in Granton, the tide has gone out and I won't be able to return ashore until late afternoon earliest).
Okay, Nate and I responded to those comments about our proposal to design agent-based models. I recovered by writing another article on musical modes:
Here I covered the 7 modes of the Neapolitan major scale:
They're pretty far-out, though the leading whole tone scale and Lydian augmented dominant scale look fun to play around with.
Next I want to do the modes of the harmonic minor scale, and then the 'harmonic major scale'. But I should get back to math!
If you want a suggestion of something to work on that's not too big... :-)
I have to finish that paper on Hoàng Xuân Sính's thesis by the end of the month for a volume celebrating her 90th birthday, but I can probably also finish that task you're hinting at if I don't get distracted by other things.
The Sính paper is clearly much more important (and something I'm looking forward to reading!), but if the other task can progress, that would be good.
I just received 21 proposals for Mathematics for Humanity projects that I need to judge. Each one comes with 2 or 3 referees' reports and a response to those reports. They're going fairly fast, since I just need to give a numerical score. I am one of many judges.
I have also submitted a proposal myself, but needless to say I'm not involved in judging my own.
Besides my own I've so far seen one other connected to category theory, and about 4 others connected to climate change, ecological modeling, and related things.
I finished reviewing those proposals a while back. Then Lisa and I headed up to Aberdeen and took a ferry to the main island of Orkney, where we met Mike Fourman (a topos theorist, then computer scientist, who is now analyzing Brouwer's work and trying to translate it into topos theory) and his partner Jeanne du Luart.
We saw a lot of great Neolithic and Iron Age monuments.
Now we're in Inverness. As soon as I get back to Edinburgh I need to work hard to finish my paper on Hoàng Xuân Sính's thesis on 2-groups, and also my ACT 2023 talk.
The talk is about our new ModelCollab web-based software, and while I'd like to demonstrate it in my talk, when my collaborators tried to do that in an earlier talk, one of their computers slowed to a halt due to Zoom's excessive demands! My computer also tends to swoon when I try to do other tasks under the malign influence of Zoom, so I'm scared to try it in a talk.
Here are some pictures I took of a Neolithic village in Orkney:
The archaeology of the whole area is mysterious and fascinating.
John Baez said:
Here are some pictures I took of a Neolithic village in Orkney:
The archaeology of the whole area is mysterious and fascinating.
On a related tangent apparently the Scottish neolithic polyhedra are less fascinating than has been purported (apologies if this has already been discussed): http://www.neverendingbooks.org/the-scottish-solids-hoax
Tom Leinster and I did some work to investigate that story too:
http://www.neverendingbooks.org/scottish-solids-final-comments
But Lieven le Bruyn did the most. It seems the ancient Scottish Platonic solids are not a real thing.
Not to mention others, see the references in my letter:
https://www.ams.org/journals/notices/201806/rnoti-p676.pdf
I'd forgotten you'd written a letter trying to quash this myth. Good!
Here are the slides for my ACT 23 talk, in case anyone wants a look at them before I give the talk on Wednesday 14:00 UTC - or during the talk, or after:
I feel like adding that my talk is less about epidemiology per se and more about general modeling principles and how to implement them using category theory.
Surely you will find the presentation of the great Ngô Bảo Châu, titled 'Mathematics in Vietnam: Past and Present Continuous,' very interesting (https://www.mathunion.org/fileadmin/CDC/cdc-uploads/CDC_MENAO/p1_ngobaochau_01.pdf), as it contains valuable information about Vietnam during those years.
He says below this picture :
index.jpeg
"The fellow standing right behind Grothendieck happened to be my
own uncle."
Regarding Sinh, you may also find her summary of the thesis in English interesting, available at https://pnp.mathematik.uni-stuttgart.de/lexmath/kuenzer/thesis_sinh_summary_sinh.pdf.
In fact, I discovered and shared the documents listed in "Documents from the Grothendieck Archive in Montpellier" (https://pnp.mathematik.uni-stuttgart.de/lexmath/kuenzer/sinh.html) with M. Kuenzer to ensure the completeness of his page.
Regarding Cartan, Grothendieck says (controversially) in Récoltes et Semailles (https://agrothendieck.github.io/divers/ReS.pdf pp 392):
"C’est à lui aussi [Pierre Deligne], comme le mathématicien le plus proche de moi, que je me suis adressé tout aussi spontanément en les premières occasions (entre 1975 et 1978) où j’avais à demander assistance, caution ou appui pour les élèves travaillant avec moi. La première de ses occasions a été la soutenance de la thèse de Mme Sinh en 1975, qu’elle avait préparée au Vietnam dans des conditions exceptionellement difficiles. Il a été le premier que j’aie contacté pour faire partie du jury de thèse. Il s’est récusé, laissant entendre qu’il ne pouvait s’agir là que d’une thèse bidon, à laquelle il n’était pas question qu’il apporte sa caution. (J’ai eu l’adresse pourtant d’arriver à circonvenir la bonne foi de Cartan, Schwartz, Deny et Zisman pour me prêter main forte pour cette supercherie — et la soutenance a eu lieu dans une ambiance d’intérêt et de sympathie chaleureuse.) Il a fallu trois ou quatre expériences du même genre, dans les trois années suivantes, avant que je finisse par comprendre qu’il y avait en mon prestigieux et influent ami un propos délibéré d’antagonisme vis-à-vis de mes élèves “d’après 1970”, come aussi à l’égard des travaux qui portent seulement la marque de mon influence (tout au moins ceux entrepris “après 1970”)."
I do not like controversies, but here you can find the relation of Cartan with the thesis (following Grothendieck).
John Baez said:
Is this saying that this summary was written by Cartan, or written by Grothendieck for Cartan?
here
Hi John, Sorry to say that your Google translator flunked the translation of the first line: The original French says "les relations étroites qu'on pres-sent entre... " Here, "pressent" is the third person singular of the verb "pressentir" ("feel in advance", "have a hunch that", "intuit", "foresee", ...), not of "présenter" (to present). Moreover, it is clear to me (I wrote several such Notes while I was working on my French Doctorat d'Etat) that this is the (edited draft) of a Note aux Comptes Rendus de l'Académie des Sciences de Paris (CRAS), written by Ms Sinh and presented (read) by Monsieur Henri Cartan. The presenter must be a member of the Academy and he is not supposed to edit the Note, just to "read" it in front of his peers. It is the thesis advisor in this case who checks the note for mathematical soundness, style and syntax (by the way, there are still some typos in this draft; it seems to me that the handwriting is Ms. Sinh's, not Grothendieck's). What Ms Sinh does here is very usual: to announce results that will be presented later in full (de manière détaillée) in her thesis...
Mateo Carmona said:
Surely you will find the presentation of the great Ngô Bảo Châu, titled 'Mathematics in Vietnam: Past and Present Continuous,' very interesting (https://www.mathunion.org/fileadmin/CDC/cdc-uploads/CDC_MENAO/p1_ngobaochau_01.pdf), as it contains valuable information about Vietnam during those years.
He says below this picture :
index.jpeg
"The fellow standing right behind Grothendieck happened to be my own uncle."
Thanks, that could be useful to me! I'm almost done writing my article for Hoàng Xuân Sính's volume, and I have a deadline, but if there are intriguing bits of history that I can add, that would be great.
Mateo Carmona said:
Regarding Sinh, you may also find her summary of the thesis in English interesting, available at https://pnp.mathematik.uni-stuttgart.de/lexmath/kuenzer/thesis_sinh_summary_sinh.pdf.
Yes, I've been using that extensively, along with everything else here:
https://pnp.mathematik.uni-stuttgart.de/lexmath/kuenzer/sinh.html
Mateo Carmona said:
Regarding Cartan, Grothendieck says (controversially) in Récoltes et Semailles (https://agrothendieck.github.io/divers/ReS.pdf pp 392)....
Thanks! Translated automatically for those of us who don't read French well:
"It was to him too [Pierre Deligne], as the mathematician closest to me, that I turned just as spontaneously on the first occasions (between 1975 and 1978) when I had to ask for assistance, surety or support for students working with me. The first of these occasions was the defense of Mrs. Sinh's thesis in 1975, which she had prepared in Vietnam under exceptionally difficult conditions. He was the first person I contacted to sit on the thesis jury. He declined, suggesting that it could only be a bogus thesis, to which there was no question of him lending his support. (I did, however, have the skill to circumvent the good faith of Cartan, Schwartz, Deny and Zisman to lend me a hand in this deception - and the defense took place in an atmosphere of interest and warm sympathy). It took three or four similar experiences over the next three years before I finally understood that my prestigious and influential friend was deliberately antagonizing my "post-1970" students, as well as the work that bore only the stamp of my influence (at least that undertaken "after 1970")."
I will not include this in my discussion of Sinh's thesis, for the obvious reason: it's in honor of her 90th birthday.
Jorge Soto-Andrade said:
Hi John, Sorry to say that your Google translator flunked the translation of the first line [....]
Thanks!
I just noticed my talk at ACT23 today is half an hour, not a whole hour. Glad I still have time to deal with that. :sweat_smile:
(Perfect opportunity to use the "sweat-smile" emoji.)
Stock and flow diagrams are a nice graphical tool for modeling systems, part of a tradition called System dynamics. Our new software based on category theory lets teams build models using these diagrams:
But here's something cool Nate just told me: people have had success teaching stock and flow diagrams to students starting at a young age! You can use these diagrams to teach math, economics, ecology, and other subjects in a unified way. For more try my blog article:
Nate and I want to team up with various other people and use our software to:
Luckily for me Nate has a lot of contacts that can help! I hope to be announcing some new developments in a while.
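As a sanity check on what "stock and flow" means operationally, here is a minimal hand-rolled sketch (hypothetical names, not ModelCollab's actual API): stocks are state variables, each flow is a rate that moves material from one stock to another, and simulation is just numerical integration.

```python
def step(stocks, flows, dt):
    """One explicit-Euler step: each flow moves material between stocks."""
    rates = {pair: rate(stocks) for pair, rate in flows.items()}
    new = dict(stocks)
    for (src, dst), r in rates.items():
        new[src] -= r * dt
        new[dst] += r * dt
    return new

# Hypothetical SIR epidemic as a stock-flow diagram: stocks S, I, R,
# with an infection flow S -> I and a recovery flow I -> R.
beta, gamma, N = 0.3, 0.1, 1000.0
flows = {
    ("S", "I"): lambda s: beta * s["S"] * s["I"] / N,  # infection
    ("I", "R"): lambda s: gamma * s["I"],              # recovery
}
state = {"S": 999.0, "I": 1.0, "R": 0.0}
for _ in range(100):
    state = step(state, flows, dt=1.0)
```

By construction every flow debits one stock and credits another, so the total population is conserved - the kind of structural guarantee that makes these diagrams nice for teaching.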
The link "Teaching system dynamics" currently leads to an arXiv paper, not a blog article. I suspect it should lead to this page, instead: https://johncarlosbaez.wordpress.com/2023/08/03/teaching-systems-dynamics/
Yes, that link led to the aforementioned paper on stock-flow diagrams. I do a lot of linking and sometimes make mistakes like this. I'll fix the mistake - thanks!
I'm curious to know if you are allowed to post your Sinh article online before publication. Obviously I can wait until it's ready, but I don't know what the timeline is like on the official side
Yes, I got the freedom to put it on the arXiv as soon as it's done - I wouldn't have written it otherwise.
Right now I'm almost done, fighting against certain rabbit holes like who first introduced the term "categorical group", and when?
On a side note, the paper by Solian is interesting because it cites an earlier 1972 paper of his (Groupe dans une catégorie) which appears to be a particularly early reference for (non-strict) 2-groups. (He uses the terminology "group in a category".)
Thanks! I hadn't gotten around to reading his paper, since it's locked behind a paywall.
Group in a category sounds like an internal group, not a categorical group unless that category happens to be Cat.
John Baez said:
Group in a category sounds like an internal group, not a categorical group unless that category happens to be Cat.
Perhaps this is why he changed the terminology for his 1981 paper.
@John Baez The link Nathanael gives is to the national library of France's free archived version of the Comptes Rendus
I meant the other, later paper by Solian, "Coherence in categorical groups", that's locked up in Comm. Alg.
That was the earliest paper I could find with the term "categorical group" in it. But I haven't read it yet.
I used MathSciNet to search for early papers or reviews of papers containing the phrase "categorical group". All the early ones are using "categorical" in the sense of logic, nothing about category theory.
Oh, sorry, my mistake! I just looked at the 1981 paper, and the only reference given for the definition of "categorical group" (which is the coherent weak one) is to Solian's 1972 paper. All the other references are general category theory or to other papers with coherence results.
Thanks. So I'll look at Solian's 1972 paper and then give up... it's not crucial right now.
Somehow "categorical group" was quite common by the time I showed up and dubbed them "strict 2-groups".
Solian gives a couple of strict examples in 1981: an ordered group, considered as a poset qua category (that's a bit odd, I didn't read too closely, tbh), and the groupoid of self-isomorphisms of a category, as distinct from the autoequivalences.
But doesn't draw particular attention to their strictness.
Thanks, David! If you - or anyone! - would like to check out my paper, here's a draft:
I'd love corrections or suggestions. Experts will see that I'm trying to minimize the technical details but still give a sense of what she actually did in her thesis. It's supposed to serve as an introduction to 2-groups. This is why I'm not taking a very sophisticated point of view.... or at least that's my excuse.
First quick comment: Cartan's contributions list sheaf theory twice, or else it's oddly phrased (is the second the combination of sheaf theory and potential theory?)
"In Section 5 we give examples of what be called ‘abelian’
Gr-categories"
"In a skeletal Gr-category we have" an equality followed by two isomorphisms
"A crucial first step was J. H. C. Whitehead’s concept of ‘crossed module’, formulated around 1946 without the aid of category theory"
Scare quotes on crossed module aren't needed here, since they have been defined previously
Thanks for all those! Clearly I need to read through it a few more times myself.
In the appendix, Theorem 14 "one for the associator" missing "which"
It was a fun read!
Thanks! You're one of the people in the world who knows this math the best already.
Okay, I updated my draft based on your corrections - and, an even bigger deal, I noticed that Sinh tersely explains the fundamental 2-group of a pointed space in her thesis, so I changed my story about that. (I had given an absurdly abstract description of the fundamental 2-group.)
I liked your opening example of a 2-group, which was implicitly the fundamental groupoid of SO(3). I don't know if you have space, but you could circle back to this example (or class of examples) later when you discuss strict 2-groups. The source for this is Brown and Spencer's G-groupoids paper.
Fwiw Danny Stevenson also gave the full definition in his PhD thesis, independently of Hardie, Kamps and Kieboom, but not to establish anything about the homotopy hypothesis with it (even implicitly).
I enjoyed reading part of the draft! From my beginner perspective, I especially appreciated the detailed intuitive explanation of the reasoning behind the definition of a monoidal category. For example, it was great to read an explanation of why we want the pentagon diagram to commute: so that we can use different sequences of unitors and associators to re-parenthesize an expression, and as long as we arrive at the same final result (in terms of the location of parentheses), we don't need to care about which specific sequence of unitors and associators we used.
I also especially enjoyed the example you gave of the symmetries of a sphere and the "paths" between these. I found it fun to visualize this example by associating a 3D rotation matrix to the point on the sphere that the north pole (of a sphere centred at the origin) is sent to under that rotation. Then a continuous path between two 3D rotation matrices hopefully corresponds to a continuous path on the sphere between the corresponding two points. To be able to compose a path and the "reverse" of that path and get the identity, I assume we actually need to consider homotopy equivalence classes of paths.
Building on this example, I was trying to think of how to get more examples of Gr-categories, because I find examples helpful for understanding new abstract concepts. If we have some manifold where each point of the manifold is an element of a group, can we get a Gr-category by...
Returning to the draft, I got up to page 7 before I ran out of steam. Page 7 introduces ρ and a. I found the introduction of these things a bit intimidating, because I didn't understand where they came from. Upon reflection, though, I am guessing that this is to be expected, because identifying and understanding the role that ρ and a play in the classification of Gr-categories sounds like it was a significant part of Hoàng Xuân Sính's thesis. It would also probably help if I knew even a little about cohomology groups. Because I found this section hard to follow in any detail, it was great that you provided a high-level summary on page 8.
Overall, I enjoyed reading the first 7 pages, and if I have more energy, I may continue reading further!
One minor suggestion: this sentence in the first paragraph doesn't seem quite right "But for some manage to carry out profound research on the fiery background of history." Maybe there is an extra "for" here?
Thanks for catching that mistake, @David Egolf! Yes, the "for" shouldn't be there.
To be able to compose a path and the "reverse" of that path and get the identity, I assume we actually need to consider homotopy equivalence classes of paths.
Exactly.
If we have some manifold where each point of the manifold is an element of a group, can we get a Gr-category by...
- taking the points of this manifold to be our objects,
- and taking the homotopy equivalence classes of paths between points in the manifold to be our morphisms?
Yes! Excellent idea!
More precisely, from any topological space you can get a groupoid that way (a category where all morphisms are invertible), but if your space is also a group where the multiplication is continuous, then this groupoid is a Gr-category.
Note I'm being a bit more general than you here - a topological space is enough, it doesn't need to be a manifold - but also a bit more special: the multiplication that makes this topological space into a group must be continuous. Such a gadget is called a [[topological group]].
People would summarize what you just discovered by saying "the fundamental groupoid of a topological group is a Gr-category". And the other David, @David Michael Roberts, actually mentioned this:
I liked your opening example of a 2-group, which was implicitly the fundamental groupoid of SO(3). I don't know if you have space, but you could circle back to this example (or class of examples) later when you discuss strict 2-groups. The source for this is Brown and Spencer's G-groupoids paper.
I'm a bit exhausted by this paper right now. I'll have to think about whether it's wise to add yet another example. In a way it belongs in the section on topology, but that tells a coherent story right now and I don't want to clutter it up. But thanks for reminding me that this is in the G-groupoids paper!
In a way I almost feel like not including more discussion of this, seeing how David Egolf came close to guessing the general result based on this one example!
Returning to the draft, I got up to page 7 before I ran out of steam. Page 7 introduces ρ and a. I found the introduction of these things a bit intimidating, because I didn't understand where they came from. Upon reflection, though, I am guessing that this is to be expected, because identifying and understanding the role that ρ and a play in the classification of Gr-categories sounds like it was a significant part of Hoàng Xuân Sính's thesis.
Maybe I should explain a bit more. It's kind of natural that since the associator is a big deal in a monoidal category, classifying Gr-categories is going to involve the associator. The map a is just the natural way to express the associator (which eats three objects and spits out a morphism) in terms of the group of objects of a skeletal Gr-category and the group of automorphisms of the unit object. So perhaps all that's needed is to point this out and talk the reader through the construction a bit more.
For example, maybe I should point out that in a skeletal Gr-category, every morphism is an automorphism of some object, and the group of automorphisms that we get doesn't depend on what object we use! So we might as well work with Aut(1), the group of automorphisms of the unit object. Anything we want to say about morphisms in a skeletal Gr-category, we can say using Aut(1).
I was being a typical bad math writer and assuming that because I've known this for a long time I don't need to say it. :upside_down:
Similarly, the action ρ comes from the fact that in a Gr-category we can tensor any morphism with the identity morphism of an object and get a new morphism. So there's a way to turn a morphism and an object into a morphism. And we need some function that expresses this fact! That's ρ.
It would also probably help if I knew even a little about cohomology groups.
I actually define the cohomology group that we need here, not assuming any prior knowledge of cohomology. But it's quite possible that readers without some familiarity with cohomology will have trouble absorbing this information, either because it's too much or because they get so intimidated that they give up before they get that far.
Ultimately my point is that the true meaning of this cohomology group is that its elements describe the different possible associators that a Gr-category can have.
I say this at the end of the section, in what's intended to be a kind of wrap-up of the whole section:
This gave a new explanation of the meaning of the cohomology group in question. In simple terms, this group classifies the possible associators that a Gr-category can have when the rest of its structure is held fixed. The element of this group determined by the associator of a Gr-category is now called its Sinh invariant.
This is what she's most famous for.
Your explanations above regarding ρ and a are very helpful, thanks! In particular, understanding that the associator takes in three objects and produces a morphism (which in the case of skeletal Gr-categories must be an automorphism) is clarifying for me. I guess I was thinking of an associator as just "a way to move around parentheses" without carefully thinking about what data that involves.
John Baez said:
For example, maybe I should point out that in a skeletal Gr-category, every morphism is an automorphism of some object, and the group of automorphisms that we get doesn't depend on what object we use! So we might as well work with Aut(1), the group of automorphisms of the unit object. Anything we want to say about morphisms in a skeletal Gr-category, we can say using Aut(1).
I'm surprised to hear that the group of automorphisms we get doesn't depend on the object chosen in a skeletal Gr-category! (I also don't know why this group is abelian.) I think I'd need more examples of skeletal Gr-categories in my toolbelt to try and figure out why that's true. But knowing this really does help clarify the reason for the focus on Aut(1).
David Egolf said:
Your explanations above regarding ρ and a are very helpful, thanks! In particular, understanding that the associator takes in three objects and produces a morphism (which in the case of skeletal Gr-categories must be an automorphism) is clarifying for me.
Okay, I'll stick something like that in the paper.
I'm surprised to hear that the group of automorphisms we get doesn't depend on the object chosen in a skeletal Gr-category!
It's a thing about groups that any sort of structure that sits at one point can be moved over to any other point, since there are a lot of "left translation" maps L_g: G → G, given by L_g(h) = gh, and by choosing g appropriately you can find a left translation map that sends any point in the group to any other.
Your mental image should be that groups are roundish, featureless things.
Gr-categories are a bit fancier, but they're enough like groups that your mental image of groups should make you expect that the automorphism group of any object in a Gr-category is isomorphic to the automorphism group of any other object!
And you can turn this left translation stuff into a proof.
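If it helps to see this computationally, here's a tiny toy sketch (my own illustration, not part of the discussion above) checking that the left translations L_g(h) = gh act transitively, written additively for the cyclic group Z/12:

```python
def left_translation(g, n=12):
    """The left translation map L_g(h) = g + h in the cyclic group Z/n,
    written additively."""
    return lambda h: (g + h) % n

# For any two points x, y there is a left translation sending x to y,
# namely L_g with g = y - x, so the group "looks the same" at every point:
x, y = 3, 10
L = left_translation((y - x) % 12)
assert L(x) == y
```

The same recipe works verbatim in any group: in multiplicative notation, g = y·x⁻¹ gives a left translation carrying x to y.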
(I also don't know why this group is abelian.)
Ah, that's the coolest fact about monoidal categories: the Eckmann-Hilton argument. Wikipedia has the statement and the picture proof, though as usual not explained well enough.
I suppose I really should mention this in my paper, but probably not give the argument.
I decided to give the argument since it's just a one-line series of equations!
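For readers following along here, the one-line Eckmann–Hilton argument runs roughly as follows (suppressing unitors, with ∘ for composition and ⊗ for tensoring of automorphisms of the unit object, which are related by the interchange law):

```latex
a \circ b
  = (a \otimes 1) \circ (1 \otimes b)
  = (a \circ 1) \otimes (1 \circ b)
  = a \otimes b
  = (1 \circ a) \otimes (b \circ 1)
  = (1 \otimes b) \circ (a \otimes 1)
  = b \circ a
```

So the two operations agree, and the resulting single operation is commutative.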
I think the new version also has a much better explanation of why a Gr-category (= 2-group) is classified by a quadruple consisting of a group G, an abelian group A, an action ρ of G on A and a 3-cocycle a. I still don't fill in all the details of the argument, but now at least I explain why these four items are important.
Thanks!
Also, this new version thanks @David Egolf, @David Michael Roberts and two other folks in the acknowledgements at the end.
I think I'll send this off before anyone notices any other problems.
Small typo in the second paragraph: "Afer" rather than "After".
If you can edit my name in the paper to be David Michael Roberts at some point, I would be greatly appreciative.
Aargh, I just sent it to the publisher 2 minutes ago.
But yes.
And now, to celebrate the fact that I don't have anything I instantly need to do, I will:
1) Finish writing that erratum for From loop groups to 2-groups and send it to Homotopy, Homology and Applications.
2) Finish editing my short book Tweets on Entropy and put it on the arXiv. I had been reluctant to give it that title back when Twitter still existed, because of problems with Twitter, but now that era seems like a long-gone golden age.
3) Write two blog posts on the mathematical virtues of the Lydian mode.
@John Baez i know several people, mathematicians and nonmathematicians, who think that one of the most important books of XX century is Hodges' Enigma. the reason why it is so important is that it makes a rare step towards describing the human existential content behind mathematics. it provides a rare inkling of the possibility of bridging the gap between the precise thinking on which the human civilization is based and the blind routines in which the human life is immersed (home, money, career, community...).
turing, grothendieck, and perelman expressed disgust with this arrangement in their different ways. some deadlier than others.
most of us are not as courageous as they and we attempt to live out our lives commuting between the two coasts. many souls of each generation remain stuck in traffic on the bridge.
it seems to me that you touched on a dramatic arch across this sea, john. a woman feeding her child, teaching her students, under bombs in hanoi, while working on symmetries of symmetries. getting her tablets by mail from The Advisor in Paris, after his brief descent to explore the possibility of bridges. climbing the mount to meet her Advisor, surrounded by other Olympians, for a day. returning to teach her students and feed her child. the myth stops and the tablets get covered with moss. no prophets, no parting seas. or was the failure of the bombings of hanoi a sea-parting event?
50 years later, an american on a mission to educate youth in cyberspace writes a report on this woman's work and on her life. can the two dimensions of this story be spanned in the same text? who are the readers who will be able to read such 2-dimensional text? will the writer and the readers who do manage to grasp the unity of life and mathematics in this text manage to retain it, or will they all sink back into the routine of daily commutes between life and mathematics?
you did an amazing amazing job by beginning this work. but there is so much more to it than a project among projects could touch. (but also, "history is one damn thing after the other")
@dusko may I quote your two paragraphs starting "it seems..." and "50 years..."? They are so poetic!
John Baez said:
To be able to compose a path and the "reverse" of that path and get the identity, I assume we actually need to consider homotopy equivalence classes of paths.
Exactly.
But here you just need "thin homotopy", don't you?
Nathanael Arkor said:
Small typo in the second paragraph: "Afer" rather than "After".
Small typo on page 19 "provide any extra information requred for the classification"
Jorge Soto-Andrade said:
John Baez said:
To be able to compose a path and the "reverse" of that path and get the identity, I assume we actually need to consider homotopy equivalence classes of paths.
Exactly.
But here you just need "thin homotopy", don't you?
That would work too, and it would give a different 2-group (=Gr-category). I've decided to leave this part vague since it's just an introductory example. I've also decided not to spell it out in more detail since it's not so closely connected to Sinh's thesis, and explaining it would break the flow of the story.
Mateo Carmona said:
Nathanael Arkor said:
Small typo in the second paragraph: "Afer" rather than "After".
Small typo on page 19 "provide any extra information requred for the classification"
Thanks for those extra corrections!
I fixed up the paper some more. As always the latest version can be found here:
@David Michael Roberts - here is my erratum for "From loop groups to 2-groups", in the journal's style. I've also edited the prose a bit:
If you have any comments let me know! I've given your name correctly in this one.
dusko said:
it seems to me that you touched on a dramatic arch across this sea, john. a woman feeding her child, teaching her students, under bombs in hanoi, while working on symmetries of symmetries. getting her tablets by mail from The Advisor in Paris, after his brief descent to explore the possibility of bridges. climbing the mount to meet her Advisor, surrounded by other Olympians, for a day. returning to teach her students and feed her child. the myth stops and the tablets get covered with moss. no prophets, no parting seas. or was the failure of the bombings of hanoi a sea-parting event?
It was, actually! Even more amazingly, when I visited Hanoi, people didn't seem angry at Americans anymore.
Thanks so much for bringing out the poetry that my article only hinted at. I was somewhat reluctant to dive into the sea of emotion too deeply, so I decided to "show, not tell" and let the reader figure out the rest.
@John Baez thank you so much! I have no complaints :-)
Just one small, boring correction: there's been a recent departmental merger at Adelaide, so that Danny is now in the "School of Computer and Mathematical Sciences" (yes, it feels like a SCAM), not just "Mathematical Sciences".
Thanks! I'm also a bit fuzzy about Urs Schreiber's mailing address, and I'll ask him.
I would like to send this email to Hoàng Xuân Sính, asking if it's okay to quote her. This translation into French was done by a computer. Can anyone tell me if it's okay?
Chère Madame Hoàng Xuân Sính,
Voici mon article sur votre thèse de doctorat pour le volume célébrant votre 90e anniversaire, publié par Ha Huy Khoai. En page 1, j'aimerais citer le courriel que vous m'avez adressé, traduit en anglais. Ai-je votre autorisation pour le faire ?
Je vous prie d'agréer, Madame, l'expression de mes salutations distinguées,
John Baez
I find the salutation at the end a bit odd? Here's what I'm trying to say:
Dear Mrs Hoàng Xuân Sính,
Here is my article about your doctoral thesis for the volume celebrating your 90th birthday, published by Ha Huy Khoai. On page 1, I would like to quote your email to me, translated into English. Do I have your permission to do this?
Yours sincerely,
John Baez
It's okay, she will clearly understand but I would say:
As to the salutation, I agree, it sounds really very formal (I would write this if I write to an authority or if it's a letter with a very serious legal value).
Like this, it should be perfect.
As someone else who struggles through French, I'm curious about the first suggestion, @Jean-Baptiste Vienney. Would it be "En première page, j'aimerais..." or "En la première page, j'aimerais..."?
The reason I ask is because the English flow is better with the definite article there when saying "the first page", while "page 1" without the definite article has the better English flow. I'm wondering if it's the same or different in French.
It would be "En première page, j'aimerais...". "À la première page" or "Sur la première page" would work too but the first one sounds better here to me. The only French expression I know with "en le/la/l'" is "En l'an de grâce...", in english "In the year of our Lord..."
John Baez said:
I would like to send this email to Hoàng Xuân Sính, asking if it's okay to quote her. This translation into French was done by a computer. Can anyone tell me if it's okay?
Chère Madame Hoàng Xuân Sính,
Voici mon article sur votre thèse de doctorat pour le volume célébrant votre 90e anniversaire, publié par Ha Huy Khoai. En page 1, j'aimerais citer le courriel que vous m'avez adressé, traduit en anglais. Ai-je votre autorisation pour le faire ?
Je vous prie d'agréer, Madame, l'expression de mes salutations distinguées,
John Baez
It is essentially correct, I find, but:
- I would say "à la première page..."
- the ending is a bit too formal for a courriel (email) and for "academia"; I would just say "cordialement" ou "bien cordialement"
-
Thanks very much, @Jean-Baptiste Vienney and @Jorge Soto-Andrade!
Here's what I wound up sending it with your help and my wife's too - please don't improve it now. :upside_down:
Chère Madame Hoàng Xuân Sính,
C'est avec grand plaisir que je vous envoie mon article sur votre thèse de doctorat. À la première page, je voudrais vous demander la permission de citer le courriel que vous m'avez adressée.
Cordialement,
John Baez
Oui, c’est mieux comme ça! Yes, it’s better like that! 😊🇫🇷
Here is her reply:
Je vous remercie infiniment de votre article sur ma thèse de doctorat. Bien sûr, vous pouvez citer le courriel que je vous ai envoyé.
Il y a un point à rectifier: Jacques Deny n'était pas au jury, c'était Jean-Louis Verdier (un ancien élève de Grothendieck, professeur à l'Ecole Normale Supérieure, rue d'Ulm.
Mes remerciements les plus chaleureux.
or:
Thank you very much for your article on my doctoral thesis. Of course, you can quote the email I sent you.
There is one point to correct: Jacques Deny was not on the jury; it was Jean-Louis Verdier (a former student of Grothendieck, professor at the École Normale Supérieure, rue d'Ulm).
My warmest thanks.
That's pretty important - I got the information about Deny from a quote of Grothendieck, but it's Verdier who actually noticed around 1965 that strict Gr-categories (= strict 2-groups = categorical groups) are the same as crossed modules.
In "Récoltes et Semailles", Grothendieck wrote:
"Un autre cas assez à part est celui de Mme Sinh, que j'avais d'abord rencontrée à Hanoi en décembre 1967, à l'occasion d'un cours-séminaire d'un mois que j'ai donné à l'université évacuée de Hanoi. Je lui ai proposé l'année suivante son sujet de thèse. Elle a travaillé dans les conditions particulièrement difficiles des temps de guerre, son contact avec moi se bornant à une correspondance épisodique. Elle a pu venir en France en 1974/75 (à l'occasion du congrès international de mathématiciens à Vancouver), et passer alors sa thèse à Paris (devant un jury présidé par Cartan, et comprenant de plus Schwartz, Deny, Zisman et moi)."
In English: "Another rather special case is that of Mme Sinh, whom I had first met in Hanoi in December 1967, on the occasion of a month-long lecture course I gave at the evacuated University of Hanoi. The following year I proposed her thesis topic to her. She worked under the particularly difficult conditions of wartime, her contact with me being limited to an occasional correspondence. She was able to come to France in 1974/75 (on the occasion of the International Congress of Mathematicians in Vancouver), and then to defend her thesis in Paris (before a jury presided over by Cartan, and further including Schwartz, Deny, Zisman and me)."
Great! It makes more sense indeed that Verdier was on the jury instead of Deny, whose work was not directly related to category theory. The same holds, by the way, for Laurent Schwartz, but in his case, because of his progressive left-leaning stance (he was very outspoken in opposing "l'Algérie française" in spite of being a professor at the École Polytechnique, whose rector is an army general, and he was surely also against "l'Indochine française" before the Americans came in), he surely had a strong commitment to supporting Mrs. Sinh's work. À propos, Verdier, who also had a very progressive stance, was teaching in Chile in 1973 during the military putsch. He barely escaped being shot on the spot by the military on our campus (he was detained and afterwards released thanks to the intervention of the French Embassy), and he was influential in having some fleeing Chilean math undergraduates welcomed at the École Normale Supérieure.
Wow, that's fascinating! I knew nothing about Verdier except his awesome work on [[Verdier duality]] (which I barely understand) and the fact that Drinfeld wants to rename [[star-autonomous categories]] "Grothendieck-Verdier categories". Learning a bit about his life humanizes him - I'd merely thought of him as one of the French mathematical superstars.
Okay, @David Michael Roberts - I submitted the erratum to Dan Christensen at Homotopy, Homology and Applications. Very slightly modified in ways that are highly uninteresting, e.g. now there's a MSC Classification for 2-groups, so I added that! :muscle:
Awesome, thanks!
Incidentally, I just learned tonight (accidentally, when looking for something else) that you can add the reference to an erratum in the arXiv metadata, if so desired: https://info.arxiv.org/help/prep.html#journal
Cool! I want to, since the new arXiv version does not really explain how it's different from the previous version.
Some great news! @Nathaniel Osgood, @Evan Patterson, @Kris Brown, @Xiaoyan Li, Sean Wu, William Waites and I had applied to hold a 6-week meeting at the International Centre for Mathematical Sciences here in Edinburgh. Now our application has been accepted!
Our plan is to apply category theory to agent-based models, e.g. of epidemic disease:
We're going to meet for 6 weeks starting around May 1st 2024.
I wrote two articles on modes of the major scale and their connection to the circle of fifths - stuff connected to the "Lydian chromatic concept", which I only learned about recently:
The pictures in the second one would work better in a video!
John Baez said:
I fixed up the paper some more. As always the latest version can be found here:
I've been reading this paper now that it's hit the arXiv and it's a very enjoyable mix of history and mathematics. Thanks for writing it!
I just noticed that you referenced "Mathematical Life in the Democratic Republic of Vietnam, translated by Neal Koblitz," in the collection titled "Meditations," edited by Mateo Carmona. Thank you very much for including it. However, please note that the projects on my previous website are no longer being updated. Instead, you can use the link provided directly by Koblitz on his page: (https://sites.math.washington.edu//~koblitz/groth.pdf).
Thanks, @Evan Patterson! I tried to keep it "light" - fun to read, not a lot of work - at least for people who already know what symmetric monoidal categories are. The history is a way to lighten up the mathematics. But it's also a fascinating story of how lots of people from lots of different countries put together the notions we now consider natural... and how a lot of Sinh's work was neglected due to it only being available as a hand-written thesis.
I can update that link, @Mateo Carmona. But what's happened to that collection "Meditations"? It's more useful than Koblitz's paper for many reasons. For one, it includes the original French which Koblitz translated.
Btw, the MIT alumni magazine came out with an article about me:
John Baez said:
I can update that link, Mateo Carmona. But what's happened to that collection "Meditations"? It's more useful than Koblitz's paper for many reasons. For one, it includes the original French which Koblitz translated.
Thank you. I wanted to let you know that I have ceased the development of my previous website projects (https://agrothendieck.github.io/) due to my current involvement in a collaborative endeavor with the Istituto Grothendieck (https://igrothendieck.org/). While the document you referred to is indeed a valuable collection, we are currently focused on enhancing and consolidating these efforts into a more robust and reliable resource within the research center. We anticipate making these improvements public in the near future.
Great - I'm glad something bigger and better is planned!
And now for something completely different!
On 8/24 I'm giving a talk about the numbers 8 and 24.
Two of my favorite numbers: 8 and 24
Abstract. The numbers 8 and 24 play special roles in mathematics. The number 8 is special because of Bott periodicity, the octonions and the E8 lattice, while 24 is special for many reasons, including the binary tetrahedral group, the 3rd stable homotopy group of spheres, and the Leech lattice. The number 8 does for superstring theory what the number 24 does for bosonic string theory. In this talk, which is intended to be entertaining, I will overview these matters and also some connections between the numbers 8 and 24.
Time: August 24, Thur 10:00 am - 11:30 am ET
Zoom: https://harvard.zoom.us/j/977347126
Password: cmsa
Here's a draft of my slides:
If anyone sees problems in these before August 24th, please let me know!
I wrote a blog article about the role of a certain 84-element group in music theory:
As someone who plays the piano, I really enjoyed your blog post!
If I understand correctly, at least part of what you are saying is:
There are 84 modes in total, which include other scales besides major and natural minor! But we can understand where all these modes come from. In particular, two pieces of data pick out a single mode: (1) a starting note (the tonic), and (2) the position of the tonic in a sequence of notes rising by perfect fifths.
If I'm understanding this correctly, it's very cool that all these modes can be cleanly organized! It's a lot simpler than the scary Latin names led me to believe.
For example, we can understand the "C major scale" as being described by these two pieces of data: (1) the tonic is C, and (2) C occurs second in the sequence of seven notes rising by perfect fifths: F, C, G, D, A, E, B.
To get the C major scale from this data, we then take all the notes in our perfect fifths sequence and put them in order, with C at the start.
This makes me wonder: What would happen if we used a different interval instead of perfect fifths to generate our scales? For example, what would we get if we specified a "scale" by (1) a starting note (tonic) and (2) the position of the tonic in a sequence of 7 rising major thirds?
In this case, because 4 is not relatively prime to 12, I think our scale will be shorter! We would expect to get "full length" scales when generating rising interval sequences where we step up each time by 5, 7, or 11 half steps, as 5, 7, and 11 are relatively prime to 12. Of course, going up 7 half steps is the same as going up a perfect fifth (the case analyzed in the blog post), but maybe you can get some fun other scales using 5 or 11!
For example, let's say the tonic is a C, and the C should occur second in our sequence of seven rising perfect fourths: we get G, C, F, B flat, E flat, A flat, D flat. Putting these in order, we get: C, D flat, E flat, F, G, A flat, B flat which has a fun sound to it! I think this is just an A flat major scale where we start at C, so it's not something really new, though. (Although arguably it has a different "feel" than A flat major because you emphasize the C when playing this scale, it's the "home base" of this scale). I suspect this happens in general, because going down 7 half steps is the same as going up 5 half steps, in the sense that you end up on a note with the same name. Indeed, reversing the sequence we obtained above, we get: D flat, A flat, E flat, B flat, F, C, G. So, we get all the same unique notes from going up 7 half steps or going up 5 half steps - we just might have to start at different places. Overall, I suspect going up 5 half steps at a time doesn't generate anything new relative to going up 7 half steps at a time.
Going up 11 half steps will just give a portion of the chromatic scale, although potentially with a big "jump" between notes. For example, if C is our second note in our 7-note sequence where we go up by 11 half steps each time, we get: C#, C, B, B flat, A, A flat, G. Putting this in order with C at the start, we get: C, C#, G, A flat, A, B flat, B. Again, playing this in order is fun! Played dramatically, it can have a very sharp and anxious sound to it. Upon reflection, I bet one would get essentially the same thing by going up 1 half step each time, because going up 11 half steps is very similar to going down 1 half step.
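These experiments are easy to replicate mechanically. Here's a small sketch (the function and names are mine, not from the blog post) that stacks a fixed number of semitones mod 12 and collects the distinct pitch classes reached:

```python
PITCH_NAMES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

def generated_scale(tonic, step, length=7):
    """Pitch classes reached from `tonic` by repeatedly rising `step` semitones,
    stopping early if the sequence cycles back on itself."""
    notes, pc = [], tonic % 12
    for _ in range(length):
        if pc in notes:          # the sequence has closed up into a cycle
            break
        notes.append(pc)
        pc = (pc + step) % 12
    return sorted(notes)

# Stacking fifths (7 semitones) from C gives 7 distinct pitch classes:
print([PITCH_NAMES[p] for p in generated_scale(0, 7)])   # the C Lydian pattern
# Stacking major thirds (4 semitones) closes up after only 3 notes,
# since the cycle length is 12/gcd(4, 12) = 3: we just get the augmented triad.
print([PITCH_NAMES[p] for p in generated_scale(0, 4)])
```

Steps of 5, 7, or 11 semitones all yield 7 distinct pitch classes, while a step of 4 closes up after 3 notes, exactly as the relative-primality argument above predicts.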
John Baez said:
Here's a draft of my slides:
- Two of my favorite numbers: 8 and 24. Talk slides.
If anyone sees problems in these before August 24th, please let me know!
"In 1908, Cartan showed that " Maybe specify which Cartan.
The slide giving the rationale for the 24 components of the bosonic string field theory seems to be missing the word "transverse"
"the motion of the string in the 24 directions to the worldsheet"
John Baez said:
I wrote a blog article about the role of the 84-element group ℤ/12 × ℤ/7 in music theory:
Only other place I’ve ever seen this group is in the table of homotopy groups of spheres. Fifty cents and a six pack on me to anyone who can find a connection between these appearances
Whoever figures out a serious connection between those two 84s, I'll give a lot more than fifty cents and a six pack!
Naturally as soon as I saw the 84 here I thought about Hurwitz's theorem: the symmetry group of a genus-g Riemann surface has order at most 84(g-1). I explain that theorem here - it's connected to a magical property of the regular 42-gon.
And for genus 3 we actually do get a surface with 168 symmetries, called Klein's quartic curve.
This 168-element group is PSL(2,7) - there's a whole Wikipedia article about it.
It's the symmetry group of the projective plane over the field 𝔽₂, which has 7 points, mildly reminiscent of the 7 notes in the major scale... but not in any useful way that I can see.
Meanwhile on my blog article I point out we get a 168-element group acting on the set of modes in all keys if you allow 'inversion' - flipping the mode upside down. This makes it twice as big as the 84-element group where you rotate the 12-note chromatic scale and rotate the tonic around the 7 notes in your mode.
Alas, these two 168-element groups are not isomorphic! :cry:
David Michael Roberts said:
"In 1908, Cartan showed that " Maybe specify which Cartan.
Good point. It's Elie, of course. I've never really read his book The Theory of Spinors.
The slide giving the rationale for the 24 components of the bosonic string field theory seems to be missing the word "transverse"
Indeed it is - thanks!
David Egolf said:
As someone who plays the piano, I really enjoyed your blog post!
Thanks!
If I understand correctly, at least part of what you are saying is:
There are 84 modes in total, which include other scales besides major and natural minor! But we can understand where all these modes come from. In particular, two pieces of data pick out a single mode:
- The tonic of our scale (its first note). There are twelve options here (C, C#, D, D#, ..., B).
- Where in a sequence of 7 rising perfect fifths our tonic should be placed. There are seven options here.
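This two-piece description can be spot-checked with a few lines of Python (a sketch; `mode` is a made-up helper name, pitch classes are numbered with C = 0):

```python
def mode(tonic, position):
    """The mode whose tonic (0-11, C = 0) sits at `position` (0-6) in a
    chain of 7 rising perfect fifths (7 half steps each)."""
    start = (tonic - position * 7) % 12
    pcs = sorted({(start + 7 * i) % 12 for i in range(7)},
                 key=lambda p: (p - tonic) % 12)
    return (tonic, tuple(pcs))

# 12 choices of tonic, 7 choices of position:
modes = {mode(t, pos) for t in range(12) for pos in range(7)}
print(len(modes))   # 84 distinct modes
```

For instance `mode(0, 1)` puts C second in the chain of fifths and returns the C major (Ionian) pitch classes (0, 2, 4, 5, 7, 9, 11), while `mode(0, 0)` gives C Lydian.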
Yes, that's a nice crisp description.
(By the way, there are other modes out there, like melodic and harmonic minor - here we are only talking about what people usually call "modes of the major scale". But you knew that.)
David Egolf said:
If I'm understanding this correctly, it's very cool that all these modes can be cleanly organized! It's a lot simpler than the scary Latin names led me to believe.
They're names of Greek tribes, by the way. But it's not like Ionians sung songs in the Ionian mode, etc. In fact the system of naming modes is much more recent, and it was only completed in the 1500s, by a guy named Glarean. For some reason the major and natural minor were the last to get named after Greek tribes! Glarean called them Ionian and Aeolian.
This makes me wonder: What would happen if we used a different interval instead of perfect fifths to generate our scales? For example, what would we get if we specified a "scale" by (1) a starting note (tonic) and (2) the position of the tonic in a sequence of 7 rising major thirds?
That's an interesting idea!
In this case, because 4 is not relatively prime to 12, I think our scale will be shorter!
Yeah, you'll get a scale with just 3 different notes, like C E Ab. And it will be a very symmetrical scale, so the tonic doesn't stand out as sounding different from any other note, as it does in the more commonly used modes I was discussing. The composer Messiaen liked highly symmetrical scales like this: he called them modes of limited transposition:
The most famous one is the 'whole tone scale' where we go up a major second each time. You may know this from Debussy or that movie cliche where they play eerie music as someone 'goes into a dream'. The eeriness arises from the high symmetry of this mode - you can't tell what's the tonic.
We would expect to get "full length" scales when generating rising interval sequences where we step up each time by 5, 7, or 11 half steps as 5,7, and 11 are relatively prime to 12. Of course, going up 7 half steps is the same as going up a perfect fifth (the case analyzed in the blog post), but maybe you can get some fun other scales using 5 or 11!
If you're going to allow 11 you should allow 1: 1 is also relatively prime to 12, and going up 11 steps is the same as going down 1 step mod 12.
Oh, you said that later.
Similarly, going 5 up is the same as going 7 down mod 12. So the 7-note modes you get this way are the same as the modes I discussed in my post. And you said this too!
I think the scale you invented, the 7-note scale C, C#, G, Ab, A, Bb, B, is the maximally 'dark' 7-note scale. 'Dark' means that the notes are pushed down near the tonic. This scale is so dark that I haven't seen anyone use it!
The darkest 7-note scale most people can tolerate is Phrygian. Locrian is darker but because the 5th is flatted people consider it "unusable". (They're just wimps of course!)
Thanks for your reply! I had no idea that the modes were named after Greek tribes! I suppose I should stop assuming all scary sounding names are Latin... :upside_down:
And thanks for pointing out that the "modes of limited transposition" are very symmetrical. I suppose that's because they always contain all the same notes, just in a different order. (And this isn't the case with the modes generated by perfect fifths, because it takes more than seven rising perfect fifths to loop back around to our tonic).
All this makes me wonder how different music would be if we could go up 13 "small steps" to make an octave, instead of 12 half steps. Then each of 1 through 12 is relatively prime to 13, and so I think each would generate "full length" modes. I'm guessing by analogy to what happened above that 1 and 12 would generate the same modes, as would 2 and 11, 3 and 10, 4 and 9, 5 and 8, and 6 and 7. But that's still potentially six different kinds of scales, compared to the two that we get when we have to go up by 12 half steps to complete an octave (as discussed above)! My dad pointed out one could probably write a program to listen to these kinds of scales (or even write music with them). But I also wonder if there are any physical instruments that subdivide the octave into 13 small steps!
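The guess that step sizes g and 13 − g generate the same modes can be checked mechanically. Here's a Python sketch (`note_sets` is a hypothetical helper, not from any of the posts):

```python
def note_sets(step, edo=13, length=7):
    """All pitch-class sets you get by stacking `length` copies of `step`
    (measured in 13ths of an octave), one set per starting note."""
    return {frozenset((s + step * i) % edo for i in range(length))
            for s in range(edo)}

# Every step 1..12 is coprime to 13, so each stack has 7 distinct notes,
# and a step of g gives the same collections as a step of 13 - g
# (the same chain traversed in the opposite direction):
for g in range(1, 7):
    assert note_sets(g) == note_sets(13 - g)
print("each pair (g, 13 - g) generates the same 7-note collections")
```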
@David Egolf If I understand your question correctly, there are many such divisions of the octave (and many such instruments built to play them). Stephen Weigel has a theory called "All Scalar Set Theory" in which he tries to make connections between set theory and microtonality. https://www.youtube.com/watch?v=VfZGNtorwGc
Harry Partch devised a 43-tone octave: https://www.youtube.com/watch?v=0iabv_G5Rwc
Mauritanian electric guitar players often opt for a 24-tone division of the octave (a fret between every pair of standard frets). Edit: this is what I've heard, but I think this video describes something different:
https://www.youtube.com/watch?v=Tp00_DWk9-4
Given infinite precision, there are infinitely many ways to divide the octave (in theory, though only a small subset would be human perceivable and only a small subset of THAT subset would be human distinguishable). Actually, I've wondered whether an infinity groupoid could represent the infinite possible divisions of the octave, and transformations between them.
Wow, @Corey Thuro! Super cool, thanks for sharing that!
You're welcome!
To add to the mix of possible splittings of the octave, note that the current dominant form of music uses an equal tempered scale – every half-step is the same number of cents... meaning a "perfect fifth" in an equal tempered scale is a few cents off from a true perfect fifth, for instance. This is the trade-off that allows for modulation to a different key without changing the relative tones of each note. Historically, even limited to western music, there have been several temperings used for the 12 notes in the octave, making some chords and progressions more consonant and others more dissonant, compared to their equal tempered equivalents.
While looking for music in the Locrian mode I found "Dust to Dust" by John Kirkpatrick as an example that (from what I read, anyway) is completely in the Locrian mode: https://www.youtube.com/watch?v=vjAIZ9wQAnc
Thanks @Jason Erbele. I'll check out that piece! There's also this search for pop music using Locrian:
@David Egolf wrote:
All this makes me wonder how different music would be if we could go up 13 "small steps" to make an octave, instead of 12 half steps.
I'm planning to write an article about the mathematical advantages of the familiar 12-tone equal tempered scale, but as a little teaser note that in the 12-tone equal tempered scale we can approximate a perfect fifth, with a frequency ratio of 3/2, very well by climbing up 7 half steps, since 2^(7/12) ≈ 1.4983.
In a 13-tone equal-tempered scale the two best approximations would be 7 or 8 steps, but neither is very good, since 2^(7/13) ≈ 1.452
while 2^(8/13) ≈ 1.532.
To get a better fifth than the 12-tone equal-tempered scale you need to go all way up to the 29-tone equal-tempered scale!
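These numbers are easy to verify. A quick Python sketch (`best_fifth_error` is an ad hoc name, and relative error of the frequency ratio is just one reasonable way to measure "goodness"):

```python
def best_fifth_error(n):
    """Smallest relative error in approximating a 3/2 frequency ratio
    by k steps of n-tone equal temperament, i.e. by 2**(k/n)."""
    return min(abs(2 ** (k / n) - 1.5) / 1.5 for k in range(1, n + 1))

print(2 ** (7 / 12))                  # about 1.4983: very close to 3/2
print(2 ** (7 / 13), 2 ** (8 / 13))   # about 1.452 and 1.532: both poor
better = next(n for n in range(13, 60)
              if best_fifth_error(n) < best_fifth_error(12))
print(better)                         # 29: the first n-TET with a better fifth
```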
Interesting! That makes sense!
The rabbit hole of tuning systems is deep and dark, but if you're committed to equal temperament then 12 tones is pretty good: its fifth is only 0.11% flat. Its major third is also pretty good: 0.79% sharp. The minor third is worse: 5.7% sharp.
Of course the people who go down the rabbit hole of tuning systems don't limit themselves to equal temperament. So they have endless fun.
But anyway, to your original point, it could be a lot of fun studying modes of all equal-tempered scales, or at least lots of them!
I'm not done with modes of the 12-tone equal-tempered scale, though! I've written about the modes of major, melodic minor and (much more quirky) Neapolitan major. But I still haven't written about the modes of harmonic minor! And there's also something called harmonic major, which has its own 7 modes.
It would take me forever to actually master all these modes; it's an embarrassment of riches!
Here's the talk on the numbers 8 and 24 that I gave at that Harvard seminar on quantum matter. If you watch it, jump straight to 3:55:
https://www.youtube.com/watch?v=LKcYqMY234I
There were some typos in the slides, which I have fixed here.
I snuck in a microscopic amount of category theory: when talking about the elliptic curves with extra symmetries, I tersely admitted that the moduli space of elliptic curves is really a stack.
In preparing for this talk I was trying to understand how the period-12 phenomenon with modular forms is connected to the fact that the Picard group of the compactified moduli stack of elliptic curves is ℤ/12.
It's a bit subtle because the dimension of the space of modular forms is not really periodic with period 12 as a function of the weight: it just keeps growing in a way that depends a lot on the weight mod 12. Also, modular forms are not really sections of line bundles on the compactified moduli stack of elliptic curves, so the relevance of the Picard group is indirect!
Anyway, someone named Lisanne on Mathstodon pointed me to the paper that explains this stuff:
And to see where the number 24 comes in, you need to think about a modified version of the moduli stack where you replace SL(2,ℤ) by its double cover, the metaplectic group:
For example the Dedekind eta function, which I discuss in my talk, is a "modular form of weight 1/2".
Its 24th power, which I also discuss, is an honest modular form of weight 12.
I wrote a blog article explaining Grothendieck's approach to Galois theory. I review Galois theory, covering spaces, and say how Grothendieck's approach to Galois theory relates it to covering spaces:
I'm way behind on writing abstracts for my talks with James Dolan and putting them on YouTube.
2023-05-04
Artin reciprocity for ℤ/3 torsors. If k is a number field, we can characterize the ℤ/3-torsors over k in two equivalent ways: the 'Tannakian' way and the 'Frobenius pattern' way.
As already discussed in our April 20, 2023 talk, the 'theory of ℤ/3 torsors over ℚ' is another way of talking about the category of representations of ℤ/3 on rational vector spaces. 'Theories' here are symmetric monoidal locally presentable categories, and we can also describe this theory by saying it's the free symmetric monoidal locally presentable category on an Eisenstein line object L with a trivialization of its tensor cube. We can further constrain this theory by saying that the Eisenstein conjugate of L is isomorphic to its tensor square (in a way compatible with the above trivialization). This extra constraint makes it so models of this theory in the category of vector spaces over k correspond to abelian cubic extensions of the number field k.
We can also state this all more concretely, 'gauge-fixing' by taking L to be the 'standard' Eisenstein line object E, which can be taken to be the sum of two copies of the unit for the tensor product in our theory. We can actually put this into the theory, equipping our theory with an isomorphism G: L → E. Then the models of the theory are given by solutions to some polynomial equations, which we outline.
2023-05-11
An attempt to understand some of James Dolan's recent thoughts on torsors. Fix a field k and a finite group G. Let Vect be the category of vector spaces over k. Let Rep(G) be the category of representations of G on vector spaces over k. Rep(G) is a '2-rig', or more precisely a locally presentable 2-rig: that is, a symmetric monoidal locally presentable Vect-enriched category.
Rep(G) is generated, as a 2-rig, by the regular representation of G, which we could call r(G). But Rep(G) also contains another special object kᴳ: the algebra of functions on the finite set G, treated as a trivial representation of G. This is a coalgebra in Rep(G) whose comultiplication encodes the multiplication in G. This coalgebra kᴳ has a coaction on r(G), r(G) → r(G) ⊗ kᴳ. This coaction makes r(G) into what James calls a 'G-torsor in Rep(G)', meaning that there is also a 'codivision' map kᴳ → r(G) ⊗ r(G) obeying a certain equation. All this is dual to how a nonempty set X is a torsor of G if it has an action G × X → X together with a division map X × X → G obeying a certain equation.
Indeed, any 2-rig R contains a Hopf object that we could call kᴳ: it's the direct sum of G copies of the unit object, with a comultiplication coming from multiplication in G. We can use this to define a concept of a 'G-torsor in R', copying the definition just given in Rep(G).
Such a G-torsor in a 2-rig R works out to be secretly the same thing as a 2-rig map F: Rep(G) → R, since we can determine such an F by choosing any G-torsor in R, and vice versa. Thus, James usually shortcuts the whole process and defines a G-torsor in a 2-rig R simply to be a 2-rig map F: Rep(G) → R.
Finally, suppose L is any field extending k with Galois group G. Then L gives an algebra object in Vect which is also a G-torsor in Vect.
....................................
(We had this conversation when I started feeling like I'd lost sight of the big picture of what James was trying to do, so I wanted to figure it out. Unfortunately I said a lot of dumb stuff trying to summarize his ideas before he corrected me. This write-up of the conversation tries to get things right, skipping all the dumb stuff I said.)
Yesterday I had another conversation with Jim Dolan on the significance of G-torsors over a field k (in the sense described above): he brought in the fundamental theorem of Grothendieck's Galois theory and it all clicked for me.
I was sort of embarrassed that it was all so simple and I even said "this is kind of demoralizing", since it was all so much simpler than I'd thought.
I'm way behind on uploading conversations to YouTube and writing up summaries, but anyone interested can just go here.
The day before yesterday I wrote an article explaining the Kähler differentials for an algebra A and why a finite-dimensional separable commutative algebra A over a field has Ω¹(A) = 0:
I gave a hands-on proof that Ω¹(A) ≅ I/I², where I is the kernel of the multiplication map m: A ⊗ A → A.
I did this by proving that there's a derivation d: A → I/I², and this is the universal 'derivation from A to an A-module': that is, any derivation of A taking values in any A-module factors through this one. The proof involves some fun calculations. (See? I do calculations! Most of them don't appear on the blog because I try to show people only the best ones.)
But then Yemon Choi pointed out that my calculations show there's a derivation of a different sort: it's a 'derivation from A to an A-bimodule', using the fact that A ⊗ A has left and right actions of A on it, which are different, and I inherits these.
He further pointed out that d: A → I is the 'universal derivation from A to an A-bimodule', and the derivation I considered, d: A → I/I², factors through this one!
d: A → I/I² is just the universal derivation from A to an A-bimodule where the left and right actions of A agree.
This is very nice because this derivation works even for noncommutative algebras A, bringing in a connection to noncommutative geometry (namely, Hochschild cohomology).
The formula for d: A → I is very simple: d(a) = 1 ⊗ a − a ⊗ 1.
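Here's the check, under one common sign convention (d(a) = 1 ⊗ a − a ⊗ 1, with the left action of A on A ⊗ A multiplying the first factor and the right action multiplying the second; conventions vary), that this d satisfies the bimodule Leibniz rule and really lands in I = ker m:

```latex
a \cdot d(b) + d(a) \cdot b
  = (a \otimes b - ab \otimes 1) + (1 \otimes ab - a \otimes b)
  = 1 \otimes ab - ab \otimes 1
  = d(ab),
\qquad
m(d(a)) = a - a = 0 .
```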
I wrote an article about my trip to a stone circle up in Orkney:
I figured out - with a huge amount of help from Tom Leinster and Qing Liu - how to use Kähler differentials to prove a nice result in Galois theory:
The word ‘separable’ is annoying at first. In Galois theory we learn that ‘separable field extensions’ are the nice ones to work with, though their definition seems dry and technical. There’s also a concept of ‘separable algebra’. This is defined in a very different way — and not every separable field extension is a separable algebra! So what’s going on?
It turns out that every field K that's finite-dimensional over some subfield k is a separable extension of k iff it's a separable algebra over k.
And you can prove this using ideas from calculus!
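To make the calculus connection concrete: over a perfect field such as F_p, the algebra F_p[x]/(f) is separable exactly when f is squarefree, which you can test by taking a gcd against the derivative f′. Here's a pure-Python sketch (all function names are mine, not from the article; polynomials are stored as low-to-high coefficient lists mod p):

```python
def trim(f):
    """Drop trailing zero coefficients (coefficients stored low-to-high)."""
    while f and f[-1] == 0:
        f.pop()
    return f

def poly_mod(f, g, p):
    """Remainder of f divided by g over F_p (g must be nonzero)."""
    f = trim([c % p for c in f])
    g = trim([c % p for c in g])
    inv = pow(g[-1], -1, p)                 # inverse of g's leading coefficient
    while len(f) >= len(g):
        c = (f[-1] * inv) % p
        shift = len(f) - len(g)
        f = trim([(a - c * b) % p
                  for a, b in zip(f, [0] * shift + g)])
    return f

def poly_gcd(f, g, p):
    """Monic gcd of f and g over F_p (f nonzero)."""
    f, g = trim([c % p for c in f]), trim([c % p for c in g])
    while g:
        f, g = g, poly_mod(f, g, p)
    inv = pow(f[-1], -1, p)
    return [(c * inv) % p for c in f]

def derivative(f, p):
    """The 'calculus' ingredient: formal derivative mod p."""
    return trim([(i * c) % p for i, c in enumerate(f)][1:])

def is_separable(f, p):
    """Is F_p[x]/(f) a separable algebra over F_p?  (Squarefree test.)"""
    return poly_gcd(f, derivative(f, p), p) == [1]

print(is_separable([1, 1, 1], 2))   # x^2 + x + 1 over F_2: separable
print(is_separable([0, 0, 1], 2))   # x^2 over F_2: not separable
```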
It's Hoàng Xuân Sính’s 90th birthday today! Here she is in front of Grothendieck in 1967: he taught algebraic geometry in the countryside in Vietnam while Hanoi was being bombed, and she took notes.
After he left she did her thesis with him by correspondence, writing it by hand by kerosene light.
In it, she pioneered the theory of 2-groups: categories that are like groups. Decades later, 2-groups are starting to be used in physics to describe systems with 'higher symmetries' - that is, symmetries of symmetries.
When the war ended, she went to Paris to defend her thesis and get her PhD. Then she returned to Hanoi and started the first private university in Vietnam: Thang Long University. She has worked there ever since.
A while ago Hà Huy Khoái, director of the Mathematics Institute at the Vietnam Academy of Science and Technology, asked me to write an essay about Hoàng Xuân Sính’s thesis for a book in honor of her birthday. He didn't tell me it was going to be a printing of her thesis. In all these years, her thesis had never been published!
Here is the book in honor of Hoàng Xuân Sính's birthday:
Her thesis was hand-written in French, and you can see its title reproduced here. 'Gr-catégories' are categories like groups. I later called them '2-groups' and that seems to have caught on.
Here is the table of contents:
The introduction includes the following remarks:
In later years, Professor Hoang Xuan Sinh no longer had time to devote to mathematical research, as she devoted all her energy and enthusiasm to building Thang Long University. Her life is the consistent journey of a patriotic intellectual and talented scientist: from the decision to leave a comfortable life in France to return to contribute to Vietnamese education during the war years, determined to reach the pinnacle of science in extremely difficult conditions, to extraordinary efforts and determination to overcome countless challenges, building the first non-public university in Vietnam's education system.
In the first part of this book, the reader can see part of the handwritten manuscript of the thesis, which Professor Hoang Xuan Sinh sent to Paris. At that time, it was almost impossible to type the Thesis! This manuscript is a valuable document that Thang Long University has thanks to the enthusiastic and generous help of Professor Nguyen Tien Dung (Toulouse University) and Dr. Jean Malgoire, Grothendieck's last graduate student.
You can also see her thesis typeset in LaTeX here (in French), and the original hand-written version and much more here. My essay in this book can be gotten here:
Abstract: During what Vietnamese call the American War, Alexander Grothendieck spent three weeks teaching mathematics in and near Hanoi. Hoàng Xuân Sính took notes on his lectures and later did her thesis work with him by correspondence. In her thesis she developed the theory of "Gr-categories", which are monoidal categories in which all objects and morphisms have inverses. Now often called "2-groups", these structures allow the study of symmetries that themselves have symmetries. After a brief account of how Hoàng Xuân Sính wrote her thesis, we explain some of its main results, and its context in the history of mathematics.
My student @Christian Williams has completed his thesis on The Metalanguage of Category Theory and gotten his PhD!
Perhaps my final student unless I get lonely and bored.
Congrats @Christian Williams, you earned it!
Yay! Congrats @Christian Williams! When do we get to see the thesis? :)
It may take months for it to be polished enough to reveal. For the last 6 months he's been working very hard trying to prove the main results on time, and the exposition still needs a lot of work.
But it's very cool stuff so it's worth taking time to explain it well so people understand it easily when they first encounter it.
.....
Some people have proposed a 5-day workshop on The role of information in ecology and evolution: closing the loop between individual-level processing and population-level processes up at the Banff International Research Station for Mathematical Innovation and Discovery in Alberta, Canada, sometime in 2025. The aim of this workshop is to convene diverse theoretical perspectives on biological information processing and information flows in the context of ecology and evolution. I like this idea but I don't like flying around for short meetings. So I'll tell them I'm interested but I'll have to think about whether it's worth it.
Of course it might not even happen.
Abstract: A defining feature of life is the ability to leverage environmental information to adapt to complex conditions. Organisms possess a variety of sensory systems, ranging from gene regulatory networks to neural circuits, that integrate environmental and social cues in order to inform development, behavior, and interactions. In turn, much of the information needed to build and implement sensory integration systems is encoded in the genome. Natural selection acts on phenotypes shaped by the information that organisms acquire, and accumulates adaptive genetic information in the process. As a result, the capabilities of biological information processing systems evolve over time, coming full circle. Information theoretic approaches have significantly advanced our understanding of each of these domains, and the time is ripe to develop “unified theories” for how biological information changes in concert with evolutionary and ecological feedbacks. The goal of this workshop is to bring together researchers studying mechanisms for information encoding and processing at the individual level with those studying the processes of information accumulation and propagation at the collective level. In doing so, the workshop aims to catalyze information theoretic connections that bridge domains and advance our understanding of ecology and evolution.
thank you everyone! it was a long journey, but the language grew far beyond what I imagined and I'm excited to share.
yes the exposition needs a lot of improvement before sharing widely, but I don't want to wait much longer to give it to people individually. I'm on a trip for a couple weeks, but as soon as I'm back I am ready to talk with anyone and provide the current draft.
and I can share the introduction etc. on here pretty soon.
John Baez said:
Abstract: A defining feature of life is the ability to leverage environmental information to adapt to complex conditions.
is it an accident that our own species doesn't seem to support this defining feature of life anymore?
come to think, archaea also pumped the atmosphere full of oxygen and received the environmental information mostly in the form of being burned... maybe we are also advancing evolution by not adapting even to our own environment?
Good question, @dusko. 'By coincidence' - but not really a coincidence at all - the week after next I'm giving a public talk on the Anthropocene, other mass extinction events, and various other crises that life on Earth has survived.
You can see the slides now - but I'm still editing them, so if anyone catches mistakes or has questions, please let me know! Later a video should appear:
Abstract. When pondering our future amid global warming, it is worth remembering how we got here. Even after it got started, the success of life on Earth was not a foregone conclusion! In this talk I recount some thrilling, chilling episodes from the history of our planet. For example: our collision with the planet Theia, the "snowball Earth events" when most of the oceans froze over, and the asteroid impact that ended the age of dinosaurs. Some are well-documented, others only theorized, but pondering them may give us some optimism about the ability of life to survive crises.
This event will be in G.03 on the ground floor of the Bayes Centre, 47 Potterrow, Edinburgh EH8 9BT.
Tea and coffee will be served after the lecture. If you actually show up, say hi!
Regarding the defining feature of life, I did an overview of the ways biologists figure things out. Here is a diagram of the 24 ways I systematized https://www.math4wisdom.com/files/BiologyDiscovery.png and a presentation in Lithuanian http://www.ms.lt/sodas/Mintys/20211105AdomasPavadinoGyv%c5%abnus which I intend to translate soon. The upshot is that "life is a system for managing traits for success". Which is to say, we first define success (at not dying, at homeostasis, at transcending homeostasis), and then consider various traits that achieve that, and then life is any system that coordinates such traits.
@Simon Burton has helped me create an introductory course on applied category theory:
It consists of 77 short lessons, based on the first four chapters of Fong and Spivak's book Seven Sketches in Compositionality: An Invitation to Applied Category Theory.
A previous version of this had existed on the now-defunct Azimuth Forum, but this should be easier to navigate.
It still needs work. Please let me know about any problems!
One problem is that not all the Puzzles are included - especially at first, before I realized I should put the puzzles into the lectures. I will fix this someday.
FYI, the link "to read other lectures, go here" on each page is broken
Hmm, he said he fixed that. Maybe I did something wrong....
Okay, it should be working now! Thanks.
This looks great!
Is there any good centralized list somewhere of ACT learning resources? Maybe we should put one on the Topos site somewhere if not.
A while back folks at Statebox put together this list: https://github.com/statebox/awesome-applied-ct It hasn't been updated in four years though.
I posted something on Lawvere theories and the problem of counting finite algebras of a Lawvere theory:
I gave the proof of an amazing theorem which I mentioned earlier here: if A and B are finite algebras of a Lawvere theory with A × A ≅ B × B, then A ≅ B.
Then @Tobias Fritz conjectured that if A, B, C are nonempty finite algebras of a Lawvere theory with A × C ≅ B × C, then A ≅ B.
Then Tom Leinster disproved that conjecture!
But Benjamin Steinberg pointed to a proof of a big special case of it! https://groupprops.subwiki.org/wiki/Direct_product_is_cancellative_for_finite_algebras_in_any_variety_with_zero
Yes!
And on another note: @Joe Moeller, @Todd Trimble and I just submitted the final (?) version of our paper Schur functors and categorified plethysm for publication in Higher Structures.
I plan to give a talk about this in the Edinburgh Category Theory Seminar on Wednesday October 4th at noon UK time. Y'all can attend via Zoom if you want - follow the link for directions.
John Baez said:
Simon Burton has helped me create an introductory course on applied category theory:
It consists of 77 short lessons, based on the first four chapters of Fong and Spivak's book Seven Sketches in Compositionality: An Invitation to Applied Category Theory.
Some of the hackers are discussing this over here.
A ten-second glance into that thread was a good reminder of why I quit reading Hacker News.
Okay, I think I won't look at that!
Our short article has hit the AMS Notices:
A "Littlewood polynomial" is a polynomial whose coefficients are all 1 or -1. The set of all complex roots of all Littlewood polynomials exhibits many complicated, beautiful and fascinating patterns. Some fractal regions of this set closely resemble "dragon sets" formed by iterated function systems. A heuristic argument for this is known, but no precise theorem along these lines has been proved. We invite you to try!
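Here's a rough way to explore these root sets numerically in pure Python. This is a sketch for experimentation, not a serious numerical method: `durand_kerner` and `poly_eval` are ad hoc names, and the Durand-Kerner iteration is used only because it's short to code, not because the article uses it.

```python
from itertools import product

def poly_eval(coeffs, z):
    """Horner evaluation; coeffs run from leading coefficient to constant."""
    r = 0j
    for a in coeffs:
        r = r * z + a
    return r

def durand_kerner(coeffs, iters=200):
    """Approximate all complex roots by simultaneous (Durand-Kerner) iteration."""
    c = [a / coeffs[0] for a in coeffs]          # make the polynomial monic
    n = len(c) - 1
    zs = [(0.4 + 0.9j) ** k for k in range(n)]   # standard starting guesses
    for _ in range(iters):
        new = []
        for i, z in enumerate(zs):
            d = 1 + 0j
            for j, w in enumerate(zs):
                if i != j:
                    d *= z - w
            new.append(z - poly_eval(c, z) / d)
        zs = new
    return zs

# A classical bound: every root z of a Littlewood polynomial satisfies
# 1/2 < |z| < 2, since for |z| >= 2 the leading term dominates the sum of
# all lower powers, and |z| > 1/2 follows from the reversed polynomial.
for signs in product([1, -1], repeat=5):         # all degree-4 examples
    for r in durand_kerner(list(signs)):
        assert 0.5 < abs(r) < 2.0
print("all degree-4 Littlewood roots lie in the annulus 1/2 < |z| < 2")
```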
Wow, that's been a long time coming!
Yeah! We never finished our paper that had actual proofs of theorems about this stuff, and I felt bad about that, so I used an abbreviated version as a column.
By the way, my article about the moduli space of triangles may be the second to last of my AMS Notices columns: I have one more due in May 2024, and then the current editor, Erica Flapan, will resign. I suppose there's a chance that I get re-invited to write more, but probably not.
Which is okay, since it looks like I'm going to get very busy on category theory for agent-based models.
Here are the lecture notes for my October 4th talk at the Edinburgh Category Theory Seminar, about work done with @Todd Trimble and @Joe Moeller:
In the comments Allen Knutson asked if we could use our stuff to show the ring of symmetric functions is the free λ-ring on one generator, so I did that - but probably not in the best way.
When I put my AMS column on the icosidodecahedron on the arXiv, I tried to put it under the topic "group theory" since it leads up to the E8 root lattice. Someone moved it to "combinatorics" with crosslists to "group theory" and "history and overview".
I thought "okay... maybe you could call it combinatorics". There's no topic just right for the geometry of highly symmetrical polytopes.
No big deal.
Now when I put my AMS column on roots of Littlewood polynomials on the arXiv, I tried to put it under the topic "complex variables". I didn't like this, but there's no topic just right for fractals formed by roots of polynomials, and the complex analysis connected to that.
Then it was put on hold and some goofball classified it under "history and overview" (hmm, okay, though the main point is an open conjecture) with a crosslist to "combinatorics" (huh?).
There appears to be no way to undo the crosslist to combinatorics.
Anyway, someone on the arXiv seems to have too much time on their hands. Or something.
weird... is there a policy on arXiv to "keep people in their lane", so to speak?
Yes, I believe there is. But if someone has decided that my "lane" is combinatorics, well, that's... pretty strange. Especially for a paper on fractal patterns formed by complex roots of polynomials with coefficients ±1. I just can't see that as combinatorics!
There is no public policy. arXiv has an unpleasantly secretive side, probably necessary, but it inevitably makes mistakes as a result. Between that and the bizarre rigidity it enforces w/ LaTeX vs PDF (sometimes compiling on arXiv is quite tricky) I find HAL to be a good second option.
Luckily I don't think anyone except me will care too much how this paper of mine is crosslisted. It just seems so... dumb.
The days are long past when the day's hep-th preprint listing was something a person could sensibly read in full, but math.CT probably still is. But does anyone past graduate school still look at all the abstracts anyway?
I know people in specific areas who read the abstracts in their areas. For example if you're a self-proclaimed category theorist it makes sense to browse the abstracts in math.CT. But my own interests are too diffuse for any one arXiv subject area.
I try to skim the math.CT arxiv postings. I don't read all the abstracts, mainly I look at the titles and authors until something catches my eye and then I look at its abstract. But I like to have some idea of "what's going on in the field", especially since I can't go to as many conferences as I'd like.
My papers have been misclassified by the arXiv moderators on a number of occasions. The "keeping people in their lanes" idea doesn't seem to hold, because it's often a change from a topic I do lots of work in to one I have never worked in. This is mainly frustrating to me because I imagine people in those fields being irritated at me for wasting their time with irrelevant subject classifications. If you try to get this changed, they say you need to explain in detail why the paper does not belong to the category they assigned it to - but this is essentially impossible to do to the degree they require when you cannot comprehend how they thought it was a reasonable category in the first place.
I can explain in arbitrary amounts of detail why this particular 2-page paper is not about combinatorics. I could easily write 2 pages about why. :upside_down:
But would it work? I think it's worth the bother just to let them know that people don't like their papers being reclassified.
Probably it wouldn't work, but I agree that it might be worthwhile nonetheless.
There is probably no point arguing with arXiv per se. Maybe individual mods for the relevant subject areas.
I don't know which moderator changed my "complex variables" classification to "combinatorics".
Does an expert on complex variables get to say "this stuff isn't complex variables, it's... umm, err... combinatorics!"
Perhaps you've seen this already, but there is a list of arXiv moderators. I wonder which one of them moderates math.CT; I do recognize some names but nobody who strikes me as a category theorist. Perhaps Anthony Licata?
That list of moderators is interesting. Does anyone recognize any of the names as being category/homotopy theorists? I know some of those folks but I don't recognize any of them as being in my field...
Anthony Licata (Australian National University) works on categorification, which is not exactly what you asked for, but the closest I can see.
I wonder if people at the arXiv view CT as something that's more of a tool, and so anyone who uses it in their own field can moderate the math.CT submissions? Not unreasonable, perhaps, given that the arXiv is not filtering for anything but the minimum possible standard of "is a paper written in academic style" (modulo cranks, who get shuffled to math.GM). But a little sad and a reflection on the general standing of the CT community among the broader one, maybe. (I'm reminded of Barwick's essay on the future of homotopy theory, and the need to ensure homotopy theorists get appointments to journal boards etc to have a healthy ecosystem that recognises good work)
I wrote an article on tuning where you only use frequency ratios built from powers of 2 and 3:
I explain three ways the 'Pythagorean comma' shows up:
The third one I've never heard anyone discuss. It's well-known that there are two sizes of half-tone in Pythagorean tuning, but it turns out the ratio of these two sizes is the Pythagorean comma!
Someone must have noticed this, but it was very fun to rediscover.
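Two of these appearances of the Pythagorean comma can be checked with exact arithmetic. This is a minimal sketch using Python's `fractions` module, not taken from the article itself; the specific ratios (apotome 2187/2048 and limma 256/243 as the two half-tone sizes) are the standard ones.

```python
from fractions import Fraction

# The Pythagorean comma: 12 perfect fifths overshoot 7 octaves.
comma = Fraction(3, 2) ** 12 / Fraction(2, 1) ** 7   # = 531441/524288

# The two sizes of half-tone in Pythagorean tuning:
apotome = Fraction(2187, 2048)   # chromatic semitone, 3^7 / 2^11
limma = Fraction(256, 243)       # diatonic semitone, 2^8 / 3^5

assert comma == Fraction(531441, 524288)
# The ratio of the two half-tone sizes is again the Pythagorean comma:
assert apotome / limma == comma
```

Since everything here is a ratio of powers of 2 and 3, `Fraction` keeps the check exact with no floating-point fuzz.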
I don't want to get too deep into tuning systems, but I want to understand just intonation and other forms of 5-limit tuning (where all ratios are built from powers of 2, 3, and 5) well enough to understand this wonderful paper about Newton's work on tuning systems:
Just for completeness, here are the videos of my first two This Week's Finds lectures this year, on n-categories:
https://www.youtube.com/watch?v=ZVecriTCBLU
and on the periodic table of n-categories, and the stabilization hypothesis:
https://www.youtube.com/watch?v=X1PkkqDwf8Y
A paper of mine was just published in the Thang Long Journal of Science.
Here's another post on tuning systems:
Here's an issue that shows up. Given an irrational number x, say a fraction p/q is a "best-so-far" rational approximation of x if |x - p/q| is smaller than |x - p'/q'| for any fraction p'/q' with a smaller denominator q' < q.
(Here q and q' are positive.)
1) Are all rational approximations arising from continued fraction expansions in the usual way best-so-far rational approximations? I feel the answer is "yes" but I don't know if it's true or why.
2) Do all best-so-far rational approximations arise from continued fraction expansions in the usual way? Apparently not: I give a counterexample.
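The definition can be brute-forced. This is my own minimal sketch, not the counterexample from the blog post: for each denominator up to a bound, take the nearest numerator and keep the fraction only if it strictly beats every smaller denominator.

```python
import math

def best_so_far(x, qmax):
    """Brute-force the 'best-so-far' rational approximations p/q of x:
    keep p/q only if its error strictly beats every fraction with a
    smaller denominator."""
    results = []
    best_err = math.inf
    for q in range(1, qmax + 1):
        p = round(x * q)          # nearest numerator for this denominator
        err = abs(x - p / q)
        if err < best_err:
            best_err = err
            results.append((p, q))
    return results

print(best_so_far(math.pi, 10))
# → [(3, 1), (13, 4), (16, 5), (19, 6), (22, 7)]
```

Note that 13/4, 16/5 and 19/6 are not continued-fraction convergents of π (those are 3, 22/7, 333/106, ...), which illustrates why the answer to question 2 is "apparently not".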
The above questions were nicely tackled by Oscar Cunningham, so I put his answers on my blog. :check_mark:
I'm going to give talks in Cambridge and Birmingham this week, so maybe I'll see some of you there!
Wednesday October 18, 2023 - Jamie Vicary has invited me to speak in the Logic and Semantics Seminar at the University of Cambridge. I'll be speaking from 14:00-15:00 in room LT1 of the Computer Laboratory.
Friday October 20, 2023 - at 11 am I'll speak at the University of Birmingham Computer Science seminar. My contact is George Kaye, with Martín Escardo serving as catalyst.
It'll be the same talk in both places:
Mathematical models of disease are important and widely used, but building and working with these models at scale is challenging. Many epidemiologists use “stock and flow diagrams” to describe ordinary differential equation (ODE) models of disease dynamics. This talk introduces the mathematics of stock and flow diagrams and two software tools for working with them. The first, called StockFlow.jl, is based on category theory and written in AlgebraicJulia. The second, called ModelCollab, runs on a web browser and serves as a graphical user interface for StockFlow.jl. Modelers often regard diagrams as an informal step toward a mathematically rigorous formulation of a model in terms of ODEs. However, stock and flow diagrams have a precise mathematical syntax. Formulating this syntax using category theory has many advantages, but I will focus on three: functorial semantics, model composition, and model stratification. This is joint work with Xiaoyan Li, Sophie Libkind, Nathaniel Osgood, Evan Patterson and Eric Redekopp.
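The key point of the abstract - that a stock and flow diagram is formal syntax whose semantics is an ODE - can be illustrated with a toy encoding. This is NOT StockFlow.jl's actual data structure or API, just a minimal Python sketch with a hypothetical SIR diagram: stocks are names, flows are (source, target, rate) triples, and the vector field is read off mechanically.

```python
# Toy stock-flow diagram for an SIR model. All names and rates here are
# illustrative, not from the talk.
stocks = ["S", "I", "R"]

beta, gamma = 0.3, 0.1   # illustrative infection and recovery rates
flows = [
    ("S", "I", lambda s: beta * s["S"] * s["I"] / sum(s.values())),  # infection
    ("I", "R", lambda s: gamma * s["I"]),                            # recovery
]

def vector_field(state):
    """The ODE semantics: each flow drains its source and fills its target."""
    d = {x: 0.0 for x in stocks}
    for src, tgt, rate in flows:
        r = rate(state)
        d[src] -= r
        d[tgt] += r
    return d

def euler_step(state, dt):
    d = vector_field(state)
    return {x: state[x] + dt * d[x] for x in stocks}

state = {"S": 990.0, "I": 10.0, "R": 0.0}
for _ in range(1000):                # simulate 100 time units
    state = euler_step(state, 0.1)
```

Because every flow subtracts from one stock exactly what it adds to another, the total population is conserved by construction - one of the structural guarantees a diagrammatic syntax makes easy to see.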
Hey @John Baez ! I am thrilled to report that I’ve now started my maths program and your advice of practicing hundreds of practice problems over the summer to prepare for a maths program really paid off! A question I had is I quite admire your blog on maths and was curious: how did you make blogging on maths into a habit for yourself? I blog a bit to understand concepts better, have fun, and share insights here and there but was curious on your methods or approach.
Congratulations! I'm glad it paid off, and I hope you do well!
I was one of the first people to blog (before the world-wide web existed). I did it in part because the woman who is now my wife lived on the other side of the country for 7 years before we got jobs in the same place, so I was lonely and bored at night. I found it was a good way to start conversations with people all around the world. But back then there were far fewer blogs - and nothing like Facebook or Twitter or Tiktok - so it was easier to attract attention than it is now.
Later, as it became harder to get anyone to notice anything, I still enjoyed blogging because I realized it was a good way to clarify my thoughts. If I can't explain something clearly, it's usually because I don't understand it well enough yet. Blogging also makes it easier to remember things: when I forget something I can often look at old articles I wrote to refresh my memory. By now it's a basic part of how I work.
Here's my talk at ACT2023. While it has ‘epidemiology’ in the title, it’s mostly about general ways to use category theory to build flexible, adaptable models:
Here are the slides, which have clickable links in blue.
Abstract. Mathematical models of disease are important and widely used, but building and working with these models at scale is challenging. Many epidemiologists use “stock and flow diagrams” to describe ordinary differential equation (ODE) models of disease dynamics. In this talk we describe and demonstrate two software tools for working with such models. The first, called StockFlow, is based on category theory and written in AlgebraicJulia. The second, called ModelCollab, runs on a web browser and serves as a graphical user interface for StockFlow. Modelers often regard diagrams as an informal step toward a mathematically rigorous formulation of a model in terms of ODEs. However, stock and flow diagrams have a precise mathematical syntax. Formulating this syntax using category theory has many advantages for software, but in this talk we explain three: functorial semantics, model composition, and model stratification.
You can get the code for Stockflow here and for ModelCollab here.
I'm currently enjoying reading through your applied category theory course!
While doing this, I am finding the occasional typo. Is there a good place to share the typos I find, so that they can get fixed?
I'm glad you're enjoying the course. Please do notify me of typos and other mistakes! Emailing me at baez@math.ucr.edu is best.
I've gotten a few corrections already, that fixed really bad mistakes - mistakes in definitions.
..................................................................................................
I'm digging into the math of 'just intonation', which was one of the most popular tuning systems from roughly 1300 to 1550 - and still used in some vocal and string music, I think.
It's based on the free abelian group generated by the primes 2, 3 and 5.
I started writing about it here:
I explain why it became more popular than the Pythagorean tuning system, which is based on the free abelian group generated by the primes 2 and 3.
But this article just scratches the surface of the math! I'm writing more.
One general problem is that people who know math jargon may not know music jargon, and vice versa. So, if you want to use the concepts of "free abelian group" and "major triad on the fourth", you either have to explain both of them, explain one and alienate one audience, or explain neither and alienate almost everyone. There are people who write about music theory using serious math and publish it in serious journals. But they are talking to very few people, I'm afraid!
So, I'm trying to tread carefully here and explain things as I go.
The second part will be a lot more mathematically interesting, because it introduces a crucial concept, called the Tonnetz:
This is an important submonoid of the free monoid on 2, 3, and 5; it's generated by 3/2 and 5/4.
Wait... do you mean the free abelian group generated by 2, 3 and 5, in that case, since you're dividing by them?
Yeah, I meant free abelian group. Will fix. I was too busy thinking how the nonnegative integers are the free commutative monoid on the primes.
...................
Hey, @David Michael Roberts. That erratum for "From loop groups to 2-groups" will appear fairly soon! After the copy editor and I worried about how to fix a bad line break without destroying the meaning of what I was trying to say, things seem to be sailing smoothly:
In that case, I will consider this ready to publish online. I am waiting on a few others, so I’m not sure whether to expect it to be posted one week or two weeks from today. Once it appears online, I will notify you and send you a copy with the correct page numbers.
Excellent, thanks for letting me know!
John Baez said:
Yeah, I meant free abelian group. Will fix. I was too busy thinking how the nonnegative integers are the free commutative monoid on the primes.
In my idiolect, that would be the positive integers.
I really can't slip anything past you guys. :upside_down:
I don't have a Mastodon account, but I wanted to respond to what you posted about there.
You mentioned a somewhat mysterious figure from a book by Boethius:
[image: Boethius's square of numbers]
I was able to read the version of the book here. To do this, I took screenshots of the text, converted them to "copy-pastable" text using this and then translated the text to English using chatGPT.
I believe the square is illustrating some simple properties of certain arithmetic and geometric sequences. Namely, if you take two terms x - d and x + d of an arithmetic sequence, then their sum is equal to twice the term x between them. Similarly, if you take two terms y/r and yr of a geometric sequence, then their product is equal to the square of the term y between them. This is because (x - d) + (x + d) = 2x and (y/r)(yr) = y².
Each column of this square is an arithmetic sequence, and each row of the square is a geometric sequence. By the way, the reason Boethius is talking about this is because he is interested in studying "even-ness" and "odd-ness". Here, he is interested in numbers that are "somewhat even" but not "minimally" or "maximally" even. These are the numbers that have at least two factors of 2, and have at least one prime factor besides 2. The first column is the "least even" numbers satisfying these criteria - the odd multiples of 4 (skipping 4 itself). The entries in the second column from the left have three factors of 2, and so its entries are the odd multiples of 8 (skipping 8). The "even-ness" continues to increase as one moves to the right.
The "arcs" on the outside of the square illustrate the property of arithmetic and geometric sequences I mentioned above. I first give some examples relating to geometric sequences. For example, 12 × 48 = 24². We also have that 12 × 96 = 24 × 48. Similarly, moving to the second row (which corresponds to the "inner arcs" on the top of the drawing), we have 20 × 80 = 40². We also have 20 × 160 = 40 × 80.
On the left of the square, we have some examples of the above mentioned property of arithmetic sequences. On the leftmost arcs, we have some examples illustrating properties of the arithmetic sequence in the leftmost column. For example, 12 + 28 = 2 × 20, and 12 + 36 = 20 + 28. Moving to the "right-most arcs" on the left side, these now correspond to properties of the arithmetic sequence in the second column from the left. For example, 24 + 56 = 2 × 40 and 24 + 72 = 40 + 56.
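The square as described can be reconstructed and checked programmatically. This sketch assumes the layout mentioned later in the thread: rows are 3, 5, 7, 9 times the list 4, 8, 16, 32, so each row is geometric with ratio 2 and each column is arithmetic.

```python
# Reconstruct Boethius's 4x4 square: entry = odd factor * even factor.
odds = [3, 5, 7, 9]
evens = [4, 8, 16, 32]
square = [[o * e for e in evens] for o in odds]   # rows: 12,24,48,96 etc.

# Each row is geometric: outer terms multiply like inner terms,
# and each term (with its neighbors) satisfies the "square" property.
for row in square:
    assert row[0] * row[3] == row[1] * row[2]   # e.g. 12 * 96 == 24 * 48
    assert row[0] * row[2] == row[1] ** 2       # e.g. 12 * 48 == 24^2

# Each column is arithmetic: outer terms sum like inner terms.
for j in range(4):
    col = [square[i][j] for i in range(4)]
    assert col[0] + col[3] == col[1] + col[2]   # e.g. 12 + 36 == 20 + 28
    assert col[0] + col[2] == 2 * col[1]        # e.g. 12 + 28 == 2 * 20
```

The assertions are exactly the identities the arcs on the drawing seem to be pointing out.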
By the way, Boethius is really good at frequently giving examples of what he means. Without the examples he gives, his meaning would be much less clear (at least to me!).
After having had all the fun of working from the Latin, I now see that there is a nice presentation of the key ideas in English here. The article is called "DE ARITHMETICA, Book I, of Boethius" and it is by Dorothy V. Schrader.
Wow, great! I will post your comments to Mastodon. So far I've been unable to spark a conversation there.
It looks like you really figured it out!
I'd been misled a bit by hoping there was some connection to harmony theory. However, the appearance of a row of numbers with 7 as a factor rules out the possibility that it's connected to the tuning systems Boethius would have considered!
Of course I noticed that the rows were 3, 5, 7, and 9 times the same list of numbers, and the columns were 1, 2, 4 and 8 times some other list of numbers. But I didn't really get what was going on, especially with the "arcs".
Thanks! To me this sort of math feels sadly simple compared to what Archimedes and Diophantus had been doing centuries earlier.... especially when it lacks the redeeming connection to harmony (which to me justifies all sorts of investigations into very particular number patterns). But then I remember that this guy Boethius survived the sacking of Rome by the Goths, and had become advisor to the king of the Goths. And then it seems pretty impressive that he had time for this!
Given that Trump may win the next election and he is planning to seek revenge against people who went after him, and perhaps invoke the Insurrection Act, I can't help but thinking about the role of math and science during the collapse of an empire. Maybe Boethius would have done better to keep his head down, keep translating Greek texts into Latin, and not complain about corruption in the Ostrogothic court!
.........
My latest video:
The 3-strand braid group has striking connections to the group SL(2,ℤ) of invertible 2x2 matrices with integer entries, the Lorentz group from special relativity, modular forms (famous in number theory), and the trefoil knot. They all fit together in a neat package, which I explain here.
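One standard homomorphism from the 3-strand braid group to SL(2,ℤ) can be checked directly; whether this matches the presentation used in the video is my assumption, but the map below (braid generators to elementary shear matrices) is the well-known one.

```python
def mul(A, B):
    """2x2 integer matrix multiplication."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s1 = [[1, 1], [0, 1]]    # image of the braid generator sigma_1
s2 = [[1, 0], [-1, 1]]   # image of the braid generator sigma_2

# The braid relation s1 s2 s1 = s2 s1 s2 holds in SL(2,Z):
assert mul(mul(s1, s2), s1) == mul(mul(s2, s1), s2)

# (s1 s2)^3 = -I, so (s1 s2)^6 = I: the (infinite-order) center of the
# braid group maps onto a finite cyclic group, which is one way to see
# that SL(2,Z) is a quotient of the braid group, not isomorphic to it.
ab = mul(s1, s2)
ab3 = mul(mul(ab, ab), ab)
assert ab3 == [[-1, 0], [0, -1]]
```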
.....
My new blog article dives deeper into just intonation:
We start with the free abelian group generated by 2, 3, and 5. Then we mod out by 2 (an 'octave') since a tone with twice the frequency of another sounds 'the same, only higher'. Then we identify this quotient with the free abelian group generated by 3/2 (a 'perfect fifth') and 5/4 (a 'major third').
We get this diagram called the Tonnetz:
Then we consider notes that differ from 1 by only a few factors of 3/2 and 5/4. These lie in a parallelogram:
If we 'curl up' this parallelogram we get a torus containing 12 tones! What we're doing here is taking the free abelian group, quotienting by a subgroup, and getting a 12-element abelian group.
However, when we do this quotient some tricky things happen, as I start to explain in this article.
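The construction can be sketched in a few lines of exact arithmetic. The specific bounds on the parallelogram here (four fifths by three thirds) are my own guess at a convenient choice, not necessarily the ones in the article; the point is just that reducing modulo octaves yields 12 distinct tones.

```python
from fractions import Fraction

def reduce_to_octave(f):
    """Multiply or divide by 2 until the frequency ratio lies in [1, 2)."""
    while f >= 2:
        f /= 2
    while f < 1:
        f *= 2
    return f

fifth = Fraction(3, 2)   # perfect fifth
third = Fraction(5, 4)   # major third

# A 4-by-3 parallelogram of notes (illustrative bounds):
# a fifths and b thirds, then reduced into a single octave.
tones = sorted({reduce_to_octave(fifth ** a * third ** b)
                for a in range(4) for b in range(3)})

assert len(tones) == 12               # a 12-tone just-intonation scale
assert all(1 <= t < 2 for t in tones)
```

The tones stay distinct after octave reduction because the exponents of 3 and 5 in each ratio are unchanged by multiplying or dividing by 2.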
David Egolf said:
I don't have a Mastodon account, but I wanted to respond to what you posted about there.
I took the liberty of broadcasting your reply over there.
Each column of this square is an arithmetic sequence, and each row of the square is a geometric sequence. By the way, the reason Boethius is talking about this is because he is interested in studying "even-ness" and "odd-ness". Here, he is interested in numbers that are "somewhat even" but not "minimally" or "maximally" even. These are the numbers that have at least two factors of 2, and have at least one prime factor besides 2. The first column is the "least even" numbers satisfying these criteria - the odd multiples of 4 (skipping ). The entries in the second column from the left have three factors of 2, and so its entries are the odd multiples of 8 (skipping ). The "even-ness" continues to increase as one moves to the right.
This is cool. By the way, one possible reason for skipping 4 and 8 is that 1 wasn't considered to be a number!
I think that 4 and 8 were skipped because Boethius considered them to be in a different class of even numbers than the numbers that were included in the figure. 4 and 8 are both powers of 2, and so were considered by Boethius to be in the class of most "even evens". So, they were excluded from the discussion in this section, where "somewhat even" numbers were under consideration.
Based on my (rough) understanding of what Boethius was saying up to this point in the book, I believe he did consider 1 to be a number, but he did not mention 0. Quoting from that article I linked to, the translation (or paraphrase?) in English says at one point "Number is defined as a collection of units, or as a flow of quantity made up of units". And I believe he talks about adding one and multiplying by one in various places.
However, interestingly, I don't think he considered zero to be a number. Again quoting from the article I mentioned above: "Every number is equal to one-half the sum of the numbers next above and next below it, the next one but one above and below, the next one but two, and so on. Thus, 5 = 1/2 (4 + 6); 5 = 1/2(3 + 7); and 5 =1/2 (2 + 8). This is true of every number but unity, which has nothing below it and so is equal to one-half the number above it."
If Boethius had wanted to introduce a new number "below unity" while extending the pattern he points out, then he would want some number x so that 1 = (1/2)(x + 2). That is, he would want some x so that x = 0. (But, based on the part of the book I read, I saw no indication that Boethius actually did this.)
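Boethius's pattern and the "below unity" calculation are easy to check. A tiny sketch, with the solve spelled out:

```python
# Boethius's pattern: every number is half the sum of its equidistant
# neighbors, e.g. 5 = (1/2)(4 + 6) = (1/2)(3 + 7) = (1/2)(2 + 8).
for n in range(2, 50):
    for k in range(1, n):
        assert 2 * n == (n - k) + (n + k)

# Extending the pattern to unity itself: we need x with 1 = (x + 2) / 2,
# i.e. x = 2 * 1 - 2 = 0 - the number Boethius never introduced.
x = 2 * 1 - 2
assert x == 0
```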
"Number is defined as a collection of units, or as a flow of quantity made up of units".
In fact this is consistent with not considering 1 to be a number. Of course you and I think of a "collection" as possibly containing just one thing, but in everyday language it does not, e.g. if you invite someone to look at your rock collection and you show them one rock they'll think you're weird.
When Boethius writes
This is true of every number but unity
he's definitely saying 1 is a number! But in classical Greek mathematics the "unit" was not officially considered a "number". In fact sometimes it's considered to be the opposite of number. (One is the opposite of many.)
Well, I guess it's complicated! In someone's blog article
I read:
In book VII of the Elements (Definitions 1 & 2), Euclid defines the technical terms μονάς = monas (unit) and ἀριθμὸς = arithmos (number):
A monas (unit) is that by virtue of which each of the things that exist is called one (μονάς ἐστιν, καθ᾽ ἣν ἕκαστον τῶν ὄντων ἓν λέγεται).
An arithmos (number) is a multitude composed of units (ἀριθμὸς δὲ τὸ ἐκ μονάδων συγκείμενον πλῆθος).
So 1 is the monas (unit), and the technical definition of arithmos excludes 0 and 1, just as today the technical definition of natural number is taken by some mathematicians to exclude 0.
However, in informal Greek language, 1 was still a number, and Greek mathematicians were not at all consistent about excluding 1. It remained a number for the purpose of doing arithmetic. Around 100 AD, for example, Nicomachus of Gerasa (in his Introduction to Arithmetic, Book 1, VIII, 9–12) discusses the powers of 2 (1, 2, 4, 8, 16, 32, 64, 128, 256, 512 = α, β, δ, η, ιϛ, λβ, ξδ, ρκη, σνϛ, φιβ) and notes that “it is the property of all these terms when they are added together successively to be equal to the next in the series, lacking a monas (συμβέβηκε δὲ πᾱ́σαις ταῖς ἐκθέσεσι συντεθειμέναις σωρηδὸν ἴσαις εἶναι τῷ μετ’ αὐτὰς παρὰ μονάδα).” In the same work (Book 1, XIX, 9), he provides a multiplication table for the numbers 1 through 10.
So I guess in formal definitions the "monas" was not considered an "arithmos", but then they went around doing arithmetic operations with 1 without getting worked up about it... and Nichomachus even wisely considers 1 to be a power of 2.
(By the way, I think “it is the property of all these terms when they are added together successively to be equal to the next in the series, lacking a monas" must mean 1 = 2-1, 1+2 = 4-1, 1+2+4 = 8-1, etc.)
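Nicomachus's observation is the familiar identity 1 + 2 + ... + 2^(n-1) = 2^n - 1, which a couple of lines can confirm:

```python
# Powers of 2, "wisely" including 1 = 2^0 as Nicomachus did.
powers = [2 ** n for n in range(10)]   # 1, 2, 4, ..., 512

# Added together successively, they equal the next power "lacking a monas":
for n in range(1, 10):
    assert sum(powers[:n]) == powers[n] - 1   # e.g. 1+2+4 == 8 - 1
```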
That's interesting! I did not know that 1 was ever (even partially) excluded!
Yes! Here's a nice quote from the above blog:
The special property of 1, the monas or unit, was sometimes expressed (e.g. by Nicomachus of Gerasa) by saying that it is the “beginning of arithmoi … but not itself an arithmos.”
The number 2 was also treated in funny ways, e.g. the early Greeks thought 2 was not prime - because obviously even numbers aren't prime.
John Baez said:
if you invite someone to look at your rock collection and you show them one rock they'll think you're weird.
That's true, but I feel like they would also think I was somewhat weird if I only showed them two rocks. So that example feels to me more like the sorites paradox.
I'm impressed 1 was recognised as a power of anything bigger than 1!
John Baez said:
Thanks! To me this sort of math feels sadly simple compared to what Archimedes and Diophantus had been doing centuries earlier.... especially when it lacks the redeeming connection to harmony (which to me justifies all sorts of investigations into very particular number patterns).
Which is funny, because the historical complaint mentioned in the article that translated a bunch of the text was that the work was too theoretical, and not focussed enough on practical algorithms! I think the argument that, because Boethius had only Roman numerals to work with, actual arithmetical algorithms were not really available to him is possibly flawed, but it's interesting to consider.
The early history of combinatorics https://en.wikipedia.org/wiki/History_of_combinatorics#Earliest_records is likewise very much bogged down in what we would today think of as very elementary material. It took nearly 200 years for medieval mathematicians in the Jewish tradition to go from proving that what we now call binomial coefficients are symmetric to finding the closed-form formula for them. That Boethius is experimenting with sequences of certain types of numbers feels in the same vague neighbourhood of such a level of mathematics (but, of course, over 500 years earlier).
In ordinary language, people say they have a number of items, but never in this sense a number of an item, or a number of one item.
Do they say they have "a number of items" if they have exactly two? That also sounds a bit weird to my ears.
"A number of one item" doesn't sound wrong to my ear, but not in the sense that you could replace "item" in that phrase with something in particular (e.g. "a number of one widget" definitely sounds wrong, but the generic "a number of one thing" still sounds OK). To be more precise, the literal phrase "a number of one item" sounds like a multiplicity of some kind of thing that is yet unnamed, as in the hypothetical conversation: "I have a number of one item." "What item is that?" "Sprockets. I have 75 of them."
Ending that conversation with "Sprockets. I have one of them." would still make sense, but would sound more like a comedy sketch (I could imagine something like this coming from Groucho Marx, e.g.) due to the subversion of expectations that it is more than one.
Mike Shulman said:
John Baez said:
if you invite someone to look at your rock collection and you show them one rock they'll think you're weird.
That's true, but I feel like they would also think I was somewhat weird if I only showed them two rocks. So that example feels to me more like the sorites paradox.
I believe that in very early Greece two was also not considered a number! My strongest piece of evidence: the Greek language had three number forms of nouns, verbs and adjectives: singular, dual and plural. So, if you referred to two things using the plural, that would be ungrammatical.
I want to write about this more carefully sometime, but I think a case can be made that at some point 3 was the first number; then people realized 2 was also a number; then people realized 1 was also a number, and only much later did they realize 0 was also a number. (Then later they realized -1 was a number, and all hell broke loose.)
Taking your sorites paradox seriously, I should look for evidence that at some point even three was not considered a number - e.g., languages that treat 3 differently from other numbers. I think such evidence is thin on the ground, but I've heard rumors that some language has words for one, two, three and many.
More about the dual case, and how traces of it persist in English even today:
The dual number existed in Proto-Indo-European and persisted in many of its descendants, such as Ancient Greek and Sanskrit, which have dual forms across nouns, verbs, and adjectives; Gothic, which used dual forms in pronouns and verbs; and Old English (Anglo-Saxon), which used dual forms in its pronouns. It can still be found in a few modern Indo-European languages such as Irish, Scottish Gaelic, Lithuanian, Slovene, and Sorbian languages.
The majority of modern Indo-European languages, including modern English, have lost the dual number through their development. Its function has mostly been replaced by the simple plural. They may however show residual traces of the dual, for example in the English distinctions: both vs. all, either vs. any, neither vs. none, and so on. A commonly used sentence to exemplify dual in English is "Both go to the same school." where both refers to two specific people who had already been determined in the conversation.
David Michael Roberts said:
John Baez said:
Thanks! To me this sort of math feels sadly simple compared to what Archimedes and Diophantus had been doing centuries earlier.... especially when it lacks the redeeming connection to harmony (which to me justifies all sorts of investigations into very particular number patterns).
Which is funny, because the historical complaint mentioned in the article that translated a bunch of the text was that the work was too theoretical, and not focussed enough on practical algorithms!
Complaint by whom? Boethius' contemporaries? I can imagine the Ostrogothic nobles thought he was an irritating nerd. :upside_down:
There seems to have been a long and painful descent in the level of math and science after the heyday of Archimedes (287-212 BC), Apollonius of Perga (240-190 BC) and Diophantus (200-284 AD). In The Forgotten Revolution Lucio Russo shows how math got watered down in Roman popularizations, and then popularizations based on popularizations, and then popularizations based on popularizations based on popularizations....
It's a really fascinating book, though controversial.
John Baez said:
Taking your sorites paradox seriously, I should look for evidence that at some point even three was not considered a number - e.g., languages that treat 3 different from other numbers. I think such evidence is thin on the ground, but I've heard rumors that some language has words for one, two, three and many.
In Watership Down the rabbits can count to four, while anything exceeding four is 'many', which is fun. I haven't heard of any real world languages drawing a line above two, though.
Somewhere I've heard of a language like that, which counts only up to four. It goes one, two, one-and-two, two-and-two, many. I think this is mentioned in Guns, Germs and Steel, but it was a very long time ago and I might be completely mis-remembering where I read it (and hence have no idea about the veracity of the claim).
Okay, I think Pirahã is a language where some claim there's no word for numbers above 2:
But: "The Pirahã language is most notable as the subject of various controversial claims [....] The controversy is compounded by the sheer difficulty of learning the language; the number of linguists with field experience in Pirahã is very small. "
The Schrader article says
Boethius is often criticized for his impracticality, for not giving at least the elementary rules of computation. It must be remembered, however, that arithmetica was not similar to our present-day arithmetic but rather to number theory, with which science it still has many features in common. The Greek and Roman equivalent of today's arithmetic, which is really the art of reckoning, was logistica.
but without citation. Apparently it's so well known as to not need a reference?
Oh, okay - so this is the sort of thing where on Wikipedia you'd stick a little note saying citation needed. I'm wondering whether these criticisms are circa 500 AD (despite the "is") or more recent.
Mike Shulman said:
Do they say they have "a number of items" if they have exactly two? That also sounds a bit weird to my ears.
Here's what I thought of: if someone says "I can think of a number of counterexamples", and the interlocutor says, "okay, name some", and the first says, "well, there's this... and lemme see, there's also that...", then I think it would be weird or unusual for the second one to insist on a third counterexample. I reckon the first has produced the goods.
Okay, so it seems 2 is a number but just barely - like a kind of pathetic attempt at a number.
We're willing to call it a number, but only not to be rude. :upside_down:
The conversation reminds me of a class discussion when I was in the sixth grade. We were discussing prime numbers, and the teacher seemed unsure whether 2 was a prime (she was claiming 'not' in the beginning, but was weakening under pressure from the students, and went off to consult with a colleague). It's well-known that 2 is the oddest prime of all.
Looping back around to the original topic, Boethius' De arithmetica begins by introducing various classes of number: even, odd, evenly even, evenly odd, and oddly even.
2 is oddly even!
As an illustration of how pathetic 2 is because it's so small: the usual exponential map defines an invertible homomorphism on the p-adics, exp: pZ_p → 1 + pZ_p, except when p = 2, and the basic reason is that 2 is just too damned small (the image winds up landing in 1 + 4Z_2, as you can see by reducing mod 4).
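Not part of the original remark, but the convergence bookkeeping behind this is easy to check numerically: exp(x) = sum of x^n/n! converges 2-adically exactly when the 2-adic valuations of its terms tend to infinity, and Legendre's formula gives v_2(n!) = n - s_2(n), where s_2(n) is the binary digit sum of n. A sketch in Python (the function names are mine):

```python
def s2(n):
    """Number of 1s in the binary expansion of n."""
    return bin(n).count("1")

def v2_factorial(n):
    """2-adic valuation of n!, by Legendre's formula: n - s2(n)."""
    return n - s2(n)

def term_valuation(n, v):
    """2-adic valuation of the n-th term x^n / n! of exp(x), when v2(x) = v."""
    return n * v - v2_factorial(n)

# With v2(x) = 1 the term valuations equal s2(n), which stays bounded
# (it is 1 whenever n is a power of 2), so exp does not converge on 2Z_2:
assert [term_valuation(2**k, 1) for k in range(1, 8)] == [1] * 7

# With v2(x) >= 2 the valuations grow at least linearly,
# so exp does converge on 4Z_2:
assert all(term_valuation(n, 2) >= n for n in range(1, 200))
```

So the obstruction really is that 2 is small: for any odd p the analogous term valuations grow already on pZ_p.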
Todd Trimble said:
The conversation reminds me of a class discussion when I was in the sixth grade. We were discussing prime numbers, and the teacher seemed unsure whether 2 was a prime (she was claiming 'not' in the beginning, but was weakening under pressure from the students, and went off to consult with a colleague). It's well-known that 2 is the oddest prime of all.
Of course if she were mathematically sophisticated enough she might have claimed that morally, 1 is prime.
Even among mathematicians, it wasn't until some time in the 20th century that 1 was universally considered a non-prime. A famous table of primes around the turn of the century, associated with the name D.N. Lehmer, duly included 1 as the first prime.
(And, relatedly, there are still many mathematicians who believe that the empty space is connected. These things can take a while.)
These things may be cyclical: https://ncatlab.org/nlab/show/field+with+one+element
I am not sure I understand this: the number of elements of a finite field doesn't have to be prime, so the existence of the "field with 1 element" doesn't necessarily say anything about the primality of 1. Am I missing something?
The characteristic of finite fields is prime for sure, but I wouldn't say that the characteristic of F_1 is 1 (in fact, my understanding of F_1 is that it is of "undefined" characteristic, but that may depend on how exactly you define F_1).
(I'm guessing that JR was joking around in his last comment.)
.......
On a different note, today I criticized a famous expositor of math who wrote:
Since life is itself simply a game in disguise, having a few mathematical tricks up your sleeve can also give you an edge in the game of life.
I knew JR was joking, I just didn't (don't) understand the joke! Sorry :upside_down:
I'd heard of F_1 behaving like characteristic 1. I won't claim to understand that stuff.
Todd Trimble said:
(And, relatedly, there are still many mathematicians who believe that the empty space is connected. These things can take a while.)
And even some who believe that 0 is not a natural number!
John Baez said:
.......
On a different note, today I criticized a famous expositor of math who wrote:
Since life is itself simply a game in disguise, having a few mathematical tricks up your sleeve can also give you an edge in the game of life.
I disagree with what you say: "Succeeding in love is not easy, and there's no formula for it." Maybe there's no formula for it, but there is as much rationality in this field as in any other. There are rules in anything: physics, society, cooking, love, etc... There are things to do or not to do if you want to find love, find an academic position, become better at a sport, have more friends, etc... Now, the rules can be very complicated. And sometimes extremely complicated. For instance: what are the rules to be happy? But I think that seeing life not as a game but as an aggregation of games is a good starting point for a realistic view of life.
If I can take the liberty of a slight diversion along the lines of what Greeks thought a number was, Mayberry argues pretty strongly in Foundations of Mathematics in the Theory of Sets, a wonderful but badly-titled book that I'm not sure anybody else has ever read, that the Greek ἀριθμός (arithmos) is really a finite set, not a number at all in our sense. Of course when you think about it, it's hard to imagine they had a clear sense of an abstract number in the 4th or 5th century BC, before Plato even showed up, so I find it an easy argument to buy.
The highlight of the book is a heroic effort at getting some mathematics built out of an axiomatic set theory including the negation of the axiom of infinity! I'm not sure whether any "normal" set theorists do this kind of thing, except to the extent that it's a case of topos theory.
Jean-Baptiste Vienney said:
I disagree with what you say: "Succeeding in love is not easy, and there's no formula for it." Maybe there's no formula for it but there is as much rationality in this field as in any other. There are rules in anything: physics, society, cooking, love etc... There are things to do or not to do if you want to find love, find an academic position, become better at a sport, have more friends etc...
Hmm, it's possible you're disagreeing with something I didn't say. :upside_down: I don't see you claiming that 1) succeeding in love is easy, or 2) there's a formula for it.
But we would disagree if you think there is, lurking somewhere, some extremely complicated set of rules whose satisfaction will guarantee success in finding love. (There are of course many "rules of thumb", general principles which when followed tend in many cases to increase the chance of success.)
Hmm, I think that there is, lurking somewhere, some extremely complicated set of rules whose satisfaction will maximize your success in finding love. Maybe for some individuals we can't guarantee success; a lot depends on your genetics and other factors that you can't control. For instance, if you're Newton, it may be impossible to find success. But I'm sure even Newton could have had more success.
If a full team of scientists was doing everything to help someone find love, they would probably succeed. Maybe we don't have the knowledge today; maybe it would be extremely difficult for the individual to follow all these rules. Maybe following them would be very difficult psychologically. But I do think that love is governed by some rationality which can be written as a very complicated set of rules. Just like everything else.
What I dislike is the idea that there would be something irrational in love. I don't think so, that's my main point.
The seeming irrationality and all the difficulty come from all the things that you can't control: how you are when you're born, the events that don't depend on you, and the other people: you can't decide how they are or what they do.
But given all these externalities, I will not believe that there is not some extremely complicated set of rules whose satisfaction will maximize your success in finding love. Maybe it could even give you a 99% chance of success.
I don't think we really disagree on anything. It's just that most of the time when we speak, we're not very precise, so we can't really pin down exactly what we mean.
Mike Shulman said:
Todd Trimble said:
(And, relatedly, there are still many mathematicians who believe that the empty space is connected. These things can take a while.)
And even some who believe that 0 is not a natural number!
There are plenty of textbooks aimed at the secondary education and undergraduate levels that make the distinction that "natural numbers" does not include zero; if you want to include zero, you have to call them "whole numbers". Sadly, I don't think I've seen any textbooks at these levels that include zero in N.
I wouldn't mind calling W the whole numbers including zero and then letting N = W, but I've never seen mathematicians do this so I'm not going to be the one to start.
Jean-Baptiste Vienney said:
But I do think that love is governed by some rationality which can be written as a very complicated set of rules. Just like everything else.
Okay. I disagree about this for "being in love", and also for many other things like "being a good musician", "being a good mathematician", etc. I believe that for many such activities it is up to the individual to invent/discover their own personal version of what the activity actually is. Furthermore, the activity is not separate from the process of inventing or discovering one's own personal concept of the activity. That is, there's no way to first figure out what it means to do the activity and then do it. You can't really know what it means to do it except by trying to do it - even before you really know what it means! You could try to do it by following some rules that somebody listed for you, but this would be missing the point. Some advice here and there can be helpful sometimes, but part of the advice should be "don't always follow someone else's advice: they're not you, and they can't know what you need to do."
I don't really want to talk about this more, but that's my attitude.
No problem :) We don't disagree on anything, I think; we just don't know which question we're answering. I kind of agree with what you have just said.
Damiano Mazza said:
I am not sure I understand this: the number of elements of a finite field doesn't have to be prime, so the existence of the "field with 1 element" doesn't necessarily say anything about the primality of 1. Am I missing something?
The characteristic of finite fields is prime for sure, but I wouldn't say that the characteristic of F_1 is 1 (in fact, my understanding of F_1 is that it is of "undefined" characteristic, but that may depend on how exactly you define F_1).
The field with one element does not exist (you need 0 different from 1). I think it became famous because of Jacques Tits' "boutade" (joke-hoax): the symmetric group S_n is just GL_n (the general linear group in dimension n) over the field with one element. An elementary motivation is to calculate the number of k-dimensional subspaces of an n-dimensional vector space over the finite field with q elements. You find some nice polynomials in q as a result. When I ask my undergrads "what would you like to do with these polynomials when nobody is looking?", usually a (female) student answers: "evaluate them at q = 1" (although q was supposed to be a power of a prime number). Then, lo!, they find the binomial coefficients, which give you the number of subsets of cardinality k in a set with n elements. Conclusion: an n-dimensional vector space over the field with one element is a set with n elements. Exercise: give a categorical rendering of this ... You can google Pierre Cartier or Christophe Soulé on this non-existent field...
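For what it's worth, that punchline is easy to check by machine. Below is a sketch (my own naming) that builds the Gaussian binomial [n choose k]_q as a polynomial via the q-Pascal recurrence [n,k]_q = [n-1,k-1]_q + q^k [n-1,k]_q, then "evaluates at q = 1" and compares with the ordinary binomial coefficient:

```python
from math import comb

def poly_add(a, b):
    """Add two polynomials given as coefficient lists (lowest degree first)."""
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def gauss_binom(n, k):
    """Gaussian binomial [n choose k]_q as a coefficient list in q,
    built from the q-Pascal rule [n,k]_q = [n-1,k-1]_q + q^k [n-1,k]_q."""
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    return poly_add(gauss_binom(n - 1, k - 1), [0] * k + gauss_binom(n - 1, k))

# Number of 2-dimensional subspaces of F_q^4, as a polynomial in q:
print(gauss_binom(4, 2))  # [1, 1, 2, 1, 1], i.e. 1 + q + 2q^2 + q^3 + q^4

# Evaluating at q = 1 recovers the ordinary binomial coefficients:
for n in range(8):
    for k in range(n + 1):
        assert sum(gauss_binom(n, k)) == comb(n, k)
```

So at q = 1 the count of subspaces degenerates to the count of subsets, exactly as the student's illicit evaluation suggests.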
Kevin Arlin said:
If I can take the liberty of a slight diversion along the lines of what Greeks thought a number was, Mayberry argues pretty strongly in Foundations of Mathematics in the Theory of Sets, a wonderful but badly-titled book that I'm not sure anybody else has ever read, that the Greek ἀριθμός (arithmos) is really a finite set, not a number at all in our sense. Of course when you think about it, it's hard to imagine they had a clear sense of an abstract number in the 4th or 5th century BC, before Plato even showed up, so I find it an easy argument to buy.
I wonder if Lawvere had read this book. I went to one talk by him, at CT in Cambridge, and he was mentioning how an "arithmos", something composed out of units, is really a structural set (or something like that). Here's him in 2015 (https://www.acsu.buffalo.edu/~wlawvere/GrothendiekAveiro15.htm):
The idea of an opposition between a category of cohesive spaces and a category of anti-cohesive sets also applies in particular to Cantor's description of the relation between a category of Mengen and a sub-category of Kardinalen. In fact, it appears that in general the discrete is a co-reflective subcategory of the cohesive, with the co-reflection extracting, as an Aristotelian arithmos, the Cantor 'cardinal of X' or the Hausdorff 'points of X'.
Lawvere then mentions "lauter Einsen" ("bunch of units"), which dates back to his 1994 paper with this phrase in the title, but which doesn't mention arithmoi. Lawvere and Rosebrugh mention "arithmos" in Sets for Mathematics, which was published in 2003, shortly after Mayberry's book from 2000.
Mayberry has published a little more recently on this, eg Euclidean Arithmetic: The Finitary Theory of Finite Sets, https://doi.org/10.1007/978-94-007-0431-2_12, in 2011.
Jorge Soto-Andrade said:
Exercise: Give a categorical rendering of this ...
Believe it or not, I've been very interested in this kind of exercise lately! That's actually why I took JR's joke so seriously: I thought that maybe behind the joke there was some cool fact about F_1 I didn't know!
.....
I put together a longer discussion of the mathematics of Boethius on my blog:
This should succeed in publicizing the great work @David Egolf did in figuring out the meaning of this diagram from Boethius' De arithmetica:
.....
@Simon Burton created a purely mathematical piece of music based on the math of just intonation and the Tonnetz - and a video to go along with it:
I created a piece of ambient music based on it, which I put here. It probably needs a lot of work, but it was fun to do some electronic music after a long break!
.....
In The Hitchhiker's Guide to the Galaxy, by Douglas Adams, the number 42 was revealed to be the answer to the ultimate question of life, the universe, and everything. But he never said what the question was! I will reveal it in my final Leverhulme Lecture here in Edinburgh.
The talk will be at 6 pm UK time on Tuesday November 21st in room G.03 of the Bayes Centre. To attend you need a ticket, which however is free. Theoretically you can get one here, though they are almost sold out:
The talk will probably not be available on Zoom. But with luck it will be recorded - and then you too can know the answer to the ultimate question!
John Baez said:
.....
Simon Burton created a purely mathematical piece of music based on the math of just intonation and the Tonnetz - and a video to go along with it:
I created a piece of ambient music based on it, which I put here. It probably needs a lot of work, but it was fun to do some electronic music after a long break!
I haven't listened to your ambient piece yet, nor to the entirety of Simon's creation, but I want to ask him about something he wrote there, that the tempo of the note is proportional to its frequency. Now, I'm not a musician, but that didn't sound quite right to me, if "tempo" refers to the duration of the note. The lower-pitched notes, which have lower frequencies, should then sound with shorter durations, but that's not what I'm hearing. It sounds like it's the other way around, like maybe the duration is directly proportional to the period of the sound wave, which is inversely proportional to the frequency. Am I wrong?
(Oh, if "high" tempo means quick tempo, so shorter durations of notes, then it would all make sense. Sorry in that case for the noise.)
Yeah, "tempo" here is a synonym for frequency, but a very low frequency. I agree it's confusing terminology.
As frequently (!) happens in mathematics, once you understand something, it becomes hard to recreate how it could be that you didn't understand it earlier. "Tempo" makes perfect sense to me now. Probably it would have struck me earlier if I'd thought for half a second about metronomes (for example).
From my own musical background, I'm not sure I'd say 'tempo' for the inverse of the duration of the repeated individual notes, as I generally think of tempo as a property of the piece as a whole, though I acknowledge that in eg fugal composition, the different voices can play the theme and its variations at different tempi.
Regardless, it's very pleasant to listen to!
Let me say my understanding of what @Simon Burton did, even though you all seem to get it.
Each of the 7 notes of the D major scale is played over and over again at equally spaced moments in time. The number of times per second at which the note is played (the "tempo") is proportional to the frequency at which the note vibrates (the "pitch"). Since he's using just intonation these frequencies are proportional to
1, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8
so the temporal spacing of these notes is proportional to
1, 8/9, 4/5, 3/4, 2/3, 3/5, 8/15
respectively.
The least common multiple of the denominators here is 180, so if we multiply by 180 we get integers:
180, 160, 144, 135, 120, 108, 96
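The arithmetic above is easy to verify with a few lines of Python (a minimal check, using the just-intonation ratios as stated):

```python
from fractions import Fraction
from math import lcm

# Just-intonation frequency ratios of the D major scale
ratios = [Fraction(r) for r in ("1", "9/8", "5/4", "4/3", "3/2", "5/3", "15/8")]

# The spacing between repeats of a note is inversely proportional
# to its frequency:
spacings = [1 / r for r in ratios]

# Clearing denominators: the lcm of the denominators is 180
scale = lcm(*(s.denominator for s in spacings))
print(scale)                               # 180
print([int(s * scale) for s in spacings])  # [180, 160, 144, 135, 120, 108, 96]
```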
So, if I understand right, Simon played
the tonic once every 180 beats
the second once every 160 beats
the third once every 144 beats
the fourth once every 135 beats
the fifth once every 120 beats
the sixth once every 108 beats
the seventh once every 96 beats
and they all sound simultaneously once every 180 beats. He did not play the zeroth beat, so the first time they all sound simultaneously is on the 180th beat, which is the end of the piece!
However, there's also a lot of excitement half-way through the piece, etc.
You can see the nice pattern here:
It's sort of fractal-esque.
This is from Audacity, the free software I used to process the piece and add some echoing and other effects.
Another interesting thing about the piece is that since the seventh is the note played most often, the piece gets a rather dark character. I think I was confused about the details here when I emailed Simon about it earlier! If you take a major scale and play the notes starting on the 7th, it's the Locrian mode.
John Baez said:
Congratulations! I'm glad it paid off, and I hope you do well!
I was one of the first people to blog (before the world-wide web existed). I did it in part because the woman who is now my wife lived on the other side of the country for 7 years before we got jobs in the same place, so I was lonely and bored at night. I found it was a good way to start conversations with people all around the world. But back then there were far fewer blogs - and nothing like Facebook or Twitter or Tiktok - so it was easier to attract attention than it is now.
Later, as it became harder to get anyone to notice anything, I still enjoyed blogging because I realized it was a good way to clarify my thoughts. If I can't explain something clearly, it's usually because I don't understand it well enough yet. Blogging also makes it easier to remember things: when I forget something I can often look at old articles I wrote to refresh my memory. By now it's a basic part of how I work.
Hey @John Baez , it has been some time since you wrote this beautiful message! I've been ruminating on it over the weeks, as I thought it was both such a nice story and inspirational (it seems several others did too!). Since we last messaged a bit, I thought about how to develop further the habit of writing, and have made a small academic writing support group with some friends and collaborators. I definitely feel that blogging and writing in general improve my thinking about things -- hoping to make this a habit that sticks like you have done!
Morgan Rogers (he/him) said:
In Watership Down the rabbits can count to four, while anything exceeding four is 'many', which is fun. I haven't heard of any real world languages drawing a line above two, though.
Aside: I thought I was the only one to have read that book lol is it popular in the UK/US?
I still remember the word btw -- it's hrair! In the Italian translation, one of the characters is named Quintilio, meaning more or less 'the fifth one', and indeed the rabbits call him 'hrairù' because they don't have a word for five.
Yeah, the English translation of "Hrairu" is "Fiver".
Watership Down was very popular in the US and UK, at least among people who like books about talking rabbits.
Seriously, it was quite well received:
The Economist heralded the book's publication, saying "If there is no place for Watership Down in children's bookshops, then children's literature is dead." Peter Prescott, senior book reviewer at Newsweek, gave the novel a glowing review: "Adams handles his suspenseful narrative more dextrously than most authors who claim to write adventure novels, but his true achievement lies in the consistent, comprehensible and altogether enchanting civilisation that he has created." Kathleen J. Rothen and Beverly Langston identified the work as one that "subtly speaks to a child", with "engaging characters and fast-paced action [that] make it readable." This echoed Nicholas Tucker's praise for the story's suspense in the New Statesman: "Adams ... has bravely and successfully resurrected the big picaresque adventure story, with moments of such tension that the helplessly involved reader finds himself checking whether things are going to work out all right on the next page before daring to finish the preceding one."
Adams won the 1972 Carnegie Medal from the Library Association, recognising the year's best children's book by a British subject. He also won the annual Guardian Children's Fiction Prize, a similar award that authors may not win twice. In 1977 California schoolchildren selected it for the inaugural California Young Reader Medal in the Young Adult category, which annually honours one book from the last four years. In The Big Read, a 2003 survey of the British public, it was voted the forty-second greatest book of all time.
So, not quite up there with the bible, but pretty good.
One of my fond memories of the year I lived in England is walking the route taken by the rabbits in Watership Down. All the places in it are real.
(Well, except the rabbity places like Efrafa.)
We should probably move this to #meta: off-topic .
I don't mind it here. I kind of like such digressions.
......
I'm digging a bit deeper into the math of a tuning system that was popular in the Renaissance:
Last time I looked at the "Tonnetz", the free abelian group on a fifth (3/2) and a major third (5/4):
There's a parallelogram whose corners are numbers close to the tritone, √2, times various powers of 2:
We can multiply these numbers by suitable powers of 2 to get numbers between 1 and 2, suitable for notes in a scale:
If we identify opposite edges of this parallelogram, we get a torus with 12 notes on it - just the right number for an ordinary ("chromatic") scale!
But there's a problem, which I discuss in the new article: the numbers on the right edge are 81/80 times their partners on the left edge:
This number 81/80 is called the syntonic comma and it's a fundamental glitch in just intonation. For each of the 4 pairs of notes shown above, we need to pick one to be in our scale!
There's also another problem: the numbers on the bottom edge are 128/125 times their partners on the top edge:
This number 128/125 is called the lesser diesis. It's also the discrepancy between a binary kilobyte (1024 bytes) and a decimal kilobyte (1000 bytes).
So, the lesser diesis shows up whenever you try to translate between powers of 2 and powers of 5 in a simple approximate way.
Similarly the syntonic comma shows up when you translate between powers of 3 and products of powers of 2 and 5.
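Both discrepancies are just prime-power bookkeeping, which a couple of lines of exact rational arithmetic make explicit (a trivial check):

```python
from fractions import Fraction

# Lesser diesis: the mismatch between 5^3 = 125 and 2^7 = 128 --
# the same ratio as a binary vs. decimal kilobyte.
assert Fraction(2**7, 5**3) == Fraction(128, 125) == Fraction(1024, 1000)

# Syntonic comma: the mismatch between 3^4 = 81 and 2^4 * 5 = 80.
assert Fraction(3**4, 2**4 * 5) == Fraction(81, 80)
```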
These problems combine when we try to identify all 4 corners of our parallelogram. We have 4 choices of rational number close to the frequency of a tritone - that is, 4 rational approximations to √2. You can break them into 2 pairs each of which multiply to give 2. You can break them into 2 pairs each of which have a ratio equal to the syntonic comma 81/80. And you can break them into 2 pairs each of which have a ratio equal to the lesser diesis 128/125.
In fact all the ratios of these numbers are important in music theory, and have names:
I find it fascinating that this stuff was pretty well understood by Boethius when he was serving as advisor to Theodoric, king of the Ostrogoths, who had taken over Rome.
Though I'm pretty sure that the triangular lattice came later, and all my ways of talking about it are modern.
Ultimately what we're doing here is taking the free abelian group on 2, 3 and 5, and then modding out by 2 (this is called octave equivalence), 81/80 (the syntonic comma) and 128/125 (the lesser diesis), getting the group Z/12.
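This quotient works out to Z/12, and one can sanity-check that by machine (a sketch; the homomorphism and names are my own choices). Write each relation as an exponent vector in the powers of 2, 3, 5 and map 2, 3, 5 to their 12-tone equal-tempered semitone counts 12, 19, 28: each relation lands in 12Z, the map hits all of Z/12, and the relation matrix has determinant ±12, so the quotient is exactly Z/12:

```python
from math import gcd

# Exponent vectors (powers of 2, 3, 5) of the relations we quotient by:
relations = {
    "octave 2/1":            (1, 0, 0),
    "syntonic comma 81/80":  (-4, 4, -1),
    "lesser diesis 128/125": (7, 0, -3),
}

# Candidate map Z^3 -> Z/12: send 2, 3, 5 to their equal-tempered
# semitone counts 12, 19, 28.
phi = (12, 19, 28)

# Each relation maps to 0 mod 12, so the map descends to the quotient:
for name, (a, b, c) in relations.items():
    assert (a * phi[0] + b * phi[1] + c * phi[2]) % 12 == 0, name

# Surjectivity onto Z/12: the images 0, 7, 4 generate Z/12.
assert gcd(phi[0] % 12, phi[1] % 12, phi[2] % 12, 12) == 1

# |det| of the relation matrix is 12, so the quotient has order 12.
m = list(relations.values())
det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
       - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
       + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
assert abs(det) == 12
```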
John Baez said:
"Number is defined as a collection of units, or as a flow of quantity made up of units".
In fact this is consistent with not considering 1 to be a number. Of course you and I think of a "collection" as possibly containing just one thing, but in everyday language it does not, e.g. if you invite someone to look at your rock collection and you show them one rock they'll think you're weird.
I'd like to add a point to this discussion on 0 and 1. Indeed, I did not know such interesting aspects of the history of numerals, and I wish to thank you all for sharing such tasteful stuff. However, the word "number" does not seem to have a Greek etymology, but rather a Latin one. It is based on the Latin word "numerus", whose root is the Proto-Indo-European word "nem-", which means "assign, allot; take". Therefore, a "number" is a quantity of stuff that results from a partition... from this perspective, not only "one" but also "zero" is legitimately a number.
It's true 'numerus' is Latin and Boethius lived in the Roman Empire (which had just been taken over by Ostrogoths). I wouldn't be surprised if his concept of number was heavily influenced by Greek mathematics (since that's where the Romans learned advanced mathematics, and he was busily translating Greek classics into Latin). But I'd have to carefully read his work to see if he thought 1 was a number. I don't think proto-Indo-European is the best way to tackle that question. But of course there are lots of questions here.
I feel pretty sure Boethius didn't consider zero to be a number: I don't think the concept of 'zero' even existed for him!
Jacob Zelko said:
Since we last messaged a bit, I thought about how to develop further the habit of writing and have made a small little academic writing support group with some friends and collaborators. I definitely feel that blogging and writing in general improve my thinking about things -- hoping to make this a habit that sticks like you have done!
Great! I think it's a great habit... I just spent about 5 hours today writing 3 blog articles, mainly because I was really excited about some things and had to write about them!
......
Some good news! I'm now helping lead a new Fields Institute program on the mathematics of climate change.
You may have heard of the Fields Medal, one of the most prestigious math prizes. But the Fields Institute, in Toronto, holds a lot of meetings on mathematics. So when COVID hit, it was a big problem. The director of the institute, Kumar Murty, decided to steer into the wind and set up a network of institutions working on COVID, including projects on the mathematics of infectious disease and systemic risks. This worked well, so now he wants to start a project on the mathematics of climate change. Nathaniel Osgood and I are leading it.
Nate, as I call him, is a good friend and collaborator. He's a computer scientist at the University of Saskatchewan and, among other things, an expert on epidemiology who helped lead COVID modeling for Canada. We're currently using category theory to develop a better framework for agent-based models.
Nate and I plan to focus the Fields Institute project not on the geophysics of climate change - e.g., trying to predict how bad global warming will be - but the human response to it - that is, figuring out what we should do! This project will be part of the Fields Institute's Centre for Sustainable Development.
I'll have a lot more to say about this. But for now, let me just say: I'm very excited to have this opportunity! Mathematics may not be the main thing we need to battle climate change, but there are important things in this realm that can only be done with the help of math. I know a lot of mathematicians, computer scientists, statisticians and others with quantitative skills want to do something about climate change. I aim to help them do it. And since I have a special fondness for applied category theory, I'll be describing some ways you folks here can join in.
You've been hinting at this for a while, it's very exciting to see you announce it! I look forward to seeing where this goes.
Thanks! I'll definitely keep talking about it.
I love seeing this news :heart: Congratulations! You've inspired me to think more about this stuff going back to the early days of Azimuth Project.
Just a word of caution though. I expressed this to you about the COVID modeling and the same applies to climate science modeling.
One thing I've learned working in corporate life is that models can be used for good and they can be used for evil. Give me a thesis, and I can build you a believable model that supports the thesis. I can also build a believable model that supports the opposite thesis. Even with the same data! Building models is a super power with real implications.
Both COVID and climate science are highly political issues and it is brave to try to help, but at the same time, these models should be taken very seriously. As seriously, maybe, as the Manhattan project. People can use these models to start wars. People can and probably will die as a result of using these models.
For example, the COVID models only looked at infection rates and not the mental health aspects. Now we have a largely broken COVID generation (my kids being a part of that).
I don't have a good answer for what to do about this, but think it is a serious topic worth talking about. The one thing I can say is to please not be naive and think people are going to use these models only for good. People are going to find ways to twist these models to support their own agendas. How can you protect against that?
Don't want to veer too off the topic, but re
"For example, the COVID models only looked at infection rates and not the mental health aspects. Now we have a largely broken COVID generation (my kids being a part of that)."
What do you mean by "largely broken"? The most I would venture to say, based on my individual experience of college-aged youngsters (including my own children, ages 22 and 19), is that their experience of learning over Zoom resulted in some visible setbacks in their education, but wow, "broken" sounds like a very strong word to me. Do you have studies in mind that I can read about?
Hi Todd. Fair to call me out on that. That statement was too strong. Sorry about that. I don't have data or studies on that other than my own anecdotal observations.
You'll find countless scientific papers if you type "covid-19 lockdown mental health" on Google Scholar.
Eric wrote:
I don't have a good answer for what to do about this, but think it is a serious topic worth talking about. The one thing I can say is to please not be naive and think people are going to use these models only for good. People are going to find ways to twist these models to support their own agendas. How can you protect against that?
I can't stop people from creating bad models or using models badly. Mathematicians like me aren't the ones making the models or making public policy decisions based on the models. But one thing mathematicians can do - which our team is already doing! - is making models that are easier to understand.
Nate Osgood does a lot of public health modeling. He complains that often the models are just piles of code which nobody - except, one hopes, the model's creator - understands very well. There is no clear specification of the model outside the code itself, so if you ask what the model is, and you want to know exactly, you're ultimately forced to read a lot of code. Sometimes the model's creator is a postdoc, and when they get a new job, nobody in the original research group really understands the model. Then they try to change the model, to improve it in some way. You can see the potential for problems.
So, Nate has been trying to fix this problem, and that's what our team's work using category theory aims to do. We're trying to make the models more transparent.
I think this is a good thing. It's easier to abuse a model when almost nobody understands it.
By the way, when I say "our team" it may sound like we have a bunch of people working under us, but that's really not how it works at all. I'm talking about a lot of people including @Evan Patterson, @Sophie Libkind, @Xiaoyan Li, @Kris Brown, Sean Wu, William Waites and others, all doing different interesting things, whose work just happens to come together now and then.
I'm making a tiny bit of progress trying to understand agent-based models using category theory. Here's what I recently came up with:
It's very basic stuff, just trying to understand a single agent... but I really feel the need to think about the basics.
.....
More on the math of music, with insanely detailed pictures:
Many people tried to build a 12-tone scale where the frequency ratios are simple fractions built from the primes 2, 3 and 5. Kepler, Mersenne, Newton and Euler all tried their hand at it! But there are some choices to be made.
This picture shows the most popular choice, which just happens to be Newton's. This choice has a beautiful up-down symmetry. To go up from the bottom you multiply by various numbers. Dividing by these same numbers takes you down from the top of the scale!
Some choices not taken are shown in grey. Other people used these other choices.
Nowadays most music is in 'equal temperament', with just one size of space between neighboring notes:
• the equal-tempered semitone, 2^(1/12) ≈ 1.05946…
But this scale has three:
• the diatonic semitone, 16/15 = 1.0666…
• the greater chromatic semitone, 135/128 = 1.0546875
• the lesser chromatic semitone, 25/24 = 1.041666…
There's also a tiny gap:
• the diaschisma, 2048/2025 ≈ 1.011358…
Hey, wait! Actually this scale has 13 tones! I tried to trick you into thinking it had 12. But to get 12, you need to leave out one of the two tones separated by the diaschisma. Both these tones are very close to the tritone. Pick one! Either way, you destroy the up-down symmetry.
Newton and most other people decided to leave out the higher of those two tones separated by the diaschisma. That gives the scale here, which is not quite symmetrical:
This scale has one more diatonic semitone, since
diatonic semitone = greater chromatic semitone × diaschisma
or in terms of numbers:
16/15 = 135/128 × 2048/2025
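If you want to check this identity (and compare the sizes of all four intervals) without risking arithmetic slips, exact rational arithmetic makes it a one-liner. A small sketch using Python's standard `fractions` module:

```python
from fractions import Fraction

# The three semitones of this just scale, plus the diaschisma:
diatonic          = Fraction(16, 15)
greater_chromatic = Fraction(135, 128)
lesser_chromatic  = Fraction(25, 24)
diaschisma        = Fraction(2048, 2025)

# The identity: diatonic semitone = greater chromatic semitone × diaschisma
assert diatonic == greater_chromatic * diaschisma

# Approximate sizes, for comparison with the equal-tempered 2^(1/12) ≈ 1.059463:
for name, r in [("diatonic", diatonic),
                ("greater chromatic", greater_chromatic),
                ("lesser chromatic", lesser_chromatic),
                ("diaschisma", diaschisma)]:
    print(f"{name}: {r} ≈ {float(r):.6f}")
```

The assertion passes because 135 × 2048 = 276480 and 128 × 2025 = 259200, and 276480/259200 reduces to 16/15.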
I find this math delightful! We flattened out all these nuances, and a lot more, when we switched to equal temperament.
You may not care about the math - but the effect of equal temperament is to make all the keys (like A major, B major, C major, etc.) sound basically the same. Before that, each had its own personality. And their personalities depend on our subtle decisions about the math.
I actually like listening to music in equal temperament: I'm used to it, so it sounds right to me. But I admire how musicians dealt carefully with the math of tuning systems for millennia - before throwing in the towel around the start of the 20th century.
In the 21st century, even most pop stars use electronic instruments and autotuning. Music has become more mathematical than ever! So it has become vastly easier for us to explore different tuning systems.
Do you know if there was a particular reason or set of reasons why Newton (and others) decided to specifically avoid a 13-tone scale? I can imagine a few plausible reasons, but it would be interesting to know, e.g., Newton's own reasons.
Some of the plausible reasons I can think of: 12 tones was already entrenched by Newton's time; 13 is viewed as an "unlucky" number in Western culture; a tritone is bad, so two of them is worse; the 12-tone asymmetry was considered more pleasant in practice than the 13-tone symmetry for some other reason; the diaschisma only shows up once in the 13-tone scale, so getting rid of it reduces the number of kinds of semitones. (I kind of hope it's something completely different that I haven't even considered!)
Actually, a lot of music theorists of that era seem not to have whittled the scale down to 12 tones: they merely mention both alternatives, called the augmented 4th and the diminished 5th.
In vocal music or string music you don't really need to pick just one.
But when you're creating a keyboard instrument, it verges on the absurd to have two notes per octave separated by a tiny amount, while the rest are all roughly equally separated.
Most people prefer a scale with notes that are about equally spaced.
There are a few old keyboards with more than 12 notes per octave, but not just 13 - because nobody wanted to solve this sort of problem in just one key. Check out the cimbalo cromatico and see if you can stand listening to it. It feels like the guy playing is trying to illustrate the weirdness it's capable of, not trying to make it sound good!
Such instruments never became popular. And the demands of keyboard music - the desire to be able to play music in more than one key! - also pushed people away from just intonation, where one key sounds great - at least in the major scale - and most of the others sound bad.
This led to "mean-tone temperaments" starting around 1550.... and later "well-temperaments", and finally "equal temperament".
I'm fascinated by the math underlying these historical developments.
Some of the weird in that cimbalo cromatico definitely made me wince a bit, but some of the other weird actually did sound quite good. Overall, I could easily see myself listening to an album's worth of stuff like that in one sitting. I also think that kind of sound could be used to great effect for setting a mood in a film score: the score could focus on the "good weird" while setting the mood that something is not quite normal, introducing the "wincing weird" when the character finally figures out that something is not quite normal.
You're making me want to hear a film score using the cimbalo cromatico. I would also like to hear music that used the notes not close to the equal-tempered chromatic scale very sparingly - for spice, not the main meal.
The business of developing agent-based models for epidemiology and climate change is really absorbing me now, and it's different than anything I've done. I used to be focused on developing cool math ideas, either by myself or with grad students or by talking to people on blogs. I still want to do that, but now:
1) It's a lot more important to get institutions to collaborate, which means applying for grants and talking to people in charge of these institutions. I'd like to blog about the process of this, but I'm afraid I shouldn't talk about any given step until it's succeeded. These people probably wouldn't like seeing blog articles about the process, so blogging might jinx things.
2) A lot of the math and software this project needs has already been created by applied category theorists - including many people here. So a big part of my job is to learn this math, talk to all of you about it, synthesize it, and try to get into the position where I can support some of you to work on developing agent-based models for epidemiology and climate change. (That gets back to part 1: the grants.)
I updated this paper based on referee's reports:
I also caught and fixed another mistake: it's not true that similar triangles give isomorphic elliptic curves! I think the paper is much improved, and it seems headed for publication in the Notices of the American Mathematical Society.
.....
More on music theory:
Here I actually realized something that nobody told me.
I'd earlier noticed that there are up to 4 different half-tones in quite reasonable just intoned scales, like this one:
I'd also noticed that there are 4 good candidates for the tritone in these scales, separated by tiny intervals called the 'syntonic comma', the 'lesser diesis', the 'diaschisma' and the 'greater diesis':
So I guessed that the 4 different half-tones are also separated by these same tiny intervals... and it's true!
I'm color-coding all these guys in my articles, to make them slightly easier to remember.
In case it's still utterly confusing: it's ultimately all about problems like approximating numbers such as √2 (the tritone) with numbers of the form 2^a 3^b 5^c.
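For concreteness, here's a little brute-force search (my own illustration, not from the articles) for numbers of the form 2^a 3^b 5^c that come close to the tritone √2. Measuring the error logarithmically, in cents, matches how musicians compare intervals:

```python
import itertools
import math

# Search small exponents a, b, c for numbers 2^a * 3^b * 5^c near sqrt(2),
# the equal-tempered tritone. Distance is measured in octaves (log base 2).
best = []
for a, b, c in itertools.product(range(-8, 9), repeat=3):
    x = 2.0**a * 3.0**b * 5.0**c
    err = abs(math.log2(x) - 0.5)   # 0.5 octaves = the tritone
    best.append((err, a, b, c, x))

best.sort()
for err, a, b, c, x in best[:4]:
    print(f"2^{a} 3^{b} 5^{c} = {x:.6f}, off by {err * 1200:.2f} cents")
```

With small exponents you should see familiar candidates like 45/32 = 2^(-5) 3^2 5 and 64/45 show up among the good approximations.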
Here's the video of a talk I gave on Tuesday, about surprising appearances of the number 42 in geometry and the theory of Riemann surfaces:
The Answer to the Ultimate Question of Life, the Universe and Everything
In The Hitchhiker's Guide to the Galaxy, by Douglas Adams, the number 42 was revealed to be the "Answer to the Ultimate Question of Life, the Universe, and Everything". But he didn't say what the question was! I will reveal that here. In fact it is a simple geometry question, which then turns out to be related to the mathematics underlying string theory.
For more see:
http://math.ucr.edu/home/baez/42/
and for the slides of this talk see:
https://math.ucr.edu/home/baez/42/
So the transitive automorphism group of the triangulated Klein quartic is a group with index-2 subgroup PSL(2,7)? I guess that's PGL(2,7)?
Yes! PSL(2,7) is also the group of all Riemann surface automorphisms of the Klein quartic - that is, holomorphic diffeomorphisms of this 1-dimensional complex manifold. PGL(2,7) is the group of all holomorphic or antiholomorphic automorphisms. PSL(2,7) has 168 = 24 × 7 elements and PGL(2,7) is twice as big, with 336 elements.
I've been making progress understanding agent-based models:
I think it's time to start applying this to a kind of agent-based model that Nate Osgood often uses - he showed me an example for pertussis (whooping cough) among children.
In models of this sort, each agent is described by several directed graphs (or quivers?) where vertices are 'states' and each edge is labeled with a cumulative hazard function: a nondecreasing function Λ: [0,∞) → [0,∞) obeying Λ(0) = 0 and typically also Λ(t) → ∞ as t → ∞. This function describes the probability that, if your agent's state is sitting at the source of that edge, it will have moved along that edge to a new state by time t... if nothing else happens first.
Since there are typically several edges going out of a state, there are different options for what edge the agent's state will move along, so the probability that it moves along one edge if nothing else happens first is a conditional probability. We have to do some calculation to figure out the actual probability - or in practice, do a Monte Carlo simulation of what happens. (All this stuff is standard to modelers and it's just up to me to understand it.)
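The "which edge wins" calculation has a standard Monte Carlo trick: sample a firing time for each outgoing edge by inverting its cumulative hazard at an Exponential(1) draw, then take the minimum. Here's a minimal sketch of that idea - all the names (`sample_firing_time`, `next_transition`) and the example rates are my own invention, not from any actual modeling software:

```python
import random

def sample_firing_time(Lambda_inverse):
    """Sample T with P(T <= t) = 1 - exp(-Lambda(t)), via the inverse
    cumulative hazard: T = Lambda^{-1}(E) where E ~ Exponential(1)."""
    E = random.expovariate(1.0)
    return Lambda_inverse(E)

def next_transition(edges):
    """edges: list of (target_state, Lambda_inverse) for the edges out of
    the current state. The edge with the smallest sampled firing time wins."""
    times = [(sample_firing_time(inv), tgt) for tgt, inv in edges]
    return min(times)   # (time, target_state)

# Example: two constant-rate edges out of one state. A constant hazard
# rate r has Lambda(t) = r*t, so Lambda^{-1}(u) = u/r.
random.seed(0)
edges = [("infected", lambda u: u / 0.3), ("recovered", lambda u: u / 0.1)]
t, tgt = next_transition(edges)
print(f"first transition: to {tgt!r} at time {t:.3f}")
```

With these rates, "infected" should win the competition about 0.3/(0.3 + 0.1) = 75% of the time, which is exactly the conditional probability mentioned above.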
If all we had was several labeled graphs of this sort, there would be no interaction between them, and we'd really be using them as a convenient way to describe a bigger, more complicated graph of this sort, namely a kind of product of these graphs. (I need to work out the category of these graphs and see if my intuition is right here: does this category have products, and if so what are they like?)
However, in reality there is an extra feature I haven't mentioned yet!
Namely, when the agent's state in one graph (visualize it as a colored dot moving from vertex to vertex as time passes) reaches a particular vertex, it may 'send a message' to another graph.
This message may have various effects on the state moving around in that other graph, and I'm not sure I know all the allowed effects.
But for example, it could be that one graph has vertices "healthy", "infected" and an edge from the former to the latter. Another graph might have vertices "living at home", "hospitalized" and an edge from the former to the latter. And it could be that when the state on the first graph is in "infected", a message is sent to the second graph which says that the probability of the state there hopping from "living at home" to "hospitalized" increases a certain amount.
We can similarly have messages transmitted from the graphs describing one agent to those describing another agent. This is how they describe transmission of a disease.
Anyway, I have to figure out what the usual rules are for this class of models, and formalize them. There are other kinds of model too, but this seems like a fairly simple but broadly useful class.
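Just to make the "message" mechanism concrete, here's a toy sketch of one possible encoding - entirely my own guess at the semantics, not Nate Osgood's actual formalism: each agent carries several state machines, and entering certain states broadcasts a message that rescales hazard rates on edges of other machines.

```python
# Toy sketch (my own invention): entering a state in one graph can send
# a message that rescales a hazard rate on an edge of another graph.

class StateMachine:
    def __init__(self, state, rates):
        self.state = state         # current vertex
        self.rates = dict(rates)   # (source, target) -> hazard rate

    def move(self, target, listeners=()):
        self.state = target
        for listener in listeners:  # broadcast a message on arrival
            listener(target)

# Health-status machine and housing machine for one agent:
health = StateMachine("healthy", {("healthy", "infected"): 0.01})
housing = StateMachine("home", {("home", "hospitalized"): 0.001})

def on_health_change(new_state):
    # Message: becoming infected makes hospitalization 50x more likely.
    if new_state == "infected":
        housing.rates[("home", "hospitalized")] *= 50

health.move("infected", listeners=[on_health_change])
print(housing.rates[("home", "hospitalized")])   # prints 0.05
```

Messages between the graphs describing two different agents would work the same way, which is how disease transmission enters the picture.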
I already like the idea of cumulative hazard functions as a way of describing processes that take a while to occur, and occur in some stochastic but often non-Markovian way.
For example, I realized there's a concept, which I'd never thought of before, of a "stochastically delayed map" from one set X to another set Y. This is an ordinary map f: X → Y together with, for each x ∈ X, a cumulative hazard function as defined above. This function tells us, in a stochastic way, how long we have to wait when applying the function to an element x before we get the answer f(x).
Normally we think of the answer as popping out "instantly" when we apply f to x... or, to be honest, we don't bring in time at all.
But a stochastically delayed map attaches a cumulative hazard function to each point x ∈ X, and this determines the probability that we know f(x) by time t.
I'm pretty sure we can compose stochastically delayed maps and get a category with them as morphisms.
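One natural guess for the composition: the delay of g∘f at x is the delay of f at x plus the delay of g at f(x). Here's a sketch of that, representing each delay distribution by a sampler; the class and its interface are my own illustration, not a worked-out definition:

```python
import random

class DelayedMap:
    """Sketch of a stochastically delayed map: an ordinary function
    together with, for each input, a sampler for the random waiting time."""

    def __init__(self, f, delay_sampler):
        self.f = f
        self.delay = delay_sampler   # x -> random waiting time

    def __call__(self, x):
        """Return (f(x), a sampled delay for this application)."""
        return self.f(x), self.delay(x)

    def compose(self, other):
        """self ∘ other: delays add along the composite path."""
        def delay(x):
            return other.delay(x) + self.delay(other.f(x))
        return DelayedMap(lambda x: self.f(other.f(x)), delay)

random.seed(1)
f = DelayedMap(lambda x: x + 1, lambda x: random.expovariate(2.0))
g = DelayedMap(lambda x: 2 * x, lambda x: random.expovariate(1.0))
y, t = g.compose(f)(3)
print(y, t)   # y = 2*(3+1) = 8; t is the sum of the two sampled delays
```

Since addition of waiting times is associative and a zero delay acts as an identity, this does look like it should give a category with sets as objects and stochastically delayed maps as morphisms.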
There should similarly exist "stochastically delayed relations", but composing these is more complicated, since if x is related to both y and y′, we should think of y and y′ as "competing" to be the answer to "what is x related to?" Whoever gets there first wins!
Stochastically delayed relations are still rather fuzzy in my mind, but I think I need some concept like this to understand what's going on in the models Nate is using.
https://en.wikipedia.org/wiki/Variable-order_Markov_model
I made a list of answers to this question, with some reading material for each one:
If things go according to plan, the Fields Institute and some other math institutes will eventually have workshops on these topics. But I bet I left out a lot of important topics!
@Nathaniel Virgo reminded me of a hugely important realm of math for climate change that I left out - game theory!
I added it to my list.
.....
@Matteo Capucci (he/him) is wondering how I do multi-part blog posts:
You've been doing this sort of explainer in installments for a long time, and since the task looks daunting to me I was wondering if you have some advice to share about doing such things. For instance, do you plan all the installments in advance? Or do you just go with the flow until you are done? In fact... how do you approach writing even one post? Often I start writing and then there are just too many things to fix and keep track of, and I end up with nothing in my hands. If I try making the posts smaller, then they look unfinished - missing background, unexplained details, and so on.
I think I rarely plan out more than one post at a time.... except when I'm trying to write about X, and I realize that first I need to explain V and W: two prerequisite topics. Then I'll write a post about V and a post about W before writing a post about X.
But my goal, even in these cases, is to explain one thing per post, and focus on writing a good explanation of that one thing, with as few digressions as possible.
:) thanks for sharing, John
It can be hard to follow what someone is saying if they're trying to explain more than one thing at a time. And people are not very patient about reading blogs. So I think it's good to try to explain one thing.
It's always harder than I originally think it's going to be, since I have to repeatedly examine what I write and see what someone would need to know to understand that. This repeated examination and fixing is the main task of writing for me. I do it over and over, rereading every sentence many times and often changing them.
As things get nicer and nicer, eventually the writing starts to "sing" - I don't know how to explain what I mean by that, but basically it starts becoming fun to read.
I create multi-part posts when I keep wanting to write posts in the same general subject. But I don't really plan them out ahead of time - that might be good, but for me somehow that would make writing less fun.
It's very interesting to get that vaguely quantitative sense of how much you revise your writing, since it comes across as informal enough in tone that one can imagine it being tossed off all at once. It's good to remember that part of the answer to writing really well is to simply work harder, not necessarily to receive some gift from the gods!
Thanks! Yes, I rewrite everything many times to try to achieve a casual and, I hope, friendly tone while still being precise. It would be much quicker for me to write in a dry style, just stating facts. But that's poisonous if you're trying to get people to actually read what you write!
I'm still learning new tricks. For example, in the last couple of years I've realized it's good to start a paragraph with a question - the question that the following stuff will answer. With luck this question could be something the reader was already wondering about. Like this:
Why did I draw a triangular lattice instead of a rectangular one? For now you can think of it as a random artistic decision, but the resulting diagram is called a Tonnetz, which means ‘tone network’ in German, and later we’ll see how useful it is.
I think a question can really clarify why you're saying what you're saying - more than just stating facts. And I think it tends to wake up the reader. Of course like all tricks it should not be overused.
And now for something completely different:
https://doi.org/10.4310/HHA.2023.v25.n2.e18
hurrah! Now I have to get on and write the next paper that relies on this.
:tada: Great! Weirdly, I just got an email saying
Your article was published on Wednesday (Nov. 22), I apologize for the delayed notice. The canonical URL for you to link to is
but access is limited to subscribers until January 1, 2024.
Hmm, your link says e18 but the one they gave me says a18.
Anyway, it's either public or soon becoming public....
Thanks for all your help with this.
I think the a18 is a typo, it doesn't resolve for me, whereas the e18 one does. And, weirdly, I could access the pdf this morning, but I can't now! I should have saved a copy!! (not that it's massively important, I have a pre-publication copy already...)
And note that the doi link system doesn't need the dx.
any more, that's an old thing. It still works, but it's 30% more characters to type in the domain ;-P
I'd like to write an article like this if I don't have to finish it too soon:
I am writing to you as editor for the Early Career Series at the AMS Notices to ask if you would write a short piece on writing a good paper? Erica Flapan, Editor in Chief of the Notices, suggested that I ask you.
The Early Career Series is a column with professional advice in the AMS Notices, written by mathematicians, for graduate students, new PhDs, and those who mentor them.
You can see the articles in this month's issue on the AMS Notices website: https://www.ams.org/cgi-bin/notices/amsnotices.pl?thispage=homenav
and those that have appeared so far in the Early Career Collection:
http://www.angelagibney.org/the-ec-by-topic/
The articles for the Early Career are typically 1-4 pages long, where one page is about 750 words.
Thank you for your consideration,
Krystal Taylor
Associate Professor | Mathematics
The Ohio State University
I have an unfinished article called Why mathematics is boring, about how to write in boring ways, but it's probably too negative for this sort of early career advice.
I'm giving a talk at the Edinburgh Mathematical Society on Friday. It should be recorded.
Time: Fri, 8 December, 16:30 – 17:30
Title: Category Theory in Epidemiology
Abstract: "Stock and flow diagrams" are widely used for modeling in epidemiology. Modelers often regard these diagrams as an informal step toward a mathematically rigorous formulation of a model in terms of ordinary differential equations. However, these diagrams have a precise syntax, which can be explicated using category theory. Although commercial tools already exist for drawing these diagrams and solving the differential equations they describe, my collaborators and I have created new software that overcomes some limitations of existing tools. Basing this software on categories has many advantages, but I will explain three: functorial semantics, model composition, and model stratification. This is joint work with Xiaoyan Li, Sophie Libkind, Nathaniel Osgood, Evan Patterson and Eric Redekopp.
Location: Usha Kasera Lecture Theatre, Old College, University of Edinburgh, South Bridge, Edinburgh EH8 9YL, UK and Zoom (map)
Directions to the venue: Enter the Old College quad from South Bridge (map: https://www.ed.ac.uk/maps/maps?building=0001). Head toward the grass, then turn right and enter the building using the entrance labelled “School of Law Reception”. Head for the Usha Kasera Lecture Theatre.
For those who saw my talk at CATNIP on Monday, this is suspiciously similar and can be skipped.
I will take this chance to let the Edinburgh mathematicians know about the Fields Institute project Mathematics for Climate Change, and also a new twist: the International Centre of Mathematical Sciences is joining this project!
Roy Kerr, the guy who discovered the solution of general relativity describing a rotating black hole, recently came out swinging against Penrose's singularity theorem. I wrote three posts about this on Mathstodon. I had to keep rewriting them as the story developed... and so far it's not looking good for Kerr.
Today @Matteo Capucci (he/him) is coming to my house to talk about categorical cybernetics.
I want to talk a lot more, since I think these ideas are coming up a lot in my new work on agent-based models.
I've realized that a lot of people get lost in the math of tuning systems, so now I'm going to explain them with more pictures. The idea is to label an edge between notes with an arrow pointing from the lower note to the higher note, and a number saying their frequency ratio.
I'm trying it out in my new explanation of 'quarter-comma meantone'. This system is designed to give a lot of 'just major thirds' with frequency ratios of 5/4. The price you pay is that most fifths have a frequency ratio of the fourth root of five - an irrational number!
C-based quarter-comma meantone: circle of fifths
Luckily, 5^(1/4) ≈ 1.4953…, which is very close to 1.5, the best possible fifth.
I think the use of irrational numbers is why it took so long for quarter-comma meantone to be discovered. After all, irrational numbers were anathema in the old Pythagorean tradition of harmony!
It seems that quarter-comma meantone was discovered in a burst of more sophisticated mathematical music theory in Renaissance Italy. It was definitely well understood by 1558. Then it spread and more or less replaced the earlier favorite way to get just major thirds: just intonation. It lived a long and happy life until well-tempered scales became popular among radical youths like Bach.
Here's my article about this:
A funny email I got today:
Hi
I came across your website and was impressed by the applied category theory your company offers.
My name is Rachel and I work at Wishpond, a company that specializes in helping conference organizers grow their revenue and build a stronger online presence.
By integrating our lead generation expertise and automation solutions, one of our clients improved their sales conversion by 35% in just two months.
I will be happy to share other examples over a call. Do you have 15 minutes in the next day or so?
Thanks,
Rachel
422 Richards St, Suite 170. Vancouver, BC V6B 2Z4
P.S. Please let me know if you don't want to hear from me again.
I don't have a company.
I think you should stop posting private emails (especially those including names and addresses) in a public stream.
Is there really some assumption of confidentiality for cold emails being blasted broadly across the internet? I don't see how that's a real concern. In particular, business addresses are not usually private and there's only a first name here.
If he wants to quote it, why not just this part?
I came across your website and was impressed by the applied category theory your company offers
By integrating our lead generation expertise and automation solutions, one of our clients improved their sales conversion by 35% in just two months.
Also, in the past he did it with "more serious" emails too.
I also think it's weird to write things like "Today ... is coming to my house" and stuff like that.
422 Richards St, Suite 170. Vancouver, BC V6B 2Z4 is a meeting room hire and hotdesking workspace business
https://thenetworkhub.ca/about/vancouver
Hardly a private address. And the email is literally spam to advertise
The Company’s Propel IQ platform offers an “all-in-one” marketing suite that provides companies with marketing, promotion, lead generation, ad management, referral marketing, sales conversion and outbound sales automation capabilities on one integrated platform.
...none of which would be remotely relevant to John if a human had manually generated this email. The MathOverflow mods email address gets this junk all the time.
I think it's really important to apply category theory to subjects outside mathematics, and to subjects that don't primarily benefit big tech companies. So I've decided to knuckle down and do this.
I'm working with Nate Osgood (who does epidemiological modeling in the computer science department of the University of Saskatchewan) and Patricia Mabry (at HealthPartners Institute, a medical research institute in Minnesota) to apply for an NSF grant on incorporating human behavior in epidemiological models.
Our hope is to use AlgebraicJulia's category-based system to develop new software for agent-based models, and use these to apply the National Institutes of Health's new theories about behavior change to model tobacco addiction.
Both Nate and Patricia have written about tobacco addiction; Nate is an expert on agent-based models and Patricia is an expert on human behavior in health. I'm the math guy in the team, and we'll try to pay some people at Topos (along with Nate's students) to write software and help develop the math further.
We may also hold training events for epidemiological modelers wanting to use our category-based software, and for applied category theorists wanting to get involved in this subject.
Leopold Schlicht said:
I also think it's weird to write things like "Today ... is coming to my house" and stuff like that.
I appreciate (unironically) your concern, someone may be uncomfortable with that. For the record, I wasn't!
Three questions to you. How come the software tool was made in algebraic julia, and not, say python? I associate julia with academics, who like python, wanting to do fast number crunching. And to what extent is this agent-based project your idea for what you'll technically do in the Fields institute effort? Finally, is there an up to date version of your last month's list of relevant topics? I've been planning to stir in such a direction for a while. Actually since the 2020 era Biden announcement that copious sums would flow into fields adjacent to this - but it's dubious to what degree that happened, haha.
Thanks for these questions, @Nikolaj Kuntner!
How come the software tool was made in algebraic julia, and not, say python?
I believe @Evan Patterson and @James Fairbanks made the choice to use Julia because of its blazingly fast ability to numerically solve ODEs and do other computations. I'm just the math guy, not the software guy. So I hope one of them, or someone else here, can explain this decision. By now AlgebraicJulia has the ability to work with presheaf categories, operad algebras, structured cospans and other things I want to use... so that's why I like it.
And to what extent is this agent-based project your idea for what you'll technically do in the Fields institute effort?
The Fields Institute Mathematics for Climate Change project aims to spend millions of dollars getting mathematicians to help plan the human response to the climate crisis humans have created. So it should be much bigger than any particular math project that I, personally, am working on. But I hope that my ICMS project on agent-based models will play some part.
Finally, is there an up to date version of your last month's list of relevant topics?
Yes, last month's list has been updated a bit. But it's far from final, so if anyone has other topics to suggest, they should let me know.
Actually since the 2020 era Biden announcement that copious sums would flow into fields adjacent to this - but it's dubious to what degree that happened, haha.
It will happen, but everything run by governments happens a bit slower than one wants.
Right now I'm busy trying to apply for a million-dollar grant to work on agent-based models for tobacco addiction with Nate Osgood and Patricia Mabry (see above). The technology we create should also be relevant to climate change. However, while a million dollars may sound like a lot, it's not much when it comes to the scale of these problems! For starters, the academic institution that the grant goes through takes at least 50% of the money. But with luck this will be just the beginning of a much larger project.
I hope everyone here thinks about the idea of helping use category theory to deal with the big challenges facing humanity. Anyone curious can watch my talk on Mathematics in the 21st Century.
I imagine the algebraicJulia is a more custom language and so likely it's better to specify things. It would be just easier for open source efforts - if things shall expand further - if it's a more common language. But then again, that shall not be the argument to not use a more tailor made language.
I'll take a deeper look at the list and try to figure out the overlap and differences for myself. Maybe I make a video, for a tiny bit more exposure.
Good luck for the grants.
Yeah, here people like to compare numbers in government projects to Ronaldo wages and contracts. He makes a two digit millions wage, but his contracts with companies are in the three digit millions.
edit: Just googled and it seems Michael Jordan is the highest sports earner (wages, prizes, deals, merch), summing to $3.3 billion. Net worth might be different, as investment gains on that money are not included.
The AlgebraicJulia project uses Julia because it's the only language that has both the scientific computing and the metaprogramming abilities we need. The flexible but powerful type system is also key to the software. We do recognize that not as many people know Julia and we have ports of attributed C-sets, the key data structure in the software system, into Python, Typescript, and a couple of other languages.
In this article I show that three tuning systems that dominated western music for centuries all fit into a 1-parameter family:
1) Pythagorean tuning has a lot of fifths with frequency ratios of exactly 3/2. It may go back to Mesopotamia, but it was discussed by Greek mathematicians - and it was widely used in medieval European music, especially before 1300. Medieval musicians loved fifths.
2) Quarter-comma meantone has a lot of thirds with frequency ratios of exactly 5/4. It was introduced around 1550 and was popular for keyboard instruments until around 1690, since Baroque musicians loved these nice thirds.
3) Equal temperament has the same frequency ratio between each pair of neighboring notes. It was widely adopted in France and Germany by the late 1700s and in England by the early 1800s, since musicians started wanting to easily switch between all possible keys.
It turns out you can get all 3 of these systems, and infinitely many others, just by choosing a point on an algebraic curve!
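One way to see how close these three systems are is to compute the size of the fifth in each, measured in cents (1200 cents to the octave). This little comparison is my own illustration, not from the article:

```python
import math

def cents(ratio):
    """Size of an interval in cents: 1200 cents = one octave."""
    return 1200 * math.log2(ratio)

fifths = {
    "Pythagorean":            3 / 2,         # exact 3/2 fifths
    "quarter-comma meantone": 5 ** 0.25,     # the fourth root of 5
    "equal temperament":      2 ** (7 / 12), # 7 equal semitones
}
for name, r in fifths.items():
    print(f"{name}: fifth = {r:.5f} = {cents(r):.3f} cents")
```

All three fifths land within about 5.4 cents of each other, which is part of why a single 1-parameter family can interpolate between the systems.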
It would be interesting if more elaborate tuning systems, like the many 'well-tempered' systems that flourished after 1690, could be described using some higher-dimensional (but still fairly small) 'moduli space' - some sort of algebraic variety.
By the way the above algebraic curve is
and if anyone can work out anything interesting about this curve I'd like to hear it. I guess it's a 12-sheeted branched cover of the projective line with branch points at 0 and infinity. Does that sound right? I guess that intrinsically speaking it's just another projective line.
Maybe the curve itself is dull but it's still mildly interesting if we work rationally (or integrally???) and think of it as a branched cover of the projective line?
Next:
The most popular tuning for keyboards from about 1550 to 1690 was 'quarter-comma meantone'. But what's a quarter comma and what's so great about it? That's what I explain in this article.
Here's the basic idea: a comma is what you get when one sound vibrates exactly 81/80 times as fast as another. And this number shows up automatically when you look for numbers of the form 2^a 3^b 5^c that are close to 1 - it's one of the first really good examples.
As a result, this number shows up as a tiny glitch when you try to play music with lots of nice octaves, fifths and thirds. To deal with it, people realized they should reduce their fifths by a quarter of a comma. My article explains why.
This means replacing the mathematically ideal perfect fifth, with a frequency ratio of 3/2, by something a bit less: about 1.49535.
You can't really hear the difference. But mathematically, a cool thing happens: this number is also the fourth root of 5.
Yes, that's right: some influencers in the 1550s realized that music would sound better if they tuned their keyboards using the fourth root of 5. And it caught on.
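The "fourth root of 5" punchline is a two-line calculation: four quarter-comma fifths span exactly two octaves plus a pure 5/4 major third. A quick numerical sanity check (mine, not from the article):

```python
import math

comma = 81 / 80                          # the syntonic comma
pure_fifth = 3 / 2
tempered = pure_fifth / comma ** 0.25    # fifth flattened by a quarter comma

# Four such fifths give exactly 5, i.e. two octaves (factor 4) times a
# pure major third (factor 5/4), because
#   ((3/2) / (81/80)**(1/4))**4 = (81/16) * (80/81) = 5.
assert math.isclose(tempered ** 4, 5)
assert math.isclose(tempered, 5 ** 0.25)
print(tempered)   # ≈ 1.49535, barely below the pure fifth 1.5
```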
A bunch of us - @Nathaniel Osgood, @Kris Brown, @Evan Patterson, Patricia Mabry at HealthPartners Institute and myself - are applying for an NSF/NIH grant on Incorporating Human Behavior in Epidemiological Models.
It's been rough, since I haven't had much experience writing grant proposals, much less a grant proposal involving 4 institutions, two of which are non-university research institutes in the US and one of which is a university outside the US. All this means that lots of special rules apply - and my university, being the only US educational institution involved, is necessarily the "primary institution", so I'm doing a lot of the work.
On top of this, Patricia Mabry (our expert in human behavior) fell ill in the final week.
But the proposal seems to be almost done!
Our proposed project is called "New Mathematics for Reproducible Modular Agent-Based Models in Epidemiology".
The basic idea is this: we want to use category theory to create an open-source, web-based collaborative software framework for agent-based models (ABMs). The goal is to build ABMs from small, reusable, transparent modules corresponding to different processes identified by a National Institutes of Health program called the Science of Behavior Change (SoBC). As an example we propose to focus on nicotine addiction, which Mabry is an expert on.
As with the software we've already made for stock & flow models, these modules will be written directly in AlgebraicJulia or created using a collaborative, web-based visual interface that doesn't require the users to know category theory or AlgebraicJulia.
We're starting this project already, and a bunch of us (along with @Xiaoyan Li and William Waites, who both do computer science and epidemiology) will get together for 6 weeks in Edinburgh to work on it more from May 1st to June 12th this year at the International Centre for Mathematical Sciences. But that initial batch of work will not get into the Science of Behavior Change or tobacco addiction.
The diverse nature of agent-based models is really pushing us to new heights of abstraction - as compared with our stock & flow model software, which only used presheaf categories, double categories of decorated and structured cospans, and the operad of wiring diagrams. @Kris Brown is coming up with some great ideas. I will blog about these as time permits!
The 12th root of 2 times the 7th root of 5 is approximately 1.3333332.
And since the numbers 5, 7, and 12 show up in scales, this weird fact has implications for music! It leads to a remarkable meta-meta-glitch in tuning systems. Let's check it out.
Two important glitches that afflict tuning systems are the Pythagorean comma and the syntonic comma. If you go up 12 fifths, multiplying the frequency by 3/2 each time, you go up a bit more than 7 octaves. The ratio is the "Pythagorean comma": (3/2)^12 / 2^7 = 3^12/2^19 = 531441/524288 ≈ 1.013643.
And if you go up four fifths, you go up a bit more than 2 octaves and a major third (which ideally has a frequency ratio of 5/4). The ratio is the "syntonic comma": (3/2)^4 / (2^2 · 5/4) = 81/80 = 1.0125.
In music it would be very convenient if these two glitches were the same - and sometimes musicians pretend they are. But they're not! So their ratio is a tiny meta-glitch. It's called the "schisma": (3^12/2^19) / (81/80) = 32805/32768 ≈ 1.0011292,
and it was discovered by an advisor to the Gothic king Theodoric the Great.
In the most widely used tuning system today, a fifth is not 3/2 but slightly less: it's 2^(7/12) ≈ 1.498307.
The ratio of 3/2 and this slightly smaller fifth is called the "grad": (3/2) / 2^(7/12) = (3^12/2^19)^(1/12) ≈ 1.0011299 - the 12th root of the Pythagorean comma.
Look! The grad is amazingly close to the schisma - they agree to better than one part in a million! Their ratio is a meta-meta-glitch called the "Kirnberger kernel": about 1.00000074.
If you unravel the mathematical coincidence that makes this happen, you'll see it boils down to 2^(1/12) · 5^(1/7) being very close to 4/3. And this coincidence let Bach's student Johann Kirnberger invent an amazing tuning system called "rational equal temperament". Much later the physicist Don Page, famous for discovering the "Page time" in black hole physics, became so obsessed with this coincidence that he wrote a paper trying to wrestle it down to something where he could do the computations in his head.
For details see:
Speaking of mathematics and music, New Scientist has a new article, "Mathematicians have finally proved that Bach was a great composer". I'm hunting down more details.
Interesting! I immediately wondered if they compared Bach to some similar composers like Scarlatti, because without a comparison you can't tell if Bach is "great" or just ordinary. At the very end the article says
Information theory also has yet to reveal whether Bach’s composition style was exceptional compared with other types of music. McIntosh says his past work found some general similarities between musicians as different from Bach as the rock guitarist Eddie Van Halen, but more detailed analyses are needed.
“I would love to perform the same analysis for different composers and non-Western music,” says Kulkarni.
If they haven't found differences between Bach and Van Halen, I get the feeling they are not doing detailed comparisons yet!
Here's a paper by Kulkarni and others: https://arxiv.org/abs/2301.00783
Information content of note transitions in the music of J. S. Bach
Suman Kulkarni, Sophia U. David, Christopher W. Lynn, Dani S. Bassett

Music has a complex structure that expresses emotion and conveys information. Humans process that information through imperfect cognitive instruments that produce a gestalt, smeared version of reality. How can we quantify the information contained in a piece of music? Further, what is the information inferred by a human, and how does that relate to (and differ from) the true structure of a piece? To tackle these questions quantitatively, we present a framework to study the information conveyed in a musical piece by constructing and analyzing networks formed by notes (nodes) and their transitions (edges). Using this framework, we analyze music composed by J. S. Bach through the lens of network science and information theory. Regarded as one of the greatest composers in the Western music tradition, Bach's work is highly mathematically structured and spans a wide range of compositional forms, such as fugues and choral pieces. Conceptualizing each composition as a network of note transitions, we quantify the information contained in each piece and find that different kinds of compositions can be grouped together according to their information content and network structure. Moreover, we find that the music networks communicate large amounts of information while maintaining small deviations of the inferred network from the true network, suggesting that they are structured for efficient communication of information. We probe the network structures that enable this rapid and efficient communication of information--namely, high heterogeneity and strong clustering. Taken together, our findings shed new light on the information and network properties of Bach's compositions. More generally, our framework serves as a stepping stone for exploring musical complexities, creativity and the structure of information in a range of complex systems.
Thanks, I'll check it out! But gosh darn, why didn't they analyze several different composers?
Yes, one should at least try to see if the analysis can distinguish between obviously different composition styles! And also not distinguish (or distinguish much) between similar composers, eg compare a work of the late Beethoven to Brahms' First Symphony. Or even early Beethoven to late Beethoven.
I also read the Bach article and found it very interesting how they were able to apply "networks" (I believe this was just a fancy way of saying directed graphs) and information theory to music. I like when an article makes connections between fields because it makes me wonder how the analysis looks in the category POV. That is, for this case, what kind(s) of functor did they (implicitly) use to go from the world of networks (the category DiGraph) to the world of information theory?
After a couple weeks of hard work, racing against deadlines and many obstacles, our team has completed writing a grant proposal on using category theory to design agent-based models! I've been working with
We're pretty fired up, because the ideas seem novel yet practical.
Here is a summary:
Overview
Modeling is a key to understanding the specific mechanisms that underlie human behavior. Moreover, explicit examination and experimentation to isolate the behavioral mechanisms responsible for the effectiveness of behavioral interventions are foundational in advancing the field of Science of Behavior Change. Unfortunately, this has been held back by the difficulty of precisely comparing behavioral mechanisms that are formulated in disparate contexts using different operational definitions. This is a problem we aim to solve.
We propose to initiate a new era of epidemiological modeling, in which agent-based models (ABMs) can be flexibly created from standardized behavioral mechanism modules that can be easily combined, shared, adapted, and reused in diverse ways for different types of insight. To do this, we must transform the sprawling repertoire of ABM methodologies into a systematic science of modular ABMs. This requires developing new mathematics based on Applied Category Theory (ACT). The proposers have already used ACT to develop modular models that represent human behavior en masse. To capture human behavior at the individual level in a modular way demands significant further conceptual advances, which we propose to make here.
We will develop the mathematics of modular ABMs and implement it by creating modules that capture the behavioral mechanisms put forward by the SOBC: self-regulation, interpersonal & social processes, and stress-reactivity & resilience. We will evaluate this approach in a test case—the vaping crisis—by using these modules to build proof-of-concept ABMs of this crisis and comparing these new modular ABMs to existing models. We will create open-access libraries of modules for specific behavioral mechanisms and larger ABMs built from these modules. We will also run education and training events to disseminate our work.
Intellectual Merit
This interdisciplinary project will have a longstanding and substantial impact across three fields: Applied Category Theory, Science of Behavior Change, and Incorporating Human Behavior in Epidemiological Models. Our unified approach to describing system dynamics in a modular and functorial way will be a major contribution to ACT. The SOBC has ushered in a new era in behavioral intervention design based on the study of behavioral mechanisms. Our work incorporating SOBC-studied behavioral mechanisms as standardized modules in ABMs will transform the field of epidemiological modeling and lead to more rapid progress in SOBC. The key contribution of our work to all three fields is that it provides the ability to easily compare models with different operational definitions of behavioral mechanisms—since rather than a model being a large monolithic structure, opaque to everyone but its creators, it can now be built from standardized behavioral mechanism modules, and the effect of changing a single module can easily be studied.
Broader Impacts
Our work will be transformative to behavioral science, and the magnitude of the impact can hardly be overstated. The ability to communicate and compare behavioral mechanisms will pave the way for large-scale leveraging of evidence from across behavioral epidemiological models for collective use. A valuable impact of our work will be in providing an accessible framework for reproducible, standardized models built from transparent, mathematically well-defined modules. Until now, interacting with epidemiological models has required knowledge of mathematics, programming skills, and access to proprietary software. Our ACT-enabled modular representation of behavioral mechanisms will make it possible for health professionals and community members at all levels to participate in the construction of epidemiological models. We will also build capacity in applying our methodology through events funded by this proposal.
Now I can relax for a bit and think about pure math, where modular representation means something completely different. :upside_down:
Regarding whether or not anyone has written up the (appropriately 2-categorical) symmetric monoidal structure of SMC, I suspect you're looking for Vincent Schmitt's work: https://arxiv.org/abs/0711.0324
Thanks! This thread is about "blog posts and attribution", so I'll copy your comment to my blog post and hope the math conversation continues there.
Warning, though: in Vincent Schmitt's work, the 1-cells of SMC are lax symmetric monoidal functors. I rather expect that John wants strong symmetric monoidal functors.
If we want to discuss John's mathematical questions on Zulip as well as on the n-Cafe, we should move it to a different topic in a different stream.
Okay, a bit of pure math:
Here I conjecture that the forgetful 2-functor from the 2-category Cart of cartesian monoidal categories to the 2-category SMC of symmetric monoidal categories has left and right adjoints given by and , respectively, where:
@Mike Shulman filled in an important part of the story with another conjecture, namely that Cart is equivalent to the category of F-modules in SMC.
Todd Trimble said:
Warning, though: in Vincent Schmitt's work, the 1-cells of SMC are lax symmetric monoidal functors. I rather expect that John wants strong symmetric monoidal functors.
This is true! Although I've checked at least some of it in the strong monoidal case and it still works.
Could a moderator please move all posts in this thread starting with Chad Nester's to a suitable new stream?
Like @Morgan Rogers (he/him) or @Matteo Capucci (he/him), for example?
6 messages were moved here from #theory: category theory > symmetric monoidal to cartesian monoidal by Morgan Rogers (he/him).
Ah, I thought it would be a good idea to merge these conversations but now things are out of order... Hopefully people can piece together how the conversation went?
I was trying to imagine why the tensor product of symmetric monoidal categories might work differently in the 2-category of symmetric monoidal categories with lax versus strong monoidal functors, and I couldn't see why.
.....
Someone very sweet, a total stranger, wrote:
"I wanted to send you a short thank you. Early in my college career, I found your article on how to learn math and physics. As a child I experienced educational neglect and knew very little about math or science, or even how to study it! I was lost before I even started."
"I have lived by your quote, 𝗴𝗲𝘁 𝗶𝗻𝘁𝗼 𝘁𝗵𝗲 𝗵𝗮𝗯𝗶𝘁 𝗼𝗳 𝗺𝗮𝗸𝗶𝗻𝗴 𝗶𝘁 𝗰𝗹𝗲𝗮𝗿 𝘄𝗵𝗲𝘁𝗵𝗲𝗿 𝘆𝗼𝘂 𝗸𝗻𝗼𝘄 𝘀𝗼𝗺𝗲𝘁𝗵𝗶𝗻𝗴 𝗳𝗼𝗿 𝘀𝘂𝗿𝗲 𝗼𝗿 𝗮𝗿𝗲 𝗷𝘂𝘀𝘁 𝗴𝘂𝗲𝘀𝘀𝗶𝗻𝗴, but couldn't remember where I had read it! I recently found it in my journal from my 1st week of college! Most amazingly, this approach works for every single subject!"
It's interesting to see someone who firmly latched onto that principle and profited from it. I know a bunch of math grad students who are really good in other ways but still don't impose that discipline. They trip up all the time. It's great to have intuitions that go beyond what you can prove, but it's bad to mistake those for certainty.
John Baez said:
Mike Shulman filled in an important part of the story with another conjecture, namely that Cart is equivalent to the category of F-modules in SMC.
I replied on the n-Café about this, saying that since actions of SMCs are equivalent to just functors between them this doesn't look plausible, but then I've seen this reference where they actually do seem to prove this fact and I'm... confused? Actually, I'm more afraid that the representation theorem in the actegories paper might be wrong :S
Uhm I guess the mismatch is that my result is about modules in whereas youse are considering the tensor product structure
This also explains why the conjecture is plausible: in , one has and thus comultiplication of the desired comonoid structure on is inherited completely from (same for the counit actually)
Yes, in everything I write, a "module of a symmetric monoidal category in SMC" is supposed to be a pseudomonoid in (SMC, ⊠) together with a symmetric monoidal functor with some extra structure and properties, which I could easily list if you held a gun to my head.
In short I'm studying pseudomonoids and their actions in the (conjectured) symmetric monoidal 2-category (SMC, ⊠).
This is supposed to be very much like studying monoids and their actions in the monoidal category (CommMon, ⊗) - that is, rigs and their actions on commutative monoids.
And that is supposed to be very much like something that mathematicians have written dozens of books about: monoids and their actions in the monoidal category (AbGp, ⊗) - that is, rigs and their actions on abelian groups, called 'modules'.
So one thing I'm doing is drawing on my intuitions about rings and their modules, and making guesses about how that theory categorifies. John Berman did this more professionally in The commutative algebra of categories.
But sometimes while doing this I make the same slip you seem to have made, namely thinking about pseudomonoids in (SMC, ×) rather than (SMC, ⊠).
Note a pseudomonoid in (SMC, ×) is just a symmetric monoidal category, by a kind of categorified Eckmann-Hilton argument!
Hmm, now I'm really confused.
So maybe you are indeed catching some mistake in my thinking, @Matteo Capucci (he/him). My original post didn't mention this "categorified ring theory" perspective and I'm quite confident of the conjectures I made there. Mike Shulman (and John Berman) brought it up. I realize now that I don't understand it as well as I hoped. Basically I don't see why the thing I'm calling F is a pseudomonoid in (SMC, ⊠). Such a pseudomonoid should be a "categorified rig" with both addition and multiplication. But I don't see how F has both - it seems to just have multiplication, being the free cartesian category on one object.
I suppose it can still "act" on symmetric monoidal categories, though, just as a monoid can act on abelian groups (in a linear way).
Isn't F the unit for ⊠?
If F were the unit for ⊠, then F ⊠ C would be equivalent to C for any symmetric monoidal category C, which would contradict the conjectures.
Now I'm confused. Since F is FinSet^op, isn't it a rig category with the cartesian product and coproduct of FinSet^op (which are the cartesian coproduct and product of FinSet)?
The unit for ⊠ is the free symmetric monoidal category on one object, better known as the groupoid of finite sets and bijections, made symmetric monoidal using disjoint unions. This is just like how the unit for ⊗ in AbGp is the free abelian group on one generator, better known as ℤ, made into an abelian group using addition.
In fact it's even more just like how the unit for ⊗ in CommMon is the free commutative monoid on one generator, better known as ℕ, made into a commutative monoid using addition.
And just like ℤ is a ring, and ℕ is a rig, the groupoid of finite sets and bijections is a rig category, with multiplication given by the cartesian product of finite sets. Right?
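The three-level analogy being used here can be summarized in a little table (my summary; FinBij denotes the groupoid of finite sets and bijections):

```latex
% My summary of the analogy running through this thread.
\begin{array}{lll}
\text{ambient monoidal (2-)category} & \text{unit} & \text{monoids and their actions} \\
(\mathrm{AbGp},\ \otimes)   & \mathbb{Z}       & \text{rings acting on abelian groups (`modules')} \\
(\mathrm{CommMon},\ \otimes)& \mathbb{N}       & \text{rigs acting on commutative monoids} \\
(\mathbf{SMC},\ \boxtimes)  & \mathrm{FinBij}  & \text{rig categories acting on symmetric monoidal categories}
\end{array}
```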
Right. And Baez's conjecture, now supposedly a theorem, states that the groupoid of finite sets and bijections is the initial rig category, in a suitably 2-categorical sense of 'initial'.
(deleted)
Mike Shulman said:
Now I'm confused. Since F is FinSet^op, isn't it a rig category with the cartesian product and coproduct of FinSet^op (which are the cartesian coproduct and product of FinSet)?
This is easy to get confused about.
For starters, FinSet is a symmetric rig category with × as its multiplication and + (disjoint union) as its addition. So: FinSet has these two symmetric monoidal structures, and × distributes over + up to coherent isomorphism.
Now let's think about FinSet^op. In a feeble and ultimately doomed attempt to minimize confusion, I'm going to use + and × to mean the exact same functors as before, but transferred over to the opposite category!
So: FinSet^op still has two symmetric monoidal structures, which I'm still calling + and ×, and × still distributes over + up to coherent isomorphism. So: it's a symmetric rig category.
But now × is the coproduct in FinSet^op, and + is the product in FinSet^op!
Now, back to my claim that FinSet^op is the free cartesian category on one object. I'm still sure this is true. But notice: the cartesian product here is what I'm calling + (since it comes from disjoint union of sets, and obeys rules like ). So it's not the multiplication in our 2-rig: it's the addition. In other words: it doesn't distribute over ×; × distributes over it!
I will let people check my work and draw the obvious (???) conclusions.
Good, that all makes sense to me. So that means that FinSet^op is a ⊠-pseudomonoid, since it has this other monoidal structure × that distributes over +. And that's the pseudomonoid structure with respect to which FinSet^op-modules are conjectured to be cartesian monoidal categories.
I'm struggling to see if I believe that conjecture. I'm still stuck in "op hell", for some reason that's difficult to explain.
("Op hell" is like when I go to the UK and think "when crossing the street, I should look the opposite way from the way I usually do" - and then after a few weeks it gets hard to quickly remember which is the way I usually do, because I'm getting used to looking the opposite way.)
(I learned to avoid thinking that.)
Is there some way to categorify either the max or min functions? Addition distributes over those, so they're a candidate for the other operation on F. This has the added benefit of being interesting to categorically minded tropical geometers...
I think Mike nailed it after my analysis here: F = FinSet^op is the rig category we want, with its coproduct as multiplication and its product as addition!
Yes, you heard me. This way multiplication distributes over addition. Then we (= you?, but maybe Berman already did it) just need to check that a module of this rig category F in SMC is a cartesian category.
Basically I just want to see how the presence of a diagonal map for each object of F endows any object of any F-module with a diagonal map.
Ah. For some reason I thought we wanted something that (which is really the product in this category) distributed over. I'll need to think about why we actually want something distributing over it, but I don't have time just this second. I'm finishing up a blog post about something else.
In the meantime, one thing that would make my life much easier (both to understand this / thing, and to actually prove something interesting about Cart as -algebras, if that hasn't been done yet) would be a concrete description of ... Do you happen to have one? I'm sure if I read *Pseudo-commutative monads and pseudo-closed 2-categories* or that Schäppi paper more closely I could figure it out myself, but it would be much easier to start from as concrete a description as possible, and I'm SURE there is one.
Chris Grossack (they/them) said:
Ah. For some reason I thought we wanted something that (which is really the product in this category) distributed over.
Yeah, I thought that too for a while, but I think it's wrong. FinSet^op is the free cartesian category on one object, but much to my surprise that cartesian structure is playing the role of the addition. This is not so crazy, because we are treating symmetric monoidal categories as analogous to abelian groups - morally, additive abelian groups - and we're asking how making a symmetric monoidal category into an F-module of some sort can force its monoidal structure to be cartesian. Apparently this arises from the fact that the addition in F is cartesian.
I think everything would also work, and be less freaky, if we talked about cocartesian categories and used FinSet instead of FinSet^op. Claim: FinSet is the free cocartesian category on one object, it's a rig category, and a module of this rig category in SMC is a cocartesian category.
Chris Grossack (they/them) said:
In the meantime, one thing that would make my life much easier (both to understand this / thing, and to actually prove something interesting about Cart as -algebras, if that hasn't been done yet) would be a concrete description of ... Do you happen to have one?
I've never seen anyone write it down. I think you can crank out a concrete description starting from the universal property, just like for the tensor product of abelian groups. But it's more lengthy than the case of abelian groups, since it's a '2-universal' property. Something like this:
Let and be symmetric monoidal categories, and write their tensor product as + in each case, just to make the analogy to abelian groups clear. The objects of will be formal sums of objects called for . Instead of imposing equations
we put in isomorphisms. And then the fun starts: what equations do these isomorphisms obey?
These will be forced by the universal property. Let be the category of functors from to that are bi-symmetric monoidal: that is, symmetric monoidal in each variable. Let be the category of symmetric monoidal functors from to . Then we need
where the equivalence is induced by precomposing with a bi-symmetric monoidal functor sending to .
Maybe the equations I was looking for will come from the fact that we need to be bi-symmetric monoidal: we write down what that means and see what we get.
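The sketch above can be assembled into formulas. This is my reconstruction of the elided math, by strict analogy with the tensor product of abelian groups; "BiSymMon" and "SymMon" are my names for the functor categories involved:

```latex
% Objects of A \boxtimes B: formal sums of generators a \boxtimes b, with the
% abelian-group-style equations replaced by coherent isomorphisms:
(a + a') \boxtimes b \;\cong\; a \boxtimes b \,+\, a' \boxtimes b,
\qquad
a \boxtimes (b + b') \;\cong\; a \boxtimes b \,+\, a \boxtimes b'.
% The 2-universal property: for any symmetric monoidal category C,
\mathrm{BiSymMon}(A \times B,\, C) \;\simeq\; \mathrm{SymMon}(A \boxtimes B,\, C),
% induced by precomposing with the bi-symmetric monoidal functor
% A \times B \to A \boxtimes B sending (a, b) to a \boxtimes b.
```

The equations the isomorphisms must obey should then fall out of requiring that this functor A × B → A ⊠ B be bi-symmetric monoidal.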
However, I don't see how working out this stuff will help me prove anything, unless I'm a glutton for punishment and want to prove things by doing long calculations!
I do, however, think that having some description of like this, hovering vaguely in my head, will make it easier to convince myself that is always a cocartesian monoidal category. (I've decided it's less stressful to work with , the free cocartesian object on the category , rather than its opposite!)
For example, we could try to do some half-assed impromptu calculations to argue that having a codiagonal will give any object of a codiagonal. Let's try it.
Every object of is a formal sum of things like where is a natural number (I'm using a skeleton of ) and these in turn will be isomorphic to sums of things like .
So the functor sending to should be essentially surjective.
So up to equivalence we can assume has the same objects as , and all the same morphisms too, but with a bunch of extra morphisms thrown in and also a bunch of extra equations between morphisms.
Now here's the fun part: the codiagonal gives a morphism in , and thus a morphism
So there's our desired codiagonal!
That was very sketchy, of course, but I think that's the basic idea.
If you/we want to get serious about this, it's probably worth reading Berman's paper discussed on the n-Cafe, to make sure we're not reinventing too many wheels. (Of course it's also good to practice just diving in and trying stuff, as above.)
Berman does pretty much everything we've been talking about, including the extension and coextension functors, and he does it for -categories!
So if I continued down this road I'd learn this stuff and apply it to more examples, or something.
But it might be easier to work on something else... there are lots of similar ideas bubbling up in our conversations with Jim, including lots of stuff about doctrines that we've barely gotten into, since right now he's excited about the Modularity Theorem.
John Baez said:
Something like this:
Let and be symmetric monoidal categories, and write their tensor product as + in each case, just to make the analogy to abelian groups clear. The objects of will be formal sums of objects called for . Instead of imposing equations
we put in isomorphisms. And then the fun starts: what equations do these isomorphisms obey?
Aaaah, this is very much like the Deligne tensor product, so I probably could have thought harder and come up with something like this. It did actually cross my mind, but in the Deligne case you write every object as a "sum of pure tensors" (where "sum" = colimit). Since we don't have colimits in this context, I threw this idea out... Somehow it didn't cross my mind to write every object as a "sum of pure tensors" (where "sum" = monoidal product).
Also, instead of putting in isomorphisms (which is certainly the right thing to do eventually) we could make life easier and work with strict 1-categories, at least to start. Then all these things you wrote will be honest equations, and it should be easier to check a version of the result we're after before moving onto the full bicategorical version
Yes, the tensor product of symmetric monoidal categories has a family resemblance to the Deligne tensor product of abelian categories, which I consider a poor man's version of Kelly's tensor product of categories with finite colimits. Deligne's tensor product is not defined for all abelian categories! Kelly's tensor product is always defined, and when Deligne's tensor product is defined it agrees with Kelly's:
Basically, the problem is that abelian categories are defined using a mix of limits and colimits; it's cleaner to just use colimits, which (as you note) are like categorified linear combinations.
It might still be worth getting a concrete description of the tensor product of symmetric monoidal categories and especially the special case that launched us into this: the free cartesian (or cocartesian) category on a symmetric monoidal category. The latter should be quite simple!
If you look at Berman's paper you'll see he describes the tensor product of symmetric monoidal categories rather abstractly. So do Hyland and Power, who describe quite generally a tensor product of strict algebras of a pseudo-commutative 2-monad: symmetric monoidal categories are a mere special case for them.
Chris Grossack (they/them) said:
Also, instead of putting in isomorphisms (which is certainly the right thing to do eventually) we could make life easier and work with strict 1-categories, at least to start. Then all these things you wrote will be honest equations, and it should be easier to check a version of the result we're after before moving onto the full bicategorical version.
The completely strict version seems quite easy but I'm not sure it's very useful. I seem to remember there's some obstruction to strictifying a rig category to the point of getting both left and right distributivity isomorphisms to be identities - that is, it only rarely happens. I can't remember for sure.
If we want a bite-sized problem, I think a better one is to describe the free cartesian, or cocartesian, symmetric monoidal category on a symmetric monoidal category in a very concrete way. I sketched how it works in my blog post, and I think that description can be made precise without huge suffering.
For a while I've had an idea of how to construct the tensor product of symmetric monoidal categories in a way which should make such things provable, although I haven't actually written anything concrete. I also mentioned this to @Aaron David Fairbanks recently, mentioning him in case it's useful.
I'll try to outline the approach.
The tensor product is concretely and “easily” describable in the case of coloured props, that is, strict symmetric monoidal categories whose set of objects is a free monoid. (If the generating set is seen as a “set of colours”, objects are “lists of colours”). This is because everything is strict, so it's all equations etc.
See this paper by Hackney and Robertson, or my own paper here.
If is the category of coloured props and morphisms sending “colours to colours” (or “either colours or the unit”, which is the choice that I made in my paper), then the tensor product is part of a closed monoidal structure on . (I will from now on say simply “prop” for “coloured prop”).
Now, in there you can quite easily prove that the “free cartesian prop” is isomorphic to the tensor product with the suitable version of , i.e. the prop of cocommutative comonoids, like this.
Let denote this prop. Then
These facts imply that the category of cartesian props is a reflective subcategory of the category of props, with being the reflector, hence the “free cartesian prop” functor.
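For orientation, the classical ingredient behind this kind of reflection is Fox's theorem; my statement of it below is the standard folklore version, not something specific to the prop setting:

```latex
% Fox's theorem (standard version): a symmetric monoidal category
% $(\mathsf{C}, \otimes, I)$ is cartesian monoidal if and only if
% every object $X$ carries maps
\[
  \Delta_X \colon X \to X \otimes X,
  \qquad
  \varepsilon_X \colon X \to I
\]
% making $X$ a cocommutative comonoid, naturally in $X$
% (so that every morphism of $\mathsf{C}$ is a comonoid homomorphism).
```

Tensoring with the prop of cocommutative comonoids freely equips every object with exactly this kind of structure, which is morally why the reflector lands in cartesian props.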
Now, a semi-folklore result -- which a bunch of people including me, Aaron, and @Cole Comfort have thought about writing an exposition of, though that has so far not happened -- states that
*Symmetric monoidal categories are equivalent to representable coloured props*
These are coloured props such that
More precisely:
One way of packaging this would be to define a monad on , which “freely adds” all these isomorphisms.
Then
We have the usual “free - forgetful” adjunction between and .
Now, I think the following ought to be true:
The monad is a commutative monad wrt .
By general results, this would allow us to lift the tensor product of to a tensor product on , and describe it explicitly as a coequaliser diagram.
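For the record, the coequaliser formula I have in mind is the standard one for commutative monads (as in the work of Kock and, later, Seal; the notation here is mine): if $T$ is a commutative monad on a monoidal category and $(A,a)$, $(B,b)$ are $T$-algebras, their tensor is

```latex
\[
  A \otimes_T B
  \;=\;
  \mathrm{coeq}\Big(
    T(TA \otimes TB) \;\rightrightarrows\; T(A \otimes B)
  \Big),
\]
% where one of the parallel maps is $T(a \otimes b)$ and the other is
% $\mu_{A \otimes B} \circ T(\mathrm{dst})$, with
% $\mathrm{dst} \colon TA \otimes TB \to T(A \otimes B)$
% the double strength of the commutative monad $T$.
```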
Essentially, this coequaliser would say that, if are SMCs, then is obtained by constructing , i.e. freely adding a unit and adding compositors for all colours of -- which are pairs of colours of and -- and then imposing that
Now, in principle this only gives us a tensor product of strict monoidal functors. However, I think it is possible to extend this to a tensor product of strong monoidal functors as follows.
Claim. If are all symmetric monoidal categories, then there is a natural embedding of into .
To go in one direction: given ,
In the other direction, we precompose with the “quotient” morphism , and then precompose this with the unit of the free-forgetful adjunction .
It follows from monad equations that the latter operation is a left inverse of the one above.
Oh hey I wish I had entered this conversation sooner! I spent some time figuring out this stuff last year in the context of wanting to work with modules of symmetric rig categories (acting on symmetric monoidal categories), which we can tensor together "over" the rig category in the same way that we can tensor together modules over a commutative ring. These notes haven't seen the light of day yet because the context was building models of -calculus or linear logic generalizing posetal models or weighted relational models (where we see ordered rigs showing up), and that work is still in progress. I don't have much to add to the discussion at this point except that coming up with presentations of these categories which make proofs straightforward is tricky :sweat_smile:
Then, given two strong monoidal functors and , we should be able to obtain their tensor product in by
which produces a strong monoidal functor .
If all of this checks out, this approach avoids having to ever deal with coherence isomorphisms.
Then it should be possible to turn the argument about being left adjoint to the inclusion of cartesian props into all props, into an argument about being (“weakly”?) left adjoint to the inclusion of cartesian symmetric monoidal categories into symmetric monoidal categories, although there are a few extra things to check in order to make it work...
I believe @Mario Román and @Diana Kessler have also been thinking about related topics.
Amar Hadzihasanovic said:
Now, a semi-folklore result -- which a bunch of people including me, Aaron, and Cole Comfort have thought about writing an exposition of, though that has so far not happened -- states that
*Symmetric monoidal categories are equivalent to representable coloured props*
This result is great! This should make working with tensor products of symmetric monoidal categories much easier, especially if they are described as finitely generated colored props. I worked out the theory of presenting props by generators and relations in Appendix A here:
This was only in the 1-colored case, but most of it should generalize to the arbitrary colored case. Then, given presentations of two props, we should be able to easily write down a presentation of their tensor product.
Combined with your work, this should make it easy to see what happens when we tensor any prop with the prop for commutative monoids (namely the free cocartesian category on one object), the prop for cocommutative comonoids (namely the free cartesian category on one object), etc. - there are many interesting variants here.
It may also give a more workable theory of rig categories, and thus a better proof of what people call Baez's conjecture: the initial symmetric rig category is the groupoid of finite sets.
The current definition of rig category involves a big pile of coherence laws worked out by Kelly and Laplaza. But I conjecture:
Conjecture on Rig Categories. A rig category is a pseudomonoid in , and a braided (resp. symmetric) rig category is a braided (resp. symmetric pseudomonoid) in .
[EDIT: here I'm assuming the additive monoidal structure in the rig categories under question is symmetric monoidal.]
This slicker approach, combined with your more workable approach to the tensor product of symmetric monoidal categories, might make it easier to prove things about rig categories. For example, it would be nice to have a slicker proof of what people call Baez's conjecture: the initial symmetric rig category is the groupoid of finite sets. Right now @Niles Johnson and Donald Yau seem to be the experts on this subject - see their 3-volume book.
John Baez said:
Conjecture. A rig category is a pseudomonoid in , and a braided (resp. symmetric) rig category is a braided (resp. symmetric pseudomonoid) in .
I have had the same thought before. I hope this is true.
Here of course I'm taking the addition to be symmetric monoidal, which seems fair.
If that isn't true, then something must be wrong with the world. Or with the definition of rig category. (-:
Great!
By the way, I want to emphasize that with all the conjectures I'm stating, I am not trying to "stake my claim" in the sense of saying that I want to be the first to prove these things. On the contrary I am hoping someone else proves them.
I'm sorry to insert myself in a conversation I am barely following, but it does seem like this recent paper on tensor products for permutative categories might be of interest:
https://arxiv.org/abs/2211.04464
Yes, I only know about it because I was one of the people involved! And yes, we're restricting to permutative categories, not general symmetric monoidal. But, as a trade-off, we have been extremely detailed and explicit about what exactly works, and what doesn't. It's a bit wild, and I hope it will be useful :)
Yes, I only know about it because I was one of the people involved!
Hey, that sounds like most math in the world!
And yes, we're restricting to permutative categories, not general symmetric monoidal.
I forgot what a 'permutative' category is, and from your paper it looks like the same thing as what I'd call a strict symmetric monoidal category (or maybe a symmetric strict monoidal category). Does that sound right?
This restriction is fine with me as long as you get a biequivalent bicategory of them, and you do:
From the perspective of symmetric monoidal bicategory theory, restricting to permutative categories rather than studying all symmetric monoidal categories makes little difference: the inclusion of the 2-category of permutative categories into the 2-category of symmetric monoidal categories is a biequivalence, so long as the 1-cells in both consist of all symmetric monoidal functors.
(Sotto voce: and by 'symmetric monoidal functor' he means strong symmetric monoidal functor.)
I forgot what a 'permutative' category is, and from your paper it looks like the same thing as what I'd call a strict symmetric monoidal category (or maybe a symmetric strict monoidal category). Does that sound right?
yeah; permutative = strictly associative and unital (but symmetry is still just an isomorphism)
(Sotto voce: and by 'symmetric monoidal functor' he means strong symmetric monoidal functor.)
ha ha, yeah, choosing to use strong instead of more general lax monoidal functors turned out to be important!
.....
On Wednesday February 28th I'll be giving a talk in the U.C. Riverside category theory seminar. It won't be recorded. Here's a draft of my slides:
Heisenberg reinvented matrices while discovering quantum mechanics, and the algebra generated by annihilation and creation operators obeying the canonical commutation relations was named after him. It turns out that matrices arise naturally from 'spans', where a span between two objects is just a third object with maps to both those two. In terms of spans, the canonical commutation relations have a simple combinatorial interpretation. More recently, Khovanov introduced a 'categorified' Heisenberg algebra, where the canonical commutation relations hold only up to isomorphism, and these isomorphisms obey new relations of their own. The meaning of these new relations was initially rather mysterious, at least to me. However, Jeffrey Morton and Jamie Vicary have shown that these, too, have a nice interpretation in terms of spans.
Speaking of conjectures, I explain a very pleasant conjecture of Morton and Vicary here, involving a bicategory of groupoids, spans, and spans-of-spans.
It's the sort of thing that's entirely plausible, but would require skill with bicategories to avoid a lot of tiring calculations - too tiring for Morton and Vicary to have attempted.
Todd Trimble made some nice observations here about how Morton and Vicary's work is connected to general ideas about how you can convert spans of groupoids into enriched profunctors, and why composition of the resulting enriched profunctors corresponds to composing spans by homotopy pullback.
.....
My wife and I recently bought a flat in Edinburgh, and we will be going there on April 10th, and staying until late August. This will include the period May 1 - June 12 when a bunch of us are working on categories for agent-based models at the ICMS, but we'll be there a lot longer. In theory I might go to ACT in Oxford, but I won't have time to prepare a paper on our new agent-based modeling ideas by then, since those are due at the end of March. So I may just hang out in Edinburgh and think about math - except for a trip to Cambridge in early July.
Do make your theory
In theory I might go to ACT in Oxford,
practice! Send us a paper for MFPS or ACT! It will be fun.
It would definitely be fun to see y'all.
It would be a surprise (to me) if I submitted a paper to MFPS instead of ACT. If I did, it would probably have to be about decorated cospan double categories, how to get double functors between these, and how these are (or could be) used in epidemiological modeling. But I feel like I've already been talking about this for years! The same problem holds with ACT.
Right now I'm working on a bunch of new stuff with the "epidemiology gang": a formalism for agent-based models. But while we're making a lot of progress, I don't want to try to cough up a talk on that by the end of this month. It just doesn't seem like a good use of time: we need to get ready to spend 6 weeks coding (and in my case, helping people code) starting May 1st.
On a different note, my next AMS Notices column has to be written by May 1st. I'm mulling over various different topics. Which would be most interesting to most mathematicians in the AMS?
Dirichlet series from species as a way to get the Hasse-Weil zeta function of a scheme. The Riemann zeta function is a simple very fun example, but maybe I could do the zeta function of an easy elliptic curve too. (here, here)
Thurston's weird result about counting combinatorially distinct triangulations of a 2-sphere, and how it's related to a lattice in 10 dimensions. (here, here, here)
Something easy about dessins d'enfant and Coxeter groups, especially the (2,3,) Coxeter group, and the cartographic group. (here)
Hmm, maybe that's enough choices.
I also have to write a paper for the AMS Notices about "how to write math", by April 1st.
looking forward to
I also have to write a paper for the AMS Notices about "how to write math", by April 1st.
I also have to talk (in Portuguese) about "choosing problems" and I'm using some of your 'realist' advice for that, as well as Hamming's.
Nice! I have a bunch of advice on how to do research which may also be relevant, though maybe for more established researchers.
Thanks!
As I hinted, Krystal Taylor of Ohio State has asked me to write a 1-4 page article about how to write mathematics, for the AMS Notices, due April Fool's Day. She pointed me to a nice list of previous articles about how to write and publish math papers. These are for "Early Career" researchers, meaning maybe grad students and postdocs?
So, I want to do something that's a bit different than those.
Let me try to organize my thoughts. What are some things I always tell people about writing math?
1) Math students (at least in the US) spend a lot of time learning how to write proofs, a very disciplined form of short essay. Then they are traditionally thrust into writing a thesis, which is far longer and more complex. They are not taught how to do this ahead of time. It's a bit like getting a lot of practice making bricks and then suddenly being told to build a house. Then, traditionally, they have to break their thesis into papers and publish those. (What's that like - chopping up the first house you built and trying to turn it into apartments and sell those?) It's not surprising that people find writing math difficult. Luckily more and more grad students are expected to write and publish a paper or two before starting their thesis.
(That doesn't count as useful advice, and I don't even know the fraction of math grad students who submit a paper for publication before writing their thesis, but it could be a way of breaking the ice.)
2) 90% of people who see your paper will only read the title. Of the remaining 10%, 90% will only read your abstract. Of the remaining 1%, 90% will only read your introduction. Only about 0.1% will actually read your whole paper. So, it's very important to put a lot of work into writing a good introduction! It should attractively describe your main results and their context, with a minimum of technical distractions.
Compared to the introduction, you should put in about 10 times as much effort per word into writing the abstract. It should again describe your main results, clearly but more tersely. This is what people look at, to see if you've proved something they need, or might enjoy.
And compared to the abstract, you should put in about 10 times as much effort per word in choosing a title. A good title will be accurate, yet short enough that people can remember it and spread the word. Bott wrote "The stable homotopy of the classical groups", and we can remember that. If it gets too long, people will always say "that paper by so-and-so about..."
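Just to spell out the arithmetic of that funnel (a toy calculation, taking the 90% drop-off rate at face value):

```python
# Readership funnel: at each stage (title -> abstract -> introduction -> body),
# 90% of the remaining readers stop. Fractions of the original audience:
drop = 0.9

title_only    = drop                  # read only the title
abstract_only = (1 - drop) * drop     # got past the title, stopped at the abstract
intro_only    = (1 - drop)**2 * drop  # got past the abstract, stopped at the intro
whole_paper   = (1 - drop)**3         # read the whole paper

print(f"{title_only:.3f} {abstract_only:.3f} {intro_only:.4f} {whole_paper:.4f}")
# -> 0.900 0.090 0.0090 0.0010, i.e. "about 0.1%" read the whole paper
```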
3) To write a good paper it helps to think of it as telling a story. Perhaps our primal way of relating to a text is to read it as some sort of story - imagine a kid reading a book. A good story involves characters who we care about, or come to care about. They engage in some actions - often a conflict, sometimes a quest - where the outcome is suspenseful. Then the outcome becomes clear, and there is a denouement where the main characters hang around for a little bit and help the reader absorb the significance of what happened before the story ends.
All this applies to math. The characters are usually mathematical objects, but we usually still need to introduce them and "develop their character" before the reader will care about them. If your paper is about the absolute Galois group of you are in luck - this is like a superhero franchise where the audience knows right away who Batman is. But if you are writing about Hügelschäffer curves or pseudoreflection groups, you should say what they are. Otherwise only people who already know and love those entities will bother reading - and that, I'm sad to say, is a small audience.
The quest - that's usually pretty clear in a math paper. It may seem hard to build suspense if you say in the introduction what theorems you'll prove - as you should. But the suspense, or maybe I should say interest, can come from the reader wanting to know how you will prove it. Don't be afraid of "giving away too much too soon". In fact, it's so hard to follow a typical proof that you can announce it ahead of time quite loudly and people will still be wondering how the proof works.
The denouement is something mathematicians are often bad at. When the main theorem is proved, often the curtain drops and the lights turn off before we even have a chance to feel good about it! This makes the reader feel the author has lost interest. It's better to end with a bit of discussion of subtle points that would be distracting earlier, or interesting open problems you didn't get around to solving. Don't feel bad that you've left things for the reader to do! Many readers appreciate that. A paper that finishes everything too neatly does not invite further thought.
John Baez said:
It may seem hard to build suspense if you say in the introduction what theorems you'll prove - as you should. But the suspense, or maybe I should say interest, can come from the reader wanting to know how you will prove it. Don't be afraid of "giving away too much too soon". In fact, it's so hard to follow a typical proof that you can announce it ahead of time quite loudly and people will still be wondering how the proof works.
I would go even further than this and suggest that one should not put any effort into "building suspense" when writing a math paper. Maybe that's what you meant to say, but "it's hard to build suspense" could be read as suggesting that it's something you should be trying to do even though it's hard.
You're right. I was thinking about the analogy to story-telling. In story-telling people deliberately inject suspense, and I suppose that's good (though now I'm wondering). I think there's a lot of suspense in reading math too, e.g. I'm slowly gearing up to understand the proof of the Modularity Theorem and I don't have a clue how it works: the suspense is killing me, and that helps motivate me to keep studying. But to the extent that the suspense is caused by someone not explaining things clearly enough from the start, I don't like it.
Maybe math, unlike fiction, always has a superabundance of suspense... :upside_down:
Most suspense is built from sustained ignorance (not in the pejorative sense; I mean a deliberate incompleteness of information), and you certainly don't want to be sustaining ignorance when writing maths!
In story-telling I think there are good and bad ways to inject suspense. I get annoyed when an author creates suspense by deliberately withholding information that I feel I had a right to be told, like a first-person narrator omitting to mention certain things that he did. But I think different readers have different opinions about that.
Agreed. But we do need to make the reader decide that there's something they want to know! And typically (?) that happens before they've learned that thing.
Say I'm going to classify semisimple Lie algebras. First I have to tell you what they are (if you don't already know). Then I should say something about why it's great that we can classify them. For example, I should make a quick aside about how Lie tried to classify 1d Lie algebras, and then 2d Lie algebras, and so on, but quickly sank into a mire of complexity: for real Lie algebras, it seems the 4d classification was fully nailed down only in 1963. Then something about the marvelous way in which semisimplicity avoids this quagmire. Every ideal has a complementary ideal, so every semisimple Lie algebra falls apart neatly into a direct sum of simple ones!
This buildup should not take long - I basically just did it. But I think it helps the reader power through the actual classification: it lets them realize they're being given the key to a treasure room.
There's a bit of "suspense" involved here, but that's probably not the best word for what we're after. Mathematicians say "motivation".
Does anyone know how, or why, or when, the "storytelling" paradigm for math research first appeared? It seems to be aligned with the broader perspective of the Narrative Paradigm (https://en.wikipedia.org/wiki/Narrative_paradigm), but I don't know how explicit or intentional the relationship is.
It seems hard to imagine that this is the kind of thing that had a "first" appearance.
Yeah, I often get that kind of response when I ask about this. For at least the broader Narrative Paradigm, it's just one of several versions of communication theory. I guess what I mean is, when, or how, did this become the only framework I hear about for writing math? (The motivations for it are clearly compelling, I just am surprised there aren't alternatives.)
I've never studied frameworks for writing math. I got interested in this "narrative" stuff at a meeting called Mathematics and Narrative in 2007, organized in Delphi by Apostolos Doxiadis, author of Uncle Petros and Goldbach's Conjecture and later Logicomix. I remember that @David Corfield and Barry Mazur were there. All the participants were going to write chapters for a book on this but somehow I slipped off the mailing list, nobody pressured me to finish my chapter, and I never did. So I've decided I'll finish it now! But as much less of a serious essay, mainly just some writing tips.
The book has come out by now.
Thanks!!
Update now that I've read the introduction: that book is really something! I have some conflicted thoughts, but I'll be interested to read what you have to say about it. Good luck!
I haven't read it. I'm really just writing a short opinion piece on how to write math well, so I hadn't planned to discuss previous scholarship or even study it. But I guess I should.
(There's a general pattern where people write articles saying how to do things well, and basically they spew out their personal opinions and we just have to decide what we think about that.)
Niles Johnson said:
Thanks!!
Update now that I've read the introduction: that book is really something! I have some conflicted thoughts, but I'll be interested to read what you have to say about it. Good luck!
You can find my contribution along with a link to the associated interviews here.
Thanks!
I just wrote two expository articles on elliptic curves and their L-functions. I've found the subject fairly hard to get into, so I'm trying to make it easier:
Counting points on elliptic curves (part 1) - One commonly seen definition of the L-function of an elliptic curve looks really ugly. Let's dig into it a bit - maybe we can improve it.
Counting points on elliptic curves (part 2) - Four things can happen when you take an elliptic curve with integer coefficients and look at it over a finite field. There's good reduction, bad reduction, ugly reduction and weird reduction. Let's see examples of these four cases, and how they affect the count of points!
Ultimately I want to lead up to a slick category-theoretic definition of the L-function of an elliptic curve. But I need to check that the usual tweaks people make for 'primes of bad reduction' are accounted for by this slick definition. So I had to learn a bit about these 'primes of bad reduction', which come in three different kinds. I jokingly call these bad, ugly and weird.
And in the process, I learned about a beautiful classification of 1-dimensional connected algebraic groups, which includes an exotic or 'weird' case I'd never thought about.
If I wanted to sound impressive, I'd say this exotic case can be understood using Galois descent. I link to an article Reid pointed out, which does that. But in my article I describe it in a more lowbrow way. It's still very pretty.
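If anyone wants to play along at home, here is a naive point count over a small prime field. This is just brute force for a curve in short Weierstrass form y² = x³ + ax + b (so assume p ≠ 2, 3); the function name and the test curve are my own choices, not from the articles:

```python
def count_points(a, b, p):
    """Number of points on y^2 = x^3 + a*x + b over F_p, including
    the point at infinity, by brute-force enumeration of x."""
    # Precompute, for each residue r, how many y satisfy y^2 = r (mod p).
    sqrt_count = [0] * p
    for y in range(p):
        sqrt_count[y * y % p] += 1
    n = 1  # the point at infinity
    for x in range(p):
        n += sqrt_count[(x * x * x + a * x + b) % p]
    return n

# Example: y^2 = x^3 + x + 1 over F_5 has 9 points. The Hasse bound
# says |p + 1 - N| <= 2*sqrt(p), and indeed |5 + 1 - 9| = 3 <= 2*sqrt(5).
print(count_points(1, 1, 5))  # -> 9
```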
Nice articles John!
Thanks! In the second one, Jim Borger and Allen Knutson noticed that there are four connected 1-dimensional algebraic groups over , which are exactly analogous to those I was studying over the finite field . I should have noticed this since I was talking about the resemblance between and .
Suppose you draw a polygon whose corners lie on a square lattice and whose only interior point is the origin. Then it has a 'dual' which is another such polygon!
If the lattice points on the boundary of the first polygon are p₁, p₂, p₃, ..., you get the boundary points of the dual polygon by taking differences:
q₁ = p₂ − p₁
q₂ = p₃ − p₂
and so on. Here's an example:
There are more cool facts. First, the dual of the dual is your original polygon.
Even better, if you add up the number of lattice points on the boundary of your original polygon and its dual, you always get 12!
Even better, this is connected to all the other cool stuff you may have heard about the numbers 12, like
1 + 2 + 3 + 4 + ... = -1/12
For a full explanation of these mysteries, see:
• Bjorn Poonen and Fernando Rodriguez-Villegas, Lattice polygons and the number 12.
Hmm from trying a quick example it seems that something breaks if you don't assume that the polygon is convex?
(From (1,0), (2,1), (-1,0), (2,-1) I got a dual which was self-intersecting with edges passing through 12 lattice points!)
Yes, it should be convex - I left out that condition but it's in the paper. Thanks!
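Here's a quick script that checks the "12" fact on a couple of convex examples. I'm taking the dual to be the convex hull of the successive difference vectors (my reading of the construction above; all function names are mine):

```python
from math import gcd

def boundary_points(vertices):
    """All lattice points on the boundary of a lattice polygon, in cyclic order."""
    pts = []
    n = len(vertices)
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        dx, dy = x1 - x0, y1 - y0
        g = gcd(abs(dx), abs(dy))  # number of lattice steps along this edge
        for k in range(g):  # start of edge included; its end belongs to the next edge
            pts.append((x0 + k * dx // g, y0 + k * dy // g))
    return pts

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices, collinear points dropped."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

def dual(vertices):
    """Convex hull of the successive differences of boundary lattice points."""
    p = boundary_points(vertices)
    n = len(p)
    q = [(p[(i + 1) % n][0] - p[i][0], p[(i + 1) % n][1] - p[i][1]) for i in range(n)]
    return convex_hull(q)

# A triangle and a square whose only interior lattice point is the origin:
for P in [[(1, 0), (0, 1), (-1, -1)], [(1, 1), (-1, 1), (-1, -1), (1, -1)]]:
    total = len(boundary_points(P)) + len(boundary_points(dual(P)))
    print(total)  # -> 12 both times
```

For the triangle, the 3 boundary points of the original and the 9 of the dual add up to 12; for the square it's 8 + 4.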
The connection to toric varieties is intriguing..! Thanks for sharing @John Baez
Some things I've done or should do:
I finished this little series of posts:
In Part 1, I gave Wikipedia's current definition of the L-function of an elliptic curve, which was truly horrid. In this definition the L-function is a product over all primes. But what do we multiply in this product? There are 4 different cases, each with its own weird and unmotivated formula!
In Part 2, I studied the 4 cases. They correspond to 4 things that can happen when we look at our elliptic curve over the finite field: it can stay smooth, or it can become singular in 3 different ways. In each case we got a formula for the number of points of the resulting curve over the fields.
This time I gave a much better definition of the L-function of an elliptic curve. Using the work from last time, I showed that it's equivalent to the horrible definition on Wikipedia. And eventually I may get up the nerve to improve the Wikipedia definition. Then future generations will wonder what I was complaining about.
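For readers following along, the standard formula being compared against is the following (with the usual caveat that conventions vary slightly between sources). Writing $a_p = p + 1 - \#E(\mathbb{F}_p)$,

```latex
\[
  L(E,s)
  \;=\;
  \prod_{p \ \text{good}} \frac{1}{1 - a_p\, p^{-s} + p^{1-2s}}
  \;
  \prod_{p \ \text{bad}} \frac{1}{1 - a_p\, p^{-s}},
\]
% where at a bad prime $a_p$ is $1$, $-1$ or $0$ according as the
% reduction is split multiplicative, non-split multiplicative, or additive.
```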
That's super cool! So it's essentially a species thing, then? I noticed you carefully avoided mentioning them, but
An element of is a way to make the set {1,…,n} into a finite semisimple commutative ring, say , and choose an element of .
So I would guess that your is really a functor , and then you also want to mix this up with some kind of comma category construction involving (or maybe ), sending a set with elements to the set (or groupoid) of ways of making this set a semisimple ring....
Heh, I didn't click through to the page on your web on the nLab, to see that you explicitly do this with species and stuff types there.... :blushing:
There's a typo in the blog post, I believe:
That’s also why we write in the formula for the zeta function instead of : it’s a deliberately unnatural convention designed to keep out the riff-raff.
The s should be s, no?
David Michael Roberts said:
Heh, I didn't click through to the page on your web on the nLab, to see that you explicitly do this with species and stuff types there.... :blushing:
No problemo! In my new post I was trying to define the L-function of an elliptic curve as simply as possible, so I didn't want to make species a prerequisite, or even talk about them at all.
But the real goal of my three-part series on elliptic curves was to check that the species-based definition of the zeta function of an elliptic curve matches the complicated-looking definition on Wikipedia, where we consider 4 kinds of primes and multiply factors for each prime, defined differently in the 4 cases. I keep seeing people say that we have to treat primes of bad reduction with great care, so I needed to check that the species-based approach handles them correctly without any special fussing.
David Michael Roberts said:
There's a typo in the blog post, I believe:
That’s also why we write in the formula for the zeta function instead of : it’s a deliberately unnatural convention designed to keep out the riff-raff.
The s should be s, no?
You're right! Thanks. I'll fix it.
John Baez said:
- I'm almost done with an abstract for a "software demonstration" for ACT2024. This will be joint with Nate Osgood, Xiaoyan Li and maybe some others. It will aim to show off some of ModelCollab's new features involving functorial semantics. Nate or Xiaoyan will actually give the demo, but I hope to take the train down from Edinburgh to Oxford to attend the conference.
New features? Is ModelCollab being developed in secret now? The last commit to the GitHub repo was not recent.
And I will ask here again in case bystanders can point me in the right direction: where are the user-friendly installation instructions for ModelCollab?
Alright, John pointed me to https://modelcollab.web.app/. I vaguely recalled that such a thing probably existed, but I couldn't find the link!
David Michael Roberts said:
New features? Is ModelCollab being developed in secret now? The last commit to the GitHub repo was not recent.
I'm not closely following which changes in the underlying StockFlow.jl software are being incorporated in ModelCollab, and which of those require changes to the ModelCollab software. I'm constantly inundated with update notifications but I now see they're at the StockFlow.jl github:
https://github.com/AlgebraicJulia/StockFlow.jl
Anyway, nothing is being done "in secret".
If anyone wants to know anything, I can put you in touch with Nate Osgood and his crew. I'm really just the math guy. I'll ask Nate what's going on with ModelCollab now.
There's a student in Osgood's lab doing more or less a whole master's on ModelCollab but for what sound like quite trivial reasons, something about closing out a GitHub bill, he hasn't pushed any of his work. It's a frustrating situation.
Oh!
I'll try to work with Nate to resolve this.
@David Michael Roberts and @Kevin Carlson (aka Arlin) - I spoke to Nate and here's the story.
First, a lot of new developments on ModelCollab are being pushed to Github; you just need to look in the right place, namely here:
https://github.com/UofS-CEPHIL/modelcollab/tree/maxgraph/webui/src
Eric Redekopp is doing most of this work; he is the one doing a masters on ModelCollab with Nate. The new developments are mainly of two sorts:
1) making ModelCollab more user-friendly in many ways
2) adding support for causal loop diagrams and system structure diagrams - the two main kinds of diagram used in System Dynamics other than the stock and flow diagrams I have often spoken about.
Second, these developments have not yet been pushed to the publicly accessible server for ModelCollab (and the main branch?). To do this will either require paying GitHub about $30/month (for reasons I don't understand) or moving the server to the University of Saskatchewan (which is reluctant for security reasons, but perhaps persuadable).
Nate wants to solve this problem soon.
We will certainly have these problems solved before Nate or Xiaoyan demonstrates ModelCollab at ACT2024 (assuming their talk is accepted).
Third, Nate has put @David Michael Roberts in contact with Xiaoyan Li, Eric Redekopp and someone else, to help David install the AlgebraicJulia software that underlies ModelCollab - namely, StockFlow.jl. Since Xiaoyan has just taken her kids on vacation, there may be a bit of a lag, but David shouldn't be shy about pushing this forward.
Nate says that currently, installing StockFlow.jl is a rather protracted process that involves installing Julia, Jupyter notebooks, blah blah blah. So he plans to "containerize" all this stuff so you can click one download button and do it all in one smooth motion. He also wants to teach the Topos Institute folk about containerization, so they can make other pieces of their AlgebraicJulia software easier for people to use.
Thanks heaps, @John Baez ! I got Nate's email, and will have to get in touch with Eric and Xiaoyan.
I've been busy today a) teaching my school student the steps leading up to Hilbert's Nullstellensatz (Noether normalisation being the big step today, so we could get Zariski's lemma) and then b) meeting with people at my other job about how we are pushing our UI and software forward. Only just opened email to see the notification about all the helpful stuff here.
I think I have Julia already installed, or at least I did. I think I've got Jupyter notebooks set up, and I've been working with Haskell in VS Code, and some other things, so I'm at least partway there.... It's the GUI stuff that's much more mysterious to me...
That's just for ModelCollab, not StockFlow.jl, right? Depending on what you're trying to do, you might prefer interacting textually with StockFlow.jl. It has more of a learning curve, but there are lots of examples in the long appendix here:
Yes, ModelCollab; it's got all kinds of unfamiliar tech (at least, from the point of view of getting it running locally from the GitHub repo). I had a skim through that appendix looking at the pictures of the UI; it was rather nice!
Eric Redekopp is doing his masters on "HCI", an acronym I hadn't known, which means "human-computer interaction". So he uses lots of software that's standard, I guess, for getting web browsers to draw pictures that interact in various ways with your mouse. This software would probably be unfamiliar to most mathematicians (and certainly me).
Some good news: my former master's student @Owen Lynch got accepted by Oxford for a PhD in computer science!
As a software developer who is now using Julia I decided to check out ModelCollab. I'm impressed with the technology ModelCollab is built on, but installing the software is a challenge I haven't conquered yet. Great news they are looking at using containers to fix the problem.
I'm looking at providing online Jupyter notebooks for my friends and for my mathematics website. Jupyter notebooks are the best alternative to Mathematica and provide great support for Julia and Python. I thought it might be nice if folks could access and run ModelCollab from a website. I happen to host a couple dozen websites, so I'm set up to do that type of hosting. Just an idea, but if I were promoting ModelCollab I would consider acquiring a relevant domain like ModelCollab.org. I'll try to contact Eric and see what help I might be able to provide.
People can access and run ModelCollab from a website, but I've been reluctant to distribute the URL publicly because the server can get overloaded, people have accidentally deleted example models on the website (due in part to faulty design that doesn't make this impossible), and right now Firefox gives a warning when you go to this website.
So, there are a number of problems to solve, and Nate Osgood is now pressuring Eric Redekopp to solve them. What sort of help do you think you might provide?
I think it's better to talk to me a bit here before emailing Eric Redekopp, an unsuspecting grad student, and offering him help.
It's great ModelCollab is on the web. That was the main idea that came to mind for doing something helpful. I try to find people to help as opposed to projects, but after looking around the web it doesn't seem like Eric is making himself very accessible. I have a broad technical background, but I'm not an expert on any of the project's core technologies like Julia, React or Firebase. Plus I'd much rather get the system operating on my computer and give it a good look from both the code and the testing perspectives before saying what I might be helpful with. I am able to help with chores others don't find interesting, like systems administration and testing.
What I'll do is pass on your message to Nate Osgood, and talk to him, and see if there's something you can do. Since he's leading the software side of this project, and he has a bunch of grad students and collaborators, he'll probably have more ideas than Eric.
Wow, more good news today: my former grad student @Joe Moeller just got a postdoc position at Caltech, working with Aaron Ames on robotics and category theory!
I have an article on how to write mathematics papers due on April Fool's Day, and I'd love to hear people's suggestions - here is my draft:
BTW, the updated version of ModelCollab is looking much nicer!
Great! I forgot to mention that partially in response to your difficulties, Nate Osgood and his students have dockerized StockFlow.jl. This is a way of making it much easier to install all the software necessary to run it. It's all bundled in a single package, and supposedly you can download it all in almost a single click.
They are also en route to dockerizing AlgebraicRewriting.jl, which does double pushout rewriting for presheaf categories. We may use this for our agent-based models.
That's great! I've been a bit remiss in replying, as I've been busy with work, but I should say that it doesn't just look nicer: the terrible lag and slight brokenness I ran into are gone.
I'm getting really busy right now:
The space of positive definite self-adjoint $2 \times 2$ complex matrices with determinant 1 forms a space called "hyperbolic 3-space" - it's the space where we do 3-dimensional non-Euclidean geometry with constant negative curvature.
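Here's a quick numerical sanity check (my own illustration, not from the column): the action $h \mapsto g h g^\dagger$ of a determinant-1 matrix $g$ preserves self-adjointness and the determinant, so it maps this copy of hyperbolic 3-space to itself.

```python
# Sanity check: the action h -> g h g* of a det-1 matrix g on 2x2 self-adjoint
# matrices preserves self-adjointness and the determinant, so it maps the
# positive definite det = 1 sheet (hyperbolic 3-space) to itself.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conj_T(a):  # conjugate transpose
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

def det(a):
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

# a point of hyperbolic 3-space: self-adjoint, positive definite, det = 1
h = [[2, 1 + 1j], [1 - 1j, 1.5]]
assert abs(det(h) - 1) < 1e-12

# an element of SL(2,C): det = 1
g = [[1, 2 + 1j], [0, 1]]
assert det(g) == 1

h2 = mat_mul(mat_mul(g, h), conj_T(g))
assert abs(det(h2) - 1) < 1e-12                      # determinant preserved
assert all(abs(h2[i][j] - h2[j][i].conjugate()) < 1e-12
           for i in range(2) for j in range(2))      # still self-adjoint
```

The specific matrices `h` and `g` are arbitrary examples I chose; any det-1 `g` works.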
For my column on the hexagonal tiling honeycomb I needed to check that matrices of this form whose entries are Eisenstein integers
lie at the centers of the hexagons shown here:
This is the hexagonal tiling honeycomb: a geometrical structure in hyperbolic space made of sheets tiled by regular hexagons, with 3 of these sheets meeting along each hexagon edge.
Greg Egan helped me out a lot by coming up with an explicit formula for a bunch of matrices of this form describing the centers of a sheet of hexagons.
I still need to show that every hexagon center corresponds to a positive definite self-adjoint matrix with Eisenstein integer entries and determinant 1.
I also need to show that every positive definite self-adjoint matrix with Eisenstein integer entries and determinant 1 describes a hexagon center!
But for this I'm hoping to use a bunch of group theory.
The power of group theory is but a shadow of the power of category theory, but it's still bloody amazing.
Next, I want to show that these hexagon centers correspond to "principal polarizations" of the complex projective variety $E \times E$, where $E = \mathbb{C}/\mathbb{E}$.
This is a product of two copies of the elliptic curve formed by taking the complex plane and modding out by the lattice of Eisenstein integers, $\mathbb{E}$. It's a very symmetrical variety - for example it's an abelian group object in the category of complex projective varieties, but it also gets symmetries from the fact that the lattice $\mathbb{E}$ looks like a tiling of the plane by equilateral triangles.
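For readers who want to play along: a tiny check of my own that the Eisenstein integers $a + b\omega$ (with $\omega = e^{2\pi i/3}$) have norm $a^2 - ab + b^2$ and exactly six units, which is where the hexagonal symmetry comes from.

```python
import cmath

# Eisenstein integers: a + b*omega with omega = exp(2*pi*i/3), a, b integers.
omega = cmath.exp(2j * cmath.pi / 3)

def eis(a, b):
    return a + b * omega

# omega satisfies omega^2 + omega + 1 = 0, so the lattice is closed
# under multiplication
assert abs(omega**2 + omega + 1) < 1e-12

# The norm |a + b*omega|^2 = a^2 - a*b + b^2 is a nonnegative integer
def norm(a, b):
    return a * a - a * b + b * b

for a in range(-3, 4):
    for b in range(-3, 4):
        assert abs(abs(eis(a, b))**2 - norm(a, b)) < 1e-9

# The six units (norm 1) are the sixth roots of unity: the lattice is a
# triangular tiling of the plane, with hexagonal symmetry around each point
units = [(a, b) for a in range(-2, 3) for b in range(-2, 3) if norm(a, b) == 1]
assert len(units) == 6
```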
What's a principal polarization? First of all, there's a group of isomorphism classes of holomorphic complex line bundles on any complex projective variety $X$, called the [[Picard group]] of $X$. They form a group because you can tensor two line bundles and get a line bundle.
The Picard group has a quotient group where we say two holomorphic line bundles are the same if they're isomorphic as topological complex line bundles. This quotient is called the [[Neron-Severi group]] of $X$.
Polarizations, and principal polarizations, are certain special elements of the Neron-Severi group. So we can say they're certain very nice isomorphism classes of line bundles.
But first, it turns out that the Neron-Severi group of our friend $E \times E$
can be naturally identified with the set of $2 \times 2$ self-adjoint matrices with Eisenstein integers as entries!
So various concepts about the Neron-Severi group of $E \times E$, i.e. various concepts about topological isomorphism classes of holomorphic complex line bundles on $E \times E$, translate into concepts about self-adjoint matrices with Eisenstein integers as entries!
For example a holomorphic complex line bundle on a variety is called an [[ample line bundle]] if it has enough sections to define an embedding of that variety into projective space. The topological isomorphism class of an ample line bundle is called a polarization.
And in our case, the polarizations of $E \times E$ correspond to the self-adjoint matrices with Eisenstein integer entries that are also positive definite.
Of these, the most special are the 'principal' polarizations, which are in some sense the 'smallest' polarizations. I won't bother to define them in general - every way I know to do that involves a kind of digression - but in our example they correspond to the self-adjoint matrices with Eisenstein integer entries that have determinant as small as possible while meeting the above conditions... which turns out to mean determinant 1.
AND THESE ARE THE HEXAGON CENTERS IN THE HEXAGONAL TILING HONEYCOMB!
So, we can actually "see" some rather abstract structures in algebraic geometry.
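To make this concrete, here's a small brute-force search of my own for positive definite hermitian $2 \times 2$ matrices with Eisenstein integer entries and determinant 1 - i.e. candidate hexagon centers. The search bound `R` is arbitrary.

```python
# Count, in a small window, the positive definite hermitian matrices
# [[a, b], [conj(b), c]] with integer a, c, Eisenstein b = p + q*omega,
# and determinant a*c - |b|^2 = 1.  These are the hexagon centers.

def eis_norm(p, q):          # |p + q*omega|^2 = p^2 - p*q + q^2
    return p * p - p * q + q * q

centers = []
R = 4                        # arbitrary search bound
for a in range(1, R + 1):    # positive definiteness forces a > 0
    for c in range(1, R + 1):
        for p in range(-R, R + 1):
            for q in range(-R, R + 1):
                if a * c - eis_norm(p, q) == 1:
                    centers.append((a, c, p, q))

assert (1, 1, 0, 0) in centers     # the identity matrix is one such point
assert len(centers) > 1            # and it has many neighbours
```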
To get this side of things to work out, I needed to learn a bit more about polarizations. I asked some questions here:
and Will Sawin, formerly the child prodigy son of my friend Steve Sawin, answered them very nicely.
So, with a lot of help I'm getting this idea worked out. (I first came up with it in conversations with James Dolan.)
By now Greg Egan has done most of the key arguments needed to prove that the hexagons in the hexagonal tiling honeycomb correspond to principal polarizations of the Eisenstein surface. He uses Mathematica in a wonderful way to carry out arguments that require too much calculation for anyone to quickly do by hand. All I can do is provide suggestions for what to do next.
I'm starting to write things up. Here:
I remind people what a Néron–Severi group is, and introduce the Eisenstein surface, a product of two copies of the elliptic curve $\mathbb{C}/\mathbb{E}$, where $\mathbb{E}$ is the lattice of Eisenstein integers:
Then I compute the Néron–Severi group of the Eisenstein surface and show it's the group of $2 \times 2$ hermitian matrices with Eisenstein integers as entries.
Here:
I explain honeycombs in general, and the hexagonal tiling honeycomb in particular, leading up to the conjecture:
Conjecture. The positive definite $2 \times 2$ hermitian matrices with Eisenstein integer entries and determinant 1, viewed as points of hyperbolic 3-space, are the centers of hexagons in a hexagonal tiling honeycomb.
and its consequence
Main result. There is an explicit bijection between principal polarizations on the Eisenstein surface and hexagons in the hexagonal tiling honeycomb.
In my next post I want to give a proof of the conjecture.
Category theorists, and especially applied category theorists, may find it strange to spend so much energy on a single very specific visually attractive entity (the hexagonal tiling honeycomb). But I find it to be a good way to learn the hard part of algebraic geometry. For people like me who enjoy abstraction, sheaves and Grothendieck topologies and so on are in some ways easy to love, while more specialized concepts like abelian surfaces and polarizations take more work. But the work becomes fun when I'm exploring a particular example, since then I want to learn lots of theorems: I need them to do what I'm trying to do!
Also, I strongly believe specific highly symmetrical objects like the icosahedron or the hexagonal tiling honeycomb serve as meeting-points for different branches of mathematics. For the icosahedron this is very well-known: Felix Klein wrote a famous book about the icosahedron and the quintic equation (Lectures on the Icosahedron) and Shurman wrote an updated version (Geometry of the Quintic), and now my friend Bruce Bartlett has written yet another update (The quintic, the icosahedron, and elliptic curves). But the hexagonal tiling honeycomb seems less explored.
Okay, we did it!
Greg Egan and I proved my conjecture that the hexagon centers in the hexagonal tiling honeycomb correspond to the positive definite 2x2 matrices of Eisenstein integers with determinant 1.
It would have taken me forever to do this without Greg's computer calculations, but he also revealed some beautiful structures that are the key to the proof. And after he finished the proof, someone called Mist on Mathstodon came up with a different (but related) proof using more of the theory of [[Coxeter groups]]. So I've included that proof.
Yesterday I decided to pull the whole proof together in 7 hours, before meeting with @Chris Grossack (they/them) and James Dolan to talk about modular forms. It was the hardest I've worked in a long time, because a lot of facts we'd been using weren't as easy to show as I thought. But it was exhilarating and I got it done just in time!
Our software demo "ModelCollab: Software for Compositional Modeling" has been accepted by Applied Category Theory 2024 (to be held at Oxford, June 17-21). Here "we" are @Nathaniel Osgood, @Xiaoyan Li and me.
Together with @Kris Brown and William Waites, we're currently meeting at James Maxwell's childhood home, hard at work on agent-based models. We're going to use stochastic rewrite rules in presheaf categories.
Summary of our team's work so far on the use of categories in epidemiological modeling, written in the style of Alexander Pope:
In realms where numbers dance with grace profound, where minds unravel mysteries unbound, behold the union of thought and art as Category Theory unveils its part.
In Health's domain, where life's intricate maze beguiles the learned with its endless ways, there, too, it finds its fertile ground, in modeling realms where health is crowned.
Presheaves, stochastic, weave their tale, in state charts' realm, where concepts sail! Markov chains, with randomness bestowed, in each transition their secrets showed.
But lo! Deterministic, steadfast, keen, o'er ordinary differential equations' scene, they govern stock-flow, Petri's net, with semantics clear, yet not to forget
Electrical circuits, pulsing, alive, in each electron's dance, they thrive. Category Theory, a guiding light, in Health's modeling, a beacon bright.
With reverence due, we tread this path, where math and health share common swath, for in the nexus of these domains, innovation's spark forever reigns.
So let us raise our pens and minds, to where the abstract and concrete bind. In Health's pursuit, let knowledge soar, with Category Theory, forevermore!
Our team started work on agent-based models in James Maxwell's old house on May 1st. That feels like a long time ago. We started by settling on a concept of 'stochastic C-set rewriting systems', which I explain here:
A C-set is just a functor from a category C to Set, so it's basically a presheaf. C-sets form a topos and thus an [[adhesive category]], so they are a good context for double pushout rewriting. As I explain in the article, a stochastic C-set rewriting system consists of
1) a category C
2) a finite collection of rewrite rules, which are spans in CSet:
3) for each rewrite rule in our collection, a timer: a stochastic map giving the random waiting time until that rule fires
In the article I explain all this stuff, and how to 'run' a stochastic C-set rewriting system.
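To give the flavor of 'running' such a system, here is a toy sketch of mine - not the team's AlgebraicJulia code. The "rewrite rules" just replace items in a multiset, the SIR-flavored rule set is made up, and the scheduler is a simple first-reaction-style loop: sample a waiting time for every enabled rule, then apply the earliest.

```python
import random

# Toy illustration (not the team's code): a "rewrite rule" here removes
# items matching a pattern and adds replacements; each rule has a timer
# that draws a random (exponential) waiting time.
random.seed(0)

state = ["S"] * 5 + ["I"]          # a crude SIR-style population
rules = [
    # (name, pattern, replacement, rate)
    ("infect",  ["S", "I"], ["I", "I"], 1.0),
    ("recover", ["I"],      ["R"],      0.5),
]

def enabled(state, pattern):
    s = list(state)
    for x in pattern:
        if x in s:
            s.remove(x)
        else:
            return False
    return True

t = 0.0
while True:
    waits = [(random.expovariate(rate), name, pat, rep)
             for name, pat, rep, rate in rules if enabled(state, pat)]
    if not waits:                   # no rule applies: simulation is done
        break
    dt, name, pat, rep = min(waits) # the first reaction to fire
    t += dt
    for x in pat:                   # apply the rewrite
        state.remove(x)
    state.extend(rep)

# every individual ends up recovered (or was never infected)
assert "I" not in state
```

The real framework replaces "multiset of labels" with C-sets, "pattern matching" with finding morphisms, and "replace items" with double pushout rewriting - but the event loop has this same shape.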
In fact we went ahead and started implementing a more general version of this notion, which allows the timers to depend on more data, as well as giving more general 'application conditions' which say when we're allowed to apply a rewrite rule.
In the next article, I begin to explain how Kris is implementing this setup in AlgebraicJulia:
Instead of diving in and explaining the general code that can run any stochastic C-set rewriting system, I'm beginning to explain how we can apply it to run the Game of Life. But I realized there's a lot of basic stuff to explain, so this is the first of several parts.
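For reference, here is the bare update rule that the rewriting machinery has to reproduce - a minimal implementation of my own, not Kris's AlgebraicJulia code.

```python
from collections import Counter

# Minimal Game of Life: live cells stored as a set of (x, y) pairs.
# A cell is alive next step iff it has 3 live neighbours, or has 2 and
# is currently alive.

def step(live):
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates with period 2:
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(blinker) == {(1, -1), (1, 0), (1, 1)}
assert step(step(blinker)) == blinker
```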
Allen Knutson is helping me make a bit of progress on the question I raised here:
Briefly: you can define the cross product of vectors in a well-behaved way only in 3d and 7d Euclidean space. The 7d cross product is weird because it's not preserved by rotations in 7d space. But there is a way to get 3d rotations to act on 7d space that preserves the 7d cross product - a highly nontrivial way, giving an irreducible representation of $\mathrm{SO}(3)$ on $\mathbb{R}^7$. I know this because smart people say it's true, but I don't know an explicit description of how 3d rotations do this. And I want to know!
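You can at least verify numerically that the 7d cross product exists and behaves as advertised. This sketch (mine, for illustration) builds it from one standard octonion multiplication table, $e_i e_{i+1} = e_{i+3}$ with indices mod 7, and checks the two defining identities: $x \cdot (x \times y) = 0$ and $|x \times y|^2 = |x|^2 |y|^2 - (x \cdot y)^2$.

```python
# The 7d cross product, built from the octonion multiplication table
# e_i * e_{i+1} = e_{i+3}, indices mod 7 (one standard convention).

triples = [(1, 2, 4), (2, 3, 5), (3, 4, 6), (4, 5, 7),
           (5, 6, 1), (6, 7, 2), (7, 1, 3)]

# totally antisymmetric structure constants f[i][j][k], 1-based indices
f = [[[0] * 8 for _ in range(8)] for _ in range(8)]
for a, b, c in triples:
    for i, j, k in [(a, b, c), (b, c, a), (c, a, b)]:
        f[i][j][k] = 1
        f[j][i][k] = -1

def cross(x, y):     # x, y are length-7 lists; slot i holds the e_{i+1} part
    return [sum(f[i + 1][j + 1][k + 1] * x[i] * y[j]
                for i in range(7) for j in range(7))
            for k in range(7)]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

e = [[1 if i == j else 0 for i in range(7)] for j in range(7)]
assert cross(e[0], e[1]) == e[3]                  # e1 x e2 = e4

x = [1, 0, 1, 0, 0, 0, 0]                         # e1 + e3
y = [0, 1, 0, 0, 0, 1, 0]                         # e2 + e6
z = cross(x, y)
assert dot(z, x) == 0 and dot(z, y) == 0          # x * y is orthogonal to both
assert dot(z, z) == dot(x, x) * dot(y, y) - dot(x, y) ** 2   # norm identity
```

The puzzle in the post is about symmetry, not existence: which rotations preserve `cross` is exactly what's hard to see from a table like this.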
His way of communicating is very much designed for people who are rather comfortable with representation theory. I presume that you will unpack it and explain it for the rest of us, @John Baez ? Some of the steps are so fast I can't tell what it is he's talking about.
For a minute I thought you were spying on our emails, but now I see Allen has posted about this on the nCafe! He's made progress since our last correspondence, but unfortunately a lot of his progress required using the LieTypes package in Macaulay 2, which I regard as "too computational, not conceptual enough" - though it could lead to something more conceptual in the end.
I find Layra Idarani's approach a lot more human-friendly, though it still requires enough computation that he's only sketching these calculations, not actually presenting them.
Heh, no I was talking about Layra's comment, I assumed it was Allen's! My fault
Paul Schwahn has come back with an interesting new approach to get 3d rotations to act on 7d space in a way that preserves the cross product. He points out there's an SO(3)-invariant way to "cube" a vector in 3d and get a vector in 7d, and he believes that you can define the cross product in 7d by
where the cross product on the left is the one in 7d and the cross product on the right is the one in 3d. (He uses different notation than I'm using here.) But I don't think he's worked out all the details.
An epidemiologist having a category-theoretic revelation:
This is Nate Osgood discovering how the process of converting system structure diagrams into causal loop diagrams can be captured by a left pushforward functor between presheaf categories.
These two kinds of diagrams are both important in the modeling tradition called 'system dynamics', which is used in epidemiology as well as economics and other disciplines. People often start modeling systems by building a causal loop diagram and then refine that crude model by creating a system structure diagram. Our team is now on the brink of writing software to automatically implement a forgetful functor that converts system structure diagrams into causal loop diagrams.
This software won't automatically refine your crude models for you - since that requires knowledge of the system being modeled - but it can automatically check that your more refined model really does refine the crude one. And someday we may be able to build a tool that helps guess how to refine the crude model.
The forgetful functor should be an example of what David Spivak calls 'left pushforward' or '$\Sigma$-migration':
• David Spivak, Functorial data migration
It's the left adjoint of the pullback functor between presheaf categories that you get from any functor $F \colon C \to D$.
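In symbols (using Spivak's notation, where $\Delta_F$ is precomposition and $\Sigma_F$ is computed as a pointwise left Kan extension):

```latex
\Delta_F(Y) = Y \circ F, \qquad
\Sigma_F \dashv \Delta_F \dashv \Pi_F, \qquad
(\Sigma_F X)(d) \;\cong\; \operatorname{colim}_{(c,\; \alpha \colon F(c) \to d)} X(c)
```

where the colimit is taken over the comma category $F \downarrow d$.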
It's already implemented in AlgebraicJulia, so we like using it as much as possible in our modeling work.
Amazingly there seems to be a nontrivial connection between the 14 rare earth metals called lanthanides and the 7d space of imaginary octonions:
The reason is that the lanthanides are defined by the fact that their outermost electrons are filling up the so-called 'f shell', which corresponds to the 7-dimensional irreducible representation of SO(3), and this SO(3) action extends to an action of G2, which is the automorphism group of the octonions.
Unfortunately some of the ideas were developed in hard-to-find work by the physicist Racah, famous for his work on quantum mechanics and representation theory.
So there's some digging to be done!
For more on the purely mathematical side of this story, see
Here I outline the challenge of finding an $\mathrm{SO}(3)$ subgroup of $\mathrm{G}_2$ that acts irreducibly on the 7d irrep of $\mathrm{G}_2$, and what success in this would imply for the vector cross product in 7 dimensions. In the comments, first Layra Idarani and then Paul Schwahn present approaches to solving this. Schwahn's seems very pretty. I still want to write up a good proof!
Goofing off, I wrote about how in his later years Mendeleev predicted the existence of two elements lighter than hydrogen. The most interesting was "newtonium": his attempt at a chemical explanation of the luminiferous aether!
I think it's good to see how even famous scientists screw up royally.
Also: Jack Morava told me that the Kervaire invariant problem has been solved and the solution will be unveiled tomorrow.
John Baez said:
Also: Jack Morava told me that the Kervaire invariant problem has been solved and the solution will be unveiled tomorrow.
And then perhaps we can construct without already knowing ??!
Alas, I don't see how this will help anytime soon; it looks like a gnarly spectral sequence calculation. But this open problem about 126-dimensional manifolds is perhaps related to the 128-dimensional Riemannian manifold you just mentioned, in some way too deep for me to fathom.
I may ask Morava.
I don't think there's a theorem or even a formal conjecture, but just a pattern. The dimensions of the (known and potential) Kervaire invariant-one manifolds that do not reduce to stable homotopy theory (30, 62, 126) line up as 2 less than the dimensions of certain homogeneous spaces, and there is a construction from Bökstedt using Morse theory and homotopy theory that works for the dimension-30 case; I think the most relevant paper from Bökstedt is 0411594, but see the slides above by Jones.
To be clear, I don't understand _any_ of this very well! I just collected these things as part of my literature review, as I thought I might have a novel approach using computer-aided search among potential retracts from . I haven't given up yet, but currently the search space is still too large, so it's on the back burner until new insights arise. So if they can figure it out, maybe I can stop worrying about it. :)
You can at least use your insight so far to guess which way Zhouli Xu will finish solving the Kervaire invariant problem tomorrow: will it be true the Kervaire invariant of 126-dimensional manifolds can be nonzero, or false? He's only announcing that in the talk tomorrow! But it's clear what you're hinting:
Can the Kervaire invariant of 126-dimensional manifolds be nonzero?
It would have been funny if the talk wasn't titled "Computing differentials in the Adams spectral sequence" but "More applications of algebra to a problem in topology", as a nod to Mike Hopkins' famous 2009 talk where he announced the partial (but almost complete) solution to the problem....
The paper ended up being 262 pages long, and was finally published in 2016 in Annals of Mathematics. I wonder how long this one will be, if it uses computer assistance, and how long until it is published!
So it turns out Xu had announced the solution earlier, and there is a 126-dimensional manifold with nonzero Kervaire invariant, as @Eric M Downes and Atiyah and I guessed on purely numerological grounds.
Our paper has been published!
I decided that I could not do a decent job heading the Fields Institute project on climate change. Currently this job mainly amounts to writing grant proposals. It was painful to realize that even though I consider this important, I resist actually doing it. I wrote to the director:
I've realized that I can't serve as the "champion" for the Fields Institute's climate change related projects. I am willing to take on selected projects as they come up. But the climate crisis is the defining issue of our day, and the Fields Institute needs someone working energetically full-time to lead the institute's response to that issue. It's become clear that I am not the right person for that job.
There are a few reasons. Nobody I've met so far has caught fire with enthusiasm when I tell them about this Fields project. Even Nate Osgood, who supports it in principle, is always busy doing other things. It seems completely up to me to push this project forward. But I naturally gravitate toward a mix of applied research, mathematical physics, and science exposition. Those are the things I'm good at. It turns out that my feeling of duty to do something about climate change isn't enough to make me cut back on these activities and put a lot of time into the Fields project. It's taken me a while to realize and admit this.
I think the right sort of person for this job would be quite different. They would be focused on climate change and eager to make a name for themselves. The idea of a Fields Institute project would fit naturally into what they're already doing. And they would have a large pre-existing network of contacts among people who are focused on climate change. I also think it would be helpful for the Fields Institute to provide this person with a clear context and support - e.g. a salary, an office, and some clear-cut goals.
Thanks for giving me the chance. I hope you (or the next director) can find someone up to the job.
I couldn't imagine myself doing a job that was full-time just applying for grants :face_with_spiral_eyes:
Also, I mean…no salary for a job that's literally just applying for money? :weary:
Yes, I want to help deal with climate change but this sort of work is painful for me, and I'm not even good at it, so it's unclear that the pain would pay off. I should not have accepted this position, but I had not expected it to rest solely on my shoulders.
Ugh, that's rough. Seems like you're making the right decision, though. I firmly believe that doing stuff we are bad at purely out of a sense of obligation helps no one!
In any case, if you still do feel a sense of responsibility, you could help advertise the position if/when FI gets serious about it, e.g. they offer as you say, "a salary, an office, and some clear-cut goals" (!!)
They said they could not offer such a position until someone - like me! - applied for and got a big grant.
Over on the n-Category Cafe, we're getting close to understanding the deep connection between the cross product in 3 dimensions (which they may have taught you in college) and the cross product in 7 dimensions (which they almost certainly did not). They are not separate things! It seems you can define the latter in terms of the former!
What this all really means - like how the physicist Racah used this math to study the rare earth elements called 'lanthanides' - remains mysterious. But we're miles ahead of where we were half a month ago:
John Baez said:
They said they could not offer such a position until someone - like me! - applied for and got a big grant.
This sounds doomed to history unless someone else is proactive about it.
Is there a possibility that someone motivated to write such a project could get in contact with the Fields institute to obtain their (nominal) support for the project? That is, would the Fields Institute be willing to legitimize a grant application from an unaffiliated researcher?
I doubt they would want to work with an unaffiliated researcher unless that person had enough "cred" to make up for their lack of affiliation. Maybe there's some recognized expert on the math of climate change, or the human response to climate change, who is working in industry or at a nonprofit? I don't know.
A bit more likely is that some academic would want to leverage the prestige of the Fields Institute, and the staff the Fields Institute has for putting together budgets for grant applications, to improve their chance of getting a big grant. That's why I suggested someone "focused on climate change and eager to make a name for themselves" - maybe someone young and ambitious.
I didn't necessarily mean someone with no affiliation at all, just someone not already a member of the FI :grinning_face_with_smiling_eyes:
(I'll take the second half of your message as a "maybe" on that front)
Yes, they definitely need someone unaffiliated with the Fields Institute, since as far as I can tell there's really nobody on the permanent staff who could create climate change proposals except perhaps the director (who successfully led the charge on COVID modeling). They have a lot of short-term visitors, so theoretically they could hire one to work on this.
In this article I started explaining what our Edinburgh agent-based model team is doing with presheaf categories, using the concepts of 'schema' and 'instance':
I illustrated these concepts using @Kris Brown's software for the Game of Life.
In the next article I sketched how Kris uses double pushout rewriting to get time evolution to happen in the Game of Life:
And in tonight's article I explained how we equip objects in presheaf categories with 'attributes':
I link to @Owen Lynch's nice explanation of attributes in terms of profunctors, but since I'm aiming for a broader audience I gave a more lowbrow explanation. I again used the Game of Life to illustrate this concept.
I also highly recommend @Kris Brown's code for the Game of Life, which he annotated using a system called literate
which makes the documentation read like a nice essay (if you write as well as Kris does):
I wrote a quick explainer of the complex things that happen in liquid water, starting with the fact that if you could watch an individual molecule, roughly once in 10 hours on average it does this:
Not my work exactly, but:
Looks like he may be defending it on September 5 in the PA-3 amphitheater, 3 pm WEST (Lisbon time), which is on the first floor of the Pavilhão de Matemática, if anyone happens to be in the neighborhood!
Science popularization:
My overall goal in 2-6 was to learn how bonding works for atoms with all their shells full. The three main tricks are 1) excited states can sometimes be bound when the ground states are not, 2) as a limiting case of this, ions can be bound when molecules are not, and 3) even when two atoms can't form a bond, they can stick together via van der Waals forces.
I summarized how we get the various force laws between point charges, dipoles and induced dipoles here:
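For reference, the standard leading-order results (fixed orientations; see the linked post for derivations) have the force magnitudes falling off as:

```latex
F_{\text{charge--charge}} \sim \frac{1}{r^2}, \quad
F_{\text{charge--dipole}} \sim \frac{1}{r^3}, \quad
F_{\text{dipole--dipole}} \sim \frac{1}{r^4}, \quad
F_{\text{charge--induced dipole}} \sim \frac{1}{r^5}, \quad
F_{\text{induced--induced (van der Waals)}} \sim \frac{1}{r^7}
```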
For 6 weeks I worked with @Nathaniel Osgood, @Xiaoyan Li, @Kris Brown, @Evan Patterson and @ww in Maxwell's childhood home in Edinburgh, working on software for agent-based models. That meeting ended on June 12th.
We developed a category-theoretic framework that lets you model discrete structures that change randomly at discrete moments in time, coupled to continuous variables that evolve according to ordinary differential equations. It's incredibly flexible! We've written code for a lot of it, but there's more to do. And I need to write explanations - illustrating the math and software with a bunch of well-known COVID models. So we'll keep meeting on Zoom.
But still, I have more free time now! I want to finish a bunch of papers.... and books. It's a bit intimidating, but I'm starting with this:
• "Tweets on Entropy", the course on entropy I taught on Twitter. I started with the basic idea of entropy as "missing information", and the principle of maximum entropy, and eventually used this to derive a formula for the entropy of an ideal gas.
I edited my next AMS Notices column based on referee's comments - it's here:
Back to tweets about entropy! I realize I should explain Boltzmann's constant clearly right from the start - or at least give a pretty good explanation as a placeholder for the better explanation that comes later. It's sort of weird that one "nat" of information equals $1.380649 \times 10^{-23}$ joules/kelvin.
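The conversion being alluded to here is standard, with Boltzmann's constant exact by definition in SI units since 2019:

```latex
S = k_B \ln W, \qquad
1 \ \text{nat} \ \leftrightarrow \ k_B = 1.380649 \times 10^{-23} \ \mathrm{J/K}
```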
I posted about why current in a wire flows in the opposite direction from the electrons, and the most funny response so far was:
Franklin realized that if electrons were positive, STEM students would have to learn the Left-Hand Rule, and the majority of them wouldn't be able to hold their fingers correctly.
I was also intrigued by two British people who claimed this convention (that current flows the opposite direction from the electrons) was purely a US thing. That surprised me, because I believe US and UK electrical engineers, unlike highway engineers, all agree on the direction things flow.
For what it's worth, Italian electrical engineers also agree that current goes from + to - (opposite of electrons). So it's definitely not a US thing!
Having studied physics in Hungary, my teachers and textbooks did emphasize that current flows opposite to electrons. I cannot think of any way U.S. conventions might have shaped Hungarian physics (20th-century events might suggest some reverse influence though). And since 1941, U.S. and Hungarian highway engineers concur on the direction of flow.
I was taught the same convention (positive to negative) in a British school, so I suspect those comments reflect rather exceptional experiences (or hazy memories).
One of the Brits clarified:
Oh yeah I get that. I don't mean that we were taught that electrons were positive, but I am sure we were briefly taught 'conventional current' before moving on to using 'electron current' or something for the rest of our studies.
The UK was going through a 'new math' kind of period then.
This BBC website suggests that the British rebellion against 'conventional current' is not over:
Originally, current was defined as the flow of charges from positive to negative. Scientists later discovered that current is actually the flow of electrons, from negative to positive. The original definition is now referred to as ‘conventional current’, to avoid confusion with the newer definition of current.
This makes it sound like 'conventional current' is obsolete! But as far as I know, it reigns supreme in all of physics and engineering.
By the way, I pity beginner students trying to understand the phrase "the flow of charges from positive to negative".
When I took solid state physics in college (semiconductor physics), the professors worked with electron flow and hole flow depending on which one was convenient. (A hole is a vacant position in an orbital shell; it is "positively charged". As electrons jump from orbital shell to orbital shell, holes flow in the opposite direction.)
Electrons and holes are constantly cancelling and dividing in an equilibrium similar to that of H and OH ions in water.
Nathanael Arkor said:
John Baez said:
This BBC website suggests that the British rebellion against 'conventional current' is not over:
Originally, current was defined as the flow of charges from positive to negative. Scientists later discovered that current is actually the flow of electrons, from negative to positive. The original definition is now referred to as ‘conventional current’, to avoid confusion with the newer definition of current.
This makes it sound like 'conventional current' is obsolete! But as far as I know, it reigns supreme in all of physics and engineering.
"Conventional current" is positive-to-negative, which is (rightly) obsolete.
To me "obsolete" means "no longer or rarely used". I find conventional current annoying, but I don't think it's obsolete.
Let's make sure we're talking about the same thing. I find the term "positive-to-negative" somewhat odd, so let me describe conventional current in the way a mathematical physicist would:
Conventional current is where a flow of electrons in the direction v produces a current vector pointing in the direction −v, because electrons have negative charge. As the name suggests, this is the standard convention in physics and engineering. So I wouldn't say it's obsolete: as far as I can tell, it's universally used in the subjects that care most about electrical current. Eliminating it would require a vast amount of work, e.g. rewriting all the standard textbooks.
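A one-line numerical illustration of that sign convention - the copper-ish numbers here are just for illustration:

```python
# Current density J = n * q * v: with negative charge carriers (electrons),
# J points opposite to the drift velocity v.
n = 8.5e28               # free-electron density of copper, per m^3 (roughly)
q_electron = -1.602e-19  # electron charge in coulombs
v_drift = 1e-4           # drift velocity in m/s, pointing in the +x direction

J = n * q_electron * v_drift  # current density, A/m^2
assert J < 0                  # conventional current flows in the -x direction
```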
Ah yes, I see what you mean; I got the directions mixed up there.
Whew - that's a relief. This is the third time today someone has expressed doubt that conventional current is what's conventionally used. I keep thinking maybe they're right... but I check around, and I see conventional current still reigns supreme except for some British secondary schools. So it's been an exciting day.
I had a dream about a conference on applied category theory. Some guy was boasting that he now thought of everything in terms of "double double categories". Mike Shulman asked what exactly he meant by a double double category, but he didn't answer.
Then the guy started annoying me because he kept saying things like "biology is really just the study of distributive semilattices". I argued that these fields were huge and he was caricaturing them. He responded by saying what big grants his research group was getting.
I just realized now that "distributive semilattice" makes no sense! I guess that was a little joke on the part of the dream director.
Are you sure it was a dream and not a premonition? Sounds too realistic...
I think it's some sort of reaction to my feeling of guilt for missing ACT2024.
John Baez said:
Then the guy started annoying me because he kept saying things like "biology is really just the study of distributive semilattices". I argued that these fields were huge and he was caricaturing them. He responded by saying what big grants his research group was getting.
I think you were both circling around the much bigger issue of whether biology should have been called "weak 2-ology", or possibly even "ology of two variables". This gets even more confusing when you consider biological products (biproducts) in categories where products and coproducts coincide. And don't get me started on cat-enriched biology.
John Baez said:
I just realized now that "distributive semilattice" makes no sense! I guess that was a little joke on the part of the dream director.
Well, at first it makes no sense because a distributive lattice is a poset with binary joins and meets such that for every x, y, z, we have x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z), or equivalently, such that for every x, y, z, we have x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z). And the word semilattice means either a meet-semilattice, i.e. a poset with binary meets, or a join-semilattice, i.e. a poset with binary joins. So you can't write the equations defining distributivity.
But in fact there is a notion of distributive semilattice. I've just learned that by looking at this Wikipedia page: Distributivity (order theory).
They say that a meet-semilattice is called distributive when for every a, b, x such that a ∧ b ≤ x, there exist a′ ≥ a and b′ ≥ b such that x = a′ ∧ b′. Join-semilattices are defined dually.
And they say that a lattice is distributive iff it is distributive as a meet-semilattice and iff it is distributive as a join-semilattice. I would be interested in understanding this.
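This condition is easy to brute-force on finite examples. Here's an illustrative Python sketch: a chain passes, while the "diamond" M3 (bottom, top, and three incomparable middle elements) fails.

```python
from itertools import product

def is_distributive_meet_semilattice(elems, leq):
    """Check Wikipedia's condition: whenever meet(a,b) <= x, there exist
    a' >= a and b' >= b with meet(a', b') == x.  (Finite, brute force.)"""
    def meet(a, b):
        lower = [z for z in elems if leq(z, a) and leq(z, b)]
        # the greatest lower bound: the element of `lower` above all the others
        return max(lower, key=lambda z: sum(leq(w, z) for w in lower))
    for a, b, x in product(elems, repeat=3):
        if leq(meet(a, b), x):
            if not any(leq(a, a2) and leq(b, b2) and meet(a2, b2) == x
                       for a2, b2 in product(elems, repeat=2)):
                return False
    return True

# A 3-element chain 0 <= 1 <= 2 is a distributive meet-semilattice:
chain = [0, 1, 2]
assert is_distributive_meet_semilattice(chain, lambda x, y: x <= y)

# The "diamond" M3: bottom '0', top '1', three incomparable middles.
elems = ['0', 'a', 'b', 'c', '1']
def leq(x, y):
    return x == y or x == '0' or y == '1'
assert not is_distributive_meet_semilattice(elems, leq)
```

For M3 the condition fails because a ∧ b = 0 ≤ c, but no pair above a and b meets to c.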
Jean-Baptiste Vienney said:
They say that a meet-semilattice is called distributive when for every a, b, x such that a ∧ b ≤ x, there exist a′ ≥ a and b′ ≥ b such that x = a′ ∧ b′. Join-semilattices are defined dually.
And they say that a lattice is distributive iff it is distributive as a meet-semilattice and iff it is distributive as a join-semilattice. I would be interested in understanding this.
One theorem (that you can find in Birkhoff's book) vastly simplifies thinking about such lattice problems: every lattice which is not distributive contains one of these:
Probably then you can categorify what it is saying.
That would be interesting.
I was wondering first if there is a way to define "distributive category with binary products" and the like with coproducts. Maybe you could ask that for every morphism f : x → a + b, there exist objects a′, b′, two morphisms a′ → a and b′ → b, and an isomorphism x ≅ a′ + b′ compatible with f.
I would be surprised though if the above characterization of distributive lattices and the theorem of Birkhoff can be categorified. Posets are pretty special categories, since every diagram commutes in them.
A fact that clearly doesn't categorify is the equivalence between the two definitions of distributive lattices: it isn't true for distributive categories.
Maybe there is a theorem similar to the above one. Maybe this: Is every distributive category such that (binary) coproducts distribute over (binary) products thin?
My feeling is that a way to categorify facts about lattices is to prove theorems like these, or negations of theorems like these.
I like that last idea!
I actually meant something much simpler, though. For a meet-semilattice, rephrase the "distributivity" in terms of pullbacks or limits that would be true in a category with the latter. I think it's a simpler property that just gets called distributivity for historical similarity with rings.
Anyway, what I hear the Oracle that Speaks Through John's Dreams saying, is that gene flow semilattices for mostly-Darwinian* evolution are "distributive" in the sense used on wikipedia:
If two species a and b in the fossil record share a most recent common ancestor a ∧ b, which is equal to x or has descendant species x, then there are two species, a′ descended from a and b′ descended from b, who also share x as a most recent common ancestor.
Eric M Downes said:
For a meet-semilattice, rephrase the "distributivity" in terms of pullbacks or limits.
This is what I was trying to do. I can rephrase (and correct, I mixed up products and coproducts) better what I suggested: Say a category with binary products is distributive if for every morphism f : a × b → x, we can equip x with a structure of binary product of two objects a′, b′ (i.e. you can find morphisms p : x → a′ and q : x → b′ which make x into a product of a′ and b′ with projections p, q), together with morphisms g : a → a′ and h : b → b′ such that f = g × h.
I think very few categories with binary products will verify this property.
I think semi-additive categories (= with [[biproducts]]) suffice; the UP for the coproduct gives you the needed morphisms, yes? I know that's kind of trivial, maybe I'm just stating the obvious. [Edit. No! A category with at least one biproduct, generated by adding two more distinct morphisms, need not satisfy this in general.]
FWIW, a weaker property which occurs much more commonly (in algebraic varieties, for instance) is modularity, which is very adjoint-functory. Is there a weaker version of this for semilattices?
(Okay, let's open a new topic for this if we want to keep going and not hijack John's topic :)
John Baez said:
Some guy was boasting... Then the guy started annoying me...
Sounds like an encounter with the Shadow (if you're Jungian).
Wow, my shadow is really a jerk.
My NSF/NIH proposal with Nate Osgood and Patty Mabry, "New Mathematics to Enable Modular, Standardized Representation of Behavior in Agent-Based Epidemiological Models", was rejected. 2 of the referee reports said it was "good" and two said "fair".
Patty and I did a "post-mortem" to determine how to improve the next proposal. But the main improvement will be that thanks to our 6-week stay in Edinburgh we've written a lot of the software we were dreaming about in the rejected proposal. So our next proposal can explain it with examples, links to GitHub, maybe links to videos and talks about it, etc. Then we can propose using the software for concrete tasks, focus on those tasks, but also try to get some money to keep improving the software.
Does the second paragraph hint at what the referees didn’t like? Not enough concreteness, basically?
To some extent, but surprisingly that wasn't the main problem. All referees were worried about how we would evaluate the success of our project, since we didn't describe a plan to do that. Also, several were worried about how exactly findings from the Science of Behavior Change would be incorporated in our models. We said they would be - Patty is knowledgeable about that stuff - but we weren't very specific about how we'd do it: we were more focused on describing our plans for developing the math and software of agent-based models.
I've been offered, and have accepted, a job at the University of Edinburgh. It's called the Maxwell Fellow in Public Engagement. It's
a 5-year, part-time, Reader-level appointment held jointly between the Schools of Mathematics and Physics and Astronomy. The primary responsibility of the fellowship holder is to generate public engagement with fundamental concepts and applications of mathematics and physics, reaching a global and diverse audience. The holder of the Fellowship will also be expected to pursue a program of original research in mathematics, theoretical physics or their applications, and to contribute to teaching in the Schools.
If we succeed in getting a visa - as required for this job, and likely to happen - my wife Lisa and I will need to be in the UK at least 180 days a year. But the job is part-time, so we can continue to spend some time in California.
After 3 years I should be able to get permanent residency in the UK, so when the job ends we'll be able to spend basically as much or as little time in the UK as we want. At least that's the hope.
We're both very excited about this.
Congratulations!
After 3 years I should be able to get permanent residency in the UK, so when the job ends we'll be able to spend basically as much or as little time in the UK as we want. At least that's the hope.
Far future, but good to know that Indefinite Leave to Enter/Remain (the King's English for the most common form of Permanent Residency) is not all that permanent, in that it still lapses if you don't spend at least 180 days in the UK in at least one year of any consecutive 5-year period (edit: actually, any 2-year period for those who, unlike me, did not get theirs through the EU Settlement Scheme). The only way to get leave to enter for a lifetime is to become a British national, but that's only a matter of money once you have ILR.
Yes, we spoke to an immigration lawyer and learned a bit about those rules. But I'd already forgotten the precise rules, so thanks. I'll definitely have to make sure I look them up when the time comes - and see if they've changed.
We really like it here, so it should be easy to spend enough time here to keep the right to stay... at least until we get so old that we stop wanting to travel and decide to live in a retirement community where people take care of us. (It sounds yucky, but my aunt is old and ill enough that I know such a time may come.)
Sorry to hear it didn't get funded, and hope your visa etc. goes well.
Does it help your grant proposal to have users of your software? I have been thinking of some toy Monte Carlo models from your posts recently (like H-bond percolation in water, and an HGT sim + tracking the inheritance semilattice) that I know how to implement in Python, but it would also be fun to learn how to use your system and build stuff with it.
Users definitely make software look better, and AlgebraicJulia has incredibly responsive maintainers :innocent:
It would definitely help to have more users. Right now AlgebraicABMs.jl is bleeding-edge technology, very much still under development. It should be a lot easier to use in a couple months.
In the last few weeks, @Xiaoyan Li has been using it to first implement a simple well-known model of the transmission of pertussis (whooping cough) among children, and then use 'data migration' (functors between presheaf categories) to turn this into a more complex model where people occupy locations on a spatial network.
In the process, she's bumping into some bugs in other AlgebraicJulia packages, or things that seem like bugs but aren't (i.e., cases where doing something requires expertise that's hard for people outside the initial development team to acquire). I think this is tremendously useful. But navigating it successfully requires that she have close contact with AlgebraicJulia experts - which is why we're having weekly meetings involving @Kris Brown, @Evan Patterson, @Xiaoyan Li, @Nathaniel Osgood and myself - as well as Owen Haaga, a doctoral student at Oxford who is a summer researcher at the Topos Institute.
So, what I'm trying to slowly say, @Eric M Downes, is that it would be great for you to try out AlgebraicABMs, but you should expect to need some help from the "experts" if you try it soon.
One of my jobs now is to publicize and explain @Kris Brown's demo of a predator-prey population biology model on a spatial grid created using AlgebraicABMs.
If you click the link you'll get Kris' explanation of that model, which is in many ways already better than my explanation will be.
So, @Eric M Downes, you might take a look at this.
Congrats John! It looks like they've written the description for you:
The primary responsibility of the fellowship holder is to generate public engagement with fundamental concepts and applications of mathematics and physics, reaching a global and diverse audience.
Thanks! Yes, I love the sound of this job.
This paper of mine, submitted to the arXiv on June 23rd, has finally made it out of moderation 18 days later:
Abstract: As an introduction to the concept of "moduli space" we consider the moduli space of similarity classes of acute and right triangles in the plane. This has a map to the moduli space of elliptic curves which is onto and generically three-to-one. The reason is that from any acute or right triangle we can construct an elliptic curve, and every elliptic curve is isomorphic to one constructed this way.
Thanks to @David Michael Roberts for noticing this even before I did!
Here's the story:
I write an expository column for the 𝘕𝘰𝘵𝘪𝘤𝘦𝘴 𝘰𝘧 𝘵𝘩𝘦 𝘈𝘮𝘦𝘳𝘪𝘤𝘢𝘯 𝘔𝘢𝘵𝘩𝘦𝘮𝘢𝘵𝘪𝘤𝘢𝘭 𝘚𝘰𝘤𝘪𝘦𝘵𝘺. A while back I wrote about the icosidodecahedron. My column was corrected by two referees and accepted. I submitted it to the arXiv's section for "history and overview", because the ideas were not new. But someone at the arXiv added the section "combinatorics". I thought this was a weird choice.
More recently I wrote this column about the moduli space of acute triangles. Again it was corrected by two referees and accepted. I again submitted it to the arXiv, choosing the section "history and overview". This time some sort of AI system recommended that I submit it to "algebraic geometry". I'd never seen that before! I followed its suggestion and removed "history and overview".
My paper was put on hold for 18 days. Now it has appeared in the section "algebraic geometry". The delay may be because I had never submitted anything to that section before, or it may be because the AI system flagged it.
Not a big deal in the end, but I think it's worth telling this story in case anyone here gets their paper put on hold at the arXiv, because it's essentially impossible to find out what's going on when this happens.
And today, more publication shenanigans! A while back someone asked me to write a paper for the AMS Notices on the subject of math exposition. I polished up my paper Why mathematics is boring. They liked it but they said the title might give a bad impression, and they suggested changing it to something more... boring.
I dug in my heels, and said I would only submit the paper if they let me use that title. After all, it's an example of what I'm trying to explain (techniques for getting people interested), and the first sentence of the paper says that math is not actually boring: people just make it seem boring.
They accepted this. Or seemed to.
But today I got the 'consent to publish' form from the AMS Notices. It lists the title of my paper as 'Publishing and presenting mathematics'.
Either this is a mistake of some sort, or someone changed the title of my paper without asking me! I've asked the person who originally solicited the article what's going on.
I'm not going to publish the paper under this stupid title. It's not like I need more publications or something. It's a perfect example of how people make writing more boring!
Any progress on getting the title corrected?
I asked the person who solicited and edited the article what's going on, Thursday July 11th, but I haven't heard back yet. I guess in a couple days I'll give them a reminder.
Whew! I sent them an email and this time they got back to me and said that nobody has changed the title of my paper.
I asked them if they could get somebody to send me a Consent to Publish form with the correct title. (I don't want to take any chances.)
I wrote a little book and blogged about it:
It's free. If you find problems in it, please let me know!
Typos on the page "A tale of two gases", under
The formulas look very similar. There are three differences
...
- The entropy for distinguishable particles has a term equal to 3/2 kN , while for distinguishable particles it has a term equal to 3/2 kN
one of these should have a 5/2 , and you have two "distinguishable"s
Ack! Yes. Though what I said was perfectly true. :upside_down:
Okay, I've made this correction and a bunch of others. The latest version is here:
http://math.ucr.edu/home/baez/what_is_entropy.pdf
(like all the earlier versions were). Yesterday afternoon I added a section 'Entropy comes in two parts' on page 72, clarifying how entropy of any system in thermal equilibrium splits into two parts, one connected to the system's free energy and one connected to the system's expected energy. This way of dividing entropy into two parts is useful for understanding the various systems I consider.
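For anyone who wants to check that split numerically: here's a toy two-level system in Python, in units where k = 1, with arbitrary illustrative energies and temperature. The split is S = (⟨E⟩ − F)/T with F = −T ln Z, and it agrees exactly with the Gibbs entropy −Σ pᵢ ln pᵢ.

```python
import math

# Two-level system with energies E0, E1 at temperature T (units with k = 1).
E = [0.0, 1.0]
T = 0.7

Z = sum(math.exp(-Ei / T) for Ei in E)        # partition function
p = [math.exp(-Ei / T) / Z for Ei in E]       # Boltzmann distribution
F = -T * math.log(Z)                          # free energy
mean_E = sum(pi * Ei for pi, Ei in zip(p, E)) # expected energy

# Entropy in two parts: a free-energy part and an expected-energy part
S_two_parts = (mean_E - F) / T
# Direct Gibbs entropy of the Boltzmann distribution
S_direct = -sum(pi * math.log(pi) for pi in p)

assert abs(S_two_parts - S_direct) < 1e-12
```

The agreement is an identity, not an approximation: ln pᵢ = −Eᵢ/T − ln Z, so −Σ pᵢ ln pᵢ = ⟨E⟩/T + ln Z = (⟨E⟩ − F)/T.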
On page 41 you mention that the Gibbs energy takes values in [−∞, ∞]. I don't know how precise you want to be, but it is also possible for the entropy of a distribution on ℝ to have both a spike contributing an entropy of −∞ and a tail contributing entropy of +∞. In this case I think you'd have to say that the entropy was simply not defined.
You mean Gibbs entropy. Your point is very good - thanks!
I'm out of lines of text on this page, and I don't want a huge blank page. Maybe I can compress Puzzles 33 and 34 into a single puzzle and also add the pathology you point out.
John Baez said:
[...] my book [...]
Sorry for the off-topic comment, but that is an awesome design for the front page.
Thanks! That counts as on-topic to me.
On page 28 I don't understand the condition in the 2nd display . Isn't that always true? Maybe the condition should be the same as the one in the following display, ie ?
Yes, it should be! Thanks.
(My excuse for making that mistake is that we're looking for entropy-maximizing probability distributions such that . But it's still a mistake.)
You made me notice and fix another typo further down that page, too: mixing up and at one point.
Two more possible typos:
And one question on page 46: you write 'A joule/kelvin is about 7.24 × 10²² nats' --- but strictly speaking those are different units, since the RHS is dimensionless and the LHS is joules/kelvin?
As I understand it, Shannon entropy is dimensionless and Gibbs entropy is in joules/kelvin, due to the factor k?
FWIW I think these are both reasonable, and actually preferable, though maybe more explanation could be given. The Shannon entropy becomes the Gibbs entropy just when multiplied by Boltzmann's constant k.
The conversion of J/K to nats even shows up in Szilard's (and later Landauer's) calculations on Maxwell's demon; within the context of a thermodynamic system at equilibrium, they are equivalent: information has a concrete physical reality. Too often people talk about these entropies of Gibbs and Shannon as if they were different concepts, but they absolutely aren't: the former is a precise restriction of the latter. So I find John's approach refreshing.
Jonas Frey said:
And one question on page 46: you write 'A joule/kelvin is about 7.24 × 10²² nats' --- but strictly speaking those are different units, since the RHS is dimensionless and the LHS is joules/kelvin?
Thanks for all the corrections. Yes, I should have said something like "corresponds to" instead of "is". The most precise statement is that a joule/kelvin divided by Boltzmann's constant is about 7.24 × 10²² nats. But I'm trying to convey something a bit more informal here, without being downright wrong.
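In code, the conversion is just division by Boltzmann's constant:

```python
# One nat of information corresponds to k joules/kelvin of entropy,
# so a joule/kelvin divided by Boltzmann's constant gives the count in nats.
k = 1.380649e-23        # Boltzmann's constant, J/K (exact since the 2019 SI)
nats_per_JK = 1 / k
assert 7.2e22 < nats_per_JK < 7.3e22   # about 7.24e22 nats per J/K
```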
As I understand it, Shannon entropy is dimensionless and Gibbs entropy is in joules/kelvin, due to the factor k?
Exactly!
I need to work with both these forms of entropy, and not do the usual mathematical physicist trick of working in units where k = 1, because I want to teach people a bit of real-world practical physics. At the end I want to compare our entropy calculations with experimental measurements, which are always in joules/kelvin.
Also, using joules/kelvin for entropy is a good reminder of the key formula ΔS = Q/T relating entropy to heat and temperature.
Jonas Frey said:
- page 41, both Shannon entropy and Gibbs entropy are called S in the displays, but the first should probably be H?
Yes, I'm trying to use H for Shannon entropy, which is missing the factor of k.
Thanks - I've made all your changes in the online version here, and if you download the new version you won't have to see a bunch of other mistakes that were present in old versions.
I was worrying that the American Mathematical Society had changed the title of my piece "Why mathematics is boring" to "Publishing and presenting mathematics". They didn't: my piece is just one of several short pieces in a collection with that title.
But the hilarious part is that my piece is the first in this collection, so my title looks like a subtitle to the whole thing:
It would be funny to keep it like this, but I think I'll be nice and suggest that they arrange the short pieces so mine doesn't come first.
In today's email:
Subject: three proofs for Riemann's hypothesis
Usually I just get one at a time!
On a more serious note, my next big project is to work with @Joe Moeller and @Todd Trimble to finish off a paper we've been working on for a couple of years. It's almost done. It might be called "The splitting principle" or it might better be called "Universal properties of 2-rigs".
Honestly, if they had just sent one proof, I might have been skeptical.
"We prove that the empty set is inhabited. Thus, everything is true and the Riemann hypothesis is true."
I wrote some more articles explaining our software for agent-based models based on 'stochastic C-set rewriting systems':
This is an explanation of how we use categories of 'attributed C-sets' to endow entities with 'attributes' like age, height, weight, etc. It's illustrated with the Game of Life, where we only use these attributes to endow the squares with coordinates.
This is a review of all my articles so far on the Game of Life example. The big new thing is a link to Kris Brown's 'literate code'. 'Literate code' is code that reads like an expository essay.
This explains how we can make the Game of Life stochastic - that is, random - by changing one word in our code.
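Just to illustrate the "one word" idea outside our actual Julia code - this Python toy is not AlgebraicABMs, just a sketch - here is a Game of Life step where a single extra argument turns the deterministic rule into a stochastic one, by letting each cell's rewrite "fire" only with some probability:

```python
import random

def neighbors_alive(grid, r, c):
    """Count live neighbors on a toroidal grid."""
    return sum(grid[(r + dr) % len(grid)][(c + dc) % len(grid[0])]
               for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if (dr, dc) != (0, 0))

def step(grid, p_fire=1.0, rng=random):
    """One Game of Life update.  With p_fire = 1.0 this is the usual
    deterministic rule; with p_fire < 1.0 each cell's rewrite only
    fires with that probability -- a one-argument change."""
    new = [row[:] for row in grid]
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            n = neighbors_alive(grid, r, c)
            target = 1 if n == 3 or (grid[r][c] and n == 2) else 0
            if target != grid[r][c] and rng.random() < p_fire:
                new[r][c] = target
    return new

# A "blinker" oscillates with period 2 under the deterministic rule.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]]
assert step(step(blinker)) == blinker
```

In the real setup the deterministic and stochastic versions differ in which kind of "timer" each rewrite rule gets, which is why the change is so small.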
Next I should explain how Xiaoyan Li was able to create a disease model using state charts, automatically turn it into a stochastic C-set rewriting model using her program StateCharts.jl, and then use data migration to put this model on a spatial network.
I also need to explain how we incorporate dynamical systems described by ordinary differential equations into our models. This goes beyond the paradigm of stochastic C-set rewriting.
Yesterday I met with Nathaniel Osgood and Patty Mabry (a researcher at HealthPartners Institute who applies systems science to health modeling) to start outlining a paper. Our goal is to explain the benefits of category-based modeling to medical researchers who do modeling but may not have modeling methodologies as their main focus.
We will explain some of our general points by using our ModelCollab software to build up some stock-flow models of diabetes by composition, pullbacks, etc. Nate and Patty want to find some existing models in the literature so we don't have to justify the choice of models.
This is an example of the kind of work you need to do in 'truly applied' category theory: explaining the benefits of your tools without getting into much detail about the math under the hood. If you can't explain the benefits without talking about the math, you're not really applied yet.
John Baez said:
We will explain some of our general points by using our ModelCollab software to build up some stock-flow models of diabetes by composition, pullbacks, etc
This sounds quite exciting John! Out of curiosity, is this a sort of sequel to "Compositional Modeling with Stock and Flow Diagrams" and further building upon the methods there? Or is it pulling things in a novel direction with a different disease species?
First we wrote Compositional modeling with stock and flow diagrams which uses plenty of category theory and explains our AlgebraicJulia package, StockFlow.jl.
Then we wrote A categorical framework for modeling with stock and flow diagrams, which is aimed more at public health and system dynamics experts. This explains the math for people who don't already know category theory, explains the new web-based ModelCollab interface, and also includes lots of code illustrating how to use StockFlow.jl.
If we were smarter, we would have switched the titles of these two papers.
Since then we've thought a lot about how our software can help public health modelers. And we - meaning not me - have improved the ModelCollab interface a lot. The previous papers both have too much math for most public health experts. You don't need to know this math to use ModelCollab. So we want to write a paper that's more focused on the advantages of our software for public health modeling.
We want to go through the various features using one example. Nate and Patty decided that instead of an infectious disease model - which is sort of old-fashioned textbook stuff, which will make the readers yawn - we should do something like smoking or diabetes. To really incorporate lots of details about human behavior, which are important in these diseases, it's best to use agent-based models. But our agent-based model software is still just being built, so we decided to stick with stock and flow modeling. Nate and Patty thought there should be some pretty good stock and flow models of diabetes in the literature, so we'll try to base our examples on one of those.
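To give a feel for what a stock and flow model is, apart from our actual software: stocks are boxes holding quantities, flows move material between them at state-dependent rates, and running the model just means integrating the resulting ODEs. Here's a tiny generic Euler-integration sketch in Python - the disease stages and rates are made up for illustration, not from any published diabetes model:

```python
def run_stock_flow(stocks, flows, dt=0.1, steps=1000):
    """Euler-integrate a stock and flow model.  `stocks` maps names to
    levels; each flow is (source, target, rate_fn), moving material from
    source to target at rate rate_fn(stocks).  None = outside world."""
    s = dict(stocks)
    for _ in range(steps):
        deltas = {name: 0.0 for name in s}
        for source, target, rate_fn in flows:
            r = rate_fn(s) * dt
            if source is not None:
                deltas[source] -= r
            if target is not None:
                deltas[target] += r
        for name in s:
            s[name] += deltas[name]
    return s

# Toy example: healthy -> prediabetic -> diabetic, with recovery.
flows = [
    ('healthy', 'prediabetic', lambda s: 0.02 * s['healthy']),
    ('prediabetic', 'diabetic', lambda s: 0.01 * s['prediabetic']),
    ('prediabetic', 'healthy', lambda s: 0.05 * s['prediabetic']),
]
final = run_stock_flow({'healthy': 1000.0, 'prediabetic': 0.0,
                        'diabetic': 0.0}, flows)
# all flows are internal, so the total population is conserved
assert abs(sum(final.values()) - 1000.0) < 1e-6
```

The point of StockFlow.jl and ModelCollab is that such diagrams can be built graphically and composed, rather than hand-coded like this.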
One cool thing we're going to try is to put all the examples we talk about online, in ModelCollab, so readers can just click a link, see the example in their web browser, and play around with it!
I'm back to working on a paper with @Joe Moeller and @Todd Trimble, after about 3 months of being distracted by other things.
This paper is called something like "The splitting principle". It studies a bunch of 2-rigs (categorified rings, or rigs) that show up in representation theory and topology. In topology the "splitting principle" says that if you have a vector bundle E over a nice topological space X, you can find a space over X, say p : Y → X, with the property that
1) The pullback of E to Y splits as a sum of line bundles.
2) The pullback map gives a 1-1 map on K-theory, p* : K(X) → K(Y).
This means that for proving identities in K-theory you can always switch to a context where your vector bundles split as a sum of line bundles.
Our original goal was to categorify this famous result - i.e., work with the 2-rig of vector bundles rather than its Grothendieck ring - but also generalize it - i.e., work with general 2-rigs, not just 2-rigs of vector bundles.
We have a conjecture for this generalization, but we have not managed to prove it yet, and the paper is getting so long that we plan to quit with a nice partial result. Our partial result handles one particularly important 2-rig: the free 2-rig on one object.
But in fact if that's all we wanted we could have made the proof much shorter, using standard tricks with symmetric functions and such. What we really wanted was an argument that proceeds at the categorified level at every step, i.e. working with 2-rigs throughout rather than their Grothendieck rings. This led us to study a lot of interesting 2-rigs and maps between them, and a lot of interesting general 2-rig theory.
I wrote a series of 5 short posts on the importance of dimensional analysis in algebra and geometry, from the ancient Greeks to 11th-century Arab mathematicians to Descartes and Newton to James Dolan's work on "doctrines in algebraic geometry", based in part on our discussions of the ontological and epistemological status of real numbers.
Sometime I may try to put this on the n-Cafe.
https://doi.org/10.1371/journal.pone.0112827
Abstract starts: "Classical dimensional analysis in its original form starts by expressing the units for derived quantities, such as force, in terms of power products of basic units etc. This suggests the use of toric ideal theory from algebraic geometry."
Nice! I should show James Dolan that paper. Here is his key idea:
I spent a weekend at a strange old country house in Scotland, with no internet access except for cell phone, so I took a break from working on the paper with Todd Trimble and Joe Moeller and thought about 'Wick rotation' - how inverse temperature in statistical mechanics acts like imaginary time in quantum mechanics. I've written about related ideas here:
and here:
But I've known for a long time that I hadn't put all the puzzle pieces together - simply because the above two analogies are distinct and not yet connected. After a few days of quiet and occasional boredom I think I'm a lot closer.
One step was realizing that what I'd been calling 'free action' in my paper on quantropy - the quantity in quantum mechanics that's analogous to free energy in statistical mechanics - is well-known under the name effective action. Noticing this helped me make some other connections, but mostly it helped to spend hours playing with the puzzle pieces and trying hard to make them all fit!
On Mathstodon, I explained how the Bernoulli numbers emerge naturally from the quantum harmonic oscillator. It's really pretty. But I still find it mysterious. I once read Alain Connes say something about this (follow the link for details), which made me feel he understood it more deeply. But his remark was so sketchy it didn't help much. Something about how Bernoulli numbers show up in the [[Todd class]] and you can understand the Todd class using quantum physics, like Getzler's proof of the Atiyah-Singer index theorem. I understand the first part, sort of, but not the second part.
I'm getting pulled in two directions now, which are strangely parallel. On the one hand my attempt to get to the bottom of Bernoulli numbers using quantum statistical mechanics led Allen Knutson and then Jack Morava to make helpful comments on the n-Category Cafe relating Bernoulli numbers to complex oriented cobordism theories, or more precisely formal group laws. I think this is the "royal road" to understanding the role of Bernoulli numbers in topology, which is something I really want to understand - yet I'm still confused about some very basic stuff, like why exponentiation in the multiplicative formal group amounts to $x \mapsto (1+x)^n - 1$. I have to ask them about this.
On the other hand, I'm trying to better understand the analogy between quantum mechanics and statistical mechanics, and here I'm on much firmer ground since I understand most of the basics and can move ahead to do new things - or at least, things I haven't seen anyone do. My old paper Quantropy worked out this analogy:
But while there's nothing wrong with the math, I later decided this analogy wasn't the best because $\sqrt{-1}$ times Planck's constant isn't really analogous to temperature - it's analogous to Boltzmann's constant. So that's what I'm sorting out now.
At first it's hard to separate temperature and Boltzmann's constant because in statistical mechanics they so often appear together in the quantity $kT$. However, they do play conceptually different roles, because $k$ also appears in the formula for Gibbs entropy, $S = -k \sum_i p_i \ln p_i$. So I've learned that setting $k = 1$ or working only with $kT$ rather than $T$ was really holding back my attempts to sort out the big picture. I had to overcome the mathematician's desire to set annoying constants equal to $1$.
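The Gibbs entropy formula mentioned above is easy to play with numerically. Here's a minimal Python sketch (my own illustration, keeping Boltzmann's constant explicit rather than setting it to 1):

```python
import math

k = 1.380649e-23  # Boltzmann's constant in J/K (kept explicit, not set to 1!)

# Gibbs entropy S = -k * sum_i p_i ln p_i of a probability distribution:
def gibbs_entropy(probs, k=k):
    return -k * sum(p * math.log(p) for p in probs if p > 0)

# The uniform distribution on N outcomes gives S = k ln N:
print(gibbs_entropy([0.25] * 4))   # = k * ln 4
# A sharp (deterministic) distribution has zero entropy:
print(gibbs_entropy([1.0]))        # 0.0
```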
I have now managed to do some nice computations, which I need to write up. My ultimate goal is to clarify the relation between classical statics, classical mechanics, thermostatics and quantum mechanics - a huge amount has been written about this but it's fragmentary and incoherent, and I think the picture is much more beautiful than what we're normally told. Part of me wants to write a series of blog articles and then a paper called "The Unity of Physics". But to avoid seeming overly pompous I want to start by presenting a bunch of specific computations.
John Baez said:
But while there's nothing wrong with the math, I later decided this analogy wasn't the best because $\sqrt{-1}$ times Planck's constant isn't really analogous to temperature - it's analogous to Boltzmann's constant. So that's what I'm sorting out now.
Ah, that's good to know --- I was recently rereading your posts on quantropy and this analogy puzzled me: hbar is a constant, while T isn't!
Everyone is puzzled by that, including me, but at the time I was unable to get the analogy to work smoothly any other way.
I think I can now redo all that stuff and go further.
Still it's wonderful that the analogy between Planck's constant and temperature works as well as it does! While we're unable to study quantum physics by turning a knob that adjusts $\hbar$, in our wonderful modern civilization we have the next best thing: most rooms and ovens come with a dial that lets us adjust $T$.
So while "quantum fluctuations" are largely out of our control, their close relative, "thermal fluctuations", are something we can turn up or down with a twist of the wrist!
John Baez said:
But I still find it mysterious.
Just to show people how long this has been going on, here we are in 2003 on this theme.
Heh. Yes! It's been a back-burner project, but I figure now that I'm retired I should get serious, understand everything I can understand, write it up, and pass on the torch.
I have a half-written paper on the number 24, which shows how the number 240 has many interesting properties analogous to the number 24. They're connected to the Bernoulli numbers $B_2$ and $B_4$. But the place I feel most frustrated is getting a good understanding of the Todd class, which is the key to many of these mysteries. Allen Knutson put his finger on it: the Todd class shows up in the relation between the additive formal group law and the multiplicative formal group law. But there are pathetically basic things about this that I don't understand! Once I do, the Grothendieck-Riemann-Roch theorem should make intuitive sense - see the commutative square there and how the Todd class enters. I think that's the main place where Bernoulli numbers enter topology.
Anyway, in case any students are listening: I think it's good to have a number of ambitious and difficult slow-moving projects on the back burner, along with a lot of easier faster-moving ones. The faster-moving ones let you publish several papers a year, while the slow-moving ones can take decades - but if you wait until you really understand something well, it can be worth it. It's like omelettes versus stews.
Also, the slow-moving ones let you produce some good papers even when you're retired and everyone thinks you're over the hill. :upside_down:
Is it really true that there have been no posts for 12 hours? Or is there some technical problem?
I think it's true?
University has started again for many people around the world so I'd expect them to be busy with their classes and admin work and whatnot.
Should we start a new engaging discussion? e.g. on foundations? :)
Ah, school is back! U. C. Riverside and also the University of Edinburgh start much later.
So yes, start a new engaging discussion! I will watch.
Sorry John! I'm chasing down a paper submission and am planning to return to our discussion in #learning: reading & references > Quantum Techniques for Stochastic Mechanics either sometime this week or early next week! :joy:
I've been blogging about physics on Mathstodon while I struggle with statistical mechanics:
I found that my attempts to get classical thermodynamics to emerge from classical statistical mechanics by taking a limit where Boltzmann's constant approaches zero had a serious flaw. And yet I felt sure the idea must be correct in some sense. I decided we have to do something else while taking the limit. To get a better sense of this I decided to look at an example... and this example turned out to be interesting in its own right. So I wrote this up separately:
Nothing about the limit is visible in this post - instead it's a mini-course on what happens when you have a bunch of identical copies of a system in classical statistical mechanics, and how this is connected to the Central Limit Theorem, and how you can use this to prove Stirling's formula.
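Since the post ends with Stirling's formula, here's a quick numerical sanity check (my own, not from the post) of how good the approximation is:

```python
import math

# Stirling's formula: n! ~ sqrt(2 pi n) * (n/e)^n.
def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

# The ratio to the exact factorial tends to 1 (the relative error
# shrinks roughly like 1/(12 n)):
for n in [5, 10, 50]:
    print(n, stirling(n) / math.factorial(n))
```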
But it actually helped me get to the bottom of what I was stuck on. Calculations with examples are so often helpful!
This is very interesting work! To be honest I didn't even really consider statistical mechanics and thermodynamics as separate things at all, I always assumed they were the same thing. I guess that's because the place I first learned about both was from a single book. I forgot the title but this post reminded me of it since it included plenty of ways of deriving thermodynamic principles from statistical mechanics. The thing I find the most fascinating is that these derivations were first done back in the late 1800s, which was before the concept of the atom was even universally accepted!
Yes, we often meet both these subjects in a single course that starts with thermodynamics and then 'explains' thermodynamics in terms of the microscopic constituents of matter using statistical mechanics (which comes in two flavors, classical and quantum). But thermodynamics is a self-standing discipline which began before Maxwell, Boltzmann and Gibbs introduced statistical mechanics.
Whenever you're using probability theory (either classical, involving integrals, or quantum, involving traces of operators) you are doing statistical mechanics, not mere thermodynamics. Whenever you see Boltzmann's constant you are doing statistical mechanics, not mere thermodynamics.
Thermodynamics has a formula for changes in entropy in terms of things you can measure macroscopically, most simply $dS = dQ/T$, which says how much the entropy increases when you change the energy of a system by heating it up. Statistical mechanics says that entropy is proportional to the unknown information about the microscopic constituents of matter - where information is defined using probability theory. Then Boltzmann's constant comes in: it says how many joules/kelvin of entropy correspond to one nat of unknown information. It's no accident that $k$ is close to the reciprocal of Avogadro's number: in most forms of matter we know, there are roughly a dozen bits of unknown information per atom.
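To make the numbers concrete, here's a tiny Python check (my own, using the exact SI values fixed by the 2019 redefinition) of how close Boltzmann's constant is to the reciprocal of Avogadro's number:

```python
import math

# Exact SI values (both are defined constants since 2019):
k = 1.380649e-23      # Boltzmann's constant: J/K per nat of unknown information
N_A = 6.02214076e23   # Avogadro's number

# k is within an order of magnitude of 1/N_A; their product is the
# gas constant R, a number of order 10:
R = k * N_A
print(R)              # ≈ 8.314 J/(mol K)

# One nat is 1/ln(2) bits, so per *bit* of unknown information:
k_per_bit = k * math.log(2)
print(k_per_bit)      # ≈ 9.57e-24 J/K
```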
I am now trying to formalize how classical statistical mechanics reduces to thermodynamics as $k \to 0$. But you can't just let $k \to 0$: you have to change other things too.
Many students, including very smart ones, dislike thermodynamics on first contact because it features a welter of formulas involving partial derivatives in different coordinate systems. To really enjoy these formulas you need to use [[differential forms]] - and it turns out the ubiquitous [[Legendre transforms]] are best understood with a bit of [[symplectic geometry]] or [[contact geometry]]. Since none of this stuff is explained in a typical first course on thermodynamics, students tend to gravitate toward the clarity of statistical mechanics - which is also more powerful.
The real eye-opener for me came when I noticed that the so-called Maxwell relations in thermodynamics are formally identical to Hamilton's equations in classical mechanics:
I've been trying to get to the bottom of this ever since! (Rather slowly: this is a stew on the back burner.)
John Baez said:
The real eye-opener for me came when I noticed that the so-called Maxwell relations in thermodynamics are formally identical to Hamilton's equations in classical mechanics:
I've been trying to get to the bottom of this ever since! (Rather slowly: this is a stew on the back burner.)
It's been nice to re-read this one. The Hamiltonian-thermodynamical analogy is so bizarre, I love it.
Thanks! It blew my mind, and these days my main project is figuring out what it "really means". But I don't know how to answer that question directly, so I've figured out some strategies for sneaking up to an answer.
The other day I was thinking about how some of the hardest problems barely even look like problems, because you don't even know what to try to do. They just look like "strange coincidences" - or you may only have a "feeling" that something interesting is going on.
I've been trying to learn more about Hamiltonian mechanics recently, I hadn't learned it before since I always dismissed it as an overcomplication- that is, introducing extremely complicated math when alternatives (like Newton and even Lagrange) used much simpler math to do the same thing. I admittedly still kind of think of it this way, but reading the classical mechanics vs thermodynamics article, I see that sometimes introducing more math can help reveal previously obscured connections and analogies. In any case I've been thinking about this some more and why this analogy might exist, but I think I still have a lot to learn on this.
This might be unrelated but I found a way Hamiltonian mechanics is used to study statistical mechanical systems such as in thermodynamics. The approach is given by the "Liouville equation" which measures the time evolution of a statistical distribution on phase space.
On a theoretical level, I feel that all the variational principles discussed should be derivable from the principle of stationary action, since the principle of stationary action itself is directly derived from the more fundamental path integral formulation of QM.
I've been trying to learn more about Hamiltonian mechanics recently, I hadn't learned it before since I always dismissed it as an overcomplication- that is, introducing extremely complicated math when alternatives (like Newton and even Lagrange) used much simpler math to do the same thing.
I whole-heartedly disagree: Hamiltonian mechanics is full of fascinating ideas and powerful results not present in Newtonian or Lagrangian mechanics, and none of these three formalisms are equivalent to each other, though they have a large overlap and interact in a lot of useful ways.
But you're not alone: people didn't study Hamilton's equations much until quantum mechanics was invented and Dirac realized that Hamilton's equations are the classical limit of Heisenberg's fundamental equation for how observables evolve in quantum mechanics, $\frac{dA}{dt} = \frac{1}{i\hbar}[A, H]$,
with the Poisson bracket being the classical limit of the commutator. In fact Dirac tells a funny story about not remembering what the Poisson brackets were, starting around 16:57 here.
Here's my first post about getting thermodynamics as a limit of classical statistical mechanics:
This mainly sets the stage, explaining the rough idea of a 'moduli space of physical frameworks', and sketching some examples rather loosely, before turning to the 1-parameter family of frameworks that I'm trying to understand now, where the parameter is Boltzmann's constant. I talk about how this is connected to tropical algebra, and how Boltzmann's constant is analogous to Planck's constant.
In this post I give a lightning review of thermodynamics for mathematicians, focusing on systems described by a single real-valued function on the real line, namely entropy as a function of energy:
I also introduce the Legendre transform, which plays a key role in my schemes.
The main goal is to define a thermodynamic system with the precision necessary to contemplate proving that a classical statistical mechanical system gives a thermodynamic system in the limit where Boltzmann's constant goes to zero. For this I used the definition of 'thermostatic system' from a paper by @Owen Lynch, @Joe Moeller and me.
In the next post I show how the Legendre transform arises as a limit of the Laplace transform... or at least a close relative of the Laplace transform:
This post requires no physics to understand! But the goal is to nail down some math needed to see thermostatic systems as limits of classical statistical mechanical systems. I'll try to do that next time.
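The idea that a Legendre transform arises as a limit of (a relative of) the Laplace transform can be checked numerically. Here's a rough sketch under my own choice of example, $f(x) = x^2/2$, whose Legendre transform is $f^*(p) = p^2/2$:

```python
import math

# Claim (Laplace's method): as eps -> 0,
#   eps * log( integral of exp((p*x - f(x))/eps) dx )
# tends to the Legendre transform f*(p) = sup_x (p*x - f(x)).
def softened_legendre(p, eps, f, lo=-20.0, hi=20.0, n=100000):
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):            # midpoint Riemann sum
        x = lo + (i + 0.5) * dx
        total += math.exp((p * x - f(x)) / eps) * dx
    return eps * math.log(total)

f = lambda x: x * x / 2           # Legendre transform: f*(p) = p**2 / 2
for eps in [1.0, 0.1, 0.01]:
    print(eps, softened_legendre(1.0, eps, f))   # -> f*(1) = 0.5
```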
Hopefully @David Corfield has an opportunity to see your third blog post.
Thanks. I'm sure he will eventually! He's a devoted reader of the n-Category Cafe, and always happy to see progress on slow-cooking projects.
Btw, @Madeleine Birchfield, I belatedly replied to a comment of yours.
I saw that comment; I don't really have anything else to add there since I don't know the answer to that easier question either.
Okay. Here's an even easier one which everyone here except me probably knows the answer to: what are the initial algebra and final coalgebra for the functor $F(X) = A \times X$, where $A$ is some set?
The initial algebra is the empty set, and the terminal coalgebra is the sequence set $A^{\mathbb{N}}$.
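For the terminal coalgebra, the unique map from any coalgebra is the familiar 'unfold' from functional programming; here's an illustrative Python sketch (my own example, not from the discussion):

```python
from itertools import islice

# A coalgebra for F(X) = A x X is a map  step : X -> (A, X).
# The unique map to the terminal coalgebra A^N sends a state to the
# stream of outputs obtained by iterating step forever.
def unfold(step, state):
    while True:
        a, state = step(state)
        yield a

# Example: state space X = int, step(n) = (n % 10, n // 10), unfolding
# an integer into its decimal digits (followed by 0s forever):
digits = unfold(lambda n: (n % 10, n // 10), 1234)
print(list(islice(digits, 6)))  # [4, 3, 2, 1, 0, 0]
```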
Okay, thanks. If $A$ is the set of states of some system in classical mechanics, $A^{\mathbb{N}}$ is the set of states of a countable collection of copies of that system. I was hoping that the answer to your question on the n-Cafe, and my simplified version of your question, would have a similar flavor: we were talking about some physical systems with some extra structure beyond having a mere set of states, so I'd hope the corresponding terminal coalgebras would describe a countable collection of copies of that system. But I don't know how to get it to work!
My old friend the animator Peter Chung, famous for his work on Aeon Flux, has released the first of our Zoom conversations! We range over many topics: artistic, scientific and philosophical. You may like to hear Chung's opinions on authors including Greg Egan, or my thoughts on the difficulty of explaining quantum mechanics in everyday language.
If you go to the link and click "Join For Free" at right, and give your email address, you can immediately watch the video. I'll try to get the video onto my YouTube channel.
He writes:
We have been close friends since twelfth grade, when we were new students at Langley High School in Virginia in 1978-79. In spite of having divergent career paths, I think we shared an outsider's view of high school culture that brought us together. I've wanted to interview John for a while, and this is the first of a series of dialogues which I hope will be of interest to listeners. John has requested that our dialogues be available to everyone without paywall restrictions.
The story of thermodynamics and statistical mechanics continues! In Part 1, I explained my hopes that classical statistical mechanics reduces to thermodynamics in the limit where Boltzmann's constant approaches zero. In Part 2, I explained exactly what I mean by 'thermodynamics'. I also showed how, in this framework, a quantity called 'negative free entropy' arises as the Legendre transform of entropy. In Part 3, I showed how a Legendre transform can arise as a limit of something like a Laplace transform.
Here I put all the puzzle pieces together:
I explain exactly what I mean by 'classical statistical mechanics', and how negative free entropy is defined in this framework. Its definition involves a Laplace transform. Using the result from Part 3, I show that as $k \to 0$, negative free entropy in classical statistical mechanics approaches the negative free entropy we've already seen in thermodynamics.
I need to do a lot more to clarify what's really going on: there are some functors that turn classical stat mech systems into thermostatic systems and vice versa, and this limiting process fits into those somehow.
After making tons of corrections suggested by readers, I put my new book on the arXiv:
I've always been fascinated by how gravity breaks the usual rules of thermodynamics. I just read a cool article about how in Newtonian mechanics a sufficiently large sphere containing a bunch of point particles interacting gravitationally will keep getting hotter and hotter indefinitely! I couldn't resist summarizing it in this blog article:
Is the odd behaviour due to the fact that the attractive potential is unbounded as the stars get closer and closer? Does the phase space then have infinite volume?
The phase space of a point particle in a bounded set $B \subseteq \mathbb{R}^3$ has infinite volume, since it's $B \times \mathbb{R}^3$, where a point $(q, p)$ describes the position $q \in B$ and momentum $p \in \mathbb{R}^3$. The momentum can get arbitrarily big. And thus the kinetic energy can get arbitrarily big.
For a collection of $n$ point particles in the set $B$ the story is the same: the phase space is $(B \times \mathbb{R}^3)^n$.
Is the odd behaviour due to the fact that the attractive potential is unbounded as the stars get closer and closer?
Exactly: it can get arbitrarily large and negative. So even if the stars have a fixed total energy, they can have arbitrarily large positive kinetic energy, balanced by their arbitrarily large negative potential energy.
Or to be a bit less technical: as a cluster of stars crunches down, they zip around faster and faster - and if they were point particles described by Newtonian gravity, there'd be no limit to how bad this can get! That's the gravo-thermal catastrophe.
In reality the stars will collide and form black holes.
Okay, the phase space has infinite volume. But the slice of it at fixed energy is bounded. I know there's a canonical volume form on the phase space, but I don't know if there's a canonical way to say that this slice of codimension $1$ has finite volume.
Thanks for engaging with me on this! I love physics.
The slice where the energy equals $E$ is bounded if the energy is purely kinetic energy. It's not bounded if the energy is kinetic plus potential energy and the potential is unbounded below. For example consider 2 particles of mass $m$ interacting gravitationally, where the energy is $$E = \frac{|p_1|^2}{2m} + \frac{|p_2|^2}{2m} - \frac{Gm^2}{|q_1 - q_2|}$$
Here $q_i$ and $p_i$ are the position and momentum of the $i$th particle for $i = 1, 2$, and $G$ is the gravitational constant. The momenta can get arbitrarily large while keeping the energy constant if $|q_1 - q_2|$ gets arbitrarily small!
(The energy is undefined where $q_1 = q_2$.)
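Here's a toy numerical illustration (my own, in made-up units) of how fixing the total energy still lets the kinetic energy blow up as the separation shrinks:

```python
import math

# Two point particles of mass m with fixed total energy E: as the
# separation r shrinks, the kinetic energy K = E + G*m^2/r that
# balances the potential -G*m^2/r grows without bound, and so do the
# momenta.  Toy units, not physical values.
G, m, E = 1.0, 1.0, 0.0
for r in [1.0, 1e-3, 1e-6]:
    K = E + G * m**2 / r       # total kinetic energy at separation r
    p = math.sqrt(m * K)       # size of each momentum if K = 2 * p^2/(2m)
    print(r, K, p)
```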
There is no natural volume form on a codimension-1 slice of phase space. This is why statistical mechanics prefers the 'canonical ensemble' (a god-given probability distribution on phase space w.r.t. the canonical volume form there) to the 'microcanonical ensemble' (supposedly a probability distribution on the energy = E slice of phase space, but alas there's no canonical volume form there, so this is problematic).
In less jargonesque terms, we should not pretend we have measured the energy to be E with perfect accuracy, if we're trying to understand what particles moving around randomly are doing.
(Actually the issue of a volume form on the energy slice is more complicated than I was letting on. If all we have is the slice, there's definitely no canonical choice of such volume form. But if we also get to use the 1-form $dH$, we can look for an $(n-1)$-form $\mu$ on this slice such that $\mu \wedge dH = \omega$, where $\omega$ is the canonical $n$-form on the phase space.)
The reason I was thinking of the phase space volume is that I know Liouville's theorem says this volume is preserved by the Hamiltonian flow. So if the phase space has finite volume then there's a uniform distribution which is preserved over time. It seems impossible that any catastrophe could happen in that case.
I agree. Highly mathematical physicists love to think about phase spaces of finite volume - especially compact symplectic manifolds, which arise naturally from Kaehler manifolds (complex manifolds with a complex-analytic analogue of a Riemannian structure, whose imaginary part is the symplectic structure). But collections of particles moving around in space have phase spaces of infinite volume.
@Todd Trimble, @Joe Moeller and I are almost done with our paper '2-rig extensions and the splitting principle'. It's time to figure out what I'll write next. I have a column for the AMS Notices due November 1st, but that's just 2 pages. There are various bigger things to tackle:
Write a paper with @Nathaniel Osgood, @Xiaoyan Li and @Evan Patterson on categories in system dynamics, an approach for studying nonlinear dynamical systems that's widely used in economics, epidemiology, population biology, etc. We have software that implements this category theory, but we have not told the full story of how the math works, and we want to write a paper that will be good for category theorists (as well as other papers).
Finish my book with Derek Wise called Lectures on Classical Mechanics. Perhaps the title should be Lectures on Lagrangian Mechanics since that's the focus. The most annoying thing is that none of the encapsulated postscript figures work anymore, on my current LaTeX installation.
Edit This Week's Finds in Mathematical Physics: Weeks 151 to 200 and put it on the arXiv.
Maybe polish up Dirichlet species and the Hasse-Weil zeta function, which is joint work with James Dolan. This is a smaller job.
Maybe write a book A Mathematical Introduction to Tuning Systems, based on my blog posts.
I’ve found the program ‘eps2svg’ is effective for the action in its name.
Thanks, I should try it!
- Write a paper with @Nathaniel Osgood, @Xiaoyan Li and @Evan Patterson on categories in system dynamics, an approach for studying nonlinear dynamical systems that's widely used in economics, epidemiology, population biology, etc. We have software that implements this category theory, but we have not told the full story of how the math works, and we want to write a paper that will be good for category theorists (as well as other papers).
Oh wow this sounds quite exciting @John Baez! Are you referring to the work on ModelCollab and the work presented at the Topos Colloquium talk, Towards Compositional System Dynamics for Public Health? Can you give a teaser about what else you want to explain regarding the "full story of how the math works"?
Yes, we want to explain the math behind ModelCollab and StockFlow.jl. So we'll be basically continuing on from our earlier papers which already did this:
John Baez, Xiaoyan Li, Sophie Libkind, Nathaniel Osgood and Evan Patterson, Compositional modeling with stock and flow diagrams, Proceedings Fifth International Conference on Applied Category Theory, EPTCS 380 (2022), 77-96.
John Baez, Xiaoyan Li, Sophie Libkind, Nathaniel D. Osgood and Eric Redekopp, A categorical framework for modeling with stock and flow diagrams, in Mathematics of Public Health: Mathematical Modelling from the Next Generation, eds. Jummy David and Jianhong Wu, Springer, 2023, pp. 175-207.
If you haven't read those, those would be the best possible 'teaser'.
But now we want to go into a lot more detail on the math, for example using more double categories, and Evan Patterson's ideas on cocartesian double categories and parametrized dynamical systems - and explaining in more detail the double functors sending stock-flow diagrams to causal loop diagrams and system structure diagrams. (These are 3 commonly used kinds of diagrams in the field of system dynamics, which we already explained in the second paper above).
I wrote a post on some exciting discoveries about the largest asteroid:
Basically: while the Titius-Bode law predicted a planet where the asteroid belt is, and the asteroids aren't much like planets, Ceres is a lot more like a planet than I'd realized!
More on how classical statistical mechanics reduces to thermodynamics when Boltzmann's constant approaches zero:
In part 4, I presented a nifty result about how this works. I used a lot of physics jargon to explain why I care about this result, and some math jargon to carry out my argument. But to understand the result, you only need to know calculus! So this time I state it without all the surrounding rhetoric, and then illustrate it with an example.
At the end, I talk about the physical meaning of it all. But for the rest, no knowledge of physics is required.
We finished a paper and I uploaded it to the arXiv:
I'll link to it here when it shows up, but here's the abstract just as a teaser:
Classically, the splitting principle says how to pull back a vector bundle in such a way that it splits into line bundles and the pullback map induces an injection on K-theory. Here we categorify the splitting principle and generalize it to the context of 2-rigs. A 2-rig is a kind of categorified 'ring without negatives', such as a category of vector bundles with $\oplus$ as addition and $\otimes$ as multiplication. Technically, we define a 2-rig to be a Cauchy complete $k$-linear symmetric monoidal category where $k$ has characteristic zero. We conjecture that for any suitably finite-dimensional object $r$ of a 2-rig $\mathsf{R}$, there is a 2-rig map $E \colon \mathsf{R} \to \overline{\mathsf{R}}$ such that $E(r)$ splits as a direct sum of finitely many "subline objects" and $E$ has various good properties: it is faithful, conservative, essentially injective, and the induced map of Grothendieck rings is injective. We prove this conjecture for the free 2-rig on one object, namely the category of Schur functors, whose Grothendieck ring is the free $\lambda$-ring on one generator, also known as the ring of symmetric functions. We use this task as an excuse to develop the representation theory of affine categories - that is, categories enriched in affine schemes - using the theory of 2-rigs.
Hey, someone has extended my work on 2-Hilbert spaces (categorified Hilbert spaces) to define 3-Hilbert spaces!
Abstract. Higher idempotent completion gives a formal inductive construction of the n-category of finite dimensional n-vector spaces starting with the complex numbers. We propose a manifestly unitary construction of low dimensional higher Hilbert spaces, formally constructing the $\dagger$-3-category of 3-Hilbert spaces from Baez's 2-Hilbert spaces, which itself forms a 3-Hilbert space. We prove that the forgetful functor from 3-Hilbert spaces to 3-vector spaces is fully faithful.
Originally I thought that proving the cobordism hypothesis would go hand-in-hand with constructing $n$-categories of $n$-Hilbert spaces, so that an extended TQFT would be a symmetric monoidal $n$-functor $Z \colon n\mathrm{Cob} \to n\mathrm{Hilb}$.
But so far the topological side has raced ahead of the n-Hilbert space side.
[[2-Hilbert spaces]] were very interesting - all sorts of things like [[Pontryagin duality]] and the [[Doplicher-Roberts theorem]] can be formulated in terms of 2-Hilbert spaces - so there should be a lot of interesting things to do with 3-Hilbert spaces.
I wonder if it would be easier to define the $\infty$-category of $\infty$-Hilbert spaces and then perform some truncation operation to get the $n$-categories of $n$-Hilbert spaces.
Ultimately that might be true, but I think people don't understand the patterns well enough yet.
A 1-Hilbert space is not a special case of a 2-Hilbert space (or at least not in any way that's obvious to me) but 1Hilb is a very good example of a 2-Hilbert space, and that pattern should continue - the paper claims that 2-Hilb is a 3-Hilbert space.
One great thing is that the paper outlines applications to condensed matter physics - so this sort of higher category theory should become "applied n-category theory" fairly soon.
As @Joe Moeller noted, our paper with Todd Trimble, 2-rig extensions and the splitting principle, has hit the arXiv! I explain our paper in this blog article:
It begins:
Superficially this paper is about categorifying a famous method for studying vector bundles, called the 'splitting principle'. But it's also a continuation of our work on representation theory using categorified rigs, called '2-rigs'. We conjecture a splitting principle for 2-rigs, and prove a version of it in the universal example.
But we also do more. I'll only explain a bit, today.
A typo, under "The Splitting Principle for Vector Bundles": there's an instance of , instead of to
Thanks!
Btw, I've fixed a lot of worse errors in the last half hour.
I'm going to give a one-hour talk on my work with @Joe Moeller and @Todd Trimble - about 2-rigs in representation theory and topology, and the splitting principle - at the Octoberfest (October 26-27). This is a purely virtual workshop so it should be easy to attend.
Oh no, I think I might have to miss it. I'll keep an eye out for the schedule, though.
Rick Blute says the Octoberfest lectures will be recorded, @Todd Trimble.
Thank you!
I blogged about the October schedule and talk abstracts - we'll be seeing talks from some people who hang out here!
I'm giving a keynote talk at 2 pm Eastern Daylight Time on Saturday October 26th, and I've made my talk slides available here:
Abstract. A rig is a “ring without negatives”. There are many ways to categorify this concept, but here we explain one specific definition that sheds new light on topology and representation theory. Examples of such 2-rigs include categories of group representations, coherent sheaves and vector bundles. We explain some theorems and conjectures about 2-rigs with simple universal properties. For example, the free 2-rig on one generator is called the 2-rig of “Schur functors” because it acts as endofunctors of every 2-rig. It has as objects all finite direct sums of irreducible representations of symmetric groups. Furthermore, the “splitting principle” for vector bundles has a universal formulation in terms of this 2-rig. This is joint work with Joe Moeller and Todd Trimble.
In this talk there are a number of conjectures and problems for people to work on!
There's a typo in the slides in the definition of the nth exterior power: there's an with no number
Oh-oh! Thanks.
Fixed.
John Baez said:
But so far the topological side has raced ahead of the n-Hilbert space side.
And not just the topological, also the geometric:
Yes.
I'm writing my next column for the AMS Notices about this paper:
It's beautiful, but I found it very difficult to get the main ideas, so I hope my little column will help future readers.
He studied the so-called moduli space of all ways of giving a sphere a flat Riemannian metric with 12 conical singularities with nonnegative angle deficits. Two such metrics count as the same point in the moduli space if they are isometric up to rescaling. He showed the compactification of this moduli space is the quotient of a 9-dimensional complex manifold, namely complex hyperbolic 9-space, by a discrete group action!
Using this, Engel and Smillie were able to count the number of triangulations of the 2-sphere with triangles, where at most 6 triangles meet at each vertex. To be precise they computed the groupoid cardinality of the groupoid of such triangulations: some of these triangulations have symmetries, and a triangulation with symmetries contributes to the groupoid cardinality. Engel and Smillie claim the groupoid cardinality is
where is the sum of the 9th powers of the divisors of . (The number of triangles, , is always even.)
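The arithmetic ingredient here, the sum of the 9th powers of the divisors, is easy to compute for anyone who wants to experiment with the formula. A quick sketch in Python (the function name is mine, not from the paper):

```python
# Sum of the 9th powers of the divisors of n: the divisor function
# usually written sigma_9(n).
def sigma9(n: int) -> int:
    return sum(d**9 for d in range(1, n + 1) if n % d == 0)

print(sigma9(1))  # 1
print(sigma9(2))  # 1 + 2**9 = 513
```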
I find this formula hard to believe, so I'll only include it if I can get some concrete evidence that it's true. (The argument is very sophisticated, using modular forms.)
To me, "hard to believe" means that you think it's wrong. Do you think it's wrong?
Since the argument is too complicated for me to find a mistake in it, I'll try to prove the formula wrong in the case . They seem to be claiming that in this case the groupoid cardinality is .
I can see two kinds of triangulation of the 2-sphere with 2 triangles, under the quite general definition Engel and Smillie use. You can take 2 triangles and:
glue the 3 edges of the first to the 3 edges of the second.
for each triangle, glue one of its edges to another to form a cone; then glue these two cones to each other.
Both these have symmetries, and I might be leaving out some other possibilities... but there ain't no way I'm going to get a groupoid cardinality of . So maybe I'm misinterpreting the paper somehow.
Oh, maybe I see. They’re using a whopping big group as the symmetry group, not just the ‘obvious’ symmetries of the triangulation. But I still would like to check their calculation somehow.
In 5 minutes I'm talking here:
https://us02web.zoom.us/j/84333937950?pwd=HusWB3obEDoaMbyK3xymbzhE7JtaTE.1
about categorified rigs!
I got two very actionable suggestions:
I invite y'all to comment on this draft of my next column for the AMS Notices:
The structures he unearthed in this paper are absolutely delightful. He takes concepts that seem rather disorganized, like "the set of all triangulations of the 2-sphere where 5 or 6 triangles meet at each vertex", and many others, and shows how they fit in a neat algebraic framework.
Here's a little thing on medieval physics:
I've been reading a lot about the 'Oxford calculators' and their work on mathematical physics at Merton College around 1300-1350. They came up with the concept of 'instantaneous velocity' (as opposed to average velocity), but not yet derivatives. They proved the Mean Speed Theorem, which tells you how far an object moves if its velocity is changing at a constant rate - but they didn't say falling objects work this way! They also did a lot of work on logic, e.g. the Liar Paradox, but I'm more focused on the mathematical physics. I wrote two articles on this stuff:
I've also been working on 'causal loop diagrams' - a rather bad name for a very interesting concept used in the subject of system dynamics. At my level of generality, a causal loop diagram is a graph with edges labeled by elements of a monoid, called the monoid of polarities. Very often this monoid is taken to be the group , to describe whether one vertex in the graph 'positively' or 'negatively' affects another. But other monoids are used, and I believe we should be quite general.
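To make this concrete, here is a toy causal loop diagram in Python: a graph with edges labeled by elements of the monoid of polarities, here taken to be the group {+1, -1} under multiplication. The example data and names (`edges`, `path_polarity`) are illustrative, not from my blog posts:

```python
# A toy causal loop diagram: edges labeled by polarities +1 or -1.
edges = {
    ("births", "population"): +1,
    ("population", "births"): +1,
    ("population", "deaths"): +1,
    ("deaths", "population"): -1,
}

def path_polarity(path):
    """Compose polarities along a path by multiplying edge labels."""
    result = 1  # the identity of the monoid
    for src, tgt in zip(path, path[1:]):
        result *= edges[(src, tgt)]
    return result

# A loop with polarity +1 is 'reinforcing'; one with -1 is 'balancing'.
print(path_polarity(["births", "population", "births"]))      # +1
print(path_polarity(["population", "deaths", "population"]))  # -1
```

With a different monoid of polarities, only the identity element and the multiplication in `path_polarity` would change.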
Here I explain the basics and show how the adjunction between graphs and categories is important in how causal loop diagrams are currently used:
Here I describe 3 constructions for getting new causal loop diagrams from old. The most interesting one requires that the monoid be the multiplicative monoid of a rig:
Here I explain 'hyperrings' and 'hyperfields', which are like rings and fields, but where addition is multivalued. These may be useful alternatives to rigs when working with polarities:
I need to loop around back and figure out more about how to use rigs and hyperrings as polarities.
Hopefully you're following a reinforcing loop!
Here I compare two approaches to 'polarities' for labeling edges of graphs in system dynamics: rigs and hyperrings:
I conjectured that any 'doubly distributive' hyperring makes its power set into a rig. Someone should have settled this question (since it seems easy to check if true), but I haven't seen it discussed.
I'm pretty sure it happens in the most important example, the 'hyperring of signs' . And in this case, the smallest sub-rig of containing the singletons ( itself) is a very nice 4-element rig which - as I explain - is quite useful in system dynamics. The idea is that something can affect something else 'positively' (+), 'not at all' (0), 'negatively' (-), or in an 'indeterminate' way corresponding to the subset , which means 'it could be anything'.
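Here is a small Python check of this story in the hyperfield of signs: lift the multivalued addition to subsets, and verify that the four sets {+}, {0}, {-} and the 'indeterminate' set are closed under both operations. This is my own illustrative sketch, not code from the blog post:

```python
# The hyperfield of signs: elements -1, 0, +1, with ordinary multiplication
# and a multivalued addition in which (+) plus (-) 'could be anything'.
def hyperadd(x, y):
    if x == 0:
        return {y}
    if y == 0:
        return {x}
    if x == y:
        return {x}
    return {-1, 0, 1}  # the indeterminate outcome of + plus -

def add(A, B):
    """Addition on subsets: the union of all hyperadditions."""
    return frozenset().union(*(hyperadd(a, b) for a in A for b in B))

def mul(A, B):
    return frozenset(a * b for a in A for b in B)

# The 4-element rig: 'positive', 'zero', 'negative', 'indeterminate'.
four = [frozenset({1}), frozenset({0}), frozenset({-1}), frozenset({-1, 0, 1})]
closed = all(add(A, B) in four and mul(A, B) in four for A in four for B in four)
print(closed)  # True
```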
If this makes no sense please don't be put off - I explain this a lot more articulately in the blog article.
Here I study two categories of graphs with labeled edges:
The first has as objects graphs with edges labeled by a fixed set . I call this category because it's the slice category of graphs over the graph with one vertex and one edge for each element of . Being a slice category of a topos, it's a topos, and the obvious forgetful functor
is a discrete fibration. If you don't know what this stuff means, don't worry: I explain it, since I want to be (okay, just barely) comprehensible by people who don't have a PhD in category theory.
The second category seems more important in applications to system dynamics. I call this one . It has as objects finite graphs with edges labeled by a commutative monoid , and its morphisms are very different from those of the previous category: a morphism is a map of the underlying graphs, but when several edges get mapped to one edge we add their labels to get the label of .
Again there's an obvious forgetful functor
but this time I show it's a discrete opfibration.
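The label bookkeeping in this second category can be sketched in a few lines of Python: when several edges land on the same target edge, their labels (elements of a commutative monoid, here the natural numbers under addition) get summed. The names below are mine, purely for illustration:

```python
from collections import defaultdict

def pushforward_labels(edge_map, labels):
    """Sum the labels over each fiber of a map on edges."""
    out = defaultdict(int)  # 0 is the identity of the monoid
    for e, label in labels.items():
        out[edge_map[e]] += label
    return dict(out)

# Two parallel edges e1, e2 collapse onto one edge f: their labels add.
edge_map = {"e1": "f", "e2": "f", "e3": "g"}
labels = {"e1": 2, "e2": 3, "e3": 5}
print(pushforward_labels(edge_map, labels))  # {'f': 5, 'g': 5}
```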
I expect all the category theory here is known to experts, but I wanted to lay it out for people trying to apply category theory to system dynamics.
These days reporters are interviewing me again about the Azimuth Climate Data Backup Project - because we're again facing the possibility that a Trump administration could get rid of the US government's climate data.
From 2016 to 2018 we backed up 30 terabytes of US government databases on climate change and the environment, saving it from the threat of a government run by climate change deniers. 627 people contributed a total of $20,427 to our project on Kickstarter to pay for storage space and a server.
That project is done now, with the data stored in a secret location. But that data is old, and there's plenty more by now.
As before, I'm hoping that the people at NOAA, NASA, etc. have quietly taken their own precautions. They're in a much better position to do it!
Recently I got interviewed for this New York Times article about the current situation:
I wrote a blog post on this and some related issues, like Musk going after US government employees working on climate change:
On the positive side, I've been working with @Owen Haaga, @Kris Brown, @Sophie Libkind, @ww, @Nathaniel Osgood and @Xiaoyan Li on a proposal to work in Edinburgh May 15 - June 30 2025 on category-based software for agent-based models, not only for public health but also the decarbonization of the economy.
Here's a bit of the idea:
Many of the most urgent social, economic and public health problems facing humanity require dynamic simulation modeling for a well-informed response. Sadly, most current models are monolithic one-offs: labour intensive to build, difficult to reuse or adapt, and reliant on proprietary software that researchers are not free to improve. Our team has been developing a flexible open-source modeling framework that promotes transparency, reproducibility and scientific rigor by allowing model components to be shared and changed [ABM,B1,B2,SF]. This software is based on sophisticated mathematics from applied category theory, but it comes with a graphical web-based interface that requires no mathematical expertise to use.
In the current project we plan to develop software that combines two widely used simulation techniques: "agent-based models'' and "stock and flow models''. We call this blend of techniques "hybrid modeling".
An agent-based model or ABM describes the dynamics of one or more open populations of individual agents interacting with each other and with an environment in which they are embedded [MO]. A stock and flow model uses ordinary differential equations to describe the change of continuous quantities in time as they flow from one "stock" to another [Sterman]. By a hybrid model, we mean an ABM that combines discrete updating of agent state with continuous dynamics described by one or more stock and flow models.
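To illustrate the stock and flow idea in its simplest form, here is a toy model with one flow moving quantity between two stocks, integrated with Euler's method. This is a hand-rolled sketch for intuition only, not our actual category-theoretic software:

```python
# One flow, "susceptible -> infected", drains one stock and fills the other.
def simulate(s0, i0, beta, dt, steps):
    s, i = s0, i0
    for _ in range(steps):
        flow = beta * s * i                   # rate of the single flow
        s, i = s - flow * dt, i + flow * dt   # what leaves one stock enters the other
    return s, i

s, i = simulate(s0=990.0, i0=10.0, beta=0.0003, dt=0.1, steps=100)
# Because every unit that leaves one stock enters the other,
# the total population is conserved.
print(round(s + i, 6))  # 1000.0
```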
In our 2024 Mathematics for Humanity research-in-groups project we successfully developed a category-theoretic framework for ABMs based on team member Brown's work [Brown]. We then used this framework to create a powerful software system for developing ABMs [ABM]. The system is now being used for research in epidemiology, and as a teaching tool. However, this system does not yet support hybrid modeling.
In 2022, several of our team members joined other collaborators to create a software system for stock and flow models [B1,SF]. In subsequent years they created a web-based interface for this system [B2,ModelCollab], which is now being used for research and teaching: not only teaching at the university level, but also in a program involving high school math clubs focused on minority youth in Toronto. However, this software does not allow the creation of ABMs.
In the proposed project, we plan to develop the mathematics and then the software required for a disciplined approach to hybrid models. One key feature will be the ability to "turn on" continuous dynamics of various kinds in various specific states, and disable them in other states. To give an example of great significance in gestational diabetes: when a woman becomes pregnant, but only then, fetal growth and her body weight evolve according to specific dynamics studied in recent research by Thomas and others [T].
We then plan to implement models that demonstrate the capacity of our software in two spheres: public health (gestational diabetes) and economics (labour shocks as the economy decarbonizes).
In case anyone is interested in the references, to learn more:
Exciting developments @John Baez ! Have you started developing tutorials/guides for your software?
Re archiving US climate data: I don't live in the US, but I still think it would be helpful for a potential effort to know what skills were required. Who could be an effective volunteer for archiving government data?
Our team found that the most effective people were knowledgeable about data formats and how to transfer large amounts of data on the internet, and had a good internet connection attached to a large amount of storage. One winds up transferring data for days, and needs to make sure that the inevitable glitch doesn't require starting all over!
It's also good to have someone willing to fish around and look for different databases.
I believe our team, who had 3 or 4 such experts, was far more effective than the "hackathons" where a bunch of people got together for a weekend. We got 30 terabytes of data, while they probably got much less.
Here's the list of databases we nabbed:
https://math.ucr.edu/home/baez/azimuth_backup_project/our_progress_20190412.pdf
Alas, the URLs are not all documented. :cry: That was really dumb of me.
The main reason you'd want people in the US is that the internet connection might work a bit faster. But after collecting our data we transmitted it to servers in Germany for storage.
Mind you, I was just the fund-raiser and publicist. If someone wants to learn the art of downloading large databases, they should talk to Jan Galkowski.
Morgan Rogers (he/him) said:
Have you started developing tutorials/guides for your software?
For now I recommend that people watch some of these videos, which are Nate Osgood's talks and courses.
Yesterday and today:
Today I relaxed by writing a blog post:
Here's a summary version with less detail:
101 starship captains, bored with life in the Federation, decide to arrange their starships in a line, equally spaced, and let them fall straight into an enormous spherically symmetrical black hole — one right after the other.
As the 51st captain, say Bob, falls into the black hole, he sees 50 starships in front of him and 50 behind him. This is true at all times: before he crosses the horizon, when he crosses it, and after he crosses it.
Captain Bob never sees any starship cross the horizon until he crosses it. At the moment he crosses the horizon, he sees all 50 starships ahead of him also crossing it — but not the 50 behind him.
However, when any of the starships behind him crosses the horizon, the captain of that starship will see Bob also crossing the horizon!
Captain Bob never sees any starship hit the singularity — not even the 50 starships in front of him. The singularity is always in his future, until he hits it.
Thus, as Captain Bob falls into the black hole, he sees his partner, Captain Alice, in front of him for the rest of his short life. But how much is she redshifted?
Greg Egan made some assumptions and graphed the result. This shows the frequency of the light Bob sees divided by the frequency of light Alice emitted, as a function of time as ticked off by Bob’s watch:
Thus, “1” means no redshift at all, and smaller numbers mean Alice looks more redshifted to Bob!
So: Alice as seen by Bob becomes more and more redshifted as time goes by. She becomes infinitely redshifted at the exact instant Bob hits the singularity!