You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
I'm pretty old... Ask Me Anything and if I'm still awake I'll answer.
Okay. I'll say a tiny bit (but then I'll eat dinner). I started out being obsessed with how we could know anything, so I studied logic in high school and as an undergrad, taking courses with Benacerraf and Kripke. But I was also obsessed with physics since my uncle (Joan's dad) was a physicist. So, I did my undergrad thesis on computability in quantum mechanics. Then I temporarily lost interest in logic.
John Baez said:
I'm pretty old... Ask Me Anything and if I'm still awake I'll answer.
hmm, I don't agree that you're pretty old, as this would make me older still. But I like the idea of AMAs, and I wonder what you have learned from, approve of, or would do differently about your experience of dedicating yourself to climate change in Singapore. Maybe you've written blog posts or books about it and I just missed them; in that case a link is good enough. Thanks!
I think I'll plod along chronologically and answer Valeria's question about the climate change stuff later, if that's okay.
Rongmin Lu said:
John Baez said:
So, I did my undergrad thesis on computability in quantum mechanics.
I believe this was published in TAMS in 1983?
Yes! I feel like explaining why I worked on recursivity in quantum mechanics. When I was taking a course on general relativity I got really excited by John Wheeler's crazy ideas near the end of his textbook Gravitation. One of them was the "participatory universe", the idea that the universe only exists because it has observers in it. I dreamt of making this precise somehow, so I started thinking about how complex a universe had to be to have life in it that was complex enough to understand the laws of that universe. I couldn't get far with that, but in the process I started thinking about whether time evolution in quantum mechanics was computable. I reinvented a bunch of the theory of computable functions between metric spaces, and when I got a thesis advisor, John Burgess in the philosophy department, he said "oh, you've reinvented a bunch of stuff, read Descriptive Set Theory by Moschovakis and use that." So I did.
By the time I was done I decided that computability shed no really interesting light on physics. So I decided to work on another one of Wheeler's dreams: coming up with a theory of quantum gravity!
I went to grad school in the math department at MIT and learned a bunch of math, but when it came to getting an advisor I was frustrated that nobody in the math department was really interested in quantum gravity. The people I really liked were Dan Quillen, Victor Guillemin and Irving Segal. My friend Varghese Mathai and I sat in on Quillen's lectures where he was trying to come up with an elementary proof of the Atiyah-Singer index theorem. Mathai wound up working with Quillen. I decided that if I were going to work on quantum gravity I should at least learn quantum field theory well, so I decided to work with Segal on that.
Thus, I studied C*-algebras and wrote a little paper on Bell's inequality for C*-algebras.
This is great! I didn't know about this Bell's inequality for C*-algebras until now. And that you and Varghese Mathai were grad students around the same time. Did the idea about generalizing Bell's inequalities come from asking what they look like in a C*-algebraic language?
But mainly I worked on conformally invariant linear QFTs in 4 dimensions, like the massless scalar field and Maxwell's equations. I wanted to rigorously construct the massless φ⁴ theory, a simple conformally invariant nonlinear QFT in 4 dimensions, but I didn't get anywhere with that. My thesis was not very interesting, and I published the interesting bit in Wick products of the free Bose field. I also helped Segal write a book summarizing his work on quantum field theory, along with a postdoc at MIT named Zhengfang Zhou.
Arthur wrote:
Did the idea about generalizing Bell's inequalities come from asking what they look like in a C*-algebraic language?
Yes, I guess I'd mainly seen Bell's inequality phrased in a rather old-fashioned way and I wanted to make it be about commutative vs. noncommutative C*-algebras. Segal was very interested in thinking about probability theory algebraically so I probably absorbed that attitude from him.
And these days we're trying to think more about probability theory categorically! Hah!
Rongmin Lu said:
Mathai and Quillen wrote up a paper about the Thom form they found, which was later applied in TQFT by Atiyah-Jeffrey. That's known as the Mathai-Quillen formalism.
Right. I used to know that stuff because Mathai was always explaining it to me when we were students... oh yeah, and Quillen had lectured about it in his course! Quillen would work out new ideas in a course he always taught, which was always called "Homological Algebra" regardless of what it was about. He would lecture from an old-fashioned composition notebook. Each hour the lecture would start out flawlessly, but often at the end he'd reach the material he hadn't quite figured out yet, and sometimes he'd get stuck. Then the next class he'd go over that material again, having solved whatever problem he'd gotten stuck on. It was quite inspiring.
By the time I got my PhD I was pretty depressed about my work in math: I hadn't been working on what I really wanted, which was quantum gravity, and my thesis hadn't accomplished very much. People at MIT and Harvard were getting really interested in supersymmetry and string theory and basically whatever Witten was doing - he'd come up and give talks. But I didn't believe in string theory so I couldn't work on it.
I was thinking of quitting math and switching to either philosophy or music. For a couple of years I'd been taking Gian-Carlo Rota's courses on philosophy at MIT. He's mainly famous for combinatorics but I never took a math course from him - unfortunately. I had a friend who was really into Heidegger's Being and Time, and when I saw Rota was teaching a course on that book, I took it. It was quite exciting.
As a grad student, did you get to think about quantum gravity on the side? Were you reading things? Or was there simply too much to do (responsibilities, etc.) to spend some time on it?
Rota would lecture from notecards from 7 pm to 10 pm on Thursdays, with a break to smoke in the middle, and he'd go out to dinner with some of the more enthusiastic students afterwards, usually at Legal Seafood. I was very shy and had few friends back then, but some of my best friends were in this philosophy discussion group. We started meeting on Fridays at Au Bon Pain in Harvard Square, reading through books by Heidegger out loud... and later dialogues of Plato. We also sat in on a course taught by Gadamer (a European philosopher).
Arthur Parzygnat said:
As a grad student, did you get to think about quantum gravity on the side? Were you reading things? Or was there simply too much to do (responsibilities, etc.) to spend some time on it?
I did not have too much to do. I had almost no friends, and I had a fellowship so I didn't need to be a TA, and I spent huge amounts of time in the library reading. This continued the pattern I started as an undergrad. So yes, I was thinking about quantum gravity, reading stuff about it.
I see. Did you by any chance feel pressure to not work on quantum gravity simply because your interests were not in line with string theory at the time?
Anyway, when I told Rota I was thinking of switching to philosophy, he said that was a bad idea. He said I should get tenure in math and then do philosophy on the side (like he did).
And that was very good advice. (I would actually have been very bad as a professional philosopher, and quite miserable I think. It's easier in math to feel like you've actually done something that will remain valuable - not necessarily great, but "true".)
Arthur Parzygnat said:
I see. Did you by any chance feel pressure to not work on quantum gravity simply because your interests were not in line with string theory at the time?
I wouldn't quite say "pressure" - I didn't talk to anyone about this. :upside_down: But certainly I didn't see any encouraging evidence that approaches to quantum gravity other than string theory were something people would find interesting.
I got a postdoc at Yale, and I spent a lot of time thinking about noncommutative geometry. I wanted to solve the problems of quantum gravity by replacing spacetime by a noncommutative algebra where the uncertainty principle acted to create an effect sort of like a shortest length scale. I never really got anywhere on this.
When you were reading about quantum gravity as a grad student/postdoc, did you have any papers that you would say have greatly influenced your thoughts?
Or maybe helped bring some of your ideas to surface? Or simply those that greatly inspired you?
Hmm. It was the final chapters of Gravitation that inspired me the most. I found Connes' work on noncommutative geometry very inspiring too. This was before Connes started working on quantum gravity or particle physics using noncommutative geometry. I was just fascinated by the idea of redoing differential geometry with noncommutative algebras.
It wasn't until later that I actually met people doing quantum gravity using approaches other than string theory!
At Yale I wrote papers on quantum field theory and nonlinear wave equations with Zhengfang Zhou - that's the stuff I actually published. Zhengfang was really good at analysis - he loved playing with inequalities to get things done. I came up with some nice ideas but when it came to their technical implementation Zhengfang would always save the day.
A good example of a paper like this was Scattering and complete integrability for the massive φ⁴ theory. At one point we use the space L^{24/7} - the space of functions whose 24/7th power is integrable. That's the sort of thing Zhengfang could do!
I got my job at U.C. Riverside based on my work on nonlinear PDE, and even today they consider me an "analyst" at UCR.
I like analysis but I don't have the sort of passion for it or mastery of it that someone like Zhengfang Zhou has.
So, when I started work at UCR I continued working on nonlinear wave equations but I started branching out into applications of topology to physics - stuff like topological solitons, quantum groups, stuff with braid groups.
The big change happened when I went to a conference called Mathematical Aspects of Classical Field Theory, maybe in Vancouver or somewhere, and I met Abhay Ashtekar, Chris Isham and Renate Loll, who gave talks about loop quantum gravity! I'd never heard of loop quantum gravity before, and I really have no idea why they were talking about this at a conference on classical field theory.
But their approach got me really excited.
Thanks! It was great meeting these folks, who became friends later on. Ashtekar gave a talk on two different meanings of "gauge transformation" in physics, I think, but I guess it didn't appear in that volume.
I remember Isham in his talk saying "when I learned the proof of the Gelfand-Naimark theorem, it said you could think of a point as a function of functions, and I thought that was the most amazing profound thing I'd ever heard". And then he chuckled, as if sort of laughing at his younger self.
Since I loved (and love) the Gelfand-Naimark theorem, I was delighted to meet someone working on quantum gravity who knew and loved that theorem!
The relevance was that he was creating a commutative C*-algebra built from loops embedded in a manifold - one of the ideas in loop quantum gravity.
John Baez said:
Okay. I'll say a tiny bit (but then I'll eat dinner). I started out being obsessed with how we could know anything, so I studied logic in high school and as an undergrad, taking courses with Benacerraf and Kripke.
But I was also obsessed with physics since my uncle (Joan's dad) was a physicist. So, I did my undergrad thesis on computability in quantum mechanics. Then I temporarily lost interest in logic.
Oh, so that's actually really interesting. As I mentioned before, I came across Kripke and D. Lewis in the 1980s and found them really helpful in thinking about all kinds of topics, from philosophy of language to causation, epistemology, and recently security. With the possible-worlds interpretation of quantum mechanics, it even seemed to make QM quite intuitive. Looking around I found this 2004 article by Lewis, How Many Lives Has Schrödinger's Cat? - this must have been published right before his death - and I also found a critique by David Papineau, David Lewis and Schrödinger's Cat. I can't really go very far in the direction of QM, or I'll never get done what I was trying to do on security. But it would be nice to know if there is a point where, in CT, modal logic and QM do come together.
I know a lot more about quantum logic than modal logic - back when I was in college I really liked reading about quantum logic. I didn't study any modal logic with Kripke; I took a course with him on Goedel's completeness theorem where he gave a painstakingly detailed proof. Amusingly, I remember my parents first taking me to Princeton and sitting in on a lecture by Lewis about possible worlds. My mother had a more romantic notion of philosophy and found it completely bewildering and off-putting.
From my limited knowledge of modal logic, it doesn't seem to have much to say about the truly distinctive features of quantum mechanics. Probably someone should invent "quantum modal" logic. Probably someone already has.
I'm really going to bed now. I'd be happy to continue tomorrow; I've gone up to 1990 and I haven't mentioned anything about categories, in part because I didn't say much about how I fell in love with braided monoidal categories!
Rongmin Lu said:
There's been some work on modal quantum logic. This paper by van Fraassen seems to be one of the earliest that got a bit of attention, and Bob Coecke seems to be aware of some of this work.
The van Fraassen article Semantic Analysis of Quantum Logic has a remarkable abstract :smiley:
This paper has a beginning, a middle, and an end. If these parts are to follow the dramatic unities, they will lead from suffering through recognition to reversal; but of this ideal they may fall short.
John Baez said:
By the time I was done I decided that computability shed no really interesting light on physics.
Did you change your mind? Kolmogorov had spent 15 years thinking about the question: "Which local function is the entropy the integral of?"
John Baez said:
Rota would lecture from notecards from 7 pm to 10 pm on Thursdays, with a break to smoke in the middle, and he'd go out to dinner with some of the more enthusiastic students afterwards, usually at Legal Seafood. I was very shy and had few friends back then, but some of my best friends were in this philosophy discussion group. We started meeting on Fridays at Au Bon Pain in Harvard Square, reading through books by Heidegger out loud... and later dialogues of Plato. We also sat in on a course taught by Gadamer (a European philosopher).
Are you saying you had a course by Hans-Georg Gadamer? Which year?
John Baez said:
Okay. I'll say a tiny bit (but then I'll eat dinner). I started out being obsessed with how we could know anything, so I studied logic in high school and as an undergrad, taking courses with Benacerraf and Kripke.
But I was also obsessed with physics since my uncle (Joan's dad) was a physicist. So, I did my undergrad thesis on computability in quantum mechanics. Then I temporarily lost interest in logic.
Are you more or less clear on the "how we can know" question today? You describe how that made you go to contemporary thinkers, as opposed to the classical Spinoza, Descartes, Kant, etc. Do you think we got closer to that question after the positivists at the turn of the last century?
dusko said:
John Baez said:
By the time I was done I decided that computability shed no really interesting light on physics.
Did you change your mind?
By now I'm too interested in other things to pay much attention to work on quantum computation or computability in physics, like that paper on undecidability of the spectral gap, or MIP* = RE.
I did write a paper on algorithmic thermodynamics with Mike Stay, which is based on the fact that you can see Kolmogorov entropy as a special case of ordinary Shannon entropy if you use relative entropy. I think this idea is interesting but I haven't figured out what else to do with it beyond what's in that paper... and it seems nobody else knows what to make of it either.
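One standard way to make that bridge precise - this is the textbook formulation from algorithmic information theory (as in Li and Vitányi's treatment), not necessarily the exact statement in the Baez-Stay paper - says that for a computable probability distribution P on bit strings, the expected prefix-free Kolmogorov complexity matches the Shannon entropy up to a constant depending only on P itself:

```latex
% H(P): Shannon entropy of P;  K(x): prefix-free Kolmogorov complexity of x;
% K(P): length of a shortest program computing P.
% For any computable probability distribution P:
0 \;\le\; \sum_x P(x)\,K(x) \;-\; H(P) \;\le\; K(P) + O(1),
\qquad
H(P) \;=\; -\sum_x P(x)\,\log_2 P(x).
```

The relative-entropy reading: the gap on the left equals $\sum_x P(x)\log_2\bigl(P(x)\,2^{K(x)}\bigr)$, which is exactly the relative entropy of $P$ with respect to the (unnormalized) "algorithmic" semimeasure $Q(x) = 2^{-K(x)}$.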
dusko said:
Are you saying you had a course by Hans-Georg Gadamer? Which year?
Probably 1987 or 1988, at Boston College - he had an arrangement where he'd visit and teach there.
Our philosophy gang - the gang who went to Rota's lectures on phenomenology - took him out to dinner. One of us asked what philosophy we should read. He said "Plato and Chuang-Tse". I'd already read Chuang-Tse so we read Plato's dialogues out loud - which is lots of fun, especially the ones that are really dialogues instead of monologues!
I was surprised to read that you were shy in grad school, since now I think of you as quite gregarious both online and in person. What changed for you?
I'm gonna have to go shopping now, but I was just about to say something relevant. I was gripped with nihilistic despair from about 1978 (when I graduated from high school) to about 1989 (when I met my wife) - the feeling that it was impossible to determine what was really worthwhile. Some of this came from reading Nietzsche and logical positivists, but most was probably due to being lonely. I found it very hard to talk to people because I felt so different from them. I wanted a girlfriend but I was too scared to go after the women I really liked.
Anyway, reading Heidegger helped a bit - that's what this has to do with Gadamer!
That's interesting, because Heidegger especially gives me vibes of impending doom. He describes developments in what we make with nature and what we make of nature (how humans end up perceiving it), and the only folks I ever hear talking about him make me think we're headed down an unstoppable gloomy path.
Yes, the later Heidegger is pretty gloomy in the way you describe. I was talking about Being and Time. It helped break down some sort of over-reliance on "reason" and "logic" that I was suffering from.
If you read Camus' La Peste you are in for a really gloomy time. I think there is something about it that gets one thinking that any thought of life having meaning is an illusion. So if you want to also be a realist, you can only see the world in black. But it was the 1980s too, which were pretty bad somehow. Suddenly there were goths, punks and skinheads around (at least in the UK). So Nietzsche can have that effect too. Also, the problem is that he deconstructs concepts such as causality, questioning everything. In the end it becomes difficult to think.
So I felt something similar regarding reason and logic, having gotten into programming very young. That is exactly where modal logic was liberating. It showed how you could have logic and imagination combined. Plus one could start to see how concepts such as causality and knowledge could have interesting definitions.
Of course I read Camus in high school, and Kafka, and Dostoevsky, and everything else that fit my mood. I don't think the zeitgeist had much to do with my problems - it was mainly my analyzing everything and not realizing that it's better to admit life mainly proceeds irrationally, driven by "animal spirits" rather than some deductive proof that some action is optimal.
One big step on my "road to recovery" was joining a vegetarian coop in my last year of college at Princeton. It was called 2 Dickinson St. and it's where all the misfits went: most students joined "eating clubs", the idea of which repelled me. It being the late 80's, there was a mix of hippies and punks there. Very nice people. And so I had friends and a kind of community. But this went away when I graduated and went to MIT... and I was very bad at seeking out compatible people, or environments, because I was so shy.
I would distract myself from my misery by thinking about math and physics, since when I was actually absorbed in those subjects I was usually rather happy or at least not depressed. So I learned a lot of math and physics, but it took me longer to sort myself out.
It's strange trying to imagine what I would have done differently if I were socially well-adjusted in college and grad school. I probably would have worked on some trendy topic in mathematical physics and done a PhD thesis with some nice results in it, and gotten a better job.
The 80s was a world where nuclear holocaust was on the news every day, the way Covid has been for the past month. So I think that does affect the zeitgeist.
Hmm, I wasn't so worried about nuclear holocaust.
My parents were a lot more worried about it back in the 1950s. They were thinking of moving to Australia to avoid the fallout. But they weren't worried enough to actually do that.
I chose quantum gravity as a subject to work on because I enjoyed grandiose ambitions - in retrospect, those helped keep me happy. I became very happy when I actually started working on loop quantum gravity in the early 1990s. It was funny to hear that Lee Smolin had been working on that in the physics department at Yale back when I was a postdoc there (1987-1989). I'd never known about him then, even though I met a student of his who was struggling to find solutions to some constraint equations (the initial data constraints for the Ashtekar formulation of general relativity, I now know). The way he put it, it sounded hopeless, so I was not attracted. If I'd met Smolin, who is always much more optimistic, I would have started work on loop quantum gravity about 3 years earlier!
In the end I feel my work on quantum gravity didn't accomplish much, but at the time there were moments of real ecstasy, like when I came up with spin foams in 1997. In hindsight, the main good thing about all this work on quantum gravity was that I was trying to develop a category-theoretic approach to fundamental physics, and then I realized I needed an n-categorical approach, and this led me to meet James Dolan, and we figured out a lot of stuff.
I never really got around to chasing down my dream of n-categorical physics; it turned out to be quite hard, in part because n-categories hadn't been developed yet - so it was easier and more productive to just think about n-categories!
Wait I feel like we jumped a bit! When and how were you introduced to categories in the first place? What drew you to categorical ideas?
I find it hard to know what counts as success in physics and math. I'd bet a lot of the things you say "didn't really work out" have value, or would help people thinking in a similar direction - even if just to reject things more quickly.
Yes, I jumped forwards about a decade.
Is Lawvere involved in the story somehow? :D
Thanks, Nikolaj. It's true that all sorts of ideas that people discard turn out to be on the right track.
I've met Lawvere a couple of times, and he's a nice guy, but he basically hates my work.
If you'd been reading the category theory mailing list a decade or two ago you would have known that. :upside_down:
Nikolaj Kuntner said:
I find it hard to know what counts as success in physics and math. I'd bet a lot of the things you say "didn't really work out" have value, or would help people thinking in a similar direction - even if just to reject things more quickly.
There's a funny story about this in the physics literature on the "no-cloning theorem". Asher Peres recalls receiving and reviewing a paper that made some outlandish claims, which he knew were wrong, but not why. He encouraged the editors of the journal to publish the paper anyway, and that led to the no-cloning theorem (https://arxiv.org/abs/quant-ph/0205076). That's a bit extreme, though, because the authors had made an incorrect assumption. But the point is that it led to many new discoveries. And that could happen at any moment.
Arthur wrote:
When and how were you introduced to categories in the first place? What drew you to categorical ideas?
I learned a bit about categories when I was learning algebraic topology in grad school, and later as a postdoc when I was trying to understand things like cyclic cohomology and Hochschild homology - since they were important in Connes' work on noncommutative geometry. But this was a fairly pedestrian form of category theory: abelian categories, and a bit of derived functors. It was only much later, when I started understanding n-categories thanks to lots of conversations with James Dolan, that I really got the point about things like homological algebra: like, that chain complexes are strict stable infinity-groupoids, a very tractable sort of infinity-category. Back then I just treated homological algebra the way most algebraists do, as some sort of computational framework.
In what year did you meet Dolan?
In my early career I didn't know about topos theory. If I'd heard about it sooner, from the right sort of person, I might have stayed interested in logic! As an undergrad I got the impression that logic consisted of all these mind-blowing wonderful theorems like Tarski's undefinability of truth and Goedel's incompleteness theorems and the Loewenheim-Skolem theorems, followed by more technical stuff like forcing (which I didn't understand), followed by abstruse investigations of large cardinals, which seemed pointless to me.
Umm, I'll have to guess when I met James Dolan. I could figure it out by reading old sci.math newsgroup articles, because we met on "usenet news", which is where people discussed math and science after the internet was invented and before the world-wide web.
I am very bad at remembering dates since I don't think about the past much. I think maybe 1991 or 1992?
He would write these great intuitive explanations of advanced math in all-caps posts to sci.math.
Okay, just getting a sense of the progression of ideas. Cuz I see you have a paper on Hochschild homology in a braided tensor category in 1994.
Okay, I may have my dates off by 3 or 4 years.
Also, it takes a while for papers to show up.
But the progression went something like this:
I got interested in topological aspects of condensed matter physics, in part from a physicist named Rossen Dandoloff who worked at UCR for a while.
Around 1989 Witten had done his thing connecting the Jones polynomial and Chern-Simons theory.
When people realized that 3d quantum gravity was exactly solvable using Chern-Simons theory, and that some versions of it gave a TQFT, I got really interested in that.
I liked the idea that ultimately quantum gravity would be a very pure, very algebraic sort of thing - not just ordinary quantum field theory involving lots of geometry of manifolds. I really liked how any modular tensor category gives a 3d TQFT. So I got really interested in braided monoidal categories, modular tensor categories and the like.
At some point Louis Crane argued that if 3d quantum gravity used a braided monoidal category, 4d quantum gravity should use a braided monoidal 2-category!
I think I started This Week's Finds right around then.
Yes, this was around 1993:
http://math.ucr.edu/home/baez/week2.html
Categorical physics, by Louis Crane, preprint available as hep-th/9301061 in amstex.
A Categorical construction of 4d topological quantum field theories, by Louis Crane and David Yetter, preprint available as hep-th/9301062 in latex.
Hopf Categories and their representations, Louis Crane and Igor Frenkel, draft version.
Categorification and the construction of topological quantum field theory, Louis Crane and Igor Frenkel, draft version.
These outline Louis Crane's vision of an approach to generally covariant 4-dimensional quantum field theories (e.g. quantum gravity or a "theory of everything") based on 2-categories. "Categorical physics" sketches the big picture, while the paper with Yetter provides a juicy mathematical spinoff - the first known four-dimensional TQFT, based on the representations of quantum SU(2) and very similar in spirit to the Turaev-Viro construction of a 3d TQFT from quantum SU(2). The papers with Frenkel (apparently still not in their final form) describe the game plan and hint at marvelous things still to come. The conjecture is stated: "a 4d TQFT can be reconstructed from a tensor 2-category". This follows up on Crane's earlier work on getting 3d TQFTs from modular tensor categories (big example: the categories of representations of quantum groups at roots of unity). And the authors define the notion of a Hopf category, show how the category of module categories of a Hopf category is a tensor 2-category, and use "categorification" to turn the universal enveloping algebra of a quantum group into a Hopf category. Sound abstract? Indeed it is. But the aim is clear: to cook up 4d TQFTs from quantum groups. Such quantum field theories might be physically important; indeed, the one associated to SU(2) is likely to have a lot to do with quantum gravity.
I am currently perusing Kapranov and Voevodsky's massive paper on 2-categories, which seems to be the starting point for Crane's above papers and also those of Carter/Saito that I mentioned last week. Next week I should post an outline of what this paper does.
This was before I physically met James Dolan. At some point we met in Montreal, where I'd been invited by Prakash Panagaden, I think.
That's also the day I met Joyal.
But anyway, here's an example of how stupid I was back in 1993: I was all excited about 2-categories, and it never even occurred to me to wonder about 3-categories or 4-categories.
I was not really thinking like a good mathematician back then. I was focused on quantum gravity.
So at some point James Dolan told me "you know, people think about n-categories for n > 2".
Nice, the seeds of the cobordism hypothesis.
Right! Later James Dolan said he'd been wondering about me. He said he'd been trying to understand n-categories and homotopy theory for a long time, and then he saw this guy on the internet talking about n-categories and some weird physics called topological quantum field theory. He thought that was very puzzling and mysterious.
So he came to Riverside to be a grad student, but mainly we just talked a lot about n-categories, gradually trying to combine the homotopy theory and the TQFT perspectives, and this led to the tangle hypothesis and the cobordism hypothesis.
And apparently I wrote up the paper on that by around February 1995.
Do you feel some people that you encountered should have not switched fields or interest?
Interesting question! I'm usually too worried about myself to think about whether other people are doing the right thing. Too self-centered, I guess.
I remember hearing about certain mathematicians who were really good at something and then switched, and how people were disappointed in this - like when Voevodsky stopped working on motivic cohomology. But if you read about Voevodsky's turbulent mental state at the time you'll see why he did this... and in this particular case, his later work on univalent foundations seems to have justified the switch.
He wrote:
In 2006-2007 a lot of external and internal events happened to me, after which my point of view on the questions of the “supernatural” changed significantly. What happened to me during these years, perhaps, can be compared most closely to what happened to Karl Jung in 1913-14. Jung called it “confrontation with the unconscious”. I do not know what to call it, but I can describe it in a few words. Remaining more or less normal, apart from the fact that I was trying to discuss what was happening to me with people whom I should not have discussed it with, I had in a few months acquired a very considerable experience of visions, voices, periods when parts of my body did not obey me, and a lot of incredible accidents. The most intense period was in mid-April 2007 when I spent 9 days (7 of them in the Mormon capital of Salt Lake City), never falling asleep for all these days.
Almost from the very beginning, I found that many of these phenomena (voices, visions, various sensory hallucinations), I could control. So I was not scared and did not feel sick, but perceived everything as something very interesting, actively trying to interact with those “beings” in the auditorial, visual and then tactile spaces that appeared around me (by themselves or by invoking them). I must say, probably, to avoid possible speculations on this subject, that I did not use any drugs during this period, tried to eat and sleep a lot, and drank diluted white wine.
Another comment: when I say “beings”, naturally I mean what in modern terminology are called complex hallucinations. The word “beings” emphasizes that these hallucinations themselves “behaved”, possessed a memory independent of my memory, and reacted to attempts at communication. In addition, they were often perceived in concert in various sensory modalities. For example, I played several times with a (hallucinated) ball with a (hallucinated) girl—and I saw this ball, and felt it with my palm when I threw it.
Despite the fact that all this was very interesting, it was very difficult. It happened for several periods, the longest of which lasted from September 2007 to February 2008 without breaks. There were days when I could not read, and days when coordination of movements was broken to such an extent that it was difficult to walk.
I managed to get out of this state due to the fact that I forced myself to start math again. By the middle of spring 2008 I could already function more or less normally and even went to Salt Lake City to look at the places where I wandered, not knowing where I was, in the spring of 2007.
Okay, now that I think about it, I can think of more examples of people who quit doing what they were great at, disappointing their fans. Grothendieck, obviously! Or Mumford, who sort of quit algebraic geometry at one point.
Also various musicians like Miles Davis, who switched styles multiple times.
I think we should be very sympathetic to people who quit working on something they're good at. Often the same thing that made them great makes them unable to keep cranking out work in the same style!
There's also the phenomenon that people who receive great awards can't top their earlier work in the same field.
Yes. The Nobel Prize is generally considered the kiss of death because people who get it become flooded with distractions and also feel the need to do great things.
Most great work doesn't start from someone sitting down and deciding to do something great.
Thank god I didn't get one yet.
Yes, I've told them several times: "no, don't give it to me!" :upside_down:
That reminds me of your point on logic and irrationality earlier. One way I see these reconciled in modal logic is that the actual world is considered an indexical (but each of us has counterparts in other possible worlds). So we might mistakenly think we are in one world, and then through a realisation discover we need to shift the position in the space of possibilities where we are at. Could this indexicality be the logical anchor of "being in the world" of Heidegger? I think one can illustrate this with an example: someone walking down a dark street hears footsteps approaching them from behind. She does not want to turn around for fear of showing her fear. At this point her world contains as close possibility ones that could involve unpleasant situations. Turning around she sees her brother. This information immediately leads to a reassessment of which worlds are close, which is the feeling of relief. The fear vanishes. The birds are chirping in the trees.
I recently had a lot of fun reading Pais' biography of Einstein, Subtle is the Lord. I used to wonder a lot about how, after so many successes, he sank into completely unsuccessful work on unified field theories in his later years... basically after his work on general relativity in 1915, except for some small things. ("Small things" for Einstein include his work on Bose-Einstein condensates, his paper on the Einstein-Podolsky-Rosen paradox, and his work on wormholes with Rosen!)
This biography explains a lot about what he was doing in those later years. Largely he was grappling with the conceptual puzzles of quantum mechanics that everyone is still grappling with today! He just couldn't get interested in mesons and such.
@Henry Story I think Heidegger would reject trying to sidestep possible shortcomings of (classical predicate) logic à la early Russell via formalizations (such as formal modal logic). There's a certain flavor of "a fox in the woods wouldn't overthink his life either". I'm drawing a caricature now.
But actually, in many philosophy schools, I feel there's a general disdain for what's taken to be a naive logicism.
Chomsky comes to mind when thinking of someone who later did something completely different than in his early, academic, life.
Yes, basically I think Heidegger would have considered formal logic as part of what he called the "enframing" (the Gestell). This is the way of approaching the world that underlies technology.
Such enframing pertains to the manner in which reality appears or unveils itself in the period of modern technology, and people born into this "mode of ordering" are always embedded in the enframing.
But there is a huge transformation that occurs with Kripke and Hintikka onwards in logic. They actually introduce the concept of a frame (which I had never thought of as related to this enframing, as I have not read Heidegger directly, but only come across him via Bernard Stiegler, a philosopher of technology who is now very interested in entropy and negentropy, but from a humanities perspective).
@David Corfield recently pointed me to this new book Necessity Lost: Modality and Logic in Early Analytic Philosophy. (Sorry if this is a bit off topic.)
David Corfield, James Dolan, and I were trying to think of modal logic as "higher logic" a while back - a system where you could quantify over not just elements of a set but also sets in a category and categories in a 2-category and so on. Something like that.
It didn't get too far back then, mainly because none of us were deeply committed to working on this. But David has gotten back to modal logic recently, and looking at it with new eyes thanks to homotopy type theory.
I am currently completely uninterested in modal logic. As Sting sang,
One world is enough for all of us...
I am not sure if you need to believe in possible worlds as real. They are idealizations.
I was really struck by his sentence in chapter 4 of Modal HoTT that
where HoTT itself is the internal language of (∞,1)-toposes, modal HoTT is the internal language for collections of (∞,1)-toposes related by geometric morphisms.
I really don't care whether possible worlds are real. Since this thread is about my ideas, I'll come out and say what I always say, which is that the "existence" or "reality" of things like numbers or possible worlds is the least interesting topic I can imagine, in part because people who discuss existence never seem to ask what existence or reality means.
Even if they did discuss what these things mean, which would be a prerequisite for really doing philosophy, I would probably be uninterested.
That's just because I've rarely seen discussions of this that get anywhere interesting.
And there are so many wonderful things to think about, where progress is so much easier!
I am mostly interested in it for reasoning about the architecture of the contemporary Gestell that is the Internet and the Web, where one has to reason about different points of view on reality. But I am not in a good position to tell what is easy and not easy in the space of CT. I was hoping it would all have been resolved by now. It looks like I may have been overly optimistic. :-)
Sorry for the interruption. Please continue.
Well, I think I'll wait until someone asks me something.
Do you think physicists barely learn category theory because it's inherently difficult to learn? What would change that?
Category theory is not really difficult to learn; the main problem is that most people suck at explaining things.
I do think that "up to isomorphism" is a bit more elusive than, say, Riemannian geometry.
Physicists are at an extra disadvantage compared to mathematicians because their training makes it hard for them to think abstractly. Their training is focused on making them good at "becoming one with the physical system under study" - for example, knowing by feel what's the difference between an ordinary hydrogen atom and one with a muon replacing the electron.
"Up to isomorphism" is a lot simpler than Riemannian geometry; you can teach it with real-world examples like cups. But most people don't have the benefit of someone who teaches it that way.
I was very lucky to have James Dolan explain a lot of category theory to me in a really clear way, and now I can teach other people that stuff.
But most math books need to be decoded because they don't come out and talk in plain English: you have to provide the intuitions yourself.
Physicists are trained to work through specific models. Rarely are they taught (as undergrads/grads) to think about the structure of models and the relationships between them. Personally, TheCatsters were my saving grace into category theory (Mac Lane was too daunting for me).
And TBH, @John Baez 's papers helped dramatically when I was in grad school.
John Baez said:
Category theory is not really difficult to learn; the main problem is that most people suck at explaining things.
I doubt that's really the main problem. There are enough decent and accessible resources around by now. Could it be that most people suck at learning?
Well, I guess we can blame either party! For people who don't suck at learning, it's fine to have teachers that suck at teaching.
In fact sometimes they don't need teachers at all.
But I really don't know a book on category theory that explains things in a way that I'd like to have learned from. I was just lucky to have James Dolan explain a lot of stuff to me personally. And now I could write a book sort of like that - but unfortunately you can't ask a book questions.
Todd Trimble said:
John Baez said:
Category theory is not really difficult to learn; the main problem is that most people suck at explaining things.
I doubt that's really the main problem. There are enough decent and accessible resources around by now. Could it be that most people suck at learning?
Some people have a stigma about it. I remember specifically meeting Juan Maldacena about 4 years ago, and when I told him I was working with categories he was like (I'm paraphrasing since I don't recall his exact words) "Oh wow, that's super fancy stuff. I have no idea about those things."
Others aren't convinced it's useful so they don't bother. There are tons of reasons.
Yes, a lot of people think category theory is difficult, whereas in fact once you get the hang of it everything else seems difficult.
I know the Catsters series is popular, but I felt they jump right into somewhat hard and fully abstract stuff. I've read various introductions over the last years, and for some reason, my favorite basic text was the explanations in Goldblatt's Topoi.
But still, the definition of adjoint functors, for example, seems to be a more or less early feat in category theory, and I find its bells and whistles much harder to remember than statements about curvature and what the important maps in geometry are.
I really profited from Goldblatt's book when I was first trying to learn category theory. Then I discovered most of the category theorists I know look down on it.
Yeah, I was kidding somewhat. I'm really suggesting that saying the main problem is that "people suck at teaching" is too reductionistic.
Nikolaj Kuntner said:
I know the Catsters series is popular, but I felt they jump right into somewhat hard and fully abstract stuff. I've read various introductions over the last years, and for some reason, my favorite basic text was the explanations in Goldblatt's Topoi.
But still, the definition of adjoint functors, for example, seems to be a more or less early feat in category theory, and I find its bells and whistles much harder to remember than statements about curvature and what the important maps in geometry are.
I don't know about that. All I knew when I started The Catsters was the definition of a category, a functor, and a vague idea of what a natural transformation is. In particular, I did not know limits or anything else.
It's a bit odd because, for example, I think Goldblatt only introduces functors on page 70 or so.
But what Goldblatt does is treat products and coproducts very explicitly, e.g. in a category of sets. Like, writing down elements of particular objects and stuff. Crazy.
But then again, everybody has that thing that clicks with them, I guess.
It's funny if you talk with Haskell coders vs. people doing scheme theory about what category theory is and does and what's relevant :P
@John Baez Try the contrapositive explanation: Teaching that sucks is so and so
Todd Trimble said:
Yeah, I was kidding somewhat. I'm really suggesting that saying the main problem is that "people suck at teaching" is too reductionistic.
Maybe. I actually think most people do suck at teaching, though. I get the best teaching evaluations in the math department at UCR, everyone wonders how I do it, and if I were honest I'd just say "I don't suck". I pay a moderate amount of attention to avoiding some basic mistakes. For example I try to make sure I only use words that the students already understand, or that I've already explained: I monitor what words come out of my mouth. And so on. Just basic stuff like that.
Most math professors have, it seems, not learned this basic stuff.
Which is not surprising, since they weren't taught it.
Because people suck at teaching... including teaching how to teach. :upside_down:
Yeah, that's interesting. I do try to monitor this myself. For example, we often pronounce y₀ as "y nought", and it occurred to me that maybe what students hear is "why not" (or something). So I spent a moment making sure they knew I was using this old-fashioned word "nought", and gave them a heads-up that a lot of mathematicians do the same.
Yes, that's one I probably forget to talk about sometimes. Though I love puns, so sometimes I'll say "We'll call this y-nought? Why? Why not!"
And I guess I've mentioned that the decade of the 2000's is called the "noughties".
Haven't heard you mention it, but that's good. I say "the noughts", but hadn't thought of the wordplay.
The eighties, the nineties, the noughties...
Yeah, I got it.
One great thing about teaching is that you can build up a huge repertoire of jokes, and keep repeating them, but each student only needs to suffer through them once.
Whereas people who repeat jokes to their relatives become insufferable.
Indeed the weird thing about teaching as a career is that you keep getting older, but the people you're teaching never do. When will they ever learn to stop making the same damned mistakes?
Anyhoo, I haven't really had a chance to teach category theory since I've started teaching again, so I'm not quite sure which books "don't suck". (You and I have disagreed about Mac Lane and probably will continue to do so, but I think part of the problem is that he was writing for a different generation and for more mature (professional) mathematicians.) I don't have an opinion about Goldblatt's pedagogy, but the complaints I remember are that he offers a skewed history and a skewed sense of what is important, i.e., is not scholarly enough for some tastes.
McLarty complains that Goldblatt doesn't have any substantial theorems, it's all about taking set-theoretic concepts and expressing them in categorical language and saying "look! how pretty!"
I'm exaggerating a bit but that's the thrust of his objection.
I think another complaint is that Goldblatt doesn't even do adjoint functors. (Is that true?) He mainly, or only, shows how to do math within a category.
Sometimes I do touch upon categorical ideas in a class, but it's almost always obliquely and without using the word "category". For example, making analogies between things they may have seen before (the gcd is like the min, and it's also like the intersection, if you look at it right). I think introducing categorical ideas by looking first at posetal examples is not a bad way to go, but even there a lot is dependent on the audience. (Sorry, is this getting too far off-topic?)
No, not too far off-topic for me!
These days when it comes to undergrad classes I mainly teach calculus and I don't even really try to sneak in any category theory.
From the perspective of someone who has spent a lot of years programming, I find any book that shows me how CT ties in with programming extremely helpful: Coalgebras for OO (Bart Jacobs), Functorial DBs (Spivak), FP (Bartosz Milewski). Then when I see a concept from maths I get really frightened, as I have no idea how long it may take me to learn. I found that quite a few are not that difficult. (I am getting into topology a bit, finally.) But I can't tell. Is it going to take me a week or 3 years?
Well, the topic is "ask me anything", so I'll ask: do you ever teach undergraduates category theory in a course setting? And if so, how do you get started?
I'm the opposite of Henry: about all I know about programming comes from category theory. If someone takes an idea from computer science and explains it using category theory, then I feel I understand it. It's probably a delusion, but it's helpful nonetheless.
Todd: I've never taught category theory to undergraduates.
I have ideas on how I might do it. I'd probably define a category, and then start giving lots of examples, including a whole lot of finite ones that I can draw on the board.
Okay then: do you teach introductory category theory to graduate students? In a course setting?
Yes, I've taught category theory to grad students. And I start just the same way as I suggested, but with a broader range of examples.
Such as?
I have course notes here:
http://math.ucr.edu/home/baez/qg-winter2016/
The LaTeXed notes are full of mistakes introduced by the people who LaTeXed them.
Todd Trimble said:
Such as?
I have a spiel where I talk about categories of mathematical objects and categories as mathematical objects.
When it comes to categories of mathematical objects, I try to go through all the main things students learn in their required courses: algebra, topology, analysis, geometry...
Of course I didn't have time to go through all the categories they'd encountered.
But more interesting in a way are categories as mathematical objects: monoids, preorders, groupoids... and then the special cases: groups, posets, equivalence relations, etc.
And then a bunch of very small categories like "the walking object" and "the walking morphism".
Or "the walking composable pair of morphisms".
Then you can ask stuff like "what's a functor from a group G to Vect?"
And then bring in natural transformations, and illustrate them with lots of examples, like: "if you've got two group homomorphisms from G to H, seen as functors, what's a natural transformation from one to the other?"
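(For readers following along, here is the punchline of that exercise, spelled out - this is a standard fact, not text quoted from the thread. Viewing groups as one-object categories, a natural transformation between two homomorphisms turns out to be a conjugation:)

```latex
% Groups G, H seen as one-object categories: functors f, g : G -> H
% are exactly group homomorphisms. A natural transformation f => g
% has a single component, an element h of H, and naturality says the
% one square commutes for every morphism x in G:
%     g(x) h = h f(x),
% i.e. the two homomorphisms are conjugate:
\[
  g(x) = h\, f(x)\, h^{-1} \qquad \text{for all } x \in G.
\]
```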
I draw pictures of functors and natural transformations, for some very small categories.
Re small categories. Someone pointed me to Generic Figures and their glueings a few weeks ago, and I found it really helpful, as it illustrates everything with simple diagrams one can draw out on a sheet of paper.
Right. Without looking up your course notes just yet: in a semester-long course, do you tackle the core concepts of universal property, adjoint functor, Yoneda lemma? Or would you wait for later for that?
I definitely talked a lot about adjoint functors, and lots of examples of limits, and then how e.g. products are adjoint to the diagonal.
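(In symbols, the adjunction alluded to here - a standard statement supplied by the editor, not text from the thread - reads:)

```latex
% The diagonal functor \Delta : C -> C x C sends c to (c, c).
% "Products are right adjoint to the diagonal" unpacks to the
% natural bijection
\[
  \mathrm{Hom}_{C \times C}\big(\Delta c,\, (a, b)\big)
  \;\cong\;
  \mathrm{Hom}_{C}\big(c,\; a \times b\big),
\]
% i.e. a pair of maps c -> a and c -> b is the same thing as a
% single map c -> a x b.
```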
We have quarters at UCR, so this was a 10-week course.
I didn't get to the Yoneda lemma. I'm gonna expand these notes into a book and then I'll include that, and other stuff (and also make the notes a lot better).
I finished up with the definition of an elementary topos.
I think a 2-year course would be really fun. I would try to get to enriched categories pretty quickly so I could talk about 2-categories.
I prefer to introduce concepts and prove easy theorems than to talk about hard theorems. I think that in category theory getting students used to new concepts is where a teacher can be really helpful.
I see. In your experience, what do students find the hardest about the elementary-ish material we're discussing?
Hmm. I don't know! So I was apparently not good at getting feedback. I tried to make everything seem pathetically easy, no dramatic jumps in abstraction.
For example I introduced general limits after doing products and the terminal object and pullbacks... and I introduced them by drawing a random sort of complicated diagram and talking about its limit, following the pattern I'd already introduced in the easier examples.
So I guess I'm saying that I find the abstract definition of limit a bit hard if it's just thrown at you.
Probably. What's the fanciest example of a limit you discuss, one that appears "in the wild"?
Hmm, I probably didn't discuss any fancy ones. I didn't go into much detail on limits or colimits, I guess, except to show they're preserved by right / left adjoints.
There are lots of nice examples of that phenomenon, of course, available to anyone who has studied math.
(At the graduate level.)
Okay, cool. What examples of adjoint functors did you discuss?
I mean, I realize that at any moment you could say "look up my course notes!", but I'm hoping you don't say that too quickly.
I probably drew a big chart of a bunch of algebraic gadgets they were supposed to know about, with sets at the bottom, and then monoids, and commutative monoids, and groups, and abelian groups, and rings, and commutative rings...
Do they know any topology, in general?
... and then started filling in the famous adjoint functors going between these things.
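(The arrows in such a chart are mostly the standard free/forgetful adjunctions; one representative example, in the editor's wording rather than the course notes:)

```latex
% With U : Grp -> Set the underlying-set functor and F : Set -> Grp
% the free-group functor, F is left adjoint to U:
\[
  \mathrm{Hom}_{\mathbf{Grp}}\big(F(S),\, G\big)
  \;\cong\;
  \mathrm{Hom}_{\mathbf{Set}}\big(S,\, U(G)\big),
\]
% i.e. a homomorphism out of the free group on S is freely determined
% by where the generators in S go. The same pattern gives free monoids,
% free abelian groups, free rings, and so on.
```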
I used Top as an example of a category....
Most of them had probably taken a quarter of point-set topology, and a quarter of algebraic topology, and a quarter of differential topology.
One thing that made a big impact on me when I was first learning category theory was finally understanding, via category theory, why one uses the product topology and not the box topology.
Yes, that sounds good. I didn't dwell on infinite products, but if I did that would be a good thing to talk about.
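(For readers who haven't seen it, the standard categorical explanation goes like this - a sketch supplied by the editor, not quoted from the thread:)

```latex
% The categorical product of spaces (X_i) in Top must satisfy, for
% every space Y, the natural bijection
\[
  \mathrm{Hom}_{\mathbf{Top}}\Big(Y,\; \prod_i X_i\Big)
  \;\cong\;
  \prod_i \mathrm{Hom}_{\mathbf{Top}}(Y,\, X_i),
\]
% i.e. a continuous map into the product is exactly a family of
% continuous maps into the factors. The product topology - the
% coarsest topology making all projections continuous - has this
% property; for infinitely many factors the finer box topology
% does not, which is why it is the "wrong" choice.
```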
I forget if I mentioned things like the fundamental group of a pointed space... I bet they already had heard that was a functor.
Our algebraic topology quarter-long course has been getting beefed up, under the influence of younger faculty (like Julie Bergner, before she left).
But I probably decided to stay away from the sort of applications of category theory they might have already seen. I wanted much more to let them see that category theory is everywhere.
What size class do you usually get, when you have an opportunity to teach elementary category theory?
I've taught it once! I had a big class - the students were hungering for it.
Ah, okay! Maybe you have a new cohort of hungry students? :-)
Then about two years after that I taught a more advanced class that wound up being all about the simplex category and the bar construction and cohomology of groups.
I didn't get as far there as I wanted.
There is probably a new cohort of hungry students. But next year I've asked to teach classical mechanics, because I have a partially written book on Lagrangian mechanics which I'd like to finish.
I am not taking new students now, so I'm not eager to teach category theory right now.
How about logic? For example, adjoint functors are everywhere in logic and set theory. A famous example is implication and its relation to exponentiation. Is that something the students might latch onto, you think?
I definitely talked about cartesian closed categories a fair amount!
Ah, goody.
I forget how much I talked about them in logic. I've talked about that somewhere... I think I talked about that for one class, showing how the logical connectives are product and coproduct and internal hom.
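(A minimal sketch of the product/exponential adjunction behind this, in Python - the editor's illustration, not material from the course; the function names are invented:)

```python
# Currying: the bijection Hom(A x B, C) ~ Hom(A, C^B), written out.

def curry(f):
    """Turn a two-argument function f : (A, B) -> C into A -> (B -> C)."""
    return lambda a: lambda b: f(a, b)

def uncurry(g):
    """Inverse direction: turn g : A -> (B -> C) back into (A, B) -> C."""
    return lambda a, b: g(a)(b)

def add(a, b):
    return a + b

add_c = curry(add)
assert add_c(2)(3) == 5
assert uncurry(add_c)(2, 3) == add(2, 3)

# The logical shadow of the same adjunction, in the poset of truth
# values: (a AND b) <= c  if and only if  a <= (b IMPLIES c).
def leq(x, y):
    return (not x) or y        # the order False <= True on booleans

def implies(b, c):
    return (not b) or c        # material implication, the "exponential"

for a in (False, True):
    for b in (False, True):
        for c in (False, True):
            assert leq(a and b, c) == leq(a, implies(b, c))
```

Checking the adjunction over all eight boolean triples is the whole proof that implication is the exponential in this tiny cartesian closed poset.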
Yeah, that's pretty important!
The Yoneda lemma also made a really big impression on me when I was first learning category theory. I can't remember when I first really "got it", but really getting the point across is, I think, a critical point in the teaching and learning. Just curious how you would go about it.
how many book projects do you currently have :sweat_smile:
I have two half-written books sitting on my website.
Category theory and Lagrangian mechanics, with the latter much much closer to being done.
I fantasize about many more books.
But I've decided to mainly focus on writing "essays" - shorter things - that later perhaps I can bundle into books. It's less stressful.
Nice, I'm interested in reading the Lagrangian mechanics book.
Also, I see more than 2 books here (http://math.ucr.edu/home/baez/papers.html)
Those are old books.
I have a new biannual column in the AMS Notices, starting soon, so I'll have an excuse to write short essays on different things.
Short is good.
Quantum Techniques for Stochastic Mechanics. What I see there I certainly read years ago. But I don't remember it being a book of 200 pages.
Rabbits and wolves.
What's your take on the status of quantum gravity? Or of fundamental physics more broadly construed?
If you were still trying to make progress on fundamental physics today, then how would you go about it? (Or maybe you are, e.g. over in the other thread?)
[Or is this too far off-topic? I see only know that this thread is "History of ideas"...]
Nikolaj Kuntner said:
Quantum Techniques for Stochastic Mechanics. What I see there I certainly read years ago. But I don't remember it being a book of 200 pages.
You probably fell asleep!
nono, I would have remembered the anime fox :grinning_face_with_smiling_eyes:
Tobias Fritz said:
What's your take on the status of quantum gravity? Or of fundamental physics more broadly construed?
If you were still trying to make progress on fundamental physics today, then how would you go about it? (Or maybe you are, e.g. over in the other thread?)
This is a great subject! Of course I think about this a lot, and the fact that I haven't been working on fundamental physics means I don't like what I'm thinking. :upside_down:
(image: Bildschirmfoto-2020-05-22-um-23.20.34.png)
That's a rabbit, actually.
There's a bunch of rabbits and wolves in this book.
lol, he seems so warrior like
Yes, I think we wanted the rabbit to turn the tables on the wolf...
I forget what stochastic Petri net we were illustrating there.
There's also a great picture of me "pulling a rabbit from the hat".
In the discussion of creation operators, of course....
Of course!
Props for the illustrations, they are actually quite nice.
Thanks! Jacob Biamonte had befriended a Russian artist, I'm forgetting her name right now.
We should have gotten more illustrations like that.
Okay, @Tobias Fritz!
Fundamental physics is in a very frustrating state right now, as you surely know.
John Baez said:
Tobias Fritz said:
What's your take on the status of quantum gravity? Or of fundamental physics more broadly construed?
If you were still trying to make progress on fundamental physics today, then how would you go about it? (Or maybe you are, e.g. over in the other thread?)
This is a great subject! Of course I think about this a lot, and the fact that I haven't been working on fundamental physics means I don't like what I'm thinking. :upside_down:
I see. I can very much relate to that and have similar sentiments, although obviously based on much lesser knowledge of the subject. Somehow following the physics doesn't satisfy the aesthetics of a mathematical structuralist, while pursuing pure mathematical elegance tends to forget about the physics.
Right now that's true about so-called "fundamental" physics.
I think right now is the time to be doing condensed matter physics.
The stuff people are doing on topological insulators, the 10-fold way, "twisted equivariant matter", extended TQFTs, higher gauge theory in condensed matter, and a lot more - it's wonderful.
And, a bunch of it - though not all - can be tested by experiment, so it feels much more real than high-energy physics these days.
Can wandering through models via renormalization not also be seen as having a top-down, categorical, not particular-model-obsessed flavour to it?
That is, Models, as in different incarnations of theories in a theory framework.
"The space of all theories", which leads to rather masturbatory reflections on the "multiverse" in fundamental physics, is actually somewhat practical in condensed matter physics.
Yes, that makes sense. For doing beautiful math in a physics context, condensed matter seems to be the way to go these days. But it's less interesting to those of us who yearn for understanding "fundamental" things
I know what you mean, Tobias, but you have to break yourself of that 20th-century attitude. :upside_down: It's so last century.
masturbatory reflections on the "multiverse"
I may quote you.
Nature has decided she's not gonna reveal her final secrets to us just yet. She's saying "sorry - you're still too stupid, too bull-headed".
"Not until you stop sucking at teaching."
That could even be part of it!
Tobias Fritz said:
Yes, that makes sense. For doing beautiful math in a physics context, condensed matter seems to be the way to go these days. But it's less interesting to those of us who yearn for understanding "fundamental" things
but More is Different (TM)
A bunch of people thought that by mixing supersymmetry and Kaluza-Klein models and strings we could find the One True Final theory and be done with it pretty quickly. It turned out to be 11-dimensional, we don't even know how to quantize it, and there's no clear way to relate it to our universe.
It could be on the right track... but maybe not.
I sort of doubt I'll live long enough to see the end of that story, so I'm working on other things.
Thinking about the Ehrenfest theorem, would you say there's gonna be a clear and sufficiently rigorous way to move between all the scales of physics?
I think this century is the century of biology... and maybe even more importantly, ecology, the study of whole ecosystems. And within physics, condensed matter physics: getting good at designing matter with arbitrary Lagrangians and knowing what to do with it.
Nikolaj Kuntner said:
Thinking about the Ehrenfest theorem, would you say there's gonna be a clear and sufficiently rigorous way to move between all the scales of physics?
I forget what the Ehrenfest theorem says!
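(For reference, the standard statement - supplied by the editor, not part of the thread:)

```latex
% Ehrenfest's theorem: expectation values of a quantum observable A
% evolve by the quantum analogue of Hamilton's equations,
\[
  \frac{d}{dt}\langle A \rangle
  \;=\;
  \frac{1}{i\hbar}\,\big\langle [A, H] \big\rangle
  \;+\;
  \Big\langle \frac{\partial A}{\partial t} \Big\rangle,
\]
% so e.g. d<x>/dt = <p>/m and d<p>/dt = -<V'(x)>: classical-looking
% equations of motion for quantum averages.
```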
John Baez said:
A bunch of people thought that by mixing supersymmetry and Kaluza-Klein models and strings we could find the One True Final theory and be done with it pretty quickly. It turned out to be 11-dimensional, we don't even know how to quantize it, and there's no clear way to relate it to our universe.
That is referring to SUGRA? I've only heard that mentioned as a toy model, but now I read that people had hope in it being a realistic theory in the past.
I'm talking about M-theory, whose most substantial realization is 11d supergravity.
That's apparently the "classical" version of M-theory.
There are so many ways to compactify it and add branes and such that one can get lots of different kinds of 4d physics out of it, at least as far as our very limited understanding suggests.
I think Urs Schreiber is doing a great job of studying M-theory, so much that I'd never want to even try it myself at this point.
But I have no confidence that there's a description of our universe in there.
Aha, I thought that was supposed to be 10-dimensional, but maybe there's some sort of duality or holographic principle going on. I've met a few people doing string field theory, which seems to be targeting something like that missing quantization, but they didn't seem to have much hope of really getting anywhere.
On the "condensed matter theory isn't fundamental" stance: I'm not sure that, even if we got our final theory, there'd be a nice way to move from the small to the large. So it might never be the case that a "fundamental" theory could even derive the ad hoc theories. Thus the in-between scales wouldn't be "redundant" even in principle/conceptually.
The Ehrenfest theorem would just be the correspondence between things in quantum and classical mechanics that bear the same name, but the identification would probably be harder in a theory with as many bells and whistles as a quantum gravity with potentially some fuzzy space and whatnot.
Anyway, to finally answer your question @Tobias Fritz - if I were to work on fundamental physics I'd try to create some simple spin-foam-like models, based on higher category theory, that give theories with local degrees of freedom propagating causally.
I wouldn't demand that they be at all "realistic", except in having the property I just mentioned.
Okay, great to have such a concrete answer!
They could perhaps reduce to just field theories in 2d spacetime in some limit - that would be fine. Well, maybe conformal field theories already do that, so maybe I'd try 3d.
Yes, @Tobias Fritz - my goal would be to do some physics without a spacetime manifold, starting from just something like an n-category, and show that in some limit you'd get a theory that looked approximately like quantum fields on a spacetime manifold. The idea would be: get rid of this annoying spacetime manifold! Everything should be quantum theory phrased in higher algebra!
And there are no Lurie-like n-categories for ecology in sight? :)
Right, and in an n-dimensional generalized spin foam model, I suppose that one could use (up to) n-categorical data
Yes, that's the dream. The problem is, once you go away from TQFTs, you find yourself in a huge unexplored jungle of possible theories, without much of a clue as to which ones count as "good".
So I think the only solution is to start exploring, and stay fairly humble about it, not expecting to get the Theory of Everything on the third try.
Or even the hundredth try.
In other words, think of it as a mathematical game where you try to build models that have nice properties, and start cataloging them and proving theorems about them and also numerically simulating them...
That's what I would do. But I also feel that some people who are real physicists should systematically think about the conceptual and technical problems, and experimental puzzles, and start cataloging clues. Physics is like a big jigsaw puzzle where the pieces are strewn all over the floor and you keep losing them because there are so many.
Isn't it also that one needs more trust? That those who summarize experimental insight did the right thing. I feel that in math one can more easily be well versed in topics that are being researched at the moment, while experimental CERN insights can't be learned by a remote theorist. At least that's my judgement, or fear. I don't even think most people have a solid repertoire of what the analog inputs and outputs of experiments are that are used to justify the fundamental theories. I'm afraid that where the theories are used in practice (say, by some semiconductor manufacturer at small scales), the textbook theories are actually used in variants that have many form factors squeezed in, to actually make it work.
Right. When you put it like that, it's quite miraculous how particle physicists have managed to tame the jungle of theories (or possible Lagrangians) by imposing renormalizability and gauge invariance
So part of the problem is that nobody knows enough to have all pieces of the puzzles in sight at the same time. Nice analogy!
I think that's where beauty-arguments come into play.
Nikolaj Kuntner said:
Isn't it also that one needs more trust? That those who summarize experimental insight did the right thing. I feel that in math one can more easily be well versed in topics that are being researched at the moment, while experimental CERN insights can't be learned by a remote theorist. At least that's my judgement, or fear. I don't even think most people have a solid repertoire of what the analog inputs and outputs of experiments are that are used to justify the fundamental theories. I'm afraid that where the theories are used in practice (say, by some semiconductor manufacturer at small scales), the textbook theories are actually used in variants that have many form factors squeezed in, to actually make it work.
That's a good point. A lot of theorists wrote papers on superluminal neutrinos back in 2011 and on the 750 GeV diphoton excess in 2015, both of which turned out to be flukes (for very different reasons). The latest thing which may turn out to be of that sort seems to be the Hubble constant discrepancy.
Even when I thought a lot about fundamental physics I found it very hard to remember all the clues and all the constraints at once. This is what real experts are supposed to be able to do. But it could be that as we progress in our understanding of physics we reach a point where no one person can keep all this stuff clearly in view! So we rely on communities... but maybe communities tend to get stuck in specific narrow programs, where basically they're adding on extra constraints of the form "the solution must be like this or we're not interested in it"... so physics gets stuck.
On the other hand, maybe we just need someone smarter! I really recommend reading Abraham Pais' book Subtle is the Lord, on Einstein. I used to think Einstein came up with general relativity more or less by magic, or something like "you write down the only Lagrangian involving the metric and obeying these constraints" - which is how people often motivate it these days. In fact he reasoned his way to it by a remarkable series of thoughts (and mistakes). It makes me really appreciate how smart he was.
The reasoning that led him to invent "photons" was also remarkable - mostly statistical mechanics. Which makes me even more interested in the connection between quantum mechanics and statistical mechanics.
I love how you speak of real experts and physicist as if they were to be found somewhere else :D
I'm not a real expert on particle physics the way some people I know are!
I'm more of a dilettante.
John Baez said:
Even when I thought a lot about fundamental physics I found it very hard to remember all the clues and all the constraints at once. This is what real experts are supposed to be able to do. But it could be that as we progress in our understanding of physics we reach a point where no one person can keep all this stuff clearly in view! So we rely on communities... but maybe communities tend to get stuck in specific narrow programs, where basically they're adding on extra constraints of the form "the solution must be like this or we're not interested in it"... so physics gets stuck.
Having spent about 8 years (off and on) at Perimeter Institute, my personal experience very much confirms that (at least there), and it does not seem to be getting better
Okay. Yes, the Perimeter Institute was created as a way to break through the logjam. :cry:
Exactly! But we're now perpetuating the mainstream split
It may be that fundamental physics needs to "collapse" - with people giving up hope on it, and institutions shutting down - before something really new can happen. Maybe this is an over-dramatic way of putting it.
Ok I have to run now. But thanks a lot for the fascinating input!
Bye!
If you mean politics vs. linguistics, my sense is that Chomsky was both politically active and interested in the study of language right from the beginning. At university in the late 1940s he met Zellig Harris through left-wing politics, and Harris convinced him to study linguistics. This led to his early contributions to mathematical linguistics, and then generative grammar. He became more visible for his politics with the Vietnam War.
Interesting. So his later work wasn't at all a surprising change from his earlier activities.
@John Baez That's my sense. He's been remarkably (boringly?) consistent. Would be interested in his reaction to quantum NLP stuff -- probably wouldn't be positive. He's been skeptical of ML-style NLP.
I suppose he'd be more interested in natural language processing based on Chomsky-style grammar!
I wonder what he'd think of Lambek's pregroup grammar: it was at least a mathematical approach to grammar! But he would probably have all sorts of objections.
John Baez said:
I wonder what he'd think of Lambek's pregroup grammar: it was at least a mathematical approach to grammar! But he would probably have all sorts of objections.
Well, Lambek grammars and context-free grammars are basically equivalent, so... :slight_smile:
Are Lambek grammars the same thing as Lambek's pregroup grammars? I vaguely heard he had two approaches to grammar.
John Baez said:
I've met Lawvere a couple of times, and he's a nice guy, but he basically hates my work.
Eep. Yes. I sat next to him at a dinner once, and told him you were the major influence as to why I got into CT. He wasn't impressed, and his daughter (sitting on the other side) had to explain my faux pas to me.
Your faux pas, eh?
John Baez said:
Are Lambek grammars the same thing as Lambek's pregroup grammars? I vaguely heard he had two approaches to grammar.
I meant pregroup grammars. Yes, there are two approaches: pregroups are a rework of the older one from the '50s, which was more like sequent calculus, if I recall correctly
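As a toy illustration (my own sketch, with made-up string encodings like `"n^r"` for a right adjoint, and a hypothetical helper `reduce_types`), pregroup parsing amounts to cancelling adjacent adjoint pairs, p^l p → 1 and p p^r → 1; a string of words is a sentence when its concatenated types reduce to the sentence type s:

```python
# Toy pregroup-grammar reducer.  Types are strings: "n", "s", and
# adjoints "n^r" (right) and "n^l" (left).

def base(t):
    # "n^r" -> "n", "n" -> "n"
    return t.split("^")[0]

def contracts(a, b):
    # The pregroup contractions:  p^l p -> 1   and   p p^r -> 1
    return (a == base(a) + "^l" and b == base(a)) or \
           (b == base(b) + "^r" and a == base(b))

def reduce_types(types):
    # Repeatedly cancel the first contractible adjacent pair.
    ts = list(types)
    changed = True
    while changed:
        changed = False
        for i in range(len(ts) - 1):
            if contracts(ts[i], ts[i + 1]):
                del ts[i:i + 2]
                changed = True
                break
    return ts

# The classic example: "John likes Mary" typed as  n, (n^r s n^l), n.
print(reduce_types(["n", "n^r", "s", "n^l", "n"]))  # -> ['s']
```

The transitive verb gets the compound type n^r s n^l, so it "eats" a noun on each side and leaves s, which is exactly the cancellation pattern the reducer finds.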
Here's my Lawvere story. Michael Wright ran a conference in Florence in honor of Lawvere's 60th birthday, and invited a lot of people. (This was before Wright went broke.) One of them was me. When I walked into the reception at the start of the conference, Lawvere walked up to me, and smiling broadly, shook my hand and said he hadn't invited me.
Nonetheless, since I have a thick skin, I had dinner with Lawvere and Schanuel at a little restaurant in Florence, and he seemed perfectly willing to be polite and talk about various interesting things.
Later I annoyed Lawvere on the category theory mailing list by saying that the generating object in the Lawvere theory of groups was like the "Platonic ideal of a group". He hates Plato, so he launched into some Hegelian attack on Plato.
Thereafter he'd occasionally attack me on Wikipedia, but cleverly never using my name - e.g. "people who study trendy topics like topological quantum field theory...."
This is a great trick since it forces the person you're attacking to say "Do you mean me?" or else just sit there grumpily wondering.
I think the basic problem is that Lawvere generally hates monoidal categories, because the tensor product is (in general) not defined by a universal property. And he seems very suspicious of quantum mechanics. Topological quantum field theory - or category theory for quantum computation - combines these evil features.
2-categories and such are just as bad. All this stuff goes completely against the way he thought categories should be used.
It's a pity, because cohesive ∞-toposes really seem to capture the ideas he was looking for. If only they could be disguised as "mere" model structures on 1-categories of simplicial presheaves :upside_down:
(Got no question to ask, didn't mean to start ragging on Lawvere!)
What does Lawvere not like about 2-categories? It just seems so odd to me that one could love category theory but not like 2-categories at all.
I think Lawvere's work is great, and it certainly transformed how I think about things, even though apparently it didn't have its intended effect.
John Baez said:
I think the basic problem is that Lawvere generally hates monoidal categories, because the tensor product is (in general) not defined by a universal property. And he seems very suspicious of quantum mechanics. Topological quantum field theory - or category theory for quantum computation - combines these evil features.
Not so sure of that. Think of his paper on metric spaces.
Okay, maybe he likes monoidal categories as long as I don't get involved. Let's see, so in that paper on metric spaces he's talking about categories enriched over a noncartesian monoidal category? How does he discuss them? Does he ever say "monoidal category"?
Is he enriching over a monoidal poset, so that he doesn't need to think about the pentagon identity?
I don't remember. But I think he's perfectly willing to think about closed noncartesian monoidal categories. It might well be more personal than all that!
John Baez said:
Is he enriching over a monoidal poset, so that he doesn't need to think about the pentagon identity?
He enriches over a quantale
At least if this is the paper I think you are referring to
The annoying thing about noncartesian monoidal categories is that one has to posit the pentagon identity instead of being able to prove it. My theory is that this is the sort of thing Lawvere hates: complicated coherence laws as axioms. But for a monoidal poset, e.g. a quantale, the pentagon identity is automatic.
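For context (this is standard material, not a claim about what Lawvere himself wrote): a Lawvere metric space is exactly a category enriched in the quantale $([0,\infty], \ge, +, 0)$, and because this base is a poset, coherence comes for free:

```latex
% Hom-objects are distances:
X(x,y) = d(x,y) \in [0,\infty]
% Composition is the triangle inequality:
d(x,y) + d(y,z) \ge d(x,z)
% The unit says self-distance is (at most) zero:
0 \ge d(x,x)
% In a poset there is at most one morphism between any two objects,
% so the pentagon and all other coherence identities hold automatically.
```

This is why enriching over a quantale sidesteps the issue: the pentagon identity is a statement about equality of parallel maps, and in a poset any two parallel maps are equal.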
And this has been developed a lot into a book which I think is called "monoidal topology". It has a red cover. The book is obviously not his :D
I predict that Lawvere has never said "monoidal category" in his published work, except perhaps in some sort of complaint.
I'm sure that he, a godfather of category theory, could easily be annoyed that someone else (like you) is seen as turning more people on to category theory. And that could lead to certain reactions.
Yes, there's also that.
I saw that more clearly in Marta Bunge's category theory mailing list post where she complained that I was giving too many talks.
I thought that was outrageous. But also funny.
Sure, sure. There's envy from people -- who is this John Baez?
I can really sympathize with people who are pioneers in some field, and never getting as much credit as they deserve, feeling annoyed at some youngster who seems to have drifted in from outside, getting a lot of publicity.
As I get older and older I feel more and more unjustly neglected myself. :upside_down:
That's a tendency among older academics that we all have to fight.
Lawvere is a visionary, but he doesn't take the trouble to be pedagogical, and many of his brilliant ideas will eventually be lost unless someone troubles to translate them.
When you're young you don't expect people to respect you yet... you're still fighting your way up the ladder. But when you get older you start thinking you've earned some respect. And that expectation can really make people bitter.
Yep.
Do you really think there are brilliant ideas of Lawvere that could be lost, @Todd Trimble? Ten years ago I would have felt that way. But I feel that by now enough people are getting the hang of them that they won't be lost.
Of course I only know his more basic stuff, but somehow that makes all the rest of it seem more approachable... except for that damned Hegelian taco, which I still don't get.
The nLab has done a lot to bring Lawvere's ideas into broader view.
I suspect there are a lot, but it's been a long time since I thought that I should make it a mission to understand Lawvere, because it's so much work. Plus, I really don't read much, so anything I might venture could be shot down.
OK, so here's a question: do you find much a difference between North American category theorists and those from the rest of the world?
In the US, it almost seems no professional mathematician wants to be called a "category theorist", except some over 70, and Todd, and Emily Riehl. It seems different in Canada.
Or more generally how North American mathematicians consume/use CT vs the rest of the world?
(I say North American because I guess there's a reasonable interplay between the Canadians and USAnians)
The nLab has all the potential in the world to be largely ignored, because of problems which remind me of the problems with understanding Lawvere.
Young mathematicians love to joke about how hard it is to understand the nLab, e.g. on say Twitter, but they also talk a lot about how that's where they learn everything.
I'm a little too old and jaded to be bothered about being tarred as a category theorist. "Fuck it."
That's the spirit, Todd! :thumbs_up:
I'm a little too old and jaded to strive to be accepted as a category theorist, these days.
I don't mind being called a category theorist - I even call myself that sometimes, but only when there are no real category theorists in the room, like Todd.
David, I think you should weigh in with your own opinion. I do see "Australian category theorists" as somewhat sui generis, and I say: more power to them.
Well, I'm not an "Australian category theorist"^TM, rather a category theorist who happens to come from and live in Australia.
I feel more like someone from the Ehresmann school than from the Sydney school
I think that since category theory was "banned" by the NSF for so many years, American mathematicians generally feel the need to hide their categorical activities under the cloak of something else - mainly homotopy theory.
I know that! Still, you have an opinion, no?
That was speaking to David.
Yeah, what's your opinion, David? [bar-room voice:] What are you getting at? Trying to start a fight?
I don't know. Maybe it's a broader cultural gap between where I stand and the US, or the nature of American academia. I have very limited in-person interaction with other category theorists.
Another NA-an "pure category theorist" -- not precisely knowing if he'd accept that designation -- is Nick Gurski.
I meant to say "bad ass pure category theorist".
I think Marcy Robertson, now in Melbourne, is an interesting case: she works on higher operads and so on, and openly acknowledges she works in 'higher structures' and is close to higher category theory, but she officially comes from "algebraic topology", I suppose.
It's a bit like Clark Barwick feeling like he has to defend homotopy theory, when most of it is really category theory, because not even renaming it is enough.
I don't know if I have a coherent or informed idea about all this, it's just a feeling I get and it might just be me!
Ha!
John Baez said:
In the US, it almost seems no professional mathematician wants to be called a "category theorist", except some over 70, and Todd, and Emily Riehl. It seems different in Canada.
I think this is changing. Category theory is becoming increasingly trendy. Like, category theory is exactly our sales pitch at Statebox. And personally I would introduce myself as an "applied category theorist" if someone asks
See, now -- that's sexy.
This is probably due to the beautiful work that has been carried out over the last 5-10 years or so.
I was talking about academic mathematicians in the US.
Yes, I hope things will change there as well, if they aren't already. Still, using category theory as a sales pitch 5 years ago would have been unthinkable
Fabrizio is from Oxford, where guys pick up girls in bars by saying they're category theorists.
Lol
Unlike in Cambridge, where CT seems to have collapsed. :-/ What happened there?
I also feel that since Leiden 2018 the whole ACT thing started accelerating A LOT. Maybe I'm wrong, but I think ACT now is like Machine Learning was in the early 2000s
Still not super mainstream but the exponential curve was already evident
I am very excited about this ACT thing and its growth, but I mainly think we should keep our heads down a bit and get stuff done.
Yep. Prob-a-bly.
Another good sign, imho, is that I see quite a lot of pure category theorists starting to dive into ACT. This is again a very good sign. And it connects with what you are saying. I think what we need right now is for people who are experts in the theoretical stuff to come over, importing amazing new tools
Heh, Rongmin Lu keeps telling me to switch fields.
Brendan Fong and David Spivak are going ahead with their Topos Institute, starting in January I guess, and that will turn up the excitement level an extra notch.
Personally I'm looking a lot into applying sheaf theory to stuff lately, and all the material developed in pure CT is just a treasure trove of tools
However, unless someone is willing to employ me while I stay in Adelaide, that's kinda unlikely.
That's very good to hear. On our side we are doing category theory courses for businesses :slight_smile:
The fact that companies pay their employees to learn category theory says a lot. :slight_smile:
(I mean, willing to employ me to work on applied CT stuff :-)
As for Cambridge, @David Michael Roberts - I think it's just that Martin Hyland and Peter Johnstone got old, and Hyland retired (did Johnstone yet???), and they didn't manage to get a successor. Years ago they tried to hire Charles Rezk (I think that's who it was), but he decided not to go.
Street did a better job of bringing in younger category theorists so that when he retired the tradition continued.
Jamie Vicary is at Cambridge now
Yes! But in CS.
The situation in Sydney feels quite self-sustaining. The place is an absolute mecca for pure category theory.
Well, the danger was already there: a group relying purely on two old professors with no mid-career people. I agree that the Macquarie group has managed to persist well (despite opposition, I may add! It's an open secret that the recently departed department head really didn't like CT)
It's a Darwinian world out there...
@Todd Trimble well, it's despite the best efforts of some higher-ups. They almost lost Richard Garner, they did lose Michael Batanin
Where's MB these days?
He moved to the Czech Republic, I think.
Same place that Urs had been??
It's no small credit to Dom Verity, I suspect, who was on the University Council, that CT had a strong advocate.
Good.
Fabrizio Genovese said:
That's very good to hear. On our side we are doing category theory courses for businesses :)
By the way: who is "we"?
I think with Martin Markl.
(That was an answer to another question.)
I think "we" above is Statebox.
Todd Trimble said:
Fabrizio Genovese said:
That's very good to hear. On our side we are doing category theory courses for businesses :slight_smile:
By the way: who is "we"?
I cofounded and work at a startup called Statebox. We use category theory to do software: basically, formally verified graphical programming languages.
Yes, good for Dom.
Yes, statebox.org
This is a highly nonsynchronous conversation.
But lately we also set up this "category theory for businesses" course, to spread CT even further
I've had a few drinks by now, which may explain my unaccustomed loquacity, and lack of focus.
John Baez said:
This is a highly nonsynchronous conversation.
Luckily enough some of us are deeply comfortable with concurrency! :P
I can't find confirmation of Batanin's move on various department websites (terrible things), but I heard about it through the grapevine.
I should be trying to write an open-internet home exam for my calculus students.
Todd is not on "category theory Twitter". Here is an ad for a category theory course:
https://twitter.com/statebox/status/1263167840418508800
Our category theory course is now online! https://training.statebox.org/
- Statebox (@statebox)🤮
Sorry, that was for my exam, not statebox!
I've been having to teach from supplied material that I just can't stand.
(will stop derailing the conversation now!)
It's ok, it's 3AM here so I should probably go to bed
Thanks. You're right that I pay little attention to Twitter, but it's such a fact of life that I can't so easily ignore it anymore.
David Michael Roberts said:
I feel more like someone from the Ehresmann school than from the Sydney school
Can you expound on what you think of as the "Ehresmann school"?
Good night, Fab!
Well, my original motivation was geometry and mathematical physics, starting my PhD around the time 2-bundles started being a thing. So there's category theory but also geometry, and Lie/topological groupoids are my bread and butter.
Who was your advisor?
Michael Murray, though he was pretty hands-off, being swamped with being department head at the time. Mathai was my secondary advisor, and despite trying to get a project off the ground with him, we just didn't click.
It reminds me that Jim Dolan used to say that he doesn't consider himself a category theorist, more a groupoid theorist.
Ok, and so you consider that more Ehresmann's motivations and approach as well?
So bundle gerbes were my childhood toys, as it were.
Yes, Ehresmann was a geometer who came to category theory organically, I feel. He was also isolated from the rest of the French category theory (i.e. algebraic geometry) people, and trod his own path.
John, by the way I also had an unrelated question -- how did you end up with the group of collaborators/co-editors you did on the n-category cafe? And in particular how did David Corfield get involved and what influence do you think that had on the cafe as it evolved?
John Baez said:
Good night, Fab!
Good night! :slight_smile:
He invented internal groupoids to treat geometry, as far as I can tell, and from there got interested in internal structures more generally. There's also work on infinite-dimensional manifolds, through Andrée Ehresmann's background.
The ideas of Jean Pradines (a student of C. Ehresmann) were precursors of a bunch of work I've done, too.
@Gershom - you can see the birth of the n-Cafe here: http://math.ucr.edu/home/baez/corfield/
This is the record of David Corfield's blog. Urs and I started commenting on it, talking about 2-groups and Klein 2-geometry. Then we decided we should start a group blog. I think you can see the whole process in the comments.
David was and is very good at catalyzing conversations by asking questions and suggesting things to prove, even though he doesn't prove stuff (as far as I recall).
David Corfield though was interviewing various category theory people in the late 90's, I think in view of his book Towards a Philosophy of Real Mathematics. How he picked up the category theory spoor, I'm not quite sure.
David is amazing, in my view. Yes, and boy how does he ask questions!
Ok so a very basic question -- I've noticed that there's "two types" of categorists (at least!) -- those who cite Mac Lane for basic stuff, and those who cite Borceux. Does this relate at all to the "Ehresmann school"?
David Corfield's grasp of the key ideas of higher CT is impressive. I love it when he suggests something on the nForum and it is a catalyst for a new development.
I'm not sure either how David got interested in categories, but he was interested in big ideas, and he talked to me a lot. I think early on he wrote a paper about groupoids, and the argument over whether they were "just group theory with pretensions" or something really useful.
@Gershom I think that's a pedagogical choice.
Mac Lane is the canonical reference, but not necessarily the best place to send newbies anymore. Borceux is newer and more comprehensive, and less brutal.
I mean, Borceux is three volumes; Mac Lane is a not-very-thick book.
I don't think Borceux's book has much Ehresmann-like stuff in it. Does it? By that I mean double categories and other internal categories, particularly in Diff and Top.
I would prefer to cite Emily Riehl's book, these days. Also it's cheaper, and you can get the pdf for free!
You can get the PDF of any math book for free. But for some, you have to feel a bit guilty.
Speaking of reference books, Ehresmann wrote a really impenetrable textbook on CT in his own mathematical dialect. It's really something else.
Borceux feels somewhat more encyclopedic, Mac Lane more like giving a summary of the state of the art at the time (1970). His second edition isn't greatly different, IMO.
No, just a little bit extra on the end.
But that's the part that cites me! :stuck_out_tongue_wink:
The stuff on the end is a bit of higher category theory....
Who around here is teaching category theory? I do just a teeny amount, in the context of "independent studies" at my university (more like "dependent studies"). I'm curious who might have used Emily's book for teaching purposes.
Someone who used to be at Adelaide had the mimeographed notes from the Canberra version of Mac Lane's lectures that went into CWM. He was getting rid of a bunch of stuff and I only managed to get the first page. He didn't attach any importance to them!
That said, his office was a fire hazard with the head-height piles of books on every available surface, so he probably just lost them.
I have to admit that it irritates me when I see people dis on Mac Lane. That's where I learned my category theory, with his book under my pillow (or sleeping bag) at night, so I guess I'm a little sentimental. It's true that the choices were more limited, back in the day.
David Michael Roberts said:
Someone who used to be at Adelaide had the mimeographed notes from the Canberra version of Mac Lane's lectures that went into CWM. He was getting rid of a bunch of stuff and I only managed to get the first page. He didn't attach any importance to them!
The nlab has a scan of notes taken by Ellis Cooper:
https://ncatlab.org/nlab/show/Lectures+on+category+theory
https://ncatlab.org/nlab/files/MacLaneBowdoinLectures69.pdf
They're very nice - they have lots of diagrams that didn't get included in CWM because they were hard to typeset...
Eduardo Ochs said:
David Michael Roberts said:
Someone who used to be at Adelaide had the mimeographed notes from the Canberra version of Mac Lane's lectures that went into CWM. He was getting rid of a bunch of stuff and I only managed to get the first page. He didn't attach any importance to them!
The nlab has a scan of notes taken by Ellis Cooper:
https://ncatlab.org/nlab/show/Lectures+on+category+theory
https://ncatlab.org/nlab/files/MacLaneBowdoinLectures69.pdf
They're very nice - they have lots of diagrams that didn't get included in CWM because they were hard to typeset...
I'll be goddamned. Thanks for this.
David Michael Roberts said:
It's a bit like Clark Barwick feeling like he has to defend homotopy theory, when most of it is really category theory, because not even renaming it is enough.
I don't disagree that a lot of homotopy theory is just another name for higher category theory, but could you explain a little bit more about "not even renaming it is enough"?
@Todd Trimble I think it's good, too. But for non-specialists I'd start with something else.
Heh, you might enjoy this, then: https://ncatlab.org/nlab/files/BarwickFutureOfHomotopyTheory.pdf
Hi David. In lieu of Mac Lane, what would you suggest?
Hah I've read that note many times, as I consider myself a homotopy theorist
Barwick paints his field as being in a dire position. Maybe it's because I am ignorant of the situation, but somehow I see homotopy theory as a Big Deal that gets taken very seriously. Try being part of a field that's really in disrepute before crying poor....
Depends on the audience. Is it a mere hat tip for completeness in a paper for a different field? Just saying "For CT background see CWM" when it's a paper for, e.g., physicists doesn't cut it.
I just read Barwick's manifesto on homotopy theory, and I was struck by this:
We need to stop identifying ourselves as topologists.
Unfortunately I think that would be a bad idea, politically - sort of like math departments seceding from the college of sciences. If homotopy theorists stop calling themselves topologists, other mathematicians will wonder what they are, and what good they are. At least this is true unless and until homotopy theory becomes a lot better understood in its own right by many non-homotopy-theorists.
For basic, basic stuff, why not Tom Leinster's book? Or be specific as to what sections of the book one needs to point to?
I think the dire position doesn't have to do with the lack of people considering homotopy theory a "big deal", but rather the internal workings of the field. The two points that Clark brings up that ring the most true in my experience are (1) the prevalence of unpublished and unshared work being taken as finished, and (2) the pedagogical aspects. There are no books that teach homotopy theory, and that's a travesty
I like CWM now, but when I tried to learn CT from it ages ago I felt that its choice of letters was weird and that lots of important diagrams were omitted from the book - and that I would have to reconstruct them myself. Then I stumbled on a paper by Peter Freyd that later became the basis for "Categories, Allegories", started using its diagrammatic language to draw my diagrams, and became a weirdo.
For teaching, I think Emily's book is very good, but it does take a particular tack, delaying certain things longer than perhaps I would have thought. But then I would not just rely on a single book, rather mixing up Tom's, Emily's, Mac Lane, etc.
@Reuben Stern I agree on the internal informal workings of the field being an issue (cough Hopkins cough), but it's more the relation to the rest of mathematics I was thinking of.
There are no books that teach homotopy theory, and that's a travesty.
That's just odd. It would be so fun to explain it. I've considered writing a book called something like "Homotopy Theory for Dummies", but that would not be the text that you think needs to be written, namely one that lays out all the modern machinery in a nice way - it'd be an attempt to demystify what's going on, and paint the big picture, without a lot of technical stuff.
Ah, I have a feeling most homotopy theorists are not concerned with the state of the relation of the field to the rest of mathematics; that part is flourishing!
Homotopy theorists have virtually no representation as editors of the top general math journals. If I look at the papers in the most recent edition of any of the top five general pure mathematics journals (by my account: Journal of the AMS, Inventiones, Annals, Acta, and Publications of the IHÉS), there is incredibly sparse representation in homotopy theory.
Yeah, and CT has less.
John Baez said:
There are no books that teach homotopy theory, and that's a travesty.
That's just odd. It would be so fun to explain it. I've considered writing a book called something like "Homotopy Theory for Dummies", but that would not be the text that you think needs to be written, namely one that lays out all the modern machinery in a nice way - it'd be an attempt to demystify what's going on, and paint the big picture, without a lot of technical stuff.
I absolutely agree. I've been considering starting a book project on factorization homology, and using that as motivation to teach a bunch of modern homotopy theory that is distinctly different from the "computational stable homotopy theory" stuff that causes spectral sequences to appear in nightmares
Incidentally, I've considered calling that project "Baby's First Book on Factorization Homology"
Just to be clear, I wasn't trying to argue at David (that Mac Lane is the best, etc.) -- I just want to know what people's experiences are. Leinster's book was one on a reading list I had for a student for Spring 2019. (We didn't venture past chapter 1.)
I guess I'm saying more about Barwick than about homotopy theory...
And should stop!
I think your criticisms of his points are valid, and a healthy debate in my opinion! I really look up to Clark as a mathematician and as a person doing mathematics (there's a philosophical distinction that could open a whole can of worms....) and I'm heartened by his concern for the field, which is borne only out of love. But it's strictly a good thing to hear that certain concerns may no longer be valid! Homotopy theory has grown a lot in the last 3 years.
"By the time I was done I decided that computability shed no really interesting light on physics." –– Is this something that you would still say today?
Alexander Kurz said:
"By the time I was done I decided that computability shed no really interesting light on physics." –– Is this something that you would still say today?
I've lost interest in these issues so I'm not well-informed enough to have a very worthwhile opinion. There's certainly a lot of work relating computational complexity and physics, and a lot of people are excited about the proof that MIP* = RE.
Nikolaj Kuntner said:
That's interesting, because especially Heidegger gives me vibes of upcoming doom. He describes developments of what we make with natura and what we make of nature (how humans end up perceiving it), and the only folks I ever hear talking about him make me think we're down an unstoppable gloomy path.
if "gloomy path" means the end of mankind, isn't worrying about that similar to worrying that i will die? seems superfluous. i spent a lot of time reading heidegger (at high school!) and i thought that i understood him. but i don't think i understand him any more. i think he does not mean what he says. many people don't speak for themselves, but speak for an inner voice. i don't think that inner voice is worth listening to. it has a logic, but is not interested in the same things as we are. i think that is the secret of heidegger.
John Baez said:
I'm gonna have to go shopping now, but I was just about to say something relevant. I was gripped with nihilistic despair from about 1978 (when I graduated from high school) to about 1989 (when I met my wife) - the feeling that it was impossible to determine what was really worthwhile. Some of this came from reading Nietzsche and logical positivists, but most was probably due to being lonely. I found it very hard to talk to people because I felt so different from them. I wanted a girlfriend but I was too scared to go after the women I really liked.
Quite an experience!
It occurs to me: we are having some very immersive storytelling here together, from many directions. A new level of narrative is being created in these conversations.
John Baez said:
Arthur wrote:
When and how were you introduced to categories in the first place? What drew you to categorical ideas?
I learned a bit about categories when I was learning algebraic topology in grad school, and later as a postdoc when I was trying to understand things like cyclic cohomology and Hochschild homology - since they were important in Connes' work on noncommutative geometry. But this was a fairly pedestrian form of category theory: abelian categories, and a bit of derived functors. It was only much later, when I started understanding n-categories thanks to lots of conversations with James Dolan, that I really got the point about things like homological algebra: like, that chain complexes are strict stable infinity-groupoids, a very tractable sort of infinity-category. Back then I just treated homological algebra the way most algebraists do, as some sort of computational framework.
(sorry, you are by now way ahead, but i would like to understand this.) what is the goal of homological algebra beyond computing some invariants? (please don't use words like "understanding".) i am not arguing that that is what it is. i simply don't know what more it could be.
Todd Trimble said:
John Baez said:
Category theory is not really difficult to learn; the main problem is that most people suck at explaining things.
I doubt that's really the main problem. There are enough decent and accessible resources around by now. Could it be that most people suck at learning?
they suck with respect to what? to another group in a parallel universe who happen to be better at category theory. i think people understand category theory exactly as much as they should. the purpose of category theory is not that people understand it, but to structure things. and it's been doing fine at that.
Todd Trimble said:
Sometimes I do touch upon categorical ideas in a class, but it's almost always obliquely and without using the word "category". For example, making analogies between things they may have seen before (the gcd is like the min, and it's also like the intersection, if you look at it right). I think introducing categorical ideas by looking first at posetal examples is not a bad way to go, but even there a lot is dependent on the audience. (Sorry, is this getting too far off-topic?)
same here. i think we cannot avoid thinking and presenting categorically, but i almost never say the word "category". i built 2 tools where the manuals completely consist of categorical diagrams, but the word "category" is never used :)
John Baez said:
I predict that Lawvere has never said "monoidal category" in his published work, except perhaps in some sort of complaint.
i had a student who said that she disagreed with general topology. i thought that was the funniest thing. now lawvere disagrees with monoidal categories. was there something wrong with my sense of humor in the first case, or in the second case?
how about i start refusing to say boolean algebra. oops. i could make a monty python sketch of it. boolean algebras are immoral. they release boolean rings which they oppress.
dusko said:
how about i start refusing to say boolean algebra. oops. i could make a monty python sketch of it. boolean algebras are immoral. boolean algebras should set their boolean rings free, and then i will respect them.
dusko said:
i think people understand category theory exactly as much as they should. the purpose of category theory is not that people understand it, but to structure things. and it's been doing fine at that.
One thing, from my naive perspective: I see Category Theory as a translation mechanism. It helps people from different fields communicate, or shows them that they are looking at the same thing from a different angle. For example I was amazed to see that Object Oriented Thinking had a CT model, coalgebras (Bart Jacobs' papers from the 90s), the dual of what functional programmers like to use - algebras. That suddenly makes both a lot more interesting. Or recently constructive and co-constructive logics: suddenly I can see how Popper ties in with type theories via topoi. Before that I would not have given a thought to paraconsistent logic. But to know that it is the dual of constructive logic! Suddenly it is not an arbitrary thing... There is Maruyama's thesis Meaning and Duality: from categorical logic to Quantum Physics that develops that view, though I don't know how many years it will take me to be able to read it.
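Since the algebra/coalgebra duality is a concrete, programmable idea, here is a minimal Python sketch of it (my own toy illustration, not taken from Jacobs' papers): an algebra-style fold consumes a finished structure, while a coalgebra-style unfold observes a state step by step and generates a structure.

```python
from typing import Callable, List, Tuple

# Algebra view: consume a list with a fold (a catamorphism).
def fold(step: Callable[[int, int], int], init: int, xs: List[int]) -> int:
    acc = init
    for x in xs:
        acc = step(acc, x)
    return acc

# Coalgebra view: generate observations with an unfold (an anamorphism).
# The step function maps a state to (observation, next state).
def unfold(step: Callable[[int], Tuple[int, int]], seed: int, n: int) -> List[int]:
    out, state = [], seed
    for _ in range(n):
        value, state = step(state)
        out.append(value)
    return out

total = fold(lambda acc, x: acc + x, 0, [1, 2, 3, 4])   # sum, as an algebra
doublings = unfold(lambda s: (s, 2 * s), 1, 4)          # stream of doublings, as a coalgebra
```

The functional-programming `fold` is the algebra side; an object with hidden state that you can only observe through its methods, as in the coalgebraic treatment of OO, is the unfold side.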
There was a whole discussion on the thread Bi-Heyting Algebras that I came to via reading Meaning in Dialogue by James Trafford. There is something that was not quite clear there regarding the view of complement-topoi he builds on, but the main idea of a co-Heyting algebra as the dual of a Heyting algebra seems to be solid, and it looks like that is what Popper was working with, as explained in Dual Intuitionistic Logic and a Variety of Negations: The Logic of Scientific Research.
As I understand it, Heyting algebras give rise to constructive logics of verification, and co-Heyting algebras to constructive logics of falsification. In constructive logic one can have ⊢ A ∨ ¬A and in co-constructive logic A ∧ ¬A ⊢.
(or rather "one cannot have" as James Wood remarked below)
Or rather, lack of ⊢ A ∨ ¬A vs lack of A ∧ ¬A ⊢.
There should be a shaky childlike handwritten font for people like me, so that I can express when I am only half sure of what I say.
Should have written "as I remember" because I think I understand it, it's just that was a month ago that I was writing up on co-exponentials, and under pressure it's a bit difficult to remember the details... :-)
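To make the Heyting/co-Heyting contrast above concrete, here is a toy Python sketch (my own example, not from Trafford or Popper): the three open sets of the two-point Sierpiński space form a Heyting algebra in which non-contradiction holds but excluded middle fails, exactly the asymmetry James points out.

```python
# Opens of the Sierpinski space X = {a, b}: only the empty set, {a} and X are open.
X = frozenset({'a', 'b'})
opens = [frozenset(), frozenset({'a'}), X]

def join(u, v):
    return u | v

def meet(u, v):
    return u & v

def neg(u):
    # Heyting negation: the largest open set disjoint from u.
    return max((v for v in opens if not (u & v)), key=len)

a = frozenset({'a'})
lem = join(a, neg(a))            # {a} ∨ ¬{a} = {a}, not X: excluded middle fails
contradiction = meet(a, neg(a))  # {a} ∧ ¬{a} = ∅: non-contradiction holds
```

Dually, the closed sets of the same space form a co-Heyting algebra, where the two failures swap sides.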
John Baez said:
Later I annoyed Lawvere on the category theory mailing list by saying that the generating object in the Lawvere theory of groups was like the "Platonic ideal of a group". He hates Plato, so he launched into some Hegelian attack on Plato.
Interesting, I'm not sure if I recall anyone saying they really don't like Plato?
John Baez said:
2-categories and such are just as bad. All this stuff goes completely against the way he thought categories should be used.
In what sense?
Todd Trimble said:
The nLab has all the potential in the world to be largely ignored, because of problems which remind me of the problems with understanding Lawvere.
John Baez said:
The nLab has done a lot to bring Lawvere's ideas into broader view.
I think the nLab will generally have an impact. There are some concepts that, on the web, only get mentioned there, and so the site has a high google-ability or stumble-upon-ability looking into the future.
You know, I first heard about n-categories in a chatroom from a 16-year-old Italian boy in 2013. He didn't know what a differential equation was, but he could teach me about Russell's type theory and lambda calculus.
Nikolaj Kuntner said:
Interesting, I'm not sure if I recall anyone saying they really don't like Plato?
Are you kidding?
dusko said:
if "gloomy path" means the end of the mankind, isn't worrying about that similar to worrying that i will die? seems superfluous. i spent a lot of time reading heidegger (at high school!) and i thought that i understaood him. but i don't think i understand him any more. i think he does not mean what he says. many people don't speak for themselves, but speak for an inner voice. i don't think that inner voice is worth listening to. it has a logic, but is not interested in the same things as we. i think that is the secret of heidegger.
Not really, no. I might not be worried that I will indeed die one day, but I might be worried that future generations will live in a polluted world. In a similar vein, Marx's critique in Alienated Labour points towards a darker impact on human life than "just" death.
And similarly, Heidegger's thoughts on Technik are about how human perception is molded (sunlight and water coming to be perceived as resources, etc.), making experience today incomparable to the experience of previous human lives. A practical case: the first person who owns a car is in the advantageous position of being able to take a job 100km from his house, and 100 years later the world has changed so much because of this possibility that people end up living in places from which they can't get to work at all. A more philosophical case: scientific practice confers advantages by focusing on what is measurable (and thus optimizable), and 100 years later people take the point of view that what is not measurable is not real. The "cyclic world" perspective, where things eventually always repeat, is a bit more hopeful than the post-Hegelian world-strives-towards perspective, since our current projections are worse than just some people eventually having to pass.
James Wood said:
Nikolaj Kuntner said:
Interesting, I'm not sure if I recall anyone saying they really don't like Plato?
Are you kidding?
I'd say even those who reject his ideas "like" Plato, and continued or incorporated his thought.
And I don't count "corruptor of the youth" arguments against him.
Like John corrupted the youth with all his far too many talks.
Hi John,
thanks for all this. It is really nice and interesting to hear your history and its turns. And physics and maths -- it is an immense span of topics, and very interesting to hear your path through them. And those things about feeling insecure or different -- I think many mathematicians feel like that in one way or another. But it is almost shocking to hear such things from somebody so super-cool and relaxed and communicative like you. That's something to think about. Perhaps we can all learn to be more relaxed and communicative.
James Wood said:
Nikolaj Kuntner said:
Interesting, I'm not sure if I recall anyone saying they really don't like Plato?
Are you kidding?
If someone were influenced by Hegel they would have a quick way to being anti-Platonic I guess.
Nietzsche laid a lot of errors at the feet of Plato. Plato's positing of an eternal heaven of non-mutable abstract objects is one of the reproaches, along with its later effects on Christianity. Hegel was perhaps the first in modern times who saw thought as evolving dynamically. There were ancients, esp. Empedocles, who had prototypical evolutionary views of life and the cosmos, as I discovered recently in The End of the Metaphysics of Being and the Beginning of the Metacosmics of Entropy. Heraclitus' "No man ever steps in the same river twice" would be another example emphasizing change.
I wonder if one can find the following reconciliation between Plato and Heraclitus. Could one think of Plato's philosophy as Algebraic (influenced as Plato was by writing), and Heraclitus as co-Algebraic? (Co-algebras being the mathematics of processes, change, modalities, ...)?
John Baez said:
Because people suck at teaching... including teaching how to teach. :upside_down:
Does anyone teach math professors how to teach?! I get the impression that many learn to teach by trial and error, but stop learning before they've resolved most of the errors...
John Baez said:
In other words, think of it as a mathematical game where you try to build models that have nice properties, and start cataloging them and proving theorems about them and also numerically simulating them...
This sounds like a far humbler version of what Wolfram proposed recently ( :grimacing: ), but with a grander (and so potentially more worthwhile?) scale of structure. I'm all for it, in this modest spirit at least!
John Baez said:
Even when I thought a lot about fundamental physics I found it very hard to remember all the clues and all the constraints at once. This is what real experts are supposed to be able to do. But it could be that as we progress in our understanding of physics we reach a point where no one person can keep all this stuff clearly in view! So we rely on communities... but maybe communities tend to get stuck in specific narrow programs, where basically they're adding on extra constraints of the form "the solution must be like this or we're not interested in it"... so physics gets stuck.
A little while ago on #practice: communication there was a discussion of what the future of publication and academic output should look like. This metaphor about keeping sight of all the clues gives a nice way of expressing the vision that I and others shared there: to have an open network, like a detective's pinboard, with all the clues visible to everyone. Having everything literally in sight would help us all a lot, I believe.
John Baez said:
Brendan Fong and David Spivak are going ahead with their Topos Institute, starting in January I guess, and that will turn up the excitement level an extra notch.
What is this and how do I get in on the ground floor?
dusko said:
Todd Trimble said:
Sometimes I do touch upon categorical ideas in a class, but it's almost always obliquely and without using the word "category". For example, making analogies between things they may have seen before (the gcd is like the min, and it's also like the intersection, if you look at it right). I think introducing categorical ideas by looking first at posetal examples is not a bad way to go, but even there a lot is dependent on the audience.
same here. i think we cannot avoid thinking and presenting categorically, but i almost never say the word "category". i built 2 tools where the manuals completely consist of categorical diagrams, but the word "category" is never used :)
But why not mention categories? Just so someone can pull the wool off their eyes later? At least give these students a hint of what they could discover if they look in the right place!
Morgan Rogers said:
Does anyone teach math professors how to teach?!
Universities seem to like putting on day(s)-long teaching courses for their lecturers, but it's questionable how much these actually improve teaching (as opposed to being easy to organise and giving plausible deniability).
@John Baez Do you have an active interest in work done on the cobordism hypothesis these days? I'm thinking about the work of Ayala-Francis using factorization homology in particular.
John Baez said:
Are Lambek grammars the same thing as Lambek's pregroup grammars? I vaguely heard he had two approaches to grammar.
well, the Lambek calculus is from 1958, and pregroup grammars are from the early 2000s. If I remember correctly: Lambek, Joachim (2008). "Pregroup grammars and Chomsky's earliest examples" (PDF). Journal of Logic, Language and Information. 17 (2).
both are nice as mathematical systems, but neither is good as a theory of grammar for real-life NLP (Natural Language Processing), IMO.
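For anyone curious what a pregroup-grammar derivation actually looks like, here is a tiny Python sketch (my own toy, not Lambek's formalism verbatim, and the greedy cancellation below is only adequate for small examples). Types are lists of (basic type, adjoint degree) pairs, so the pregroup contractions x·xʳ → 1 and xˡ·x → 1 both become cancellation of adjacent pairs (b, k)(b, k+1).

```python
def reduce_type(ts):
    """Repeatedly apply the pregroup contraction (b, k)(b, k+1) -> 1."""
    changed = True
    while changed:
        changed = False
        for i in range(len(ts) - 1):
            (b1, k1), (b2, k2) = ts[i], ts[i + 1]
            if b1 == b2 and k2 == k1 + 1:
                ts = ts[:i] + ts[i + 2:]  # cancel the adjacent adjoint pair
                changed = True
                break
    return ts

# Basic types: n = noun phrase, s = sentence.
# ('n', -1) encodes the left adjoint n^l, ('n', 1) the right adjoint n^r.
john  = [('n', 0)]
likes = [('n', 1), ('s', 0), ('n', -1)]   # transitive verb type: n^r s n^l
mary  = [('n', 0)]

result = reduce_type(john + likes + mary)  # reduces to the sentence type s
```

"John likes Mary" type-checks because n · (nʳ s nˡ) · n contracts down to the single basic type s.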
John Baez said:
As for Cambridge, David Michael Roberts - I think it's just that Martin Hyland and Peter Johnstone got old, and Hyland retired (did Johnstone yet???), and they didn't manage to get a successor. Years ago they tried to hire Charles Rezk (I think that's who it was), but he decided not to go.
Martin and Peter retired at the same time.
Reuben Stern said:
John Baez Do you have an active interest in work done on the cobordism hypothesis these days? I'm thinking about the work of Ayala-Francis using factorization homology in particular.
No, I never tried to follow any work on this hypothesis. Progress on this began right around the time I decided that I needed to leave the subject of n-categories and start thinking about something new: climate change, and what mathematicians can do about that. I've gradually become interested in higher categories again, but it's no fun for me to try to catch up with this line of work. The train has left the station.
dusko said:
(sorry, you are by now way ahead, but i would like to understand this.) what is the goal of homological algebra beyond computing some invariants? (please don't use words like "understanding".) i am not arguing that that is what it is. i simply don't know what more could it be.
Homological algebra is what you get when you take the full force of infinity-category theory and water it down in three different ways. These 3 ways each weaken the subject - i.e., make it less rich and less expressive. But correspondingly they also make it easier to do computations.
These 3 ways are:
1) making all the j-morphisms in your infinity-category invertible (up to higher morphisms).
2) making all forms of composition strictly associative (and obey all other laws strictly).
3) making everything maximally commutative (one says "stable").
There are thus 8 potential ways to water down infinity-category theory, based on whether or not we do each of these 3 maneuvers. This gives rise to the cosmic cube.
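As a concrete reminder of the "computational framework" that this watering-down buys you, here is a self-contained Python sketch (my own toy example, with assumed vertex/edge names): the Betti numbers of a circle computed from a boundary matrix by exact Gaussian elimination. This kind of plain linear algebra is exactly what chain complexes make possible.

```python
from fractions import Fraction

def rank(mat):
    """Rank of an integer matrix, computed exactly over the rationals."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# A circle built from two vertices v0, v1 and two edges e1 = [v0, v1], e2 = [v1, v0].
# Columns of d1 : C1 -> C0 are the boundaries of the edges (endpoint minus start point).
d1 = [[-1,  1],
      [ 1, -1]]

dim_C0, dim_C1 = 2, 2
betti0 = dim_C0 - rank(d1)   # number of connected components
betti1 = dim_C1 - rank(d1)   # independent loops (no 2-cells, so rank(d2) = 0)
```

The computation recovers the expected homology of the circle: one component and one loop.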
Joachim Kock said:
... it is almost shocking to hear such things from somebody so super-cool and relaxed and communicative like you. That's something to think about. Perhaps we can all learn to be more relaxed and communicative.
Thanks! Yes, it's good for everyone to realize that even supposedly successful people may have plenty of worries. When I was in my teens and twenties I worried about the meaning of life and whether I'd ever get a girl (more closely connected subjects than I realized). When I was in my thirties and forties I worried about whether I'd ever do anything really important. Now I worry about climate change, and about what I should do as I get old and gradually lose my mental acuity.
But somehow I keep getting more relaxed and jolly as I get older.
Morgan Rogers said:
Does anyone teach math professors how to teach?! I get the impression that many learn to teach by trial and error, but stop learning before they've resolved most of the errors...
The math grad students at U.C. Riverside are required to take a course on teaching. So in theory there is an opportunity now for them to become better teachers than the professors here. I haven't sat in on that course so I don't know how well it works.
For what it's worth, here is my advice on teaching.
James Wood said:
Nikolaj Kuntner said:
Interesting, I'm not sure if I recall anyone saying they really don't like Plato?
Are you kidding?
A lot of people hate Plato. Tweet something supporting Plato and you'll see a mob forming in no time... if anyone reads your tweets, that is. This could be a good way to see if anyone is reading your tweets.
The funny thing is that the number of people who hate Plato exceeds the number of people who've read Plato. A lot of them have only heard of something called "Platonism".
Then there are the people who hate Plato's politics; I sympathize with them more, in part because a lot of these people seem to have actually studied the politics of that era.
Morgan Rogers said:
John Baez said:
Brendan Fong and David Spivak are going ahead with their Topos Institute, starting in January I guess, and that will turn up the excitement level an extra notch.
What is this and how do I get in on the ground floor?
Try this:
I don't know if the building will have more than one floor, but it should be fairly easy to get in on the ground floor.
Valeria de Paiva said:
I wonder what have you learned/approve of/would do differently about your experience of dedicating yourself to climate change in Singapore.
It turns out that the original Azimuth Project idea - getting people to volunteer to help figure out stuff to do about climate change - did not work very well. Academics did not buy into this effort, perhaps because it had no institutional backing and I was an amateur in this area. The people who helped out were mainly computer programmers working in industry. But I was ineffective at harnessing their power.
I lived in Singapore continuously from 2010 to 2012, mainly working on this.
By the end I started realizing that I should work within an academic framework, and develop "applied category theory" as a way of luring mathematicians into doing something practical.
The dream was (and is) that this will get some of them to develop the new ways of thinking that we need to deal with the Anthropocene - a very broad process that has climate change as just one of its most dramatic manifestations.
In case anyone is curious, here are some slides for a 15-minute talk I gave on that:
I'm encouraged by how applied category theory is taking off, and discouraged by how much of it seems to be "more of the same" - developing mathematics in order to help the human race do what we're already doing, but faster and more "efficiently".
So, I want to step back, stop cranking out papers, and think a bit harder about how to nudge things in a good direction. I've stopped taking new students, since working with grad students is like driving a train: it takes a long time to pick up speed, once it's going it's unstoppable, but you can't take sharp turns.
The people who helped out were mainly computer programmers working in industry. But I was ineffective at harnessing their power.
Why were you not effective? What did you task them to do, if that was an approach?
What do sharp turns look like?
John Baez said:
For what it's worth, here is my advice on teaching.
oh boy, the advice is SO good!!!
I think I rediscovered some of the ideas along the way, but it does feel good to have them explicitly said by you!
Do you mind if I translate and post it to the Brazilian logic list?
That would be great, Valeria!
Nikolaj Kuntner said:
The people who helped out were mainly computer programmers working in industry. But I was ineffective at harnessing their power.
Why were you not effective?
A bunch of reasons. One is that these people wanted to write programs, but I had trouble thinking of programs that should be written. I think we could have become fairly good at writing small pieces of software designed to illustrate climate principles, like this:
Perhaps our best effort was our attempt to study and re-code a paper on the El Nino cycle and climate networks:
But I felt that going down this road would have required becoming a climate scientist and finding a team of better, more experienced climate scientists to work with. And that did not seem like a good use of my skills. There are already lots of good climate scientists! And I don't just want to understand the climate: I want to help fix it.
I also discovered - the hard way - the incredible power of working within the academic system, by trying to work outside it.
Academia provides me with grad students, money to run conferences, and most of all the incentive for other researchers to work on projects allied to mine.
Nikolaj Kuntner said:
What do sharp turns look like?
Well, last week I spent a lot of time thinking about Galois cohomology for the purposes of understanding the role of division algebras in quantum mechanics. Now I'm going to change gears and write a paper on Noether's theorem. That's a fairly small turn for me - they're both projects in trying to understand the foundations of physics. But if I had a grad student working on these things it'd count as a sharp turn, because they'd need to spend months learning the prerequisites. And just imagine if I told this grad student "next month I want to work on chemical reaction networks".
The great thing about spending a long time studying lots of stuff is that it makes these sharp turns possible. I could never have done this when I was younger!
If you look at the homotopy theorist Jack Morava's recent papers on the arXiv you'll see what I mean by "sharp turns".
"On the canonical formula of C Lévi-Strauss, II"
"Topological invariants of some chemical reaction networks"
"On formal groups and geometric quantization"
"Toward a Galois theory of the integers over the sphere spectrum"
"Diagonalizing the genome II: toward possible applications"
One of the rules we had (and have) on the Azimuth Project is: no discussion of politics, esp. party politics. This is difficult because in some ways the problem of climate change is very much about politics. But we adopted this because forums that discuss politics tend to either become fistfights or adopt a political stance that turns off lots of people (while perhaps attracting lots of others).
The idea of the Azimuth Project was that we'd focus on science, not politics.
I'll check out that talk.
Rongmin Lu said:
The initial goal of category theory was to formally define what a "natural transformation" is. The problem that motivated Eilenberg and Mac Lane came from homological algebra, and once they solved that problem, they were able to do more with homological algebra. In that sense, category theory was no more than a tool to be used in homological algebra.
But it turns out that category theory wasn't just a tool of homological algebra. It turned out that you could do a lot more with it, like set up an alternative foundation for mathematics, structure computer programs, etc.
So you're right that the original goal of homological algebra was to compute invariants, but that doesn't mean homological algebra can't evolve into something else. As it turns out, the chain complexes studied in homological algebra are actually invariants themselves. The new realisation that John's talking about is that the proper objects of study in homological algebra should be chain complexes.
In some sense, the invariants that chain complexes compute are shadows on the wall cast by the chain complexes themselves. Rather than computing those invariants, the people who work with derived geometry and higher category theory want to study the complexes themselves because they're believed to "contain more information".
there is hardly any doubt about any of what you are saying, except maybe the very last part. homological algebra, just like any other method or even any piece of software, can certainly evolve into anything. but i thought john said that he saw that homological algebra has evolved into something different from a method to compute things. so my question was what.
i imagine that that new thing might be the subject of the belief, that you mention, that simplicial complexes "contain more information". what about? the original idea was that they should reduce geometric invariants to combinatorial invariants. fair enough. now there is a belief that they actually contain more information. i don't see how that can be. maybe information about something else? also, if we believe that something contains some information about something, shouldn't we test that belief by spelling out some adjunction? if it does not go, then what are the grounds for the belief? discussing beliefs has led to a lot of confusion, historically.
John Baez said:
There are potentially 2^3 = 8 ways to water down infinity-category theory, based on whether or not we do each of these 3 maneuvers. This gives rise to the cosmic cube.
yes, i think i have seen another set of your slides about this. very nice and elegant. i do love it as a story. what worries me about the strategy, though, is that people usually simplify problems by reducing them to lower dimensions, whereas here the strategy seems to be to push them up. that has been curious with the homotopy hypothesis from the outset. the pictures become beautiful, but they don't become simple. isn't that the direction where we should go? if we are to overcome the era of printing and of command line mathematics, and go back to thinking in pictures and geometry, shouldn't we use pictures that live in our own space?
@John Baez you mentioned that you studied with quillen. did you have a chance to talk to him about the homotopy hypothesis?
Lol, I've never heard "caveat lector", but it sounds great, I'll re-use this for sure :slight_smile:
John Baez said:
He wrote:
In 2006-2007 a lot of external and internal events happened to me, after which my point of view on the questions of the “supernatural” changed significantly. What happened to me during these years, perhaps, can be compared most closely to what happened to Carl Jung in 1913-14. Jung called it “confrontation with the unconscious”. I do not know what to call it, but I can describe it in a few words. Remaining more or less normal, apart from the fact that I was trying to discuss what was happening to me with people whom I should not have discussed it with, I had in a few months acquired a very considerable experience of visions, voices, periods when parts of my body did not obey me, and a lot of incredible accidents. The most intense period was in mid-April 2007 when I spent 9 days (7 of them in the Mormon capital of Salt Lake City), never falling asleep for all these days.
Almost from the very beginning, I found that many of these phenomena (voices, visions, various sensory hallucinations) I could control. So I was not scared and did not feel sick, but perceived everything as something very interesting, actively trying to interact with those “beings” in the auditory, visual and then tactile spaces that appeared around me (by themselves or by invoking them). I must say, probably, to avoid possible speculations on this subject, that I did not use any drugs during this period, tried to eat and sleep a lot, and drank diluted white wine.
Another comment: when I say “beings”, naturally I mean what in modern terminology are called complex hallucinations. The word “beings” emphasizes that these hallucinations themselves “behaved”, possessed a memory independent of my memory, and reacted to attempts at communication. In addition, they were often perceived in concert in various sensory modalities. For example, I played several times with a (hallucinated) ball with a (hallucinated) girl—and I saw this ball, and felt it with my palm when I threw it.
Despite the fact that all this was very interesting, it was very difficult. It happened for several periods, the longest of which lasted from September 2007 to February 2008 without breaks. There were days when I could not read, and days when coordination of movements was broken to such an extent that it was difficult to walk.
I managed to get out of this state due to the fact that I forced myself to start math again. By the middle of spring 2008 I could already function more or less normally and even went to Salt Lake City to look at the places where I wandered, not knowing where I was, in the spring of 2007.
Source?
@Sayantan Roy You can just google it and find various copies. The original seems to be in Russian (https://baaltii1.livejournal.com/200269.html)
Rongmin Lu said:
No worries. "Caveat lector" is listed in Wiktionary and Merriam-Webster, so it's legit.
Well, it's grammatical in Latin, so it's legit anyway :slight_smile:
dusko said:
John Baez you mentioned that you studied with quillen. did you have a chance to talk to him about the homotopy hypothesis?
No, I studied with him at MIT ~1982-1985, and around then he left MIT and got a job at Oxford, and I never saw him again. When I was at MIT I also took two courses on homotopy theory with G. W. Whitehead, but it was basic stuff; I never went deeper so I never attended Kan's seminar. Much later I spent a bunch of time reading Quillen's Homotopical Algebra, which is his book on model categories.
dusko said:
yes, i think i have seen another set of your slides about this. very nice and elegant. i do love it as a story. what worries me about the strategy, though, is that people usually simplify problems by reducing them to lower dimensions, whereas here the strategy seems to be to push them up.
I don't think of this as pushing something up. From the eyes of homotopy theory a 2-sphere is an infinity-groupoid - and a very complicated one indeed: nobody even knows all the groups of equivalence classes of automorphisms of automorphisms of automorphisms... automorphisms of an object in this infinity-groupoid! It's a very rich and fascinating structure. When we "stabilize" the 2-sphere, it simplifies a lot and becomes the sphere spectrum, which is more manageable but still quite rich. When we then "strictify" it, it becomes even simpler and turns into a very simple chain complex. So homological algebra is a great simplifier. It's basically a way of turning infinity-groupoids into slightly glorified linear algebra. It serves as a stepping-stone toward understanding the really interesting stuff in homotopy theory: the world of infinity-groupoids.
if we are to overcome the era of printing and of command line mathematics, and go back to thinking in pictures and geometry, shouldn't we use pictures that live in our own space?
I don't think so. Pictures are good for 2- and 3-dimensional things, but a lot of the fun in topology lives in higher dimensions, where we need to use concepts rather than pictures. Of course algebraic topology uses lots of commutative diagrams, and that's a great way to keep things vivid.
@Sayantan Roy asked for the source of my Voevodsky quote. He did two interviews in Russian and I translated a small portion of one using Google Translate and then fixed grammar and style mistakes:
• Интервью Владимира Воеводского (часть 1), 1 July 2012. English version via Google Translate: Interview with Vladimir Voevodsky (Part 1).
• Интервью Владимира Воеводского (часть 2), 5 July 2012. English version via Google Translate: Interview with Vladimir Voevodsky (Part 2).
Hmm, now the translation doesn't seem to be working...
John Baez said:
Hmm, now the translation doesn't seem to be working...
If you install Google Translate extension and then open the page then it seems to work (it worked for me at least when I was accessing the links via my PC).
John Baez said:
I was very lucky to have James Dolan explain a lot of category theory to me in a really clear way, and now I can teach other people that stuff.
I'm wondering where James Dolan is now. It seems he is an awesome person with a lot of insight, but I've been having trouble finding his academic webpage etc.
He lives on Long Island now, where he was born and raised. He doesn't have an academic affiliation at the moment.
Yeah, Macquarie didn't really work out for him. Sadly I never got over there while he was in Australia, so we didn't get to meet in person (I chatted with him a couple of times a decade or more ago via Skype, but that's it)
Yeah, forgot about that.
To make it complete you should add something like 'you are supposed to be on the Steering Committee, so ....' :-)
Well, 'tis done: https://ncatlab.org/nlab/show/James+Dolan
Is Urs here? I always felt that understanding his research via the nlab was basically undoable, but maybe an extensive Q&A here may help shed some light over the research he's doing (at least for me!)
To the best of my knowledge, Urs isn't here (and neither is his ally David Corfield). Mike Shulman, who is here occasionally, can probably help to shed light since he is a sometime collaborator, and David Roberts too perhaps? I don't consider myself too qualified to speak about this, but one big key as I understand it is to wrap one's head around how he develops Lawvere's notion of cohesion in the (∞,1)-topos context and then applies that to a whole array of topics such as differential cohomology.
I could probably say something, depending on what you want to know.
Sam Tenka said:
I'm wondering where James Dolan is now. It seems he is an awesome person with a lot of insight, but I've been having trouble finding his academic webpage etc.
He never got a PhD. He's living on Long Island with his mother, and when I checked with @Todd Trimble and @Simon Burton - both of whom talked to him fairly recently - he is doing okay despite the high prevalence of coronavirus in New York.
Simon recently wrote here:
I also have a semi-regular collaboration going on with James Dolan. James is interested in categorified algebraic geometry, homotopy theory and the Langlands program.
I wish someone would talk to James Dolan a lot about what he's thinking about these days, and write up that stuff in a way that everyone can see. I did that sort of thing from about 1994 to about 2010.
When I was last talking to him a lot, he was working on doctrines in algebraic geometry. He wrote some introductory notes here:
and I scribbled down some more advanced thoughts here:
Some of these ideas were later developed by Martin Brandenburg:
I really like this stuff! These days it would probably be subsumed by some ∞-categorical work, but there's a certain charm to the stripped-down 1-categorical approach.
Rongmin Lu said:
Q: Why is this discouraging?
I suppose that, for certain definitions of "do what we're already doing", it can seem like we're going down a path of doom and gloom. But one thing that we're already doing is trying to build a "better" world, whatever that means. And quite a lot of that would actually entail doing things faster and more efficiently.
So why is that a "bad" thing?
I can answer this one: it's the Jevons paradox, that greater efficiency typically doesn't result in reduced consumption; rather, it facilitates consumption by making scarce resources more accessible. Doing 'more of the same' therefore often means failing to reallocate resources to the urgent problems while indirectly exacerbating them. :grimacing:
Rongmin Lu said:
John Baez said:
I'm encouraged by how applied category theory is taking off, and discouraged by how much of it seems to be "more of the same" - developing mathematics in order to help the human race do what we're already doing, but faster and more "efficiently".
I'm coming back to this because I have a question.
Q: Why is this discouraging?
I suppose that, for certain definitions of "do what we're already doing", it can seem like we're going down a path of doom and gloom. But one thing that we're already doing is trying to build a "better" world, whatever that means. And quite a lot of that would actually entail doing things faster and more efficiently.
So why is that a "bad" thing?
This is a big subject, which is worth a lot of thought. You mentioned speed and efficiency.
First, the human race is pushing the biosphere to dangerous tipping points by ever-increasing modification of the basic parameters of the atmosphere, oceans, forests, etc. This is the Anthropocene phenomenon I summarized very briefly here (see especially page 3). Speeding up this process doesn't seem like the best way to ensure a happy ending. Of course we need to do some things quickly, but an overall acceleration, caused by improved technologies of a highly generalized sort (e.g. applied category theory), is just as likely to hurt as to help.
Second, it's not at all clear that increased efficiency reduces resource use. Quite possibly it does the opposite. Here's a good article on that:
All this makes me think that instead of merely trying to accelerate technological change through better mathematics, we should be applying mathematics to increase wisdom about the course we're taking.
My one-word buzzword for this is "ecotechnology".
But perhaps instead of focusing on a kind of technology, it would be better to focus on understanding living systems and what makes them thrive.
Seems like the energy consumption topic would make a lot of enemies.
"Do I want to be the person talking about the is and ought of managing population growth?"
"Do I want to be the person talking about restricting the way people go on about business?".
"Do I want to be the person talking about restricting travel?".
Scientists and hackers want to do their thing - bonus if it has an impact that people would like or be interested in them for it - but getting into argument might not seem compatible with that.
Yes, the point of Azimuth was not to design regulations, which is what your three topics are about. It was mainly about collecting and interpreting data, and secondarily about designing technological solutions to problems. Scientists and engineers prefer these two modes of action.
@John Baez wrote
But perhaps instead of focusing on a kind of technology, it would be better to focus on understanding living systems and what makes them thrive.
Ruth Garrett Millikan wrote an interesting book, Language, Thought and Other Biological Categories, in 1984. It starts from an enlarged conception of the biological category, as concerning things that need the concept of reproduction to be understood, as opposed to physical objects, where that notion does not appear. That allows her to define the proper function of body parts as given by evolution, and from there to place language in the biological category. One would need to be careful not to reduce thought to the biological, and to bring in Robert Brandom's point that speaking a language also brings in the notion of inference, a duty to answer for what one says, and the ability to question what someone says. But she uses it to good effect to bring to light an aspect of language that may be overlooked if one takes only the inferentialist view. This way of thinking opens up the view of thought as evolutionary, and so opens up a space to talk about the ecology of ideas, of technics, and of life itself. I don't think she does that (but I have not read all her work). Her notion of a category here is not that of category theory, but perhaps something could be developed along those lines.
That book could be interesting. It will take a lot of thought to go through the literature on biology and ecology and "general systems theory" and extract some insights that can be turned into mathematics in a useful way. I want to do it. My work on network theory was supposed to be a first step - but it's only a small step.
Oh, I see lots of stuff on Petri nets there. So Ecology and Scala are related via Petri nets :thinking: ...
Speaking of systems, I don't see anything there on coalgebras. Having read Universal coalgebra: a theory of systems, I thought it might be relevant?
(It is difficult to see from the article, which is purely mathematical, and does not give a history of the concept, analyses of it, or why it is related to the concept of systems (or which concept of systems it is related to). I just watched a talk by Catarina Dutilh Novaes, Carnap meets Foucault: Conceptual engineering and genealogical investigations, where she pointed out that Carnap leaves an open space in his work for genealogical thinking. I think this is related to how Carnap explains the evolution of concepts, where he states that before giving a definition one should consider the usage of the term, give examples, etc. This is usually done in philosophy, but perhaps less in mathematics, where I guess the structure is meant to speak for itself.)
Petri nets, electrical circuits and Markov chains are nice, simple examples of networks, so I've been using them as test cases for general theories of networks, like Brendan Fong's decorated cospans or Kenny Courser's structured cospans. (These are students of mine who developed general approaches to networks.)
I haven't thought enough about coalgebras to incorporate them in my thinking on networks. More precisely, I've thought a lot about coalgebras in representation theory (a Hopf algebra is a coalgebra with extra structure), but not about the Turi-Plotkin paper Towards a mathematical operational semantics, which seems like a good starting-point for me to understand coalgebraic methods in computer science.
@John Baez one place this (coalgebra) may emerge in your recent work is the notion of equivalence induced by "bisimulation" -- two processes (resp nets) are bisimilar if each simulates the other, with simulation a relation that sends objects of one to objects of the other such that every transition in the first process has a matching transition in the second process, roughly. This is a pretty fine-grained notion of equivalence, and there are also "up to" methods for weakening it, etc. So you actually get a zoo of different notions of bisimulation. Anyway, the fanciest way for organizing this zoo and understanding the approach in general is to think of it as a categorical equivalence of the coalgebras induced by these processes.
pedantic note: "each simulates the other" is actually weaker than bisimulation
much as "each embeds into the other" is weaker than isomorphism
A bit like the difference between bi-interpretability vs mutually interpretable?
almost certainly
to be precise—there is a definition of what it means for a relation to be a simulation; the relation of "simulates" is defined to be the union of all relations that are themselves "simulations"
then for a relation to be a bisimulation means that both it and its converse relation are simulations; and the relation of "bisimulates" is defined to be the union of all relations that are themselves "bisimulations"
so if a simulates b and b simulates a, that means there are simulation relations R and S such that a R b and b S a
if a bisimulates b, that means there is a bisimulation relation R such that a R b and b R a
or if you prefer: it means that there are simulation relations R and S such that a R b and b S a, and R = S⁻¹
sarahzrf said:
or if you prefer: it means that there are simulation relations R and S such that a R b and b S a, and R = S⁻¹
I've always felt I didn't get the bisimulation business as deeply as I should have given my work in Petri nets. Shall we make a new topic in #theory: concurrency about this? :slight_smile: I'd love to learn more
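For anyone who wants to play with sarahzrf's definitions before that topic gets going, here's a small illustrative Python sketch (all names are my own, not a standard API). It computes the largest simulation and the largest bisimulation between two finite labelled transition systems by greatest-fixpoint refinement, and exhibits the classic pair of processes that simulate each other without being bisimilar:

```python
# A labelled transition system: dict mapping state -> list of (label, next_state).

def largest_simulation(trans1, trans2):
    """Greatest fixpoint: start with the full relation and discard
    pairs (s, t) where some move of s cannot be matched by t."""
    R = {(s, t) for s in trans1 for t in trans2}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(R):
            ok = all(any(b == a and (s2, t2) in R for (b, t2) in trans2[t])
                     for (a, s2) in trans1[s])
            if not ok:
                R.discard((s, t))
                changed = True
    return R

def largest_bisimulation(trans1, trans2):
    """Same refinement, but a pair survives only if moves match in BOTH directions."""
    R = {(s, t) for s in trans1 for t in trans2}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(R):
            fwd = all(any(b == a and (s2, t2) in R for (b, t2) in trans2[t])
                      for (a, s2) in trans1[s])
            bwd = all(any(b == a and (s2, t2) in R for (b, s2) in trans1[s])
                      for (a, t2) in trans2[t])
            if not (fwd and bwd):
                R.discard((s, t))
                changed = True
    return R

# P = a.b + a (an extra 'a' move to a dead state), Q = a.b
P = {'p0': [('a', 'p1'), ('a', 'p3')], 'p1': [('b', 'p2')], 'p2': [], 'p3': []}
Q = {'q0': [('a', 'q1')], 'q1': [('b', 'q2')], 'q2': []}

assert ('p0', 'q0') in largest_simulation(P, Q)        # Q simulates P
assert ('q0', 'p0') in largest_simulation(Q, P)        # P simulates Q
assert ('p0', 'q0') not in largest_bisimulation(P, Q)  # yet P, Q are not bisimilar
```

The last assertion fails to find a bisimulation because P's 'a'-move to the dead state p3 would have to be matched by q1, whose 'b'-move p3 cannot answer.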
What's your take on patents?
Especially in relation with ecoscience.
It's been great talking about my past here... I think there are a few things we haven't gotten into:
1) work with Urs Schreiber on "higher gauge theory", Lie n-algebras and Lie n-groups,
2) work with John Huerta on the octonions, and how to use them to build super Lie 2-algebra and super Lie 3-algebras for use in string theory and M-theory,
3) work with James Dolan on groupoidification in quantum theory and combinatorics: Feynman diagrams, Hall algebras and Hecke algebras.
But I'd actually prefer to think about the future. I plan to start spending more time writing, so I'm not taking new grad students. I want to nicely write up a lot of stuff that appeared here and there in This Week's Finds.
I'll be on sabbatical this fall: my wife Lisa plans to be at the Harvard Center for Hellenic Studies, which ironically is in Washington D.C. (the place with the worst incidence of coronavirus right now), and I'll follow her there (assuming she goes there).
So, I'm starting on a few writing projects that I'll continue until the fall.
Right now I'm writing a paper "Getting to the bottom of Noether's theorem", based on a talk I gave at a conference in honor of the 100th anniversary of that theorem.
There are some technical errors in the talk that I need to fix. But I'm also trying to dig deeper now into the relation between Lie algebras and Jordan algebras, each of which captures one half of the meaning of "observables" in quantum mechanics. So that's been fun.
I've gotten the idea that the Lie algebra structure of observables captures the "internal" meaning of observables - they generate symmetries, e.g. dynamics - while the Jordan algebra structure captures their "external" meaning, namely how you observe them "from outside". I don't know if I can make this precise.
Sometime next year I'm going to teach a course on classical mechanics, and I'll try to use that to help finish my book Lectures on Classical Mechanics.
More about "future stuff":
1) I've applied for a Leverhulme Trust grant to visit Tom Leinster in Edinburgh in the fall of 2021 and the fall of 2022. In the grant proposal the plan is for me to give lectures on topics from This Week's Finds. Stuff like this:
1.1. Young diagrams. Young diagrams are simple combinatorial structures that show up in a myriad of applications. Among other things they classify conjugacy classes in symmetric groups, irreducible representations of symmetric groups, irreducible representations of the groups SL(n;F) for any field F of characteristic zero, and irreducible complex representations of the groups SU(n). All these facts are tightly connected, and the central idea is that Young diagrams are irreducible objects in the category of "Schur functors". These are functors that know how to act on the category of representations of any group, and other similar categories as well.
1.2. Dynkin diagrams. Coxeter and Dynkin diagrams classify a wide variety of structures: most notably, finite groups of transformations of R^n generated by reflections (called "Coxeter groups"), lattices having such reflection groups as their symmetries, and simple Lie algebras. The special class of "simply laced" Dynkin diagrams also classifies the Platonic solids, the quivers with finitely many indecomposable representations, and other structures. Thus, a tour of Coxeter and Dynkin diagrams that doesn't get bogged down in the sometimes rather technical proofs of the fundamental results is a good way to see connections between a wide range of mathematical topics.
1.3. q-Mathematics. A surprisingly large portion of mathematics can be seen as a special case of more general "q-mathematics" arising when we set the parameter q equal to 1. For example, the usual derivative of a function on the real line is the q = 1 case of something called the "q-derivative", and there is a subject called "q-calculus" that generalizes ordinary calculus to other values of q. There are important applications of q-mathematics to the theory of quantum groups (a generalization of ordinary groups), and also to algebraic geometry over F_q, the field with q elements, where q is a prime power. The connections between these two sources of q-mathematics remain somewhat mysterious, in part because there is no field with one element. There are, however, many patterns that suggest something like a field with one element should exist.
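A quick numerical sketch of the q-derivative claim in 1.3 (function names are my own): for f(x) = x^n the q-derivative is [n]_q x^(n-1), where [n]_q = 1 + q + ... + q^(n-1), which tends to n x^(n-1) as q → 1.

```python
def q_derivative(f, x, q):
    # (f(qx) - f(x)) / (qx - x), defined for q != 1 and x != 0
    return (f(q * x) - f(x)) / ((q - 1) * x)

def q_integer(n, q):
    # [n]_q = 1 + q + ... + q^(n-1)
    return sum(q ** k for k in range(n))

f = lambda x: x ** 3
x, q = 2.0, 1.5

# the q-derivative of x^3 is [3]_q * x^2
assert abs(q_derivative(f, x, q) - q_integer(3, q) * x ** 2) < 1e-9

# as q -> 1 we recover the ordinary derivative 3x^2
assert abs(q_derivative(f, x, 1.0 + 1e-6) - 3 * x ** 2) < 1e-3
```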
1.4. The three-strand braid group. The three-strand braid group has striking connections to the trefoil knot, rational tangles, the group SL(2;Z), and modular forms. This group is also the simplest of the "Artin-Brieskorn groups", a class of groups associated to Dynkin diagrams, which map surjectively to the Coxeter groups. Thus, this group is a good starting-point for examining a range of easily visualized but important mathematical structures.
1.5. The quaternions. The quaternions are the third and least familiar of the three associative normed division algebras over the real numbers: R, C and H. They are also the first really nontrivial example of a Clifford algebra. Their unit sphere forms a group isomorphic to SU(2), so they are useful in getting a clear mental picture of the double cover SU(2) → SO(3) and the spinor representation of SU(2).
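Hamilton's relations and the multiplicativity of the norm, which is what "normed division algebra" asks for, can be verified mechanically. A minimal Python sketch (names are my own):

```python
def qmul(p, q):
    # Quaternion product of p = a + bi + cj + dk and q = e + fi + gj + hk
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def norm2(p):
    # squared norm a^2 + b^2 + c^2 + d^2
    return sum(x * x for x in p)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
minus_one = (-1, 0, 0, 0)

assert qmul(i, i) == qmul(j, j) == qmul(k, k) == minus_one
assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)  # noncommutative!

# |pq|^2 = |p|^2 |q|^2: the key property of a normed division algebra
p, q = (1, 2, 3, 4), (5, 6, 7, 8)
assert norm2(qmul(p, q)) == norm2(p) * norm2(q)
```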
1.6. Clifford algebras and Bott periodicity. The Clifford algebra Cliff(n) is an associative algebra over R freely generated by n anticommuting elements that square to -1. We have Cliff(0) = R, Cliff(1) = C and Cliff(2) = H. The Clifford algebras are important in understanding the double covers of the rotation groups SO(n), called the "spin groups" Spin(n). They also provide access to the spinor representations of these spin groups. Bott periodicity, an important phenomenon in topology, ultimately arises from the fact that Cliff(n+8) is isomorphic to the algebra of 16 × 16 matrices with entries in Cliff(n). The quaternions are a prerequisite for this topic.
1.7. The threefold way. Irreducible representations of groups on real or complex vector spaces come in three different kinds - a fact that ultimately arises from the three division algebras R, C, and H. Freeman Dyson called this fact, and various related results, the "threefold way". The quaternions are a prerequisite for this topic.
1.8. The tenfold way. When we generalize algebras to "superalgebras"-
that is, algebras in the category of Z/2-graded vector
spaces -we find that the three real division algebras fit into a larger
collection: the ten real "super division algebras". All ten of these al-
gebras are Clifford algebras: eight are real Clifford algebras and two
are complex Clifford algebras. This mathematics has recently found
applications to condensed matter physics, where it is called the \ten-
fold way". The ten real super division algebra are also connected to
the ten infinite families of symmetric spaces. The threefold way is a
prerequisite for this topic.
1.9. The octonions. Besides the three associative normed division algebras over the real numbers, there is a fourth one that is nonassociative: the octonions. They arise naturally from the fact that Spin(8) has three irreducible 8-dimensional representations. The exceptional Lie algebras can all be constructed using octonions. This is especially attractive in the case of E8, whose weight lattice can be identified with the "Cayley integral octonions": a lattice of octonions closed under multiplication. The quaternions and Clifford algebras are prerequisites for this topic.
1.10. The exceptional Jordan algebra. A "Euclidean Jordan algebra" is a real vector space with a commutative product that is power-associative (i.e. iterated products have the same value regardless of parenthesization) and has the property that x_1^2 + ... + x_n^2 = 0 implies x_1 = ... = x_n = 0. Such algebras were developed as a framework for understanding algebras of observables in quantum mechanics, but also arise naturally in the study of axiomatic projective geometry. Jordan, Wigner and von Neumann classified these algebras and found that in addition to some infinite series, there was one exception: the algebra of 3 × 3 self-adjoint octonionic matrices. This "exceptional Jordan algebra" gives a simple explanation of the exceptional Lie groups F4 and E6. The octonions are a prerequisite for this topic.
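The Jordan product behind 1.10 is a ∘ b = (ab + ba)/2. Here's a small numerical sketch (helper names are my own) checking, on self-adjoint 2×2 complex matrices, that this product is commutative and satisfies the Jordan identity (a ∘ b) ∘ (a ∘ a) = a ∘ (b ∘ (a ∘ a)), even though matrix multiplication itself is noncommutative:

```python
def mat_mul(A, B):
    # product of 2x2 complex matrices given as nested tuples
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2))
                 for i in range(2))

def mat_add(A, B):
    return tuple(tuple(A[i][j] + B[i][j] for j in range(2)) for i in range(2))

def scale(c, A):
    return tuple(tuple(c * A[i][j] for j in range(2)) for i in range(2))

def jordan(A, B):
    # the Jordan product: symmetrized matrix multiplication
    return scale(0.5, mat_add(mat_mul(A, B), mat_mul(B, A)))

def close(A, B, eps=1e-12):
    return all(abs(A[i][j] - B[i][j]) < eps for i in range(2) for j in range(2))

# two self-adjoint matrices
a = ((1, 2 + 1j), (2 - 1j, 3))
b = ((0, 1 - 2j), (1 + 2j, -1))

assert close(jordan(a, b), jordan(b, a))        # the Jordan product is commutative
aa = jordan(a, a)
assert close(jordan(jordan(a, b), aa), jordan(a, jordan(b, aa)))  # Jordan identity
assert not close(mat_mul(a, b), mat_mul(b, a))  # though ab != ba
```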
1.12. Euler characteristic and homotopy cardinality. We all know what it means for a set to have 6 elements, but what sort of thing has -1 elements, or 5/2? Surprisingly, these questions have nice answers. The Euler characteristic of a space is a generalization of cardinality that admits negative integer values, while the homotopy cardinality is a generalization that admits positive real values. These concepts shed new light on basic mathematics. For example, the space of finite sets turns out to have homotopy cardinality e, and this explains the key properties of the exponential function. Euler characteristic and homotopy cardinality share many properties, but it's hard to tell if they are the same, because there are very few spaces for which both are well-defined. However, in many cases where one is well-defined, the other may be computed by dubious manipulations involving divergent series - and the two then agree! We discuss some progress on unifying them - notably Leinster's Euler characteristic of a category - and also some obstacles, noticed by John Lerman.
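The "homotopy cardinality e" claim in 1.12 can be tested numerically: the homotopy cardinality of a groupoid is the sum over isomorphism classes of 1/|Aut|, and the n-element set has automorphism group S_n of order n!, so for the groupoid of finite sets the sum is Σ 1/n! = e.

```python
import math

# homotopy cardinality of the groupoid of finite sets:
# one isomorphism class per n, each weighted by 1/|S_n| = 1/n!
cardinality = sum(1 / math.factorial(n) for n in range(20))

assert abs(cardinality - math.e) < 1e-12
```

Truncating at n = 20 is harmless here, since the tail of the series is smaller than 1/20!.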
1.13. The Rosetta Stone. In physics, Feynman diagrams are used to reason about quantum processes. In the 1980s, it became clear that underlying these diagrams is a powerful analogy between quantum physics and topology. Namely, a linear operator behaves very much like a "cobordism": a manifold representing spacetime, going between two manifolds representing space. This led to a burst of work on topological quantum field theory and "quantum topology". But this was just the beginning: similar diagrams can be used to reason about logic, where they represent proofs, and computation, where they represent programs. By now it is clear that there is an extensive network of analogies between physics, topology, logic and computation. The concept of "closed symmetric monoidal category" serves as a kind of Rosetta Stone, allowing us to translate concepts between these fields.
These are some of my favorite things. It should keep me plenty busy writing them up, though luckily I've already written about them and just need to take that stuff and polish it.
Young diagrams show up in computations of spectral sequences as well, and I'd love to understand more about why/how.
I don't think Young diagrams have a general relation to spectral sequences - I'd be very interested to hear if I'm wrong about this! But Young diagrams could easily show up if one were studying spectral sequences connected to symmetric groups or their representations, or something else that involves Young diagrams.
@John Baez : I've worked quite a bit with Jordan algebras as well. One thing I could never really wrap my head around though is what is special about 3x3 matrices of octonions. Do you know of an intuitive explanation of why 3x3 works (say as a Jordan algebra), while 4x4 fails? I know that on a technical level it is because the 3x3 space is "not big enough" to generate the necessary counter-example to certain identities, but maybe you have some deeper insight here?
John van de Wetering said:
John Baez : I've worked quite a bit with Jordan algebras as well. One thing I could never really wrap my head around though is what is special about 3x3 matrices of octonions. Do you know of an intuitive explanation of why 3x3 works (say as a Jordan algebra), while 4x4 fails? I know that on a technical level it is because the 3x3 space is "not big enough" to generate the necessary counter-example to certain identities, but maybe you have some deeper insight here?
2x2 and 3x3 self-adjoint matrices of octonions are better because the octonions are "alternative": the subalgebra generated by any two octonions is associative. We can also take advantage of the self-adjointness: 1) the diagonal entries of a self-adjoint matrix are real, so they associate with all octonions, 2) the subalgebra generated by a and b is associative for any pair of octonions a and b.
So, the nonassociativity doesn't kill you until you get to 4x4 self-adjoint matrices of octonions.
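Here's a sketch of "alternative but not associative" in code (all names are my own, and the Cayley-Dickson sign convention used is one of several equivalent choices): build the octonions as pairs of quaternions and check that (xx)y = x(xy) while associativity fails in general.

```python
def qmul(p, q):
    # Hamilton's quaternion product
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qconj(p):
    a, b, c, d = p
    return (a, -b, -c, -d)

def omul(x, y):
    # Cayley-Dickson doubling: an octonion is a pair of quaternions,
    # (a, b)(c, d) = (ac - d*b, da + bc*)  where * is quaternion conjugation
    a, b = x
    c, d = y
    left = tuple(s - t for s, t in zip(qmul(a, c), qmul(qconj(d), b)))
    right = tuple(s + t for s, t in zip(qmul(d, a), qmul(b, qconj(c))))
    return (left, right)

def onorm2(x):
    return sum(v * v for part in x for v in part)

# three of the imaginary units: i, j, and the new unit l = (0, 1)
i_oct = ((0, 1, 0, 0), (0, 0, 0, 0))
j_oct = ((0, 0, 1, 0), (0, 0, 0, 0))
l_oct = ((0, 0, 0, 0), (1, 0, 0, 0))

# not associative: (ij)l differs from i(jl)
assert omul(omul(i_oct, j_oct), l_oct) != omul(i_oct, omul(j_oct, l_oct))

# but alternative: (xx)y = x(xy), exactly, for generic integer octonions
x = ((1, 2, 3, 4), (5, 6, 7, 8))
y = ((8, -7, 6, 5), (-4, 3, 2, -1))
assert omul(omul(x, x), y) == omul(x, omul(x, y))

# and the norm is still multiplicative: |xy|^2 = |x|^2 |y|^2
assert onorm2(omul(x, y)) == onorm2(x) * onorm2(y)
```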
Nice! @Joe Moeller and I are writing a paper on plethysm, symmetric functions and stuff.
I really look forward to reading those notes. I like the idea of a sketchy proof of the Riemann hypothesis.
I think it's very important to have some sketchy proofs of it. You need sketchy proofs before you can do actual proofs - except for really easy theorems.
In a way the Weil Conjectures are like a really sketchy proof of the Riemann Hypothesis.
But there are a lot of aspects that seem very hard to extend to the actual case.
Joe and I were motivated by the "field with one element" stuff but our paper doesn't really get into any of that stuff.
John Baez said:
Nice! Joe Moeller and I are writing a paper on plethysm, symmetric functions and stuff.
How many papers (or let's make it broader and say "texts") do you have in the works at one time, and how long does it typically take from starting the document till putting it out? What's your writing routine in that way?
Nikolaj Kuntner said:
John Baez said:
Nice! Joe Moeller and I are writing a paper on plethysm, symmetric functions and stuff.
How many papers (or, more broadly, "texts") do you have in the works at one time, and how long does it typically take from starting the document to putting it out? What's your writing routine in that regard?
Fun/depressing questions!
It's hard to estimate the mean time it takes for me to write and publish a paper because it's a long-tailed distribution. As with the Cauchy distribution, the mean may not actually exist because the integral diverges.
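For the curious, here's a tiny simulation (my aside, not from the thread) of why the mean of a Cauchy distribution fails to exist: running averages of standard Cauchy samples never settle down, no matter how many draws you take.

```python
import math
import random

random.seed(1)

def cauchy():
    # standard Cauchy via the inverse CDF: tan(pi * (U - 1/2))
    return math.tan(math.pi * (random.random() - 0.5))

running_means = []
total = 0.0
for n in range(1, 100_001):
    total += cauchy()
    if n in (10, 100, 1_000, 10_000, 100_000):
        running_means.append(total / n)

# unlike with a normal distribution, the running means keep jumping around
print(running_means)
```

By contrast, the same experiment with Gaussian draws converges to 0, by the law of large numbers (which requires a finite mean).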
Just yesterday @Joe Moeller, @Blake Pollard, John Foley and I published a paper:
which had been in the works for a fairly long time:
began writing: 25 August 2017
submitted to TAC: 12 January 2018
conditionally accepted: 17 March 2020
revised: 28 March 2020
accepted: 24 May 2020
Since this was Joe's paper, and he's a grad student, this has been going on for most of the time he's been working with me!
But anyway, right now I'm trying to reduce the number of papers I'm writing with grad students so I can start doing other things. I only have 3 grad students now; two years ago I had 7. Writing papers with grad students is very slow and arduous, though it produces good results.
So, right now my load is fairly light. I've been keeping a to-do list so it's easy to show you:
Then there are two books I need to finish, etc.
I love writing. It feels like I've spent my life trying to optimize the process of writing, but I still have a lot to learn.
I find it works best if I completely immerse myself in writing one thing at a time - I get more than twice as much done in 4 hours as I do in 2, and a week focused on one project is a wonderful thing.
It's painful to start any writing project, but once it's halfway done it usually becomes very pleasant: I can open the LaTeX file and instantly see several things I should be doing - and I just start doing them: hard ones if I'm feeling energetic, easy ones like writing the bibliography or drawing figures if I'm feeling tired.
After a few days of just doing this I'm full of energy.
Unfortunately I'm interested in many things, and having coauthors or grad students makes it impossible to finish one paper before I start another.
John, I'm really looking forward to (1)! My first interaction with you was when I emailed you out of the blue ten years ago asking if you knew of any category theoretic treatment of Noether's theorem in some generality, and you politely replied that you did not, but it was an interesting question.
That's funny! When Nick Teh invited me to speak at Philosophy and Physics of Noether's Theorems: A Centenary Conference on the 1918 Work of Emmy Noether, he asked me to speak about a category-theoretic approach to Noether's theorem. I said I didn't know anything about that, but I had some other thoughts I'd like to talk about. And that's what my paper is about.
No category theory per se, but a lot of math and physics. In the last couple of weeks, working on this paper, I'm getting glimpses of some foundational ideas on physics that really intrigue me. I'm not sure I'll have time to solidify them in this paper. But it's exciting.
These ideas are about the dual meaning of "observable" in physics: you can see what I mean in my talk slides at the above link. The big question is whether we can exploit this to make new progress in physics.
John Baez said:
3) work with James Dolan on groupoidification in quantum theory and combinatorics: Feynman diagrams, Hall algebras and Hecke algebras.
I look forward very much to hear about this!
John Baez said:
I plan to start spending more time writing, so I'm not taking new grad students. I want to nicely write up a lot of stuff that appeared here and there in This Week's Finds.
Wow, can't wait! (It's similar to hearing that there will be a Twin Peaks season 4.)
John Baez said:
1) I've applied for a Leverhulme Trust grant to visit Tom Leinster in Edinburgh in the fall of 2021 and the fall of 2022. In the grant proposal the plan is for me to give lectures on topics from This Week's Finds.
My favourite numbers, II :-)
Thanks! Btw, I've been wanting to talk to you more about combinatorics and category theory. I mentioned that once. I'm finally starting to get enough time.
One idea I had was to look at things like homotopy species (I guess you know what I mean by that) but where we replace finite sets by finite "spin sets".
Long after talking to you about "homotopy combinatorics", I found that people already study "spin representations" of the symmetric group. These are actually representations of certain "double covers" of the symmetric groups, i.e. nontrivial central extensions.
Section 2.1 of the paper I linked to says a bit more about this.
There's a groupoid of finite "spin sets" whose skeleton is the coproduct, over all n, of these double covers of the symmetric groups S_n.
So, a "spin set" is a thing like a set where switching a pair of elements twice isn't the identity; you have to switch them four times to get the identity.
There's a stuff type of spin sets: making a set into a spin set involves putting extra stuff on it, not just extra structure.
Anyway, there should be an interesting subject of "spin combinatorics", where we use finite spin sets instead of ordinary finite sets.
You know a lot more than me about how to deal with homotopy types, spans of homotopy types, and such, so we might have some fun working on a relatively "concrete" project using these ideas.
Hi John, I would like to come back to this. The paper looks very interesting. I will look into it. On the other hand, spin representations of the symmetric groups sound difficult :-( I am already having a lot of trouble with just ordinary representations of the symmetric groups, from the viewpoint of objective combinatorics.
I am surprised you say that you need 4 switches of a pair of elements to get back to the identity. I thought the group here should be Z/2, namely pi_1 of the sphere spectrum... Or is pi_2 involved too? But I should read some more stuff before polluting the discussion with my misconceptions.
Do these sets somehow inject into physically relevant theories?
Joachim Kock said:
Hi John, I would like to come back to this. The paper looks very interesting. I will look into it. On the other hand, spin representations of the symmetric groups sound difficult :-( I am already having a lot of trouble with just ordinary representations of the symmetric groups, from the viewpoint of objective combinatorics.
What are you trying to do with them? In math the trick is not to get into more trouble than you can get out of.
The more trouble you can get out of, the more you should get into.
I am surprised you say that you need 4 switches of a pair of elements to get back to the identity.
I was mixed up. If s_1, ..., s_{n-1} are the usual transpositions generating S_n, we have s_i^2 = 1 and s_i s_j = s_j s_i when |i - j| > 1. In the "spin" version of the symmetric group we have generators t_1, ..., t_{n-1} with t_i^2 = z and t_i t_j = z t_j t_i when |i - j| > 1. Here z is a new element that commutes with everything and squares to 1.
I'd gotten mixed up: I thought it was .
So you see I don't know much about this; I just think it's going to be interesting.
Nikolaj Kuntner said:
Do these sets somehow inject into physically relevant theories?
That's one of the questions I'd like to answer someday! There's an injection S_n → O(n)
where O(n) is the orthogonal group, the group of rotations and reflections of R^n. You get this injection just by letting permutations act to permute coordinate axes.
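Concretely (a sketch of mine, not from the original messages): a permutation p gives the matrix sending the j-th basis vector to the p(j)-th one. These matrices are orthogonal, and the assignment respects composition, which is exactly the injection S_n → O(n):

```python
from itertools import permutations

def perm_matrix(p):
    n = len(p)
    # the (i, j) entry is 1 exactly when the permutation sends j to i = p[j],
    # i.e. column j of the matrix is the basis vector e_{p[j]}
    return [[1 if p[j] == i else 0 for j in range(n)] for i in range(n)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(a):
    return [list(row) for row in zip(*a)]

identity = [[1 if i == j else 0 for j in range(4)] for i in range(4)]

# every permutation matrix P satisfies P^T P = I, so it lies in O(4)
all_orthogonal = all(
    matmul(transpose(perm_matrix(p)), perm_matrix(p)) == identity
    for p in permutations(range(4))
)

def compose(p, q):
    # (p . q)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(len(p)))

# the map p |-> perm_matrix(p) is a group homomorphism
p, q = (1, 2, 0, 3), (3, 0, 2, 1)
homomorphism_ok = (perm_matrix(compose(p, q))
                   == matmul(perm_matrix(p), perm_matrix(q)))

print(all_orthogonal, homomorphism_ok)
```

A permutation matrix has determinant +1 exactly when the permutation is even, which is why A_n lands in SO(n).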
The group O(n) has a nontrivial double cover called Pin(n), the pin group, which describes how spin-1/2 particles transform under reflections and rotations. So we have a 2-1 homomorphism Pin(n) → O(n).
If we look at the image of S_n in O(n) and look at all the elements of Pin(n) that map to it, we should get the group I'm calling the spin version of S_n.
And reading that link, I'm remembering that O(n) actually has two nontrivial double covers, so there are two versions of the pin group. I think this has something to do with my confusion earlier....
Anyway, that's probably more than you wanted to know, but also somehow less. Basically we're taking what physics says about rotating and reflecting spin-1/2 particles and seeing how it impacts our understanding of permutations!
I'm good on the physics; I'm just not sure about the homomorphisms from the discrete groups into the Lie groups.
Well, permutations of coordinate axes are special cases of rotations/reflections, and even permutations of coordinate axes are special cases of rotations, so we've got inclusions S_n ⊂ O(n) and A_n ⊂ SO(n),
and this sets up a nice bunch of analogies.
Okay, but so for O(3) that's just a small discrete group
On a related note, there's this nice book by Sagan available online
http://math.sfsu.edu/federico/Clase/RepTh/sagan.pdf
The Symmetric Group: Representations, Combinatorial Algorithms, and Symmetric Functions
Nikolaj Kuntner said:
Okay, but so for O(3) that's just a small discrete group.
That's because 3 is a small number. Try a bigger one.
Yeah okay, but then you need an application for SO(9001) :P
I'm not worrying about applications, I'm just doing math.
Thanks for the link to the Sagan book! I'll grab it. I'm always trying to learn more about symmetric groups, symmetric functions, and such.
So this makes me wonder if there are String covers for the symmetric groups. Nora Ganter and a student worked on String covers for the Platonic groups, so passing from the ADE subgroups of SO(3) to the corresponding subgroups of SU(2)=Spin(3) to the corresponding subgroups of String(3). These turn out to be nontrivial 2-group extensions, so... the symmetric group case?
But the 'spin sets' stuff looks really interesting already!
Nikolaj Kuntner said:
Yeah okay, but then you need an application for SO(9001) :P
You want a number such that . :upside_down:
John Baez said:
- Write an article for Nautilus, tentatively called "Why condensed matter physics never fails to awe me".
I just saw this 2019 Quanta report about twisting two layers of graphene to get superconductivity. Apparently there's now a whole new field called twistronics. Going by its history, it's one of those cases where the theory led the experiments as well.
It's awesome, but how did I miss this? :sweat_smile:
David Michael Roberts said:
So this makes me wonder if there are String covers for the symmetric groups. Nora Ganter and a student worked on String covers for the Platonic groups, so passing from the ADE subgroups of SO(3) to the corresponding subgroups of SU(2)=Spin(3) to the corresponding subgroups of String(3). These turn out to be nontrivial 2-group extensions, so... the symmetric group case?
I didn't want to talk about this, because I wanted to develop the theory before anyone else does. But I'm not moving very fast on this project. Wanna help me?
Sure, that would be cool. Once this semester is over, I'm only doing research.
Drop me an email and we can discuss privately if you want.
Another collaboration sparked by the Category Theory Zulip! Hooray!
Okay!
John Baez said:
1.3. q-Mathematics. A surprisingly large portion of mathematics can be seen as a special case of more general "q-mathematics" arising when we set the parameter q equal to 1. For example the usual derivative of a function on the real line is the q = 1 case of something called the "q-derivative", and there is a subject called "q-calculus" that generalizes ordinary calculus to other values of q. There are important applications of q-mathematics to the theory of quantum groups (a generalization of ordinary groups), and also to algebraic geometry over F_q, the field with q elements where q is a prime power. The connections between these two sources of q-mathematics remain somewhat mysterious, in part because there is no field with one element. There are, however, many patterns that suggest something like a field with one element should exist.
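As a concrete aside on the quoted passage (my sketch, not part of the thread): the q-derivative is D_q f(x) = (f(qx) − f(x)) / ((q − 1)x), and on x^n it gives [n]_q x^(n−1), where [n]_q = 1 + q + ... + q^(n−1) is the q-integer, which tends to n as q → 1:

```python
def q_derivative(f, x, q):
    # D_q f(x) = (f(qx) - f(x)) / ((q - 1) x), defined for q != 1, x != 0
    return (f(q * x) - f(x)) / ((q - 1) * x)

def q_int(n, q):
    # the q-integer [n]_q = 1 + q + ... + q^(n-1)
    return sum(q ** k for k in range(n))

f = lambda x: x ** 3
x = 2.0

# D_q x^n = [n]_q x^(n-1) exactly, for any q
val = q_derivative(f, x, 1.5)
expected = q_int(3, 1.5) * x ** 2       # [3]_{1.5} * x^2 = 4.75 * 4 = 19

# as q -> 1 the q-derivative tends to the ordinary derivative 3 x^2 = 12
near_classical = q_derivative(f, x, 1.0001)

print(val, expected, near_classical)
```

The same formula with q a prime power is what makes q-binomial coefficients count subspaces of F_q-vector spaces the way ordinary binomials count subsets.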
Have you looked at doubly affine Hecke algebras? They seem to unify nicely some aspects of q-mathematics with many facets of Lie theory. Every now and then I read sections of:
https://arxiv.org/pdf/math/0404307.pdf
which is very inspiring and dizzying with the amount of connections going in all directions...
Hi, thanks - no, I haven't looked at these.
I have a lot of trouble understanding algebraic structures (or anything else for that matter) unless I have a clear concept of what they "mean". So that would be my first challenge here. "Dizzying connections" just make me dizzy unless I have my feet firmly planted on some sort of solid ground.
John Baez said:
I have a lot of trouble understanding algebraic structures (or anything else for that matter) unless I have a clear concept of what they "mean". So that would be my first challenge here. "Dizzying connections" just make me dizzy unless I have my feet firmly planted on some sort of solid ground.
Maybe we could start a reading group. That preprint is the introduction to Cherednik's book on double affine Hecke algebras (DAHAs). Cherednik used DAHAs to work with Macdonald polynomials and prove Macdonald's constant term conjecture for all reduced root systems.
Alas, I'm too busy to join any reading groups or even go to the nice online seminars that people are having, like the MIT seminar.
If someone told me what's the point of double affine Hecke algebras, I could maybe slowly think about them a bit and talk about them a bit.
By the way, here's what I'm working on right now... since this is "my thread" I hope it's not too obnoxious to list these things:
John Baez said:
If someone told me what's the point of double affine Hecke algebras, I could maybe slowly think about them a bit and talk about them a bit.
Double affine Hecke algebras (DAHA) are Cherednik's generalisation of affine Hecke algebras, which come up in Macdonald's study of orthogonal polynomials. Cherednik defined them to study the classical and quantum Knizhnik–Zamolodchikov (KZ) equations, as well as Macdonald polynomials, and succeeded in using DAHA to prove Macdonald's constant term conjecture for general reduced root systems.
Cherednik has this picture (Fig 4 in the preprint, Fig 0.4 in the book) that gives an overview of how some of the ideas in his book fit together.
There seem to be some interactions with the topics from TWFs you're planning to revisit, particularly 1.1-1.4. You've also mentioned that you've discussed Hecke algebras with James Dolan, which seems relevant as well.
Nice new paper that disproves Connes' conjecture without passing through all the intermediate results: https://arxiv.org/abs/2006.05629
David Michael Roberts said:
Nice new paper that disproves Connes' conjecture without passing through all the intermediate results: https://arxiv.org/abs/2006.05629
Nice. The proof is based on the authors' reformulation of Connes' conjecture as a computability problem, and uses only MIP*=RE and continuous logic.
From the paper:
In Section 5, we offer a general perspective on embedding problems and point out how our techniques give a stronger refutation of the CEP in the spirit of the Gödel Incompleteness Theorem.
Oh, this is "just" a fixed point theorem then. :upside_down:
John, you had page with notes on how to teach.
Do you also have notes on how to read a text?
No. I have an approach to reading, but I haven't thought about how to explain it, and it's a bit hard to execute.
I read lots of papers this way: I know what kind of things I want to learn about, and I search for those things. This particular style of reading is very ruthless, sort of like ripping open the paper, grabbing the best parts, and ignoring the rest. I think a lot of beginners don't know how to do this.
But then there are other papers and especially books where I take a more submissive attitude: I've decided I want to learn whatever they have to say. I still read these in a very nonlinear way, not at all from start to end. I go for the parts that make sense, then expand out from there. Usually the parts that make sense to me are those with the highest word/math symbol ratio. I often don't bother looking at equations until I know what's going on.
And I guess there are lots of papers and books where I take an intermediate approach: I think they might teach me something interesting, but I'm not sure what it is, so I look through them to see if anything strikes my fancy.
I read a whole lot when I was a student: I tried to spend a couple hours in the library each day, just trawling through books trying to understand what math and physics were all about.
I did that for about 7 years, I guess.
It was very helpful to get a vague feeling for lots of different areas. I recommend doing something like this. Books are much better than arXiv papers for this sort of exploration.
Nowadays I don't have much time for this, or much inclination. But a few days ago I spent hours using MathSciNet to read reviews of about 50 papers on Jordan algebras, and that was very pleasurable. I feel I should do that sort of thing more often: just getting a feel for what people have proved in a given topic.
I guess a key skill to develop is: guessing what someone must be saying, on the basis of limited clues. Not wasting a lot of time on details until you're sure you need those details.
I try to form a broad, somewhat vague but still conceptually clear picture of an area before trying to understand details: when you have a broad picture, you can understand the details a lot faster, because you can tell what they're good for. It's like a scaffolding on which you can hang the facts you learn.
A mathematical fact that doesn't fit into a broad picture is a somewhat irritating thing.
(The irritation can be nature's way of telling you to learn more.)
Thanks for the long answer.
Interesting that you, if I understand correctly, get the most out of and go for the parts with a high word-to-math-symbol ratio.
Any note-taking strategy for your (what I might call) big-steps approach?
I actually was never on MathSciNet.
Humans are better adapted to processing words than mathematical symbols.
I hardly ever take notes when I read, unless I'm trying to prove a theorem and writing down specific technical results I read that could help me prove that theorem.
Maybe writing notes could help me learn stuff, but I've never tried.
On the other hand, I find it really helpful to write blog articles explaining things I'm trying to learn. This is slightly different.
Do you ever take literature notes so that you have key points at ready hand for your writing? Or is this not useful to you?
I've never considered it. I usually write a blog article when I feel I know what's going on. As I write, I discover there are things I don't understand - this is the main reason I like to write. Then I often look at papers and books to resolve my confusion.
I guess you could say my blog articles are my notes.
John Baez said:
No. I have an approach to reading, but I haven't thought about how to explain it, and it's a bit hard to execute.
I think this was a good explanation of the approach. It's how I read stuff, and how I imagine my adviser would read stuff. Well, no need to "imagine"... he did actually demo the "ripping open the paper" (not literally, and it was actually a book) bit to me once.
Rongmin Lu said:
John Baez said:
No. I have an approach to reading, but I haven't thought about how to explain it, and it's a bit hard to execute.
I think this was a good explanation of the approach. It's how I read stuff, and how I imagine my adviser would read stuff. Well, no need to "imagine"... he did actually demo the "ripping open the paper" (not literally, and it was actually a book) bit to me once.
Yes, it's more common than you'd think. Somehow everybody converges to this strategy in academia. What @John Baez described is actually laid out in detail (with a checklist and everything) in Mortimer Adler's classic How to Read a Book. He calls it Syntopical Reading, because you grab ideas from different sources based on your own interests and agenda, synthesizing knowledge as you go along.
Oliver Shetler said:
Yes, it's more common than you'd think. Somehow everybody converges to this strategy in academia. What John Baez described is actually laid out in detail (with a checklist and everything) in Mortimer Adler's classic How to Read a Book. He calls it Syntopical Reading, because you grab ideas from different sources based on your own interests and agenda, synthesizing knowledge as you go along.
Mostly because "ain't nobody got time" to read a book from cover to cover. It's probably a shame, but I think that reading is also a means to an end. Just because an author has structured the material in one way doesn't mean you have to follow that order strictly, or that the presented order will necessarily be exactly what you need. It also goes with the flow of human psychology: if you read with an agenda in mind, you tend to read more actively, instead of dozing off, as many people end up doing when they go in, teeth gnashing, with the "I'm gonna read this entire book" mentality.
I agree. But sometimes, maybe after reading it once, I realize a paper or book is so good I need to read it in a more "submissive" manner, where I trust that it's full of wisdom and I should just try my best to absorb it.
For example, any book by John Milnor or Frank Adams eventually has this effect on me, when it comes to topology.
I once tried literally reading a book from cover to cover. The front cover, the copyright page, every page number, the body, etc. I gave up when I was about 5 pages or so from the end of the index. I don't particularly recommend this reading strategy.
John Baez said:
But sometimes, maybe after reading it once, I realize a paper or book is so good I need to read it in a more "submissive" manner, where I trust that it's full of wisdom and I should just try my best to absorb it.
That's also covered in the book of Adler that Oliver mentioned. It's not really mutually exclusive, but I've noticed that many academics perceive that they're time-poor, and so this is the strategy to cope with having to read a large volume of material.
Jason Erbele said:
I once tried literally reading a book from cover to cover. The front cover, the copyright page, every page number, the body, etc. I gave up when I was about 5 pages or so from the end of the index. I don't particularly recommend this reading strategy.
Rumour has it that Mochizuki reads books from cover to cover, or at least I've heard this brought up by people who know him as a reason for why he's so leery of people not reading his works fully.
Rongmin Lu said:
John Baez said:
But sometimes, maybe after reading it once, I realize a paper or book is so good I need to read it in a more "submissive" manner, where I trust that it's full of wisdom and I should just try my best to absorb it.
That's also covered in the book of Adler that Oliver mentioned. It's not really mutually exclusive, but I've noticed that many academics perceive that they're time-poor, and so this is the strategy to cope with having to read a large volume of material.
I feel like most of the time constraints, ironically, come from thinking you're so time-constrained that you rapidly switch between tasks, getting 10-50% as much done in the same time as a result. At least, that's what I've seen from "busy" professors I know. And given that there have been many professors, like Douglas Walton or Niklas Luhmann, who published thousands of articles and books, in addition to managing department responsibilities, it makes me think the complainers are running at closer to 10% efficiency. I wonder if anyone has ever done a study of how self-perceived busyness affects academic productivity.
(Also chatting on Zulip or Tweeting too much likely plays a role :sweat_smile: )
Yes, most people who complain about not having enough time are using their time in ways that go against their self-proclaimed goals.
A lot of our self-created problems in life have to do with internal conflicts between different goals.
(There are also non-self-created problems, of course!)
We say we want to write a paper, but instead we do something else, perhaps because it has more immediate payoffs in pleasure. Twitter and Zulip and such have been carefully designed to produce instant rewards.
However, I find for myself that I procrastinate the most on writing a paper when there's some issue in the paper I don't yet understand. Consciously I think I'm ready to write the paper, but unconsciously I secretly know there's something I haven't figured out yet.
The best way to bring these problems into consciousness is to start writing the paper, or at least start mentally composing it, and not get distracted by small chores: focus on the hardest parts. That's where the problems lie.
When I consciously notice the problems, I can start solving them.
Oliver Shetler said:
I feel like most of the time constraints, ironically, come from thinking you're so time-constrained that you rapidly switch between tasks, getting 10-50% as much done in the same time as a result. At least, that's what I've seen from "busy" professors I know.
Switching between tasks is certainly not productive, but what if you have no choice because you do have a lot of other tasks at hand? Unless you're in the fortunate position of an academic whose career is solely dependent on just one task (teaching or research), some switching is probably inevitable. Of course, one could argue that "busy" academics need better time management, but often the non-research (teaching and/or admin) tasks aren't scheduled by you.
This is also a problem faced by software developers in the industry, which is why there's a visceral hatred for things that have a knack for producing non-coding tasks, such as open-plan offices and the various "agile" methodologies, peddled by management types, that pay lip service to the original Agile Manifesto. People who have kids and were forced to work from home during the quarantine have also found that out the hard way, while the frazzled but single software developers have generally reported loving the WFH arrangements and aren't in any hurry to get back to the distracting office soon.
Oliver Shetler said:
And given that there have been many professors, like Douglas Walton or Niklas Luhmann, who published thousands of articles and books, in addition to managing department responsibilities, it makes me think the complainers are running at closer to 10% efficiency.
I don't doubt that people who complain about a lack of time tend not to be all that efficient at using it, but Walton and Luhmann worked in a different discipline, so it's a bit like comparing apples and oranges. David Michael Roberts and I used to know an applied mathematician who's reputed to pump out one paper a week on top of his editorial duties. It sounds amazing, but he'd been doing research for decades by the time we were around, so surely he must have figured out a mass-production strategy by then.
That's fair. In math, the hard-earned fluency comes before the productivity. That's probably why it looks like junior students and professors barely produce anything at first. Math is what learning scientists call a "kind learning environment" where it's "easy" to learn (through grueling repetition) but "hard" to pick up casually.
Also, I don't mean to contribute to the "blame the professor" culture in universities. Career administrators with lots of "ideas" about scholarship are probably the source of most of the research and teaching friction, though I can't know that for sure as a student.
I just checked, and in 2010 he had 25 papers published according to Google Scholar. 430-odd papers in 47 years.
David Michael Roberts said:
I just checked, and in 2010 he had 25 papers published according to Google Scholar. 430-odd papers in 47 years.
Phew! That's only 9+ papers per year on average.
And he's still publishing! How does he do that??
Not sure we are thinking about the same person :-) But if we are, publishing can have a long pipeline. Terry Tao just put a paper on the arXiv with coauthors even though the project was essentially finished a decade ago. There are also posthumous papers of Erdős, including one published in the last ten years.
Of course, then there's Saharon Shelah .... :astonished:
27 papers so far this year, according to Google Scholar.
David Michael Roberts said:
Not sure we are thinking about the same person :-) But if we are, publishing can have a long pipeline. Terry Tao just put a paper on the arXiv with coauthors even though the project was essentially finished a decade ago. There are also posthumous papers of Erdős, including one published in the last ten years.
Of course we are: check Google Scholar and sort by year. Point taken, but I did know that heh. :sweat_smile:
John Baez said:
1.3. q-Mathematics. [...] There are, however, many patterns that suggest something like a field with one element should exist.
Connes-Consani have written quite a bit about F_1. In particular, this paper proposes, using the Γ-spaces introduced in Segal's 1974 Topology paper, "a unifying description of several constructions attempting to model an algebraic geometry over the absolute point", "in relation with the development of an algebraic geometry over symmetric closed monoidal categories":
Alain Connes and Caterina Consani. Absolute algebra and Segal's Γ-rings: au dessous de Spec(Z), J. Number Theory, 162 (2016), 518–551.
They've recently shown the relation of this with Borger's work on F_1. The introduction to this recent preprint gives a nice overview of the study of F_1 so far, and hints at some of the patterns you've alluded to in the above:
Segal's Gamma rings and universal arithmetic
Alain Connes, Caterina Consani
arXiv:2004.08879
I'm gonna talk about what I'm doing. When I'm working with grad students it motivates me to keep talking with them. But staying at home all the time, working mainly by myself, I want to do a bit of extra chatting. Please tune this out if it's boring!
Here's the stuff I finished in the last three weeks:
Here's what I'm doing, or need to do:
The "stalled" ones are stalled just because nobody is working on them now, not because it's hard to finish them.
It's taking me a lot of work to go through and correct Kenny's thesis since it's 267 pages long. I'm only on page 126.
However, pages 154-213 are based on our paper "Coarse-graining open Markov processes", and I carefully worked over every sentence of that, so that part should go pretty quick - only the introduction had to be tweaked.
Pages 225-256 are definitions, and I've put some time already into fixing up those.
So, I think the end is in sight!
I just finished going through the section "structured vs. decorated cospans", where he lays out the general theory connecting these two approaches - joint work with @Christina Vasilakopoulou and me! So, going through this section counts as progress toward finishing our paper on that stuff.
Were you not planning to edit your category theory notes into book form? What about that?
Oh, that'll happen years from now! Before that I need to finish my book on classical mechanics. This to-do list is just things I need to do "right now" - that is, in the next month or two.
Waiting eagerly for that to happen. The notes are simply great.
Thanks! I'm not sure if I want to keep them short or make them much longer and be a full-fledged intro to category theory. Maybe I should do both, and write a 2-part book. I need to keep thinking about this.
Now I worry about climate change, and what I should do as I get old and gradually lose my mental acuity.
Are you still moved by the stuff Eliezer Yudkowsky talked to you about in TWF#313? You expressed concern in those couple of blog posts that "all these people who care about the future seem to be talking only to like minded folk rather than each other" (paraphrased, sorry), but I think the effective altruism movement has now marshalled a large number of them together.
Do you count yourself among them? Are you sympathetic to their views?
(Incidentally, they've updated towards worrying a bit more about climate change than they used to)
@Haskell - which stuff Yudkowsky talked about, exactly? He talked about a lot of stuff.
(Sorry for the delay in answering: I just saw your comment now.)
I don't know if I can count myself as part of the "effective altruism movement". I'm in favor of the general idea of effective altruism, but I don't think I'm doing enough about it.
I've made some progress:
I decided to make a PDF file of my online diary from 2003 to 2019. Anyone who wants a copy, here it is:
It's pretty big: 2184 pages.
@Tim Hosgood continues LaTeXing This Week's Finds. By now he has done 250 of the 300 weeks.
It's amusing how they keep getting longer:
weeks 1-50: 241 pages
weeks 51-100: 274 pages
weeks 101-150: 348 pages
weeks 151-200: 438 pages
weeks 201-250: 631 pages
So, Jason Erbele and I guess that when it's done the whole collection will be about 2782 pages long.
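For what it's worth, that guess can be sanity-checked with a quick back-of-the-envelope extrapolation. The sketch below fits a straight line to the five block sizes and extrapolates to the sixth block (weeks 251-300); this is just one plausible method, not necessarily how the 2782 figure was arrived at:

```python
# Page counts for each 50-week block of This Week's Finds, as listed above.
pages = [241, 274, 348, 438, 631]  # weeks 1-50, 51-100, ..., 201-250

total_so_far = sum(pages)  # 1932 pages for weeks 1-250

# One rough way to guess the final length: fit a least-squares line to the
# block sizes and extrapolate to the sixth block (weeks 251-300).
n = len(pages)
xs = range(1, n + 1)
mean_x = sum(xs) / n
mean_y = total_so_far / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, pages)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x
next_block = intercept + slope * (n + 1)  # predicted size of weeks 251-300

estimate = total_so_far + next_block
print(round(estimate))  # prints 2602
```

A linear fit lands at about 2602 pages, close to the eventual total of 2610 reported later in the thread; the 2782 guess presumably extrapolated the accelerating growth more aggressively.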
It would be so cool if all of TWF was turned into a Roam-Research graph
Did you end up writing "physics of rigs". You (or possibly David, it's not clear from the text) said you'd possibly do it, some 15 years ago. Relating to tropical geometry and exp(-E/T).
No, I never wrote that. You can find a lot of the ideas here:
especially in the lectures from weeks 10-13.
What exactly is the reason for having switched from the weekly html format - given that you blog very regularly now?
I quit This Week's Finds in Mathematical Physics in August 2010 when I went to Singapore, quit work on mathematical physics, and started working on environmental issues. I changed the title to This Week's Finds and wrote 19 more issues that focused on environmental issues. That took me up to April 2012. Then I came back to U.C. Riverside and got a bunch of grad students, and decided this sort of format wasn't right for what I was doing at the time - which was trying to develop "network theory", not explain math to people.
Then a lot of other stuff happened.
When you do something a bunch of times and people like it, they want you to keep doing it, and you want to keep doing it so they like you, but when it becomes a chore to keep doing it you should stop.
In 2010 I had become somewhat disgusted by most of the stuff I'd been talking about in This Week's Finds in Mathematical Physics.
Tim Hosgood continues LaTeXing This Week's Finds. By now he has done 250 of the 300 weeks.
@John Baez , where can I find these LaTeXed versions of This Week's Finds ?
John Baez said:
In 2010 I had become somewhat disgusted by most of the stuff I'd been talking about in This Week's Finds in Mathematical Physics.
Disgusted? can you say why? Bored I can understand. Worried about the big climate emergency too, but disgusted?
https://github.com/thosgood/this-weeks-finds
Yes, I felt I'd been frittering away my life with beautiful abstract mathematics and theoretical physics, ignoring the really important emergency we're all in. There's something addictive about mathematics and theoretical physics, at least for me: the concepts that show up are so intriguing and so absorbing. People who get pulled into these subjects - like me - tend to think they're very important and lose sight of other things. At the time I was reacting against a lifetime of this.
That was around 2010, when I went to Singapore and quit working on n-categories, which were becoming a very complicated and absorbing subject.
Now I feel a bit different. I've quit trying to be an expert on n-categories, but I have not been able to break my addiction to mathematics, and nowadays I'm less optimistic about my ability to do anything that has a big practical impact.
The PDFs are here, and you can take a copy:
https://math.ucr.edu/home/baez/twf_tex/this-weeks-finds-master/latex/
But they are not in final form: I have not edited them, I have not written a preface, etc. When I do all that I'll put them on the arXiv.
@John Baez Thanks!
John Baez said:
Tim Hosgood continues LaTeXing This Week's Finds. By now he has done 250 of the 300 weeks.
It's amusing how they keep getting longer:
weeks 1-50: 241 pages
weeks 51-100: 274 pages
weeks 101-150: 348 pages
weeks 151-200: 438 pages
weeks 201-250: 631 pages
So, Jason Erbele and I guess that when it's done the whole collection will be about 2782 pages long.
It turns out to be only 2610 pages I'm afraid :wink:
Yay, it's getting close to done! (I must have been getting tired out near the end.)
Makes me wonder which people in history did very abstract math and then found a way to turn it to practice.
Maybe von Neumann comes to mind. He's a bit of a foggy case to me: on the one hand he did extremely abstract stuff early on, like writing impactful set theory papers, but there are also quotes where he speaks quite dismissively of overly abstract math and symbol games.
An interesting chapter in history is when von Neumann was at the Institute for Advanced Study in Princeton, and he was trying to build a computer, and other people at the Institute were trying to get him to stop because you weren't supposed to be building big messy machines there - you were supposed to be writing papers.
It weighed 450 kilograms:
https://en.wikipedia.org/wiki/IAS_machine
In 1980, Bigelow wrote: "1950 was a year of extreme pressure for the IAS engineering group; our laboratory building was overflowing with applied scientists of various sorts, especially fluid dynamicists, meteorologists, nuclear physicists, and the like. Words like "geostrophic," "Hartree-Fock," and "WKB" were echoing up and down the hallways, and people with actual programs in their hands were going to and from Herman Goldstine's office, and on the way peeking into the machine room.... During the summer of 1951, a team of scientists from Los Alamos came and put a large thermonuclear calculation on the IAS machine; it ran for 24 hours without interruption for a period of about 60 days, many of the intermediate results being checked by duplicate runs, and throughout this period only about half a dozen errors were disclosed."
As an aside to that, has anyone read his book 'The Computer and the Brain'? It seems that towards the end of his life he thought very deeply about the relationship between computation and thought.
I haven't read it!
John Baez said:
The PDFs are here, and you can take a copy:
https://math.ucr.edu/home/baez/twf_tex/this-weeks-finds-master/latex/
But they are not in final form: I have not edited them, I have not written a preface, etc. When I do all that I'll put them on the arXiv.
Well, CONGRATS!!! to all involved. this is great! 2610 pages is an awful lot of work and ideas, enough to keep several people working for a long while. shame we cannot have real "cake and champagne" to celebrate for the time being! we should organize a celebratory Topos workshop once the pandemic has done its horrible work.
Sounds fun!!! Thanks!
:tada: I just submitted the final to-be-published version of this paper to TAC! :tada:
I remember back in December 2018 I was a juror in a case against Ford Motors, and I was working on this paper during lunch breaks. Around this time, sitting on a bench in downtown Riverside, I decided that our "brute force, check a dozen diagrams" proof that structured cospans form a symmetric monoidal double category was too tiresome. So I dreamt up the proof we use now.
That was a long time ago! We submitted the paper to @Joachim Kock in January 2020. We got back a referee's report in September, along with a huge list of corrections and suggestions by Joachim.
In the meantime, an anonymous referee of a paper by @Christina Vasilakopoulou and @Joe Moeller had found serious flaws in the existing applications of decorated cospans - making structured cospans look even better by comparison.
So, we've expanded Section 5 a lot, explaining the problems with decorated cospans and how structured cospans solve them.
Publishing papers takes forever. At least the arXiv lets you get a preliminary version out more quickly! By now @Evan Patterson and @James Fairbanks have already used structured cospans to create compositional models of the spread of coronavirus. That wouldn't have happened without pre-publication.
John Baez said:
In the meantime, an anonymous referee of a paper by Christina Vasilakopoulou and Joe Moeller had found serious flaws in the existing applications of decorated cospans - making structured cospans look even better by comparison.
What was the error? Is it something that's been fixed for decorated cospans in the meantime?
Please read the pop explanation of the error on my blog or the detailed explanation in Section 5 of our paper.
As you'll see in the pop explanation, there are basically three layers of error: 1) a problem that Brendan and I got around in the first place using the axiom of universes - which still seems unpleasant, 2) a worse problem, and 3) a really deadly problem noticed by the referee of Joe and Christina's paper.
There are also a couple of possible "fixes" for decorated cospans, which you'll see in the detailed explanation: 1) a "cheap fix" which unfortunately makes it impossible to point to specific vertices in an open graph, and 2) a "real fix" which uses some 2-category theory.
The "cheap fix" is what Blake and I used in our work on open chemical reaction networks. The "real fix" is something Kenny, Christina and I will explain in a paper "Decorated vs structured cospans" - but you can already see it in Kenny's thesis.
Sorry not to give you any details of what's going on, but they're sort of complicated, and having blogged about them and just finished a paper explaining them I'm loath to explain them yet again!
TL;DR: I suggest using structured cospans.
I can say this, too: the basic abstract theorems on decorated cospans are correct, but when you apply them 1) it's easy to be fooled into thinking the hypotheses hold when they don't, and 2) what they give you is sometimes not what you really want.
Here's my to-do list now:
I like point 1! I too want that paper out asap.
Yes, I'm finally able to start working on it.
Perhaps this is hoping for too much... But can all of the examples in @Brendan Fong's thesis be easily adapted to structured cospans? In particular, I was thinking a lot about Lagrangian cospans before I became aware that there was some problem with them. I have tried reading @Kenny's thesis, but it is quite involved.
Our paper Structured cospans says which published uses of decorated cospans can be adapted to structured cospans, and it says how to do it. See Section 6. Our goal was to fix all the problems.
Thanks!
We did not deliberately go through all the examples in Brendan's thesis, so you can ask me about those if you have questions that aren't answered in Section 6.
We went through most of the published papers we know that use decorated cospans.
So if we're optimistic, more money is bound to flow into climate science.
Any recommendation for what's useful and realistic (and cool) in that general direction?
There are lots of things going on - for example even if we limit ourselves to lowering the prices of solar and wind power and batteries, which is a small piece of the overall picture, there's a whole lot of science and engineering involved.
But what can mathematicians, especially category theorists, do?
One thing is to develop more math needed for the "smart grid":
@Spencer Breiner could say more about where this is going... not just with the smart grid but design in general. My student @Joe Moeller is going to be working with him at the National Institute of Standards and Technology.
I think category theory will be important in making "systems thinking" more precise and powerful:
So, that's one thing I want to work on.
My life is changing now that I've finished all papers with grad students except for one! Here's my to-do list now:
Finish "Structured versus decorated cospans" with @Kenny and @Christina V . Submitted to the arXiv and Transactions of the AMS, but Kenny and I are still writing up all details of the proofs of two theorems, which involve lots of big commutative diagrams. We're meeting once a week on Mondays at 11 to work on that.
Submit LaTeX of "Open systems in classical mechanics" to Journal of Mathematical Physics, who accepted this paper with David Weisbart and Adam Yassine.
Work on "Schur functors" with @Joe Moeller and @Todd Trimble. Stalled, but soon to be revived. It's about 3/4 done.
Prepare a talk for Friday February 5, 2021 on "Structured versus Decorated Cospans" for the Yorkshire and Midlands Category Seminar.
Prepare a talk for Monday March 8, 2021 at the Zurich Theoretical Physics Colloquium, perhaps called "Theoretical Physics in the 21st Century", as part of their Sustainability Week. The idea is to talk about the most exciting areas in physics... but with an emphasis on what physicists can do to save the planet.
Prepare a talk for Thursday March 18, 2021 for the Topos Institute seminar. Maybe something similar to the Zurich talk, but with applied category theory instead of physics.
Prepare a talk on octonions and the Standard Model for Kirill Krasnov's workshop on this topic.
Write an article on condensed matter physics for Nautilus magazine.
By the way, I wish everyone here would have a thread where they talk about what they're doing like this!
the problem is that many of us, I suspect, wouldn’t have much to write except for “try really hard to finish the big proof in the paper I’m currently writing” 😉
Well, that could be interesting too. But I bet there are some people who have more than one paper they should be working on, or are going to work on, and also talks they're going to give.
Tim Hosgood said:
the problem is that many of us, I suspect, wouldn’t have much to write except for “try really hard to finish the big proof in the paper I’m currently writing” 😉
For me the problem is more something like "I don't think that what I do is relevant enough for someone else to care". And it's not a matter of self-deprecation, I _really_ think very few people would be interested in the stuff I usually work on. :smile:
Come on. You guys have egos, don't you? You want to tell everybody what you're doing, right? You can still let some humility shine through, but even that works much better when you start telling people what's stirring you :smirk:
John Baez said:
By the way, I wish everyone here would have a thread where they talk about what they're doing like this!
for people who would want to create such a thread, where should they make it? there's #practice: our papers , but that's not quite the right place, no?
Fabrizio Genovese said:
For me the problem is more something like "I don't think that what I do is relevant enough for someone else to care". And it's not a matter of self-deprecation, I _really_ think very few people would be interested in the stuff I usually work on. :)
Do you think it's boring, or do you think most people have bad taste? :upside_down:
Tim Hosgood said:
John Baez said:
By the way, I wish everyone here would have a thread where they talk about what they're doing like this!
for people who would want to create such a thread, where should they make it? there's #practice: our papers , but that's not quite the right place, no?
I wouldn't worry about that too much... in a minute we can change the title of #practice: our papers to #practice: our work.
This topic is an "AMA". Any suggestions for what the name of the stream over there should be before I take the leap and do what you're suggesting?
Fabrizio Genovese said:
For me the problem is more something like "I don't think that what I do is relevant enough for someone else to care". And it's not a matter of self-deprecation, I _really_ think very few people would be interested in the stuff I usually work on. :)
yeah, I feel the same.
John Baez said:
Do you think it's boring, or do you think most people have bad taste? :upside_down:
Just that other peoples' work seems more interesting/"important".
I'd say that even if that were true, that's not what matters. At the rate that research gets done, it can be hard for anyone (even yourself) to keep track of the bigger picture of what you're aiming to achieve, and to really appreciate all of the things you've got going on. Most days in research it can feel like you're just “[trying] really hard to finish the big proof in the paper [you're] currently writing”, or even something more mundane than that. Not all of us can write as prolifically as @John Baez, but I feel we could certainly add something to the atmosphere here by not only hearing about things when they're finished, but also having some anticipation regarding projects which are still in the works.
These aren't research promises or proposals. No one will hold you to account if you don't deliver the stuff you said you might work on. It's just an extra piece of the experience of research that might be improved by sharing it with the community.
David Michael Roberts said:
Just that other peoples' work seems more interesting/"important".
All other people's work? So you think you're working on the least interesting and important topics of everyone in the world? If so, why are you doing that?
Some other people's work? So the existence of some people doing something more interesting and important makes you unwilling to talk about yours? That seems like quite a high standard to hold yourself to... like the composer who did not release his compositions because he knew Brahms and Wagner. (I forget his name.)
First of all, I know a bit about your work, and it seems interesting and important. Second of all, and more importantly, I have a lot of trouble understanding people who aren't eager to talk about their work. I work on things that seem fun and interesting, so naturally - well, so it seems to me, but apparently it's not so natural - I want to talk about these things, just to let other people share my pleasure. It's like if you take a bite of a great cake at a restaurant you say "yum! this tastes sort of like strawberry, but also..."
Of course it takes longer than saying "yum". I guess one excuse I can imagine for not wanting to talk about one's work is "I'm having so much fun working on this that I don't want to stop and take the time to bring other people up to speed".
Maybe the other explanations are ways of saying this while trying to sound humble?
If someone created a stream for this, I'd be willing to try making a thread for myself. It might even benefit me by shaming me into actually getting done what I think I really "ought" to be doing instead of other stuff that's more fun. (-:
I guess a ‘practice: our work’ stream could work.
I do suspect, though, that the kinds of people who are interested in letting the world know what they're working on now are going to be mostly the same kinds of people who do things like write regular blog entries and edit the nLab. I know I'm a person like that, and I know a lot of people aren't. I've never really figured out exactly what the difference is, but in my experience trying to convince someone to "become" a person like that is kind of like trying to convince an introvert (like me) that they would be happier if they went to more parties.
Let's do it: #practice: our work.
Mike Shulman said:
If someone created a stream for this, I'd be willing to try making a thread for myself. It might even benefit me by shaming me into actually getting done what I think I really "ought" to be doing instead of other stuff that's more fun. (-:
On the other hand, one can waste a lot of time talking about the projects one needs to do. :upside_down:
I have a lot of trouble understanding people who aren't eager to talk about their work.
Oh, I like talking about it, but in my experience people are usually rather nonplussed, since I seem to be falling in the gap between category theory proper and other fields. This is probably a good thing, but it means it seems hard to get serious feedback. And the geographical isolation doesn't help.
John Baez said:
David Michael Roberts said:
Just that other peoples' work seems more interesting/"important".
more importantly, I have a lot of trouble understanding people who aren't eager to talk about their work. I work on things that seem fun and interesting, so naturally - well, so it seems to me, but apparently, it's not so natural - I want to talk about these things, just to let other people share my pleasure. It's like if you take a bite of a great cake at a restaurant you say "yum! this tastes sort of like strawberry, but also..."
But you must have thought about people like Newton, or Gauss, or Perelman. They had no need to tell anyone. For many people, math is not a social process. It is a conversation with god. With the Pythagorean god. Why does the sun rise every day? Why do the planets sweep out the same areas, and don't move on the spheres circumscribed around the Platonic solids? Someone wants to pay money for a law of nature. Or they want an interview. It is so needlessly distracting...
But such people also don't like strawberries, I guess.
I eventually remembered that some people have more fun just doing math:
I guess one excuse I can imagine for not wanting to talk about one's work is "I'm having so much fun working on this that I don't want to stop and take the time to bring other people up to speed".
And such people don't all need to be as good as Newton, Gauss or Perelman!
The (coloured/typed) operad whose category of algebras is equivalent to the category of uncoloured operads is defined in Higher-Dimensional Algebra III: n-Categories and the Algebra of Opetopes, Section 3.1. Was this operad known before this paper?