You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
I'm going to ask a naive-sounding questions: Suppose I have a class of things, call them Thing with a 'natural' notion of Thing homomorphism, so that we get a category, Thing. Great. What can we learn about 'things' (the objects) by studying Thing?
One can clearly start looking for certain properties of Thing: does it have products, an initial object, etc. One can do silly things and look at presheaves over Thing and (mistakenly) be amazed that we now have a category with an incredible set of properties and believe lots of structure's been unearthed about Thing (to be clear: all presheaves are quite magical, it's not Thing's fault.)
The reason to ask the question 'naively' is at least twofold: 1) someone might answer that question with much more insight than a more fancy-sounding version, 2) people might get distracted by accidents of the phrasing of the fancy-sounding version.
The motivation for the question is the law of the instrument (if all you have is a hammer, everything looks like a nail). I often roll my eyes when I see people "applying category theory" left right and center. But that is a rather negative way to look at it. The positive way to look at it is to do some analysis of the tool to better understand what the problem space for which it is well-suited. In this case,
[Not sure if this a "theory: applied category theory" or a "theory: philosophy" question. Both?]
I think a textbook answer here is that, if your system is compositional, what you gain with category theory is a way to reason compositionally. similarly, if your system is symmetric, what you gain with group theory is a way to reason symmetrically. in other words, category theory is for axiomatizing compositional systems first and foremost, not arbitrary ones, and that is where the power comes from.
That seems like too high a level an "answer" that has (for me) no new content. Because, to "reason categorically", you're going to need to know more (like some monoidal structure, etc). Just categories doesn't give you that! Similarly, group theory is indeed a great tool in the presence of symmetry but, AFAIK, "symmetry breaking" is still an art, not a science.
Jacques Carette said:
I'm going to ask a naive-sounding questions: Suppose I have a class of things, call them Thing with a 'natural' notion of Thing homomorphism, so that we get a category, Thing. Great. What can we learn about 'things' (the objects) by studying Thing?
Are you looking for "applied category theory" answers or "pure mathematician" answers? I have a lot of pure mathematician answers, since when I hear "what can we learn about 'things'?" that sounds like a pure math question. Pure mathematicians like to go around learning about things. Applied mathematicians like to go around using math to get stuff done.
Among pure mathematicians, category theory was not originally invented to study individual categories of things. It was invented to study when two processes of turning things of one kind into things of another kind could count as "essentially the same". And this was a very practical concern, since people had invented a big pile of ways of turning topological spaces into abelian groups.
Later, as category theory grew, it became important to know whether various categories of things had nice properties. For example it became important to know if a category of things is abelian, since people had cooked up a bunch of great stuff you can do with an abelian category.
I think only later did mathematicians get really interested in questions like whether a category is complete or cocomplete.
And most pure mathematicians still get interested in category theory following this historical route - with ontogeny recapitulating phylogeny. I know I did! Only after I was quite committed to category theory did I decide that it was interesting to know whether categories have limits or colimits.
On the other hand, there are also people who learn category theory mainly because they like it, not because they were working on something else and gradually realized it's important. I don't completely understand such people, but there seem to be a lot around here. For these people, as soon as you tell them about a kind of thing, and a map between such things, they want to know properties of the category of such things: that is their favorite way of getting insight into things.
Something partly implied by John's answer is that if the categorical point of view is condensed in "it's the morphisms, not the objects", then it's iteration one level higher is "it's the functors, not the categories"; I think it's quite rare that studying a category in isolation is enlightening; defining a category tends to be useful insofar as we are also defining some functors in and out of it...
And this is where some questions whose use is somewhat mysterious in isolation find their applicability. Say that we have shown that a category is cocomplete, and say also that it is generated under colimits by some generating set of objects. Lots of naturally occurring examples. But so what?
Well, if we also have a functor coming out of it, and that functor is a left adjoint, this means that we can calculate the functor on any object by just computing it on generating objects, and then computing it as a colimit in the codomain category.
Which I would say is the classical version of "compositionality" from before everyone became obsessed with monoidal categories.
To add on to the above discussion, I'll state how I personally use categories. I think of categories as being instruments for the purposes of comparing mathematical structures. So if your "problem space" involves having two or more kinds of mathematical objects and the questions you are asking are about how to find or understand ways to go between or compare them, then category theory provides an elegant setting with which to view those kinds of problems.
Here's what I mean. Once you have a category Thing, you have automatically identified an object in Cat corresponding to Thing, and as an object in Cat, it will have morphisms- functors- into it and out of it into a whole bunch of other categories in Cat. Each functor is a general way by which you can compare the objects in Thing with the objects in the other category, and studying all possible comparisons of these forms tells you a lot about "things" in Thing. For instance, let's say Thing has a faithful functor to Set. You can now interpret "things" as sets with extra structure. Or more generally if you have a faithful functor from Thing to C, then you can interpret "things" as objects in C with "extra structure". In addition, if you have an embedding or better yet a fully faithful functor from Thing into some category D, you can view "things" as special cases of objects of D, with objects in D as "generalizations" of "things".
I hope this helps!
I might not be well-suited to comment, as I'm still new to the topic, but I will try to speak to a few things that I haven't seen mentioned. Also, to contrast with @John Baez , I'm one of the people who likes category theory as an end and not as a means. I believe that there are three fundamental values one gains from looking at categories, with each of these corresponding to a categorical principle at a given layer.
Anyways, it took me long enough to write this that both @Amar Hadzihasanovic and @John Onstead have said some very insightful things that overlap with a lot of this.
To be a bit more grounded, focusing on bullet point 1. I come from a bit of a computer science background, and programming only really "clicked" with me once I learned functional programming (before this, programming was the chance to work on fun puzzles, whereas after I started to appreciate it for how rich it is conceptually). The idea that you can discuss things purely based on how they act on other things corresponds to the notion of "point-free" programming, and designing code in this way can often provide clarity through simplicity, and lends itself to a more "diagrammatic" way of thinking about code. As a programmer, I don't care about or ---I want to be able to do things (with maps and folds, goddammit!)
@John Baez I put the question under 'applied' because I'm one of those people who believes that to get things done, it is extremely helpful to really understand the theory [and yes, I do drive a car whose details I don't understand - but that's different from what I do in my research!] So understanding what questions are sensible to ask can be classified as pure mathematics, fine! Knowing this, I can be more efficient at getting things done because I don't ask stupid questions...
@Amar Hadzihasanovic I quite like it's the morphisms, not the objects and it's the functors, not the categories. It's kind of a tell-tale sign of shallow understanding when someone's focusing real hard on the objects. But then I'm baffled by 'representable functors'! Not their definition, that makes sense, but their importance. Because they sure seem to focus real hard on the objects!
You definitely got closer to the meaning of my question with your story on adjoints, as that's a concrete example of (question to ask / what you'll learn / why that's useful) triple that I'm interested in.
@John Onstead I already 'knew' this (as I said, my question is naive sounding, I've published a number of things that use a fair amount of category theory, on top of having formalized a lot.) What I'm looking for is a more concrete version of what you say.
This is a bit of a side note, but I think it's important to point out that objects ARE important - inasmuch as they are used to decide what morphisms exist.
For example, it's possible to dream up a lot of categories of "structured things" and "structure preserving maps between them". If you decide your morphisms should be structure preserving maps, the way in which you set up the structure in your objects will determine what morphisms you get.
But if you have an object with features that don't impact the morphisms to or from - then those features of aren't really being modelled in your category, I'd argue.
But if you have an object with features that don't impact the morphisms to or from - then those features of aren't really being modelled in your category, I'd argue.
Which is why I always tend to find categories with a single point, or even those where the objects are (like more categories of matrices) extra-odd.
Jacques Carette said:
I quite like it's the morphisms, not the objects and it's the functors, not the categories. It's kind of a tell-tale sign of shallow understanding when someone's focusing real hard on the objects. But then I'm baffled by 'representable functors'! Not their definition, that makes sense, but their importance. Because they sure seem to focus real hard on the objects!
I don't think it's good to get too carried away with the "objects bad, morphisms good" philosophy.
But we could argue that representability is all about morphisms: morphisms from all objects to some particular object. As you know, an object represents a functor iff is naturally isomorphic to .
That's formally very attractive, and almost magical: a big fat functor is being distilled down to a tiny little object . This opens up all sorts of new possibilities.
But since I'm a humble fellow who first got interested in category theory through applications (to pure math, and mathematical physics), this only clicked with me through examples. It's an amazing fact that in topology, all sort of interesting functors are representable. This means that all sorts of concepts we consider, like "the set of isomorphism classes of principal -bundles on a space", or "the 2nd cohomology group of a space", can actually be distilled into spaces.... which we can then use topology to study!
So, abstractly, we're saying: this scary-looking set is really just a set of morphisms, .
(In fact these spaces I'm talking about are really homotopy types, since the category I'm talking about is not really Top, but some category of homotopy types. And thus homotopy type theory was born... but that's another story.)
I might answer the original question going in a different direction. It's not necessarily that you might learn about properties of things by looking at , it's just it turns out that a lot of things can be framed as categories, so people have spent a lot of time working on mathematical tools for making your life easier when you learn to speak their language. For instance, I often use @Joe Moeller's monoidal Grothendieck construction to build new categories out of old categories, which saves me a lot of work. So while you may not learn anything new about things, you may find that you can refer to a wide variety of other people's work when you are trying to prove theorems.
Talking about representability is clearly a tangent... but since I started it, I can't complain! Anyways: my feeling is that representability seems worthwhile when you've got some kind of careful balance between your objects and morphisms (i.e. how much information is in each); then representability seems super useful indeed. Topological spaces are rather 'information rich' in that sense.
@Owen Lynch I'm a big fan of transferring theorems from one place to another, so obviously category theory is extremely useful in that setting. I'm still trying to wrap my head around "modelling with categories": how can I tell when the people doing this are serious, and when are they deluding themselves? [Both obviously happens.] I was trying to tease out tell-tale sign of 'serious'.
Broadly on this topic, I think this is an interesting question: Given a bunch of things of interest, how can one come up with sensible notions of morphisms between those things?
(Hopefully that is not too much of a tangent!)
Given a bunch of things of interest, how can one come up with sensible notions of morphisms between those things?
The most famous approaches are:
Or, generalize the concept of doctrine still further....
Another way I have found very fruitful of "finding the right morphisms":
If the objects have an "underlying object of ", and I have some construction that turns an object into an object of , ask
(Typically is sets and functions, or some other concrete category over sets and functions.)
Also, sometimes you get two interesting notions of morphism by trying to make first covariant, then contravariant.
If you have a category whose objects are things, one productive thing you can do, that might not be so obvious without using category-theoretic tools, is to define open things as some class of cospans of thing-homomorphisms
Amar Hadzihasanovic said:
Another way I have found very fruitful of "finding the right morphisms":
If the objects have an "underlying object of ", and I have some construction that turns an object into an object of , ask
- what are the morphisms of the underlying -objects that I can turn into -morphisms between the -images, so that will be a functor?
Isn't this the essence of why 'displayed categories' are so nice?
Jules Hedges said:
If you have a category whose objects are things, one productive thing you can do, that might not be so obvious without using category-theoretic tools, is to define open things as some class of cospans of thing-homomorphisms
Now that is seriously not obvious and new information for me! Thank you.
Jacques Carette said:
Jules Hedges said:
If you have a category whose objects are things, one productive thing you can do, that might not be so obvious without using category-theoretic tools, is to define open things as some class of cospans of thing-homomorphisms
Now that is seriously not obvious and new information for me! Thank you.
This also works with relations in some cases.
To give a concrete example: in quantum computing, it is well known that (odd prime dimensional) stabilizer states are projectively represented by affine isosotropic subspaces over finite fields, and that the evolution by Clifford operators corresponds to taking the action of affine symplectomrophisms on these subspaces. However, if you take the complement and work with affine coisotropic subspaces, then the relational composition corresponds to usual composition of linear operators in finite dimensional Hilbert spaces. And in this setting, the Clifford evolution, the states, and the effects are all just these kinds of subspaces!
So the correspondance between the "affine isotropic subspace" semantics, and the "Hilbert spaces" semantics is formalized by doing essentially what Jules is talking about. And the structure of affine isotropic subspaces that theorists usually work do not compose in the right way, so instead you actually need to take the complementary representation of affine coisotropic subspaces! So in this case, the right notion of "open thing" is actually the dual of what people normally work with... without category theory, this subtlety would be very hard to pick up on.
I think the relational perspective works best when the states of systems are already known to be represented by some sort of subobject. It is a completely natural thing to regard subobjects as relations, and sometimes this can offer a structural perspective which may not have been known otherwise.
Maybe this is more obvious than the cospan semantics which Jules is referring to, but these two kinds of approaches are very conceptually similar in my opinion.
(Jokingly) Thanks @Cole Comfort for the reminder that even though I'm co-author of 3 papers that deals with quantum, I still know approximately nothing about the topic. (They are all very focused on PL-oriented views.)
More seriously: solid food for thought.
Jacques Carette said:
(Jokingly) Thanks Cole Comfort for the reminder that even though I'm co-author of 3 papers that deals with quantum, I still know approximately nothing about the topic. (They are all very focused on PL-oriented views.)
More seriously: solid food for thought.
Yeah maybe I am really pushing what is meant by "well-known" :joy:
Luckily you don't need to know or care about quantum mechanics, affine isotropic subspaces, etc. to see nice examples of the philosophy Jules mentioned: treating "open" things as cospans of things. Jules used this philosophy to define "open games", and I've used it to define "open electrical circuits", "open Petri nets", "open chemical reaction networks", "open Markov processes" and other things. Basically whenever engineers or scientists use network-like diagrams, this is a thing you can do with them.
A good intro to this philosophy may be Brendan Fong's The Algebra of Open and Interconnected Systems.
I'm sure someone pointed out that this use of cospans has appeared in software architecture 20+ years ago? [What's new to me is not the technique but the philosophy].
John Baez said:
A good intro to this philosophy may be Brendan Fong's The Algebra of Open and Interconnected Systems.
Three things which @Brendan Fong does in his thesis is that he works with decorated cospans, decorated correlations, as well as categories of relations.
For example, he works with "Lagrangian cospans," "Lagrangian corelations," and "Lagrangian relations" to give categorical semantics for some idealized class of electrical circuits. To me, all three of these categories feel like, "open things" and it seems to me like "Lagrangian corelations," and "Lagrangian relations" should actually be equivalent. But in general, I get the feeling that "decorated cospans" are the central construction which he uses to model open systems.
Obviously, you are not Brendan himself, but do you agree with my reading that all three of these approaches are different ways in which one can construct categorical semantics for "open systems?"
In this specific example, because the morphisms can be regarded as subobjects, I don't see the advantage of using decorated corelations or cospans over relations, because you can concretely define composition to be given by relational composition and you don't have to know any more complicated categorical constructions. But maybe I am missing the point and decorated cospans are inherently more "open" for some reason.
What I was trying to argue for in my earlier response is that if you have a system whose behaviour is represented by some sort of subobject (for example some kind of linear subspace), then it is natural to ask if these kinds of subobjects are closed under relational composition. In some sense given a thing, subobjects are one of the simplest kinds of thing-homorphisms. And to me, this feels like a very concrete way in which one can obtain a categorical semantics for these kinds open systems. In the example of electrical circuits or stabilizer states, it feels as if physicists treat these states as closed systems only because the are neglecting that they can be composed, which is probably only because of the pervasive historical bias towards function-composition. The perspective which reveals that such things can be composed makes them manifestly open in my view.
I tend to focus on decorated (and structured) cospans since these let us describe the syntax of various kinds of open systems, which is where I like to start. For example, an electrical circuit with m input wires and n output wires is a morphism from m to n in a suitable decorated or structured cospan category
But then it's sometimes good to describe the semantics by a functor from this category to some category of relations or corelations. More precisely, it's good to do this when your semantics ignores the 'internal workings' of your system and only describes the 'externally visible' way your system sets up a relation (or corelation) between the inputs and outputs. I call this general sort of semantics 'black-boxing'.
But in software we're often interested in semantics that describe the 'internal workings' as well - e.g. we might have a semantics that gives a set of differential equations for how all the currents and potentials in our circuit change with time, not just those observable from the input and output wires.
Then we'd describe the semantics using a functor from one decorated (or structured) cospan category to another.
As for when to use relations vs when to use corelations, I haven't thought about this much beyond what was in my paper with Brendan, since I haven't been doing much black-boxing lately: in our epidemiology software we want to simulate the whole system, including its 'internal workings'.
John Baez said:
I tend to focus on decorated (and structured) cospans since these let us describe the syntax of various kinds of open systems, which is where I like to start. For example, an electrical circuit with m input wires and n output wires is a morphism from m to n in a suitable decorated or structured cospan category
But then it's sometimes good to describe the semantics by a functor from this category to some category of relations or corelations. More precisely, it's good to do this when your semantics ignores the 'internal workings' of your system and only describes the 'externally visible' way your system sets up a relation (or corelation) between the inputs and outputs. I call this general sort of semantics 'black-boxing'.
I see. I suppose my motivation is different, because I often like to give generators and relations presentations for these kinds of categories of open systems. So I usually want to present a syntax in terms of an equational theory and prove it is equivalent to the semantics, with the end goal of being able to treat both the syntax and semantics as being essentially the same.
Morally, I feel like my motivation is in some sense the opposite of black boxing. Which is nice, because you guys have revealed lots of open problems for me when I view stuff from this dual perspective.
"Open problems". :laughing:
Yes, we need a detailed study of the (typically) props that provide targets of black-boxing semantics, and it's really interesting to get presentations for these, not only for the obvious reason (presentations make easier to define maps out of these props), but because throwing out some of the relations gives interesting props of a more 'syntactic' flavor.
An example is our rather detailed study of various props related to the most boring electrical circuits: circuits made only of purely conductive wires. From a very 'black-boxed' perspective all such a circuit gives is a corelation saying whether current can or cannot flow between two input or output wires (or better, 'terminals'). But finding a good presentation of the prop of corelations can shed light on the internal workings of such a circuit.
I think this is the sort of thing you were talking about.
John Baez said:
I think this is the sort of thing you were talking about.
Pretty much. But correlations is much easier to present (at least with respect to the coproduct) than most of the other categories in Brendan's thesis.
The question of openness is more of my own spin, but if you posed it explicitly it would make these things easier to publish :joy:. We evocatively said that we opened up one of your guys' black boxes in our recent paper and the reviewer was not happy with this wording, haha
If you feel like giving presentations is not the same as opening the box, we will refrain from saying that. Maybe I was being too poetic. In the introduction I usually try to be more florid than the rest of the article, borrowing from my masters supervisor Robin's style.
I'm all for exciting introductions! I like the idea of presentations as "opening the box". If you give me your referee's email address, I'll go argue with them. :upside_down:
John Baez said:
I'm all for exciting introductions! I like the idea of presentations as "opening the box". If you give me your referee's email address, I'll go argue with them. :upside_down:
One must respect the sanctity of the peer-review process ;)
John Baez said:
Jules used this philosophy to define "open games"
Sadly not true, open games are weird by comparison to all the other things John mentioned which are cospanish. It might be possible to get a confusingly-different category of "open games" by using cospanish methodology, I've idly wondered about it a bunch of times but never put in enough work to actually figure it out
Jacques Carette said:
But then I'm baffled by 'representable functors'! Not their definition, that makes sense, but their importance. Because they sure seem to focus real hard on the objects!
One perspective on representability I quite like is the algebraic one. You can see a category as an algebraic structure, and presheaves over it as its actions. A representable presheaf then is a particularly simple action: it's free, meaning you generate the presheaf by choosing some generators on each object and then 'closing under functoriality', and its generators are as simple as they can get: you just start with one generator on the representing object. So showing a functor is representable is a way to show it can be constructed with very little extra data.
Jules Hedges said:
John Baez said:
Jules used this philosophy to define "open games"
Sadly not true, open games are weird by comparison to all the other things John mentioned which are cospanish. It might be possible to get a confusingly-different category of "open games" by using cospanish methodology, I've idly wondered about it a bunch of times but never put in enough work to actually figure it out
lenses and parametric lenses are spanish instead of cospanish, so it's the other side of the open coin :)
It's more different than just spans vs cospans. Categories of cospans are generally network theories, where both and composition are spacelike. But open games is a process theory, where is spacelike but is timelike
(I never wrote that in any papers because it's not a theorem or anything I know how to define, I just know it when I see it)
Jules Hedges said:
It's more different than just spans vs cospans. Categories of cospans are generally network theories, where both $;$ and $\otimes$ composition are spacelike. But open games is a process theory, where $\otimes$ is spacelike but $;$ is timelike
I think Matteo is talking about the fact that fibre-wise opposites of fibrations are equivalent to categories of spans where one leg is a cartesian map and one is a vertical map.
So in the case of lenses, they are equivalent to spans (of charts) where each leg has one component as the identity
Total digression: I'm always curious about people who post comments containing quotes where all the double dollar signs have been converted to single dollar signs so the math doesn't work anymore.
How does that happen?
John Baez said:
How does that happen?
I think it's a bug with the Zulip mobile app. It previously didn't have the 'quote' option at all. Now it seems to strip away certain punctuation when you quote things
Interesting! I never use that app: even when I'm on my cell phone I access Zulip using Firefox. So I didn't know that.
The issue doesn't seem to have been reported on GitHub; I've now done so.
What's the name of the mobile app? Is it good?
(a colleague of mine tried one and was quite disappointed, so I never bothered to try)
Peva Blanchard said:
What's the name of the mobile app? Is it good?
(a colleague of mine tried one and was quite disappointed, so I never bothered to try)
It's just called "Zulip". It has some rough edges, but it's usable.
Great, I'll give it a try. Thank you!
Jules Hedges said:
It's more different than just spans vs cospans. Categories of cospans are generally network theories, where both $;$ and $\otimes$ composition are spacelike. But open games is a process theory, where $\otimes$ is spacelike but $;$ is timelike
This is OT but I'm on the aforementioned bad mobile app so cant fork the thread rn, but: what do you mean by spacelike and timelike? My intuition is that as long as ; interchanges with tensor everything is spacelike, and you need some genuine premonoidality to tell the difference (related lit is Mario Roman's recent papers on effectful streams, as well as earlier ones I forgot the name of; but also a neat work of Spivak and Shapiro on 'normal duoidal' cats).
I think spacelike has a dagger, whereas timelike doesn't. We can go left-to-right or right-to-left in space, but we can only go yesterday-to-today in time.
Or maybe: the more you can bend wires around, the less difference the distinction between the ; direction and the direction really makes, so that categories of spans or cospans end up just wanting to be wiring diagrams instead of string diagrams.
Yes, as soon as you have a compact structure, the flow of information between different parts of some composite morphism no longer follows a dag-like shape, thus no specific operation can be thought of as happening before any other in a meaningful sense. Now, this is an informal statement, because you could always treat the cups and caps of your compact structure as boxes like any other, forget that they satisfy special properties, and recover a dag-like structure. However, this can be an unnatural perspective if your category is indeed self-dual compact closed: it forces you to come up with an interpretation of the cups and caps that is consistent with the time-like intuition about what your morphisms represent. It's not impossible though: adding partiality (caps) and nondeterminism (cups) to some smc in which morphisms represent total deterministic processes usually gives you a self-dual compact closed category.
Yeah, I think all these intuitions are fine as long as you don't look at quantum processes, which "should" be timelike but behave as though they're spacelike in most ways
One of the reasons for creating this thread was to try to understand, ahead of time, when people are doing BS with 'categorical modeling'. I did not have UMAP in mind when I start the thread, but apparently I should have.
This is an issue with category theory, IMHO: it can equally reveal that you have found valuable structure or serve to obscure the fact that what you're doing is BS.
Warning: over-generalization. It seems to me that the positive parts are mathematical (i.e. giving names to the 'right' abstractions) and the negative parts are largely sociological (doubling down on 'abstract nonsense' as a source of pride, and many of the other cultural ills already well documented.)
[I am well aware that there are a number of people trying to be pedagogical and working hard at de-mystifying category theory.]
I think of this issue with category theory as part of a more general issue with mathematics. Lost in Math discusses how the quest for mathematical beauty has led particle physics and quantum gravity astray, and the misuse of mathematics in theoretical economics is probably even worse.
Jacques Carette said:
One of the reasons for creating this thread was to try to understand, ahead of time, when people are doing BS with 'categorical modeling'. I did not have UMAP in mind when I started the thread, but apparently I should have.
FWIW, I don't think you should have UMAP in mind because I don't think it's representative at all of how people in the applied category theory community are trying to use category theory for modeling.
You're right, Evan. UMAP was not created by people "in the applied category theory community".
The issue with UMAP is that this very useful algorithm was first described using a little category theory - a little, but it was sufficiently unfamiliar to the people who actually use UMAP that it caused a lot of head-scratching, with people wondering whether category theory provided the "secret sauce" that makes the algorithm work. I read the paper and wrote an article about this:
If category theory helped the creators of UMAP come up with good ideas, it served its role. But they probably should have written at least one paper explaining it without mentioning anything about categories.
I have no idea what UMAP is, so what follows is not intended to throw any shade at these people, because it may be great work. However, it seems to me that "category theory" has recently become trendy among non-pure-mathematicians.
One consequence of this is that more people want to learn categorical methods to solve their problems, which is great for them because they can solve their problem, and great for everyone already applying categorical methods because there is strength in numbers.
But along the same lines of what I think Jacques is saying, sometimes people slap the words "category theory" on their work to give them mathematical street-cred. In some cases, they do not even employ categorical tools at all, and a great number of these papers are completely nonsensical. And I have seen people in our community who should know better endorse papers or people peddling this kind of nonsense. I fear that the apparent openness of "applied category theorists" to welcome people into their ranks is too indiscriminate, because if people outside of categorical circles realize it is nonsense, it would reflect poorly on people trying to actually use category theory to solve problems in applied domains.
I have had similar worries, and you phrase them better than I could.
I don't really know exactly what it means to "welcome people into our ranks", and in what way Cole thinks we should be doing less of that. I think we'd need to be a bit more specific to make progress on this question, even though it's bound to be uncomfortable.
By welcome into our ranks, I mean to involve people in conferences, workshops and give endorsements. I am not referring to the secret badge that people get when they officially become category theorists.
But I am being intentionally vague so as not to cause offense/harm to myself.
Someday I hope somebody writes a book with a title roughly in the spirit of "So, you want to apply category theory to your research problem...". I'm imagining this book hopefully doing a number of things.
Cole Comfort said:
By welcome into our ranks, I mean to involve people in conferences, workshops and give endorsements.
Okay, that's clear enough. I was wondering about this:
A bunch of people with grandiose hopes and dreams come into this Zulip. I'm never sure which ones it's good to engage with, but I generally start off by optimistically engaging with them. I would not call this "welcoming them into our ranks", though maybe some of them feel that's what is going on - at first. Many are unable to focus their thoughts on one topic for long enough to make serious progress, and the rest of us eventually realize that, and they eventually leave. A few stick around and become quite serious.
I am not referring to the secret badge that people get when they officially become category theorists.
:shush:
I am not referring to the secret badge that people get when they officially become category theorists.
Damn, so this tattoo artist I met was a con ... now wondering how I will scratch "I <3 Yoneda" off my back :disappointed:
Joke aside. It is a hard balance for a community to find: staying true to the craft while being open and welcoming to outsiders. A social form of the safety vs. liveness dilemma. Drop either one and the community is dead.
There are social means to address it. Here are, off the top of my head, some practical actions.
I'll start with the obvious one. For journals, conferences inside CT/ACT, well ... just don't lower the bar: have a decent program committee, select good reviewers, etc.
For publications about CT/ACT, but outside the CT/ACT community, it's harder. But it is still possible to reach out to the journal/conf orgs or paper authors. Maybe they used the words "category theory" wrongly because they want the street-cred, or just because they don't know enough about it. It's also possible to publish a paper/survey/blog article/book, like the one about UMAP, to rectify/correct things. Or like what @David Egolf suggests (btw, David, you may be interested in Seven Sketches in Compositionality if you don't already know it).
All of this has a cost, time-wise but also socially: for young researchers this might not be the first thing to do. But it's also possible to team up with more established people.
Now, regarding Zulip: in other communities, there is usually a charter/banner describing the preferred way to behave, and moderators to enforce it. But more specifically, regarding this Zulip instance, and as a newcomer/learner/tourist myself, I really like @John Baez's position: both open and true to the craft. This, I think, reflects in the kind of feedback I've seen (and received) here: positive encouragements when appropriate and firm corrections when necessary.
Jacques Carette said:
One of the reasons for creating this thread was to try to understand, ahead of time, when people are doing BS with 'categorical modeling'.
This is a really hard question, and my approach has always been to not worry about it, because the downsides of false positives are much worse than the downsides of false negatives
It seems everyone is worrying about non-mathematicians talking BS about category theory, but my experience is that it's more common that category theorists talk equal amounts of BS about an application domain (but of course both happen quite often)
Applied category theory is applied mathematics so the ultimate test is the usual one: you're not doing BS if you're solving a problem
But of course stuff takes time. I've been working on compositional game theory full time since 2015 and we're only recently starting to solve actual problems with it. In the meantime there's no really objective way to tell.
Jules Hedges said:
It seems everyone is worrying about non-mathematicians talking BS about category theory, but my experience is that it's more common that category theorists talk equal amounts of BS about an application domain (but of course both happen quite often)
I think the difference often boils down to the paper appearing completely unmotivated, versus being incoherent. A paper "being unmotivated" is somewhat subjective as you said, but there is no such ambiguity when the paper makes no sense.
For example, I have heard stories about some of the older generation of category theorists not liking what is often known now as "applied category theory," which is more of a matter of taste, rather than rigour.
My surprise is immeasurable
Sure, there are incoherent papers; those are the ones that should just be desk-rejected, and nobody could reasonably call it gatekeeping. I think I see unmotivated papers coming in equal measure from both category theory experts and category theory beginners. Both have an equal chance of containing something interesting
An interesting paper in the latter case is "here is a good description of my problem domain and why it's really difficult, now here is some very naive category theory that probably has no chance of being useful". An interesting paper in the former case is "here's some good category theory, now here is a very naive description of an application that it probably has no chance of being useful for"
Jules Hedges said:
but my experience is that it's more common that category theorists talk equal amounts of BS about an application domain (but of course both happen quite often)
Do you have some examples of the sorts of things you have in mind (if it's possible to give examples without calling anyone out specifically)?
imo there's an xkcd about this phenomenon, just replace "physicist" with "applied category theorist" :-) https://xkcd.com/793/