Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.


Stream: learning: questions

Topic: Calculations in Category Theory


Ruby Khondaker (she/her) (Jan 05 2026 at 22:20):

This is definitely a vibes-based question, but I'm interested to hear what people think counts as "doing calculations" in category theory.

As an example from my own field, when I'm solving a physics problem I often end up needing to solve some integral. There might be nice conceptual and physically-motivated reasons for why I end up with that integral, but in the actual process of solving it - throwing integral tricks at it and the like - I'm not really trying to find a physical interpretation for each step. It's like I'm using a different part of my brain, or a different mode of thinking, one that's a lot more "computational".

I'm sure there's an analogous phenomenon in mathematics. In category theory, I've sometimes found manipulating ends and coends using Fubini and Yoneda reduction to count as a kind of "calculation". What else might fall under that umbrella?
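For concreteness, here is one common form of those two rules (a sketch, for a functor $$F : \mathcal{C} \to \mathbf{Set}$$): Yoneda reduction says

$$\int_{c} \mathbf{Set}(\mathcal{C}(x,c), Fc) \cong Fx \qquad \text{and} \qquad \int^{c} \mathcal{C}(c,x) \times Fc \cong Fx,$$

while Fubini lets iterated co/ends be taken in either order, $$\int^{c}\int^{d} T(c,d) \cong \int^{d}\int^{c} T(c,d)$$. A typical such "calculation" is then the one-line evaluation of a left Kan extension: $$(\mathrm{Lan}_{\mathrm{Id}}F)(x) = \int^{c} \mathcal{C}(c,x) \times Fc \cong Fx$$.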

Rémy Tuyéras (Jan 05 2026 at 22:28):

Universal properties and transfinite constructions (e.g. the small object argument, Kelly's treatment of transfinite constructions on endofunctors, or the categories of continuous functors of Freyd & Kelly)

Peva Blanchard (Jan 06 2026 at 09:00):

I guess we could add string diagram manipulations.

More generally, I feel I'm calculating when I've formalized my problem into a syntactic representation, and I am "blindly" applying some legal transformation rules.

fosco (Jan 06 2026 at 09:54):

I like this question!!! I asked myself the same some time ago and I keep wondering what counts as "computing" in our field.

Ruby Khondaker (she/her) (Jan 06 2026 at 09:58):

In part this was inspired by reading Borceux’s “Handbook of Categorical Algebra” recently, particularly the various results on split/effective/regular/strong/extremal flavours of monos and epis. The proofs there felt very “computational” to me, I guess?

fosco (Jan 06 2026 at 09:59):

Certainly following a chain of isomorphisms counts. And moving string diagrams around. But also computing with the operations of orthogonal classes (which can be "algebraicized" to a certain extent), or splitting a big diagram into smaller diagrams, each of which commutes.
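To give a tiny instance of the chain-of-isomorphisms mode (a sketch, in a cartesian closed category $$\mathcal{C}$$ with coproducts, so that $$X\times(-)$$, being a left adjoint, preserves them): for every $$X$$,

$$\mathcal{C}(X, C^{A+B}) \cong \mathcal{C}(X\times(A+B), C) \cong \mathcal{C}((X\times A)+(X\times B), C) \cong \mathcal{C}(X\times A, C)\times\mathcal{C}(X\times B, C) \cong \mathcal{C}(X, C^A)\times\mathcal{C}(X, C^B) \cong \mathcal{C}(X, C^A\times C^B),$$

naturally in $$X$$, so $$C^{A+B} \cong C^A \times C^B$$ by Yoneda.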

Some people, to whom I owe a great debt as teachers, showed me that category theory can be less handwavy and more calculation-oriented: for example Jacobs' books, or the very recent book by Adámek and Milius on algebras and coalgebras.

fosco (Jan 06 2026 at 10:03):

At the opposite end of the spectrum, I feel that some results tied to the adjoint functor theorem are less computational. A single example I recently re-learned: in order to appeal to Theorem 6.1 at https://ncatlab.org/nlab/show/transfinite+construction+of+free+algebras, one has to rely on a concrete step (building a certain pushout) and then a more black-box step; the answer is the colimit of such-and-such, whatever it is.

fosco (Jan 06 2026 at 10:04):

(of course "concrete" and "abstract" are ill-defined concepts; someone could chime in and say: well no, the colimit of a chain is a very concrete way to describe something and I can compute with it, look, there's an initial cocone, and the universal property says that...)

fosco (Jan 06 2026 at 10:09):

Pólya used to say "a tool is a trick I use twice"; I can easily see tricks in category theory (e.g. the backbone of the idea in the proof that every category admits a conservative functor into Set), and tools (the examples given in this list)...

fosco (Jan 06 2026 at 10:09):

Rémy Tuyéras said:

Kelly's treatment of transfinite constructions on endofunctors

Interesting that I mentioned it as well, as an example of something that is not a calculation :smile:

David Corfield (Jan 06 2026 at 10:28):

Not quite the same, perhaps, but I was wondering about a distinction between data and computation in ACT modelling that @David Jaz Myers is looking to draw in an online tutorial, [2-torial] David Jaz tells Brendan about a topos-theoretic interpretation for conceptual modelling (beginning around 32:40).

This distinction is then related to positive/negative polarity from type theory:

image.png

David Corfield (Jan 06 2026 at 10:44):

I get the sense that the idea here is not far from what @Joachim Kock is conveying back here:

Altogether we have split the notion of category into an algebraic part and a geometric part. The algebraic part is composition and identity arrows. The geometric part is all the underlying bookkeeping, about source and target, how arrows are required to be lined up (glued together) to a chain before it makes sense to compose them.

David Jaz Myers (Jan 06 2026 at 11:58):

David Corfield said:

I get the sense that the idea here is not far from what Joachim Kock is conveying back here:

Altogether we have split the notion of category into an algebraic part and a geometric part. The algebraic part is composition and identity arrows. The geometric part is all the underlying bookkeeping, about source and target, how arrows are required to be lined up (glued together) to a chain before it makes sense to compose them.

I don't really think this is quite the same. What Joachim is referring to there is the algebraic pattern structure on the simplex category: a factorization system that splits any map into its "inert" and "active" parts --- these are his "geometric" and "algebraic" parts respectively. This is because a presheaf on inerts which satisfies the Segal condition is the same thing as a graph --- which Joachim is taking to be a kind of generalized space --- while a presheaf on inerts and actives satisfying the Segal condition is a category because the actives correspond precisely to the generalized composition operations in the essentially algebraic theory of categories.

In my above story, both of these things are in the positive side of the picture. There is an essentially algebraic theory of categories which presents a simple dependent type theory generated by the judgements that there is a type of objects, that for two objects there is a type of morphisms between them, that for any object there is an identity arrow, for two composable arrows there is a composite, and the laws.
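In rule form, that generating data reads roughly as (a sketch):

$$\vdash \mathrm{Ob}\ \mathsf{type}, \qquad a,b:\mathrm{Ob} \vdash \mathrm{Hom}(a,b)\ \mathsf{type}, \qquad a:\mathrm{Ob} \vdash \mathrm{id}_a : \mathrm{Hom}(a,a), \qquad f:\mathrm{Hom}(a,b),\; g:\mathrm{Hom}(b,c) \vdash g\circ f : \mathrm{Hom}(a,c),$$

together with the equations for unitality and associativity.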

The difference between the inerts and the actives is reflected in this type theory as follows: the inerts are, roughly speaking, the weakenings: the maps between contexts which reflect forgetting about some of the variables in them. The actives correspond to the substitutions which use all of the variables in the context. I don't know when type theories admit active-inert factorization systems on their categories of contexts, but I think it's an interesting question.

Anyway, for me the "geometric part" is not "all the underlying bookkeeping" --- that's what the type theory is for. Rather, the "geometric part" (or, I would prefer, the "spatial" part, since there's really no metric or measurement involved) concerns spaces of functions, streams, or types. These come with a natural "topology", sometimes literally as in the Cantor space of functions $$\mathbb{N} \to \{0, 1\}$$, or equivalently bit-streams; or the Dedekind reals, which are a subset of the space of type families varying over a pair of rationals.

David Corfield (Jan 06 2026 at 12:53):

Thanks for the clarification, David! Could you continue from that last paragraph to say why you take computation performed on "spatial" data to be on the negative side? Is this in the sense of right universal properties ([[negative type]])?

In type theory, a negative type is one whose eliminators are regarded as primary. Its constructors are derived from these by the rule that “to construct an element of a negative type, it is necessary and sufficient to specify how that element behaves upon applying all of the eliminators to it”.
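A minimal Haskell sketch of that discipline, with streams (the bit-streams mentioned above) playing the negative type; the names `Stream`, `from` and `alternating` are just illustrative:

```haskell
-- Streams presented negatively: an element is determined by its
-- behaviour under the two eliminators (observations) headS and tailS.
data Stream a = Stream { headS :: a, tailS :: Stream a }

-- To construct a stream it is necessary and sufficient to say what
-- each eliminator returns; laziness makes this corecursion legal.
from :: Integer -> Stream Integer
from n = Stream { headS = n, tailS = from (n + 1) }

-- A point of the Cantor space N -> {0,1}, i.e. a bit-stream:
alternating :: Stream Int
alternating = Stream 0 (Stream 1 alternating)
```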

James Deikun (Jan 06 2026 at 14:03):

David Jaz Myers said:

The difference between the inerts and the actives is reflected in this type theory as follows: the inerts are, roughly speaking, the weakenings: the maps between contexts which reflect forgetting about some of the variables in them. The actives correspond to the substitutions which use all of the variables in the context. I don't know when type theories admit active-inert factorization systems on their categories of contexts, but I think it's an interesting question.

For essentially algebraic theories, or their $$\infty$$-analogues, this basically happens when there is a presentation as a monad of the sort called "strongly cartesian", or sometimes "polynomial". (Neither of these terms is really ideal--both of them could mean too many things.)

Syntactically, this reflects "rigidity"--in the 1-categorical POV this is simply the assertion that equations between terms don't make use of any structural operations (including weakening) to align their contexts. This means that every term-in-context has a "trace" of which structural rules have been used on it, and this "trace" is invariant under permissible rewritings.

What this corresponds to in the higher POV is less clear to me. The "rigidity" is definitely less strict in that situation, as witnessed by the fact that $$E_\infty$$ is a "polynomial" $$\infty$$-monad presented by an algebraic pattern.

Damiano Mazza (Jan 06 2026 at 17:08):

Peva Blanchard said:

I feel I'm calculating when I've formalized my problem into a syntactic representation, and I am "blindly" applying some legal transformation rules.

As a computer scientist, I am sympathetic to this interpretation and I would just go on and say that calculation = rewriting.

Note that it doesn't matter that the structures we are calculating on are badly uncomputable in the sense of computability theory (for example because we are manipulating some infinite data and what not). As long as we have symbolic representations and formal rules to transform them, it's rewriting and it counts as calculation. Conversely, I don't think I would count anything else as "calculation". This of course has nothing specific to category theory.

Rémy Tuyéras (Jan 07 2026 at 00:27):

Damiano Mazza said:

Peva Blanchard said:

I feel I'm calculating when I've formalized my problem into a syntactic representation, and I am "blindly" applying some legal transformation rules.

As a computer scientist, I am sympathetic to this interpretation and I would just go on and say that calculation = rewriting.

Note that it doesn't matter that the structures we are calculating on are badly uncomputable in the sense of computability theory (for example because we are manipulating some infinite data and what not). As long as we have symbolic representations and formal rules to transform them, it's rewriting and it counts as calculation. Conversely, I don't think I would count anything else as "calculation". This of course has nothing specific to category theory.

I'm usually inclined to agree with this view.

One thing I always have in mind, though, is that calculus should literally feel like handling pebbles to figure out the shape of something. To me, this is like "trying to decompose" a structure in category theory.

For example, expressing an object as a coend, end, limit, colimit, sum, product, etc. feels like rearranging the pieces to better see the overall structure.

Rémy Tuyéras (Jan 07 2026 at 00:37):

So when you solve something like:

$$2 + x = 5$$

you can think of it in terms of pebbles :rock::

:rock::rock: + :package: = :rock::rock::rock::rock::rock:

[:rock::rock:] + :package: = [:rock::rock:]:rock::rock::rock:

:package: = :rock::rock::rock:

Now, the deduction rules are more like memorizing or mimicking the steps without actually matching the pebbles. They let you reproduce the same results symbolically, so you can do calculus without thinking about the individual units at all.

Rémy Tuyéras (Jan 07 2026 at 00:40):

I guess that's when calculus (Latin for a small stone or pebble) becomes logic (Latin logica, meaning reasoning or the art of reasoning).

Mike Shulman (Jan 07 2026 at 00:46):

A friend asked me this question when I was in grad school and starting out in a career in category theory. However, she phrased it specifically as "do you ever compute anything in category theory and not know in advance what the answer will be?" At the time I didn't have any good examples for her, and I still don't. Unless I missed something, all the kinds of "computation" that I've seen mentioned above are more about verifying something we already expected to be true rather than calculating a result that we didn't know in advance.

Rémy Tuyéras (Jan 07 2026 at 01:02):

@Mike Shulman Contractibility questions are usually like that: you do not know in advance whether something reduces to something smaller. You try to build a contraction by decomposing it into smaller contractible pieces and checking that the pieces glue in a contractible way. If it works, you learned something real. If it fails, the failure is informative, and if you really need a contractible result you might add an axiom (or restrict the setting) that forces the missing contraction step to exist, then try again.

It feels like physics: you propose a model that should match a phenomenon, you check whether it closes on the data, and if it does not you enrich the model by adding a variable or assumption, then test again.

Spencer Breiner (Jan 07 2026 at 01:26):

I would say that calculations of exponentials or subobject classifiers in a (pre)sheaf topos can yield unexpected insights.

Mike Shulman (Jan 07 2026 at 02:52):

@Rémy Tuyéras I dunno, that still feels very different to me from a calculation that, say, $$H_3(RP^5)=\mathbb{Z}/2$$. In the latter case there is a well-defined and interesting object but we don't know what its "canonical form" is, there are infinitely many possibilities, and no matter what the answer is it doesn't mean that we "got the definitions wrong"; rather, the answer itself tells us something important about the world.
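(For reference, that computation really is crank-the-handle: the cellular chain complex of $$RP^5$$ has one copy of $$\mathbb{Z}$$ in each degree $$0,\dots,5$$, with boundary maps alternating between $$0$$ and multiplication by $$2$$,

$$0 \to \mathbb{Z} \xrightarrow{0} \mathbb{Z} \xrightarrow{2} \mathbb{Z} \xrightarrow{0} \mathbb{Z} \xrightarrow{2} \mathbb{Z} \xrightarrow{0} \mathbb{Z} \to 0,$$

so $$H_3(RP^5) = \ker d_3/\operatorname{im} d_4 = \mathbb{Z}/2$$ comes out of the machine rather than being guessed in advance.)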

Mike Shulman (Jan 07 2026 at 02:54):

If you allow binary results like "is this space contractible" as the output of a "calculation", then trying to prove or disprove any theorem would count as a "calculation": we don't know in advance whether the theorem is true, and we try to build a proof by putting together logical rules, if it works we learned something real, if it fails the failure is informative, and if you really need a true theorem you might add a hypothesis, etc.

Alex Kreitzberg (Jan 07 2026 at 03:06):

If I understand your meaning, I think I'm partially drawn to category theory because it gives strong results on "self evident" things, including how certain basic definitions should be formulated.

It's sort of ironic: it seems like the subject can get more important when the definition is less significant, because otherwise you wouldn't take the time to get it sorted.

Alex Kreitzberg (Jan 07 2026 at 03:08):

I feel like that happens with duality a lot: the duals somehow contain the same information, but are superficially much less important.

David Michael Roberts (Jan 07 2026 at 04:57):

Mike Shulman said:

A friend asked me this question when I was in grad school and starting out in a career in category theory. However, she phrased it specifically as "do you ever compute anything in category theory and not know in advance what the answer will be?"

I would say calculating to see if a (pre)sheaf is representable is close to this.

I can offer an example: I once had to rewrite a limit of diffeological spaces in a bunch of different ways, to the point where the limit manifestly existed as a smooth manifold, because I could arrange for a bunch of maps to be submersions; the limit could then be constructed iteratively, purely via pullbacks of submersions, which do of course exist. At times it really wasn't clear whether I could get rid of the equalisers in the construction, which are problematic in Mfld.

In similar but simpler cases, this amounts to pushing a set-builder-notation expression with generalised elements through various isomorphisms, so that one ends up with a simpler, or more conceptually clear, expression for the limit.
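(A standard instance of the equaliser issue, for the record: the equaliser of $$f,g : X \to Y$$ can be rewritten as the pullback of $$(f,g) : X \to Y\times Y$$ along the diagonal $$\Delta : Y \to Y\times Y$$,

$$\mathrm{Eq}(f,g) = \{x \in X \mid f(x) = g(x)\} \cong X \times_{Y\times Y} Y,$$

and since the diagonal is essentially never a submersion, this rewriting does not by itself rescue equalisers in Mfld.)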

daniel gratzer (Jan 07 2026 at 10:37):

I feel like determining the behavior of an adjoint can often feel very calculational in the sense @Mike Shulman described above. Often I find myself knowing that some functor has an adjoint but not knowing anything specific about it, and using various elementary manipulations to try and produce a more recognizable and concrete form. And often I go into this process not having a super clear understanding of whether or not a more satisfactory description exists, let alone what form it might take. Perhaps that's merely a testament to my not having finely honed adjunction skills...

It's a bit different in that I can give a mathematically precise definition of what the canonical form of a finitely-generated abelian group is, but I have no such criteria for what a "good" description of a functor is.

David Corfield (Jan 07 2026 at 11:44):

From the discussion, we might say there are three (non-orthogonal) aspects: the degree to which there's a method to crank out an answer rather than to search for one; the degree to which the form of the answer is fixed in advance; and the degree to which the answer is unknown rather than anticipated.

A paradigmatic example of a calculation is one where you crank out an unknown answer in a form fixed in advance.

fosco (Jan 07 2026 at 13:54):

There are nine model structures on the category of sets. Not two, not fifteen... We didn't know the answer in advance, before computing!

fosco (Jan 07 2026 at 13:55):

I agree that such statements are rare, to some extent because category theory goes against the structure of these problems/answers... But they exist! And precisely because of their scarcity it's interesting to collect the few examples.

Martti Karvonen (Jan 07 2026 at 14:05):

The result and its proof involve some counting, but is the proof calculational? I haven't looked at it myself.

fosco (Jan 07 2026 at 17:06):

@Martti Karvonen

https://www1.aucegypt.edu/faculty/hebert/files/alogicofinj.pdf

it is a bit of a leap to justify a "yes" with this paper, but you can see what I'm trying to suggest

fosco (Jan 07 2026 at 17:07):

together with @Damiano Mazza's idea that "calculation = rewriting"

fosco (Jan 07 2026 at 17:17):

An example that comes from my recent notebook is:

what is the terminal coalgebra of the functor $$1+A\times(-)$$?

Certainly there are conceptual ways to argue that the answer must-be-such-and-such.

But there is a way to "shut up and calculate". An $$S$$-coalgebra, for $$S = 1+A\times(-)$$, is a partial function $$\alpha : X \to \mathrm{Maybe}(A\times X)$$ which, when it doesn't fail, yields a pair $$\alpha(x)=(a,x')$$, a situation that I depict as $$x \overset{a}\rightsquigarrow x'$$.

To prove that the final coalgebra has carrier the set of finite and infinite lists, one has to make an educated guess based on the intuitive idea that, in order to satisfy the universal property, the terminal object must be "big in a certain way"; then, with the educated guess that the terminal object is [provided I didn't make a mistake] $$A^\omega := A^* + A^{\mathbb{N}}$$, one has to build a bijection $$A^\omega \cong 1+A\times A^\omega$$, because of Lambek's theorem, and exhibit the unique coalgebra map $$X \to A^\omega$$ corresponding to the behaviour of $$\alpha$$: the list $$(a_1,\dots,a_n)$$ represents (or is represented by) the $$n$$-step transition

$$* \overset{a_1}\rightsquigarrow * \overset{a_2}\rightsquigarrow * \overset{a_3}\rightsquigarrow \dots * \overset{a_n}\rightsquigarrow *$$

and some of these sequences do not stop.

This is a proof in category theory; the statement requires (elementary, but this is not the point) category theory.

I find it hard to deny that all of these steps (proving that a function exists and is bijective; proving that another function exists and is unique; finding that neither $$A^*$$ nor $$A^{\mathbb{N}}$$ alone is the correct answer) are "calculations". The computation just involved the Cartesian 2-rig of sets instead of the semiring of natural numbers; potayto, potato :-)
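A minimal executable rendering of this calculation, assuming Haskell's lazy lists as the carrier (they contain both the finite and the infinite lists, so they play the role of $$A^\omega$$); the coalgebra `countdown` is just an illustrative example:

```haskell
import Data.List (unfoldr)

-- A coalgebra for S = 1 + A x (-) is a step function x -> Maybe (a, x);
-- unfoldr :: (b -> Maybe (a, b)) -> b -> [a] sends each state to its
-- finite or infinite trace, i.e. it is the unique coalgebra map into
-- the terminal coalgebra of lazy lists.

-- Example coalgebra: emit the current state and count down, failing at 0.
countdown :: Int -> Maybe (Int, Int)
countdown 0 = Nothing
countdown n = Just (n, n - 1)

main :: IO ()
main = do
  print (unfoldr countdown 5)                        -- [5,4,3,2,1]: a finite trace
  print (take 4 (unfoldr (\n -> Just (n, n + 1)) 0)) -- [0,1,2,3]: an infinite trace, truncated
```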

fosco (Jan 07 2026 at 17:25):

I am a pure mathematician at heart; I understand (like, get along with...) mathematicians better than computer scientists in all but one respect: computer scientists don't brush off as a menial task the obligation to exhibit these arguments explicitly. It is hard, when you are young and trying to learn how to do this job, to understand what exactly is accepted as proof, what the allowed level of handwaving is. None, 10%, 50%? I still remember thinking "I must be very stupid, because these things that people pull off with no effort or dirty hands instead require, to me, a lot of computation!"

fosco (Jan 07 2026 at 17:26):

(Maybe, surely, at the time I also didn't know how, and what, to read. Take the last paragraph simply as a purely anecdotal observation: I see one behaviour in one community and the opposite stance in the other.)

Alex Kreitzberg (Jan 07 2026 at 18:09):

I've wondered about when it's okay to omit calculations in math as well; I don't think it's completely arbitrary.

I think statements of the Yoneda lemma would be clearer if the underlying calculation were shared in the statement, for example. However, tools like Synquid (https://github.com/nadia-polikarpova/synquid) can automatically infer programs from types for many non-trivial examples, with the Yoneda type-signature analogue being an especially easy case (see the sketch below).
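Presumably that analogue is the rendering of the Yoneda lemma as the type isomorphism below; each direction has essentially one well-typed implementation, which is what makes it an easy synthesis target (a sketch):

```haskell
{-# LANGUAGE RankNTypes #-}

-- The Yoneda lemma as a type isomorphism: for any Functor f,
--   (forall x. (a -> x) -> f x)   is isomorphic to   f a.
toYoneda :: Functor f => f a -> (forall x. (a -> x) -> f x)
toYoneda fa = \k -> fmap k fa

fromYoneda :: (forall x. (a -> x) -> f x) -> f a
fromYoneda g = g id
```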

This, and other tools like this, make me wonder if it's possible to be precise about when it's okay for mathematicians to leave out calculations.

In particular, I suspect many standard results in category theory could be automated easily. My impression is that that's a big point of the subject: it's supposed to make theory easy.

I guess implicit in this is the question: how does the transition from "making easy results thoughtless" to "making truly hard results possible" happen in a subject like calculus? And could that happen more in category theory?

I've been bitten enough by "easy" problems I'd be fine if that were all category theory helped with.

A related thought, although not calculational: I think Mac Lane's coherence theorem is truly difficult, but then you can dodge it by working with unbiased definitions. So my takeaway was that "hard" results in category theory might be a clue that the definitions should be reworked.

So by analogy I'm expecting "good" calculations to be "easy" in category theory. And I worry that getting too hung up on that being a flaw might miss the point.

John Baez (Jan 08 2026 at 10:06):

I often try to use category theory to avoid calculations, replacing them by verbal arguments that allow us to see that a result is true. This usually works best for results that have snappy, attractive statements.

However, there are many other results in math that seem to require calculations. Category theory sometimes gets used as an infrastructure for these calculations, e.g. computing co/homology groups with the help of homological algebra resting on the infrastructure of abelian categories.

fosco (Jan 08 2026 at 10:35):

A wise man, an expert in homological algebra, once told me that there is essentially only one way to compute in homological algebra: spectral sequences. The kinds of proofs one sees in elementary homological algebra can be reduced to degenerate cases of spectral sequences.

And spectral sequences (another wise man told me) escape much of the usual "it's just an X in the category of Y" business. They truly are machinery, so to speak.

Now I wonder: is it possible to prove the second wise man wrong?

David Corfield (Jan 08 2026 at 11:00):

I guess it won't be hard to give examples of calculation taking place within specific categories, even ones using category-theoretic ideas, since these may concern things like the image of a specific functor on a specific object, or a specific hom-space.

[[Cohomology]] may be construed as machinery for such computations:

One can then understand various "cohomology theories" as nothing but tools for computing $$\pi_0 H(X,A)$$ using the known presentations of $$(\infty,1)$$-categorical hom-spaces

One place concerning categories in general (well, finite ones and some others) where we might arrive at something unknown but informative of a specified type is when calculating The Euler characteristic of a category. But then perhaps one wouldn't count this as a calculation in category theory.

Joe Moeller (Jan 08 2026 at 23:10):

I think people have been using double categories to think about what spectral sequences "really are", but I'm not in a position to look up references at the moment.

David Michael Roberts (Jan 09 2026 at 11:26):

I'd be interested to see those

Joe Moeller (Jan 10 2026 at 00:49):

Hmm, I may have mixed a few things up. Looking in my notes, I came up with this paper from 1991:

which is obviously not at all recent, but I was reading it recently. This paper does connect double categories to spectral sequences in some way, but I didn't read that section in detail. I've also mentioned before that Maru Sarazola has used double categories to think about the Waldhausen construction:

John Baez (Jan 10 2026 at 23:44):

If I were ever going to seriously try to understand spectral sequences, which would mean reinterpreting them as something that pleases me more, I'd start with the Serre spectral sequence, since this seems like a simple and 'conceptually nice' example. The idea is that you're trying to compute the cohomology of the total space of a fibration when you know the cohomology of the base and the fiber. You try to do it by studying what happens when you restrict the fibration to the n-skeleta of the base.

But I have always avoided getting into this, and I expect I may manage to avoid it forever.

Mike Shulman (Jan 10 2026 at 23:57):

John Baez said:

If I were ever going to seriously try to understand spectral sequences, which would mean reinterpreting them as something that pleases me more, I'd start with the Serre spectral sequence, since this seems like a simple and 'conceptually nice' example.

That's what I did.

Although in the process I discovered that the Atiyah-Hirzebruch spectral sequence was actually easier to understand first, and that you don't have to muck around with skeleta of the base (which was fortunate for me, since we don't have them in HoTT).

John Baez (Jan 11 2026 at 01:02):

I guess your work will help me a lot if I ever take the plunge. Alas, watching someone jump off the diving board is not the same as doing it.

John Baez (Jan 11 2026 at 17:12):

At the end of your article, @Mike Shulman, you wrote

However, if we had a fully constructive definition of the whole exact couple and spectral sequence, as we would obtain by implementing the definition in a hypothetical fully constructive version of homotopy type theory, then the definition would be a program, and we could simply ask the computer to evaluate it on any input we cared about.

Do you know of any progress along those lines?

Mike Shulman (Jan 11 2026 at 19:32):

We do have fully computational version(s) of HoTT now. Cubical type theories are known to be so, and higher observational type theory is conjectured to be. But I don't know of anyone who's working on formalizing spectral sequences in a proof assistant that implements these theories.

Kevin Carlson (Jan 12 2026 at 18:19):

Barakat's CAP software can perform some proofs involving spectral sequences, though it's not based on type theory.