You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
It seems like we often refer to the Hom-sets of a category individually, but do not often talk about the collection of all Hom-sets as a single named entity.
I’m wondering whether this makes it awkward to formally define the composition operation. Normally, since the composition operation applies to the type of arrows, you would think it could be defined as ∘ : Arr × Arr → Arr; but it would have to be a partial function, and it would need further conditions to specify which arrows are composable.
It seems like this suggests that there are some very different ways to define and conceptualize partial monoids. You can think of the binary operation as partially defined, or you can group the elements into sub-collections (like Hom-sets). Even then, it seems hard to escape the need for partial functions.
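Here is a toy sketch (my own code, just to make the "partial function" viewpoint concrete): arrows carry their source and target, and composition is only defined when the target of one matches the source of the other.

```python
# A hedged sketch of composition as a partial function on one big set
# of arrows.  Each arrow records its source and target; `compose` is
# defined only on composable pairs, and raises otherwise.

from dataclasses import dataclass

@dataclass(frozen=True)
class Arrow:
    source: str
    target: str
    name: str

def compose(g: Arrow, f: Arrow) -> Arrow:
    """g after f -- defined only when f.target == g.source."""
    if f.target != g.source:
        raise ValueError("arrows are not composable")
    return Arrow(f.source, g.target, f"{g.name} . {f.name}")

f = Arrow("A", "B", "f")
g = Arrow("B", "C", "g")
h = compose(g, f)          # defined: an arrow A -> C
print(h.source, h.target)  # A C
```

The "further conditions" in the partial-function viewpoint are exactly the `f.target == g.source` check.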
The nLab shows another very cool way to define partial monoids, where it seems like you introduce a binary relation x ⊥ y to mean "x ⊕ y is defined", and a zero element. [[effect algebra]]
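A rough toy version of that presentation style (my own sketch, not the nLab's axioms in full): carry an explicit "is defined" predicate alongside the operation, rather than making the operation itself partial.

```python
# Hypothetical sketch: a partial monoid given by an explicit
# definedness relation, an operation total on the defined pairs,
# and a zero (unit) element.

class PartialMonoid:
    def __init__(self, elements, defined, op, zero):
        self.elements = set(elements)
        self.defined = defined   # binary predicate: "x + y is defined"
        self.op = op             # total on the pairs where `defined` holds
        self.zero = zero         # unit element

    def add(self, x, y):
        if not self.defined(x, y):
            raise ValueError(f"{x} + {y} is undefined")
        return self.op(x, y)

# Toy example: {0, 1, 2} under addition, with x + y defined
# only when the sum stays in range.
pm = PartialMonoid(
    elements={0, 1, 2},
    defined=lambda x, y: x + y <= 2,
    op=lambda x, y: x + y,
    zero=0,
)
print(pm.add(1, 1))  # 2
```

The trade is that partiality moves out of the operation and into a separate relation that the axioms then constrain.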
None of these seem perfectly elegant, though.
There is also the way of defining categories in terms of two objects, C₀ and C₁, and three arrows s, t : C₁ → C₀ and i : C₀ → C₁, with the conditions expressed via pullbacks.
Is there a theory which helps us look at all the different ways we can define partial monoids, to gain insight into them?
Julius Hamilton said:
It seems like we often refer to the Hom-sets of a category individually, but do not often talk about the collection of all Hom-sets as a single named entity.
This is because there are often "too many" morphisms when we try and put them all together. Consider the category of sets and functions. Between any pair of objects, we have a set of morphisms. But the collection of all morphisms doesn't form a set (we run into a variant of Russell's paradox). This is an instance of the kind of "size issues" that one sometimes has to be wary of in category theory.
However, for a "small" category (where there is only a set of morphisms by definition), this problem goes away, and your questions are still relevant :)
Julius Hamilton said:
Is there a theory which helps us look at all the different ways we can define partial monoids, to gain insight into them?
What kind of theory are you after here? Would it be interesting to know the different contexts which these perspectives generalize toward?
Morgan Rogers (he/him) said:
This is because there are often "too many" morphisms when we try and put them all together. Consider the category of sets and functions. Between any pair of objects, we have a set of morphisms. But the collection of all morphisms doesn't form a set (we run into a variant of Russell's paradox).
I'm not convinced that's why. There are already "too many" objects in most categories like this, so why should it matter whether there are also too many morphisms?
I would tend to say, rather, that it's just that very often, when we consider a morphism, we already know what its source and target must be. So it makes the most sense to have a set of morphisms for each fixed source and target, rather than one big amorphous set of morphisms where we have to constantly impose conditions on sources and targets being equal to some object we already know.
If there is a set-theoretic issue, I would say it's that the "natural" sets of morphisms with fixed source and target may not be disjoint, so you can't just take their union to be the set of all morphisms. For instance, under the usual set-theoretic encoding of functions, the set ∅ is both a function ∅ → A and a function ∅ → B and a function ∅ → C (assuming that A, B, and C are distinct), but in the category of sets those must be three different morphisms (and many more besides).
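A concrete illustration of this point (my own, not from the message): under the "function = set of input/output pairs" encoding, the codomain is not part of the data, so the same set encodes functions into many different codomains.

```python
# The empty set of pairs encodes the unique function out of the empty
# set into *any* codomain, so the "natural" hom-sets of Set overlap:
# hom(0, A) and hom(0, B) share this element for every A and B.

empty_to_A = frozenset()   # encodes the function {} -> A
empty_to_B = frozenset()   # encodes the function {} -> B

print(empty_to_A == empty_to_B)  # True: one set, several distinct morphisms
```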
But this "dependently typed" definition of a category does not cause any problems in defining composition; you just have one composition function for each triple of objects.
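As a toy sketch of that (my own code): keep the hom-sets indexed by source and target, and think of composition as a family of total maps, one for each triple of objects.

```python
# Sketch of the "dependently typed" style: for each triple (a, b, c)
# there is a *total* composition map hom(b, c) x hom(a, b) -> hom(a, c).
# Morphisms here are plain Python functions; objects are just labels.

hom = {
    ("A", "B"): {"f": lambda x: x + 1},
    ("B", "C"): {"g": lambda x: 2 * x},
}

def compose_at(a, b, c, g, f):
    """The composition map for the specific triple (a, b, c).

    Because g is drawn from hom(b, c) and f from hom(a, b) by typing,
    no runtime check on sources and targets is needed: the map is total.
    """
    return lambda x: g(f(x))

gf = compose_at("A", "B", "C", hom[("B", "C")]["g"], hom[("A", "B")]["f"])
print(gf(3))  # 2 * (3 + 1) = 8
```

The partiality has disappeared into the indexing: composability is enforced by which family member you use, not by a side condition.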
Mike Shulman said:
Morgan Rogers (he/him) said:
This is because there are often "too many" morphisms when we try and put them all together. The collection of all morphisms doesn't form a set (we run into a variant of Russell's paradox).
I'm not convinced that's why. There are already "too many" objects in most categories like this, so why should it matter whether there are also too many morphisms?
Good point :sweat_smile:
Morgan Rogers (he/him) said:
Julius Hamilton said:
Is there a theory which helps us look at all the different ways we can define partial monoids, to gain insight into them?
What kind of theory are you after here? Would it be interesting to know the different contexts which these perspectives generalize toward?
Probably.
I feel like I’ve often been made aware of how there are all these quite different ways of establishing a certain structure, but they are all logically equivalent, describing the same thing. I’m interested in the set of all logically equivalent definitions of a given structure. I don’t know if there could be a “definition algebra” where we show how one definition can be transformed into another, or whether quotienting the category of sets in a certain way could collapse equivalent definitions into a single representing categorical definition.
That’s a fun idea! By and large mathematical logic doesn’t have the definition-making process as a central topic of interest, though it’s quite arguable that it should. I know there are some relevant programs but references are escaping me at the moment.
Thanks.
A specific scenario regarding this could be the mathematical formalization of JSON.
The tree structures of JSON can be defined as graphs in terms of vertex and edge sets (V, E). Viewed as nested dictionaries, a JSON document is a series of nested functions.
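To make the two views concrete (my own toy code): the same JSON value read as nested dictionaries, and flattened into the edge list of a tree.

```python
# One JSON document, two presentations: nested dicts (functions from
# keys to values) vs. an edge list of a tree, with paths as vertex names.

import json

doc = json.loads('{"user": {"name": "ada", "langs": {"first": "en"}}}')

def edges(node, path=""):
    """Flatten nested dicts into (parent, child) edges of a tree."""
    out = []
    for key, value in node.items():
        child = f"{path}/{key}"
        out.append((path or "/", child))
        if isinstance(value, dict):
            out.extend(edges(value, child))
    return out

print(edges(doc))
```

Showing that the two presentations determine each other is exactly the kind of "equivalent definitions" situation under discussion.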
Is there a mathematical argument for why it would be better to find a universal property the equivalent definitions have?
I don't have anything to say about partial monoids, but if we're talking about categories, then Jonas Frey has observed that each of the two definitions induces, and can be recovered from (in a suitable sense), a weak factorization system on the resulting category Cat. For the two-sorted definition, the wfs is (all, isos), while for the dependently-typed definition it is (injective on objects and relatively free, full and surjective on objects).
There are also at least two intermediate wfs of the sort that are "induced by a definition" ("clan-algebraic" wfs): (relatively free, full and bijective on objects), and (injective on objects, fully faithful and surjective on objects). To describe the corresponding "definitions", we have to first observe that the two-sorted definition is "equivalent" (in the sense of inducing the same wfs) to a definition that augments the dependently-typed definition by explicit equality relations on both objects and arrows. Then these two other wfs arise by instead adding only an equality relation on objects, or only an equality relation on arrows.
This is probably higher-powered than you had in mind, but I do think it can be regarded as an example of studying a sort of "algebra of definitions" -- the poset of clan-algebraic wfs on a given locally presentable category is a sort of "extensional" representation of "all the possible generalized-algebraic definitions" of the objects of that category. And it has interesting uses, e.g. all these equivalent definitions of Cat become inequivalent when generalized to ∞-categories.
Forgot to tag @Jonas Frey.
That sounds interesting but difficult.
So in that paper, Frey presents a two-sorted definition, and a dependently typed definition, of Cat, each of which induces a weak factorization system on Cat?
Mike Shulman said:
But this "dependently typed" definition of a category does not cause any problems in defining composition; you just have one composition function for each triple of objects.
I was thinking about this, how the composition operation could be seen as a family of operations.
Julius Hamilton said:
So in that paper, Frey presents a two-sorted definition, and a dependently typed definition, of Cat, each of which induces a weak factorization system on Cat?
Yes, I believe that's right.
You might also be interested in [[single-sorted definition of categories]], which basically lets you define a category based only on a collection of morphisms, a partial composition map, and source and target maps that send a morphism to the identity morphism on its source and target respectively.
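A rough sketch of that single-sorted style (my own toy code, not the nLab's exact axioms): there is just one sort of thing, morphisms; objects are represented by their identity morphisms.

```python
# Single-sorted sketch: only morphisms exist.  `src` and `tgt` send a
# morphism to the identity morphism on its source / target, and
# composition is partial: defined when tgt(f) == src(g).

class OneSorted:
    def __init__(self, morphisms, src, tgt, comp):
        self.morphisms = morphisms
        self.src = src    # morphism -> identity morphism on its source
        self.tgt = tgt    # morphism -> identity morphism on its target
        self.comp = comp  # defined only on composable pairs

    def compose(self, g, f):
        if self.tgt(f) != self.src(g):
            raise ValueError("undefined composite")
        return self.comp(g, f)

# Toy example: a monoid as a one-object category.  Morphisms are
# integers, the single identity is 0, and composition is addition,
# so every pair is composable.
M = OneSorted(
    morphisms=set(range(5)),
    src=lambda f: 0,
    tgt=lambda f: 0,
    comp=lambda g, f: g + f,
)
print(M.compose(2, 3))  # 5
```

This is essentially the "partial monoid" picture from the start of the thread, with identities standing in for objects.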
Mike Shulman said:
Julius Hamilton said:
So in that paper, Frey presents a two-sorted definition, and a dependently typed definition, of Cat, each of which induces a weak factorization system on Cat?
Yes, I believe that's right.
Can you show me how the two-sorted definition induces the “all isos” weak factorization system?
Since the two-sorted definition is essentially algebraic, its corresponding "clan" is actually a finite-limit category in which every morphism is a fibration/display-map. In general, the opposites of the fibrations in a clan are the generators of the left class of the factorization system, so here the left class consists of all maps.
Mike Shulman said:
Since the two-sorted definition is essentially algebraic, its corresponding "clan" is a finite-limit category in which every morphism is a fibration.
Why?
That's the way that essentially algebraic theories are embedded in clans.
Can you prove it?
It's kind of a definition.
What’s a fibration?
Have you checked the nLab article on clans, at least? It's a bit unreasonable to ask people to repeat definitions to you; if the motivation or properties of fibrations aren't clear after looking at the definition, that's a better moment for a question to a human.
Kevin Carlson said:
Have you checked the nLab article on clans, at least?
Yes
Fibrations are defined on there--they're just any chosen class of maps satisfying certain properties. So it's not entirely clear what would count to you as an answer to the question "what's a fibration?"
This is a perfectly natural situation to get into; it's very mathematician-brained to actually believe you know what something means just because you've seen a definition. But it would still help if you could be any more specific about what kind of response you're looking for.
To answer these questions properly would be to write a course on locally presentable categories, weak factorisation systems and the semantics of type dependency. Most of the answers are contained in the paper: https://arxiv.org/abs/2308.11967.
I feel annoyed by the response to a request to narrow your question being instead a dozen questions. As Nathan said it would take hours to even begin to answer them. In practice I'd need a whole day talking to you in person to have a reasonable expectation of sufficient bandwidth. If you're learning remotely/asynchronously, the cost of getting to talk to a wide range of people who know a lot about your subject is that you only get a relatively narrow communication band. That's best used by putting in relatively more effort on your end and asking questions primarily after you've made a reasonable effort to understand something by reading on your own, and then communicating as specifically as you're able where it is you're stuck and what it is you'd like to know. In contrast this list comes across as if you looked through the nLab article under discussion and wrote bullets for every sentence that wasn't immediately clear, following no links and looking up no references on your own, and providing no context for whether you know anything about anything nearby the topic. This isn't a very effective way to learn.
I suggest reading the guidance on how to ask a good question at Math StackExchange. This venue's not quite as formal but most of it is still relevant. https://math.stackexchange.com/help/how-to-ask