Wikipedia has information on Ring, CRing and Rng, but I wonder if there's anything special about CRng.
Is it safe to say it's got small limits that match all of Ring/CRing/Rng, and probably has small colimits too, although probably not matching CRing?
Is there anything else noteworthy that comes to mind?
Take any algebraic gadget that's a set with finitary operations obeying some equational laws. The category of these gadgets has small limits, small colimits, and the forgetful functor to Set is a right adjoint. So we don't need to separately check this fact for rings, commutative rings, rngs, commutative rngs, etc.
See [[Lawvere theory]] or [[finitary monad]] for details.
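For instance, in the case of commutative rngs the adjunction takes the familiar free-forgetful shape (a standard fact, spelled out just for concreteness):

$$\mathbf{CRng}\big(F(X),\, R\big) \;\cong\; \mathbf{Set}\big(X,\, U(R)\big),$$

where $U$ is the forgetful functor and $F(X)$ is the free commutative rng on the set $X$ (if I'm not mistaken, polynomials over $\mathbb{Z}$ in the variables $X$ with zero constant term). Limits are then computed on underlying sets, while colimits exist for general Lawvere-theoretic reasons but need not be computed underlying-set-wise.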
I think commutative rings w/o unit come up in [[Gelfand duality]] for locally compact spaces (as opposed to the more classical case of compact spaces): this duality associates to each locally compact space the crng (more precisely: commutative non-unital C*-algebra) of functions that vanish at infinity.
@John Baez sure, but the spirit of the question is more along the lines of “does CRng do anything more unusual than CRing and Rng independently.” (My fault for phrasing what I already 80% knew as a question.)
Ideals in a ring are subobjects in Rng, no?
To me the interesting question is why commutative rngs are "worse" than commutative rings - why commutative algebraists and algebraic geometers, for example, have settled on commutative rings with identity as the concept worth pouring thousands of pages of work into. Ideals are important, but I get the impression most algebraic geometers find it conceptually best to think of them as modules rather than rngs. Commutative C*-algebras without identity correspond to noncompact locally compact Hausdorff spaces, but the proof usually involves throwing in a "point at infinity" in your space, making it compact, which corresponds to throwing in an identity in your rng, making it into a ring.
I don't completely know what I mean by saying commutative rngs are "worse" than commutative rings. "Worse" is a value judgement, but there should at least be some ways in which they're quite different, and this should probably translate into some different properties of the category of commutative rngs, which would be good to look at.
For example, the fact that the empty set is a commutative rng may have some strange effects. The initial commutative ring is $\mathbb{Z}$, while the initial commutative rng is the empty one.
The whole issue reminds me of why semicategories (categories that may not have identity morphisms) are "worse" than categories.
A ring is a one-object category enriched over $\mathsf{Ab}$, the category of abelian groups, while a rng is a one-object semicategory enriched over $\mathsf{Ab}$. So perhaps the reasons people prefer categories to semicategories translate into reasons why people prefer rings to rngs.
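Spelling out the dictionary, just to have it on the record (all standard):

$$\hom(\star, \star) = (R, +), \qquad g \circ f = g \cdot f, \qquad \mathrm{id}_\star = 1,$$

so composition is multiplication, the $\mathsf{Ab}$-enrichment is the addition, and the identity morphism on the unique object $\star$ is exactly the multiplicative unit. Dropping identity morphisms is literally dropping the unit, which is why the analogy with semicategories is more than an analogy.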
The reason why semicategories are worse than categories is that in semicategories one doesn't have a good form of the Yoneda lemma/embedding.
This analogy, which I was about to repeat, was taken seriously in this paper by Borceux and Berni-Canani: https://numdam.org/item/CTGDC_2002__43_3_163_0.pdf
Among other interesting things, see the passage
Regular presheaves on a regular semi-category verify the Yoneda lemma precisely when the semi-category is an actual category.
I'm wondering how the failure of the Yoneda lemma manifests itself in rng theory - especially commutative rng theory. Since algebraic geometers know category theory, they should somewhere secretly (or not-so-secretly) be using the fact that rings are one-object $\mathsf{Ab}$-enriched categories, and using the enriched Yoneda lemma for these.
Don't rngs still need an underlying additive group and hence a 0?
In particular, doesn't this rule out the empty set from being a rng?
Duh, I'm a dope. So the initial rng is 0? So it's also the terminal rng?
If this is right, then the category of rngs has a [[zero object]] - an object that's both initial and terminal. And that means that for any pair of rngs $R$ and $S$ there's a special morphism $R \to S$ factoring through the zero object. This is the rng homomorphism that sends everything in $R$ to $0 \in S$.
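Concretely (nothing deep, just writing down what the zero object gives us):

$$R \xrightarrow{\;!\;} 0 \xrightarrow{\;!\;} S, \qquad r \longmapsto 0 \longmapsto 0.$$

Note that between unital rings there is no such canonical map, since a unital ring homomorphism has to send $1$ to $1$.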
This all seems correct. If so, @Ryan Schwiebert, here's a special extra feature of rngs, or commutative rngs.
fosco said:
The reason why semicategories are worse than categories is that in semicategories one doesn't have a good form of the Yoneda lemma/embedding.
I'm not personally convinced by this reasoning as it's a bit too teleological. The Yoneda lemma is a bit of an advanced result for that to be the motivation for this.
Personally I would say that the question is equivalent to asking why the list monad, whose algebras are monoids, is a more natural or fundamental monad to study than the non-empty list monad, whose algebras are semigroups. For categories rather than monoids we would use generalized operads rather than monads, but the principle is the same.
For any set $X$, the set $\mathrm{List}(X)$ of finite lists is the initial algebra for the endofunctor $Y \mapsto 1 + X \times Y$. I wonder if this can be translated into any kind of initiality property of the list monad itself which would characterize it as an initial or free monad. I am not sure.
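Here's a small Haskell sketch of that initial-algebra description, just the standard folklore encoding (the names `ListF`, `Fix`, `cata` are mine, not anything from the discussion above):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- The signature functor  Y  |->  1 + X x Y,  for a fixed element type x.
data ListF x y = NilF | ConsF x y
  deriving Functor

-- Fixed point of a functor; Fix f is (up to iso) the carrier of the initial f-algebra.
newtype Fix f = In { out :: f (Fix f) }

-- Fix (ListF x) is isomorphic to ordinary lists [x].
toList :: Fix (ListF x) -> [x]
toList (In NilF)        = []
toList (In (ConsF x r)) = x : toList r

-- Initiality: the unique algebra morphism out of Fix f, i.e. a fold.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . out

-- A monoid is an algebra for the list monad; a semigroup is an algebra
-- for the non-empty list monad (Data.List.NonEmpty).
```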
A good property of the list monad could be a nice reason for the superiority of categories over semicategories. I think it's good to have lots of reasons, so I also think the failure of the Yoneda lemma for semicategories counts as a good reason categories are better.
One of my favorite reasons is that I want "isomorphism" to be an equivalence relation on objects, and this fails for semicategories. But someone might argue that this is subtly circular.
Getting back to the point of this thread, any reason that categories are better than semicategories could be a reason rings are better than rngs. But I haven't seen how this works.
John I'm not really sure what you mean by the comment about isomorphisms. What is an isomorphism in a semi category?
https://math.mit.edu/~poonen/papers/ring.pdf
Bjorn Poonen has written this essay and I think his argument is related to the argument I expressed above: the list monad captures the "right" notion of an associative multiplication, a "totally associative" multiplication defined for arbitrary lists.
Patrick Nicodemus said:
John I'm not really sure what you mean by the comment about isomorphisms. What is an isomorphism in a semi category?
Well, one attitude is that the concept makes no sense so of course isomorphism is not an equivalence relation in a semicategory.
But I prefer to say that in a semicategory, being an identity morphism is a property of a morphism: namely, that it serves as a left and right unit for composition. Then, being an isomorphism is another property of a morphism : namely, that there exists a morphism such that and are identity morphisms.
This lets us talk about objects being 'isomorphic', and this relation on objects is an equivalence relation iff our semicategory is a category.
Of course, you may reject the idea that being an identity is a mere property, since for categories we treat it as a structure: we demand that functors preserve identities. For semicategories we wouldn't.
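A quick sketch of the non-obvious direction of that "iff", using the property-style definitions above:

$$x \cong x \;\Longrightarrow\; \exists\, f, g \colon x \to x \ \text{with}\ f g,\ g f \ \text{identities} \;\Longrightarrow\; f g \ \text{is an identity morphism at}\ x,$$

so if "isomorphic" is reflexive then every object has an identity morphism and the semicategory is a category; the converse is the usual argument that isomorphism is an equivalence relation in any category.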
I can note one difference between what Bjorn said and what I said. Bjorn says that if you have a binary operation on a set, then it provides an obvious way to define the product of any sequence with 2 or more elements, but leaves the unary product and the nullary product unspecified. On the other hand, from the monad point of view the unary product is determined by the unit law for a monad algebra, so only the empty list is "negotiable" in its definition.
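In Haskell terms, using only standard library functions (so nothing invented here beyond the two throwaway names):

```haskell
import Data.List.NonEmpty (NonEmpty (..))
import Data.Semigroup (sconcat)

-- Algebra structure map for the list monad: defined on all lists.
-- The monad-algebra unit law forces  monoidAlg [x] == x,  and the empty
-- list is sent to mempty.
monoidAlg :: Monoid a => [a] -> a
monoidAlg = mconcat

-- Algebra structure map for the non-empty list monad: the nullary product
-- simply isn't part of the signature, but the unary case is still forced
-- to be the identity by the unit law.
semigroupAlg :: Semigroup a => NonEmpty a -> a
semigroupAlg = sconcat
```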
John Baez said:
Patrick Nicodemus said:
John I'm not really sure what you mean by the comment about isomorphisms. What is an isomorphism in a semi category?
This lets us talk about objects being 'isomorphic', and this relation on objects is an equivalence relation iff our semicategory is a category.
Thanks John. That's clear now. It should be symmetric and transitive, right? Just not necessarily reflexive, so a PER (partial equivalence relation).
fosco said:
Among other interesting things, see the passage
Regular presheaves on a regular semi-category verify the Yoneda lemma precisely when the semi-category is an actual category.
Unrelated but this is an interesting thing that I see from French writers, "verify" as a synonym of "satisfy".
Patrick Nicodemus said:
I'm not personally convinced by this reasoning as it's a bit too teleological. The Yoneda lemma is a bit of an advanced result for that to be the motivation for this.
I don't understand what you mean here!
fosco said:
I don't understand what you mean here!
The Yoneda lemma is important to me, but not that important. Of course it is an important result, but not so important that the argument "the definition of category must be chosen that way in order for the Yoneda lemma to hold" is persuasive to me.
Suppose $C$ is a semi-category, let $a, b$ be objects in $C$, and consider the presheaves $C(-, a)$ and $C(-, b)$. I can define a function $C(a, b) \to \mathrm{Nat}\big(C(-, a), C(-, b)\big)$. I guess this is probably not injective or surjective in general; if $C$ is a category, then it is. That's an interesting result: it states that certain kinds of natural transformations between these presheaves are "representable" (the word "representable" is meant loosely here.)
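For the record, the map I mean sends a morphism to postcomposition with it, and in a category its inverse is evaluation at the identity:

$$C(a, b) \longrightarrow \mathrm{Nat}\big(C(-, a),\, C(-, b)\big), \qquad f \longmapsto f \circ (-), \qquad \alpha \longmapsto \alpha_a(\mathrm{id}_a),$$

which is exactly the point where an identity morphism on $a$ is needed.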
To me this kind of result is analogous to saying that if $V$ is a vector space with an inner product, then the map from $V$ to $V^*$ sending $v$ to $\langle v, - \rangle$ is a linear isomorphism when $V$ is finite dimensional, so the linear functionals are representable - or the Riesz representation theorem for Hilbert spaces. But I would not accept an argument that linear algebra is fundamentally about finite dimensional spaces and Hilbert spaces because we have this representation theorem; there are lots of interesting theorems about Banach spaces.
I also tend to think of the Yoneda lemma as a characterization of what are the "free" presheaves generated by a singleton over a single object. In other branches of mathematics, do we appeal to a characterization of the free objects in order to argue that our axioms are right? In linear algebra, indeed we do: many people would say that the most important and fundamental theorem of linear algebra is that every vector space has a basis. So this is an argument in your favor.
(Deleted the last paragraph as I need to think about the math a bit.)
Okay. If $C$ is a semicategory, and $a$ is an object in $C$, then I think there is still a free presheaf generated by a singleton over $a$. That is, the evaluation functor from presheaves (semifunctors $C^{\mathrm{op}} \to \mathbf{Set}$) to $\mathbf{Set}$ given by "evaluate at $a$" has a left adjoint $L$. For a set $S$, the value $L(S)$ is then the free presheaf generated by $S$ over $a$. So, there is some characterization of what the free presheaves are over a semicategory, and this can be regarded as the equivalent of the Yoneda lemma for semicategories.
(Someone should check my math to see if I've mischaracterized the definition of $L$. If the evaluation functor didn't have a left adjoint I would be a little worried.)
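One guess at a formula, assuming (as mentioned further down) that presheaves on the semicategory $C$ are the same thing as presheaves on the category $C^+$ obtained by freely adjoining identities - this is my reconstruction, so it deserves the same "check my math" caveat:

$$L(S)(b) \;=\; S \times C^+(b, a) \;=\; \begin{cases} S \times C(b, a) & b \neq a, \\ S \times \big(C(a, a) + 1\big) & b = a, \end{cases}$$

which satisfies $\mathrm{Nat}\big(L(S), F\big) \cong \mathbf{Set}\big(S, F(a)\big)$, i.e. $L$ is left adjoint to evaluation at $a$.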
(It would be interesting to consider an alternative definition of natural transformation which associates a component to each morphism of $C$, subject to a compatibility condition for composable morphisms. Such natural transformations aren't composable, I guess, but one could still look at bijections between this hom-set and other hom-sets.)
I don't have much time to go back to the paper and check, but I remember the point that Borceux and Berni-Canani make is that there is no Yoneda embedding from the semicategory $C$ into the category of semifunctors $C^{\mathrm{op}} \to \mathbf{Set}$, which is the category of presheaves over the category obtained from $C$ by freely adding identities.
As for the importance of the Yoneda lemma, I disagree, but this is philosophy, not mathematics; half-jokingly, category theory is indeed the class of consequences of the Yoneda lemma!
in linear algebra, the fact that $V \cong V^{**}$ for finite-dimensional $V$ is a compactness result; the analogy with the Yoneda lemma is instead the weaker, always true statement that $V$ embeds into $V^{**}$, as much as $C$ embeds into $[C^{\mathrm{op}}, \mathbf{Set}]$. But again, this is not something one can make precise; it's akin to magical thinking: useful to fuel rituals, and religious wars!
This is an aside, but I find it a bit strange that the subject with the most tools for structural analogies is powerless to explain its own analogy with linear algebra.
(Side rant) Actually Poonen's paper was what recently jogged my mind back into this arena.
Maybe it's a failure of the messaging, but I'm instinctively repelled by the proscriptive statements
Why all rings should have a 1
But it is nice to understand why certain definitions should be favored over others.
I think my issue is mainly that "natural" is going to be highly subjective depending on the speaker. I would find arguments for "the natural numbers should include 0" or "all rings should be associative" similarly off-putting.
I think the most charitable reading is that we're talking about what should be used when introducing these things to students.
But even then... is there anybody who really needs convincing? In the past 25 years I can't recall running into a single person online, in text or IRL who strongly insisted that "rings should not have identities." I cannot imagine anyone (today) authoring an article promoting the opposite viewpoint.
It's a good collection of rationalizations for why rings with identity make a nice system, but that is about it.
Alex Kreitzberg said:
This is an aside, but I find it a bit strange that the subject with the most tools for structural analogies is powerless to explain its own analogy with linear algebra.
we know too little category theory to explain the mysteries of linear algebra!
@Ryan Schwiebert category theorists spend a lot of time looking for the "right" definitions, so this phrasing is not uncommon here. There are a number of heuristics that determine such a judgement. Foremost amongst them would be eliminating an arbitrary exclusion that makes a category of gadgets have fewer nice properties: in model theory, models are often required to be inhabited, but this assumption is regularly dropped by category theorists so that we always have initial models of theories.
A second reason (perhaps more convincing) is a relative one: if you want to do X (cohomology, say), you need to place Y in a context with feature(s) Z. If we don't know how to do X without exploiting features Z, then this is pretty strong motivation that may be expressed normatively.
Finally, I recall @John Baez wrote a blog post considering groups without identities a while back, comparing what extra things arise. The point is not to make a final judgement one way or another about whether this should replace the established definition, but rather to provide evidence for potential answers to questions like your original one: why did people choose one definition over another, and what would it have changed if they had made an alternative choice?
@Morgan Rogers (he/him) I get all that, but for me the article (possibly unintentionally) went beyond "this is why this definition is best in these contexts" to something more absolute.
Considering groups without identities, that's exactly something that came to mind. My impression is that semigroups have an even livelier presence than rings without identity. I can't imagine anyone saying "their definition is not correct" or "all semigroups should be monoids." But that's what it sounds like, although I suspect it's not the intention. Like I said, maybe a problem with messaging. I like Keith Conrad's version better.
Anyhow, i don't want to derail things further with my side rant. It has just been a way to process my impression of it!
John Baez has marked this topic as resolved.
John Baez has marked this topic as unresolved.
Ryan Schwiebert said:
Maybe it's a failure of the messaging, but I'm instinctively repelled by the proscriptive statements
Why all rings should have a 1
But it is nice to understand why certain definitions should be favored over others.
Those look more prescriptive than proscriptive - a proscriptive statement would be something like never use rings without an identity element.
But anyway, mathematics is full of design decisions. As we build the structure of mathematics, definition by definition, theorem by theorem, proof by proof, we are constantly trying to figure out the best way to do it. We constantly discuss this, to arrive at some consensus. We make moral and esthetic decisions and try to persuade other mathematicians of them. There's never total consensus, and I don't think there should be. Besides the intrinsic value of freedom, diversity is important for growth. But mathematicians are like tiny organisms building a large coral reef: most of us like to cooperate, because we know what we do individually is tiny compared to what we're building collectively. So we are constantly sharing our value judgements with each other, to seek the most vibrant, clear and beautiful mathematics possible.
fosco said:
Alex Kreitzberg said:
This is an aside, but I find it a bit strange that the subject with the most tools for structural analogies is powerless to explain its own analogy with linear algebra.
we know too little category theory to explain the mysteries of linear algebra!
I was going to say something similar. It's not that the subject is powerless to do this: it's just that we haven't yet figured out the right way to do this. Math isn't done yet!
Fosco and Todd Trimble are working on a theory of categorified rigs, and that's probably part of what we need.
The simplest reason, to my mind, why categories are better than semicategories is that the category of categories is Cartesian closed, whereas exponentiable semicategories are almost nonexistent: they're those semicategories in which every arrow admits a unique decomposition.
The Cartesian closedness of the category of categories seems more and more miraculous to me the more I think about it. It's not true for semicategories, (symmetric, etc.) multicategories, (virtual) double categories, or basically any other "higher" or "wider" notion of category other than the not-usually-what-you-want strict $n$-categories, and in the appropriate sense for $\infty$-categories (which might be the exception that proves the rule).
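For concreteness, the closed structure being celebrated here is the standard one:

$$\mathbf{Cat}(A \times B,\, C) \;\cong\; \mathbf{Cat}(A,\, C^B),$$

where $C^B$ is the category of functors $B \to C$ and natural transformations; for semicategories the analogous construction fails to be an exponential in general, which is what the restrictive characterization above is saying.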
This is somehow related to the Yoneda lemma question, in that you can often reduce exponentiability to the existence of an exponential by some object playing the role of although I'm not sure of the exact words I want to say here.
Do people talk about "semigroup objects in a semicategory" like they do monoid objects in a category? I'd expect rings without identity to be the former. I'd also expect it to be vanishingly rare...
Pfft, never mind, it says exactly that on the nLab, doesn't it: https://ncatlab.org/nlab/show/nonunital+ring
That article also points out that CRng is equivalent to the category of commutative rings equipped with a homomorphism to $\mathbb{Z}$. That's very nice - the slice category of a nice category over a nice object should be nice, so one should be able to go far by thinking about commutative rngs this way.
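Sketch of the equivalence, as I understand the nLab page (so treat the formulas as my paraphrase):

$$I \;\longmapsto\; \big(\mathbb{Z} \oplus I \xrightarrow{\ \pi\ } \mathbb{Z}\big), \qquad (m, x)(n, y) = (mn,\ m y + n x + x y), \qquad \big(f \colon R \to \mathbb{Z}\big) \;\longmapsto\; \ker f.$$

In one direction a commutative rng $I$ goes to its unitalization $\mathbb{Z} \oplus I$ together with the projection onto $\mathbb{Z}$; in the other, a ring homomorphism $f \colon R \to \mathbb{Z}$ goes to its kernel, which is a commutative rng.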
Finally time for me to read "the best way to adjoin a one" again. It's been a long time and now I can appreciate it more. https://www.cambridge.org/core/journals/journal-of-the-australian-mathematical-society/article/characteristic-ring-and-the-best-way-to-adjoin-a-one/54D3BF23DE6255AE67EC8D7C373F8DA6
It advertises unitization over a characteristic ring that's defined there, which I guess would split up Rng into sections that share the same characteristic ring.
The category of non-unital rings is to rings as pointed sets are to sets. As @John Baez pointed out above, the category of non-unital rings is the slice category over the initial object. This is dual to the coslice category under the terminal object, which gives 'pointed objects'. This also explains why non-unital rings form a pointed category but rings do not. Just as sets are usually viewed as more fundamental than pointed sets, unital rings are more fundamental than non-unital rings. It's also easier to go from the non-pointed case to the pointed case than conversely.
Also, @Jonas Frey mentioned Gelfand duality. Note that this is again more natural as a correspondence between commutative unital $C^*$-algebras and compact Hausdorff spaces, since the non-unital version uses a non-obvious choice of morphisms on the topological side. However, taking pointed objects with respect to the unital version gives a duality between non-unital commutative $C^*$-algebras and pointed compact Hausdorff spaces. Now the choice of morphisms on the topological side is obvious. The story about locally compact Hausdorff spaces is obtained by taking the complementary subspace of the distinguished point.
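In formulas, the translation between the unital and non-unital pictures is the usual one (writing $X^+$ for $X$ with one extra point adjoined, i.e. the one-point compactification when $X$ is locally compact but not compact):

$$\widetilde{C_0(X)} \;\cong\; C(X^+), \qquad C_0(X) \;\cong\; \{\, f \in C(X^+) : f(\infty) = 0 \,\},$$

where $\widetilde{A}$ denotes the unitization of a $C^*$-algebra $A$, and the distinguished point of $X^+$ corresponds to the character $C(X^+) \to \mathbb{C}$ given by evaluation at $\infty$, i.e. to the quotient map killing $C_0(X)$.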