You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
I've recently been learning a little bit about commutative rigs. These are commutative rings that don't necessarily have "negatives". For example, the natural numbers are a rig, but not a ring.
The nLab notes that we can make a ring from any rig by "adding in negatives". We apparently can do this by utilizing a "group completion functor", and we end up with a functor CRig → CRing. This functor is left adjoint to the forgetful functor that sends each ring to its underlying rig.
I am curious if we can go in the opposite direction: instead of adding in negatives, can we remove negatives from a commutative ring to make a commutative rig (that is not a ring)? This makes sense at least in some cases. By starting with ℤ, ℚ, or ℝ we can create the following rigs by deleting negatives: the natural numbers, the non-negative rational numbers, and the non-negative real numbers.
Are there any other commutative rings in which we can "remove negatives" in some sense, and thereby obtain a commutative rig that is not a ring? More generally, I'm also interested in any strategy that removes elements from a commutative ring and thereby produces a commutative rig that is not a ring.
The common point of ℤ, ℚ, and ℝ is that they are all ordered commutative rings.
Let R be a ring which is at the same time a poset with order denoted ≤. We say that R is an ordered ring if the following are satisfied:
- if x ≤ y, then x + z ≤ y + z for every z;
- if 0 ≤ x and 0 ≤ y, then 0 ≤ xy.
I think you can prove that the set R≥0 = {x ∈ R : 0 ≤ x} is a rig for the operations induced from R if 0 ≤ 1.
If the order is total, then 0 ≤ 1 is automatic, right? I would kind of expect it to be added as an extra axiom if you remove totality.
instead of adding in negatives
Beware that group completion doesn't always just "add in negatives": some things may get squashed together in order to make way for negatives. For example, consider the Boolean rig {0, 1}, with "or" as addition (so 1 + 1 = 1). If you "group complete" this, then in the newly formed ring, 1 + 1 = 1 leads to 1 = 0: you have to squash 1 and 0 together. The group completion winds up having just one element.
(If addition satisfies a cancellation law, then you really are just adding in elements.)
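Both behaviors can be checked concretely. The sketch below (my own, not from the thread) implements the standard group completion of a finite commutative monoid: take pairs (a, b), read as formal differences a − b, with (a, b) ~ (c, d) iff a + d + t = c + b + t for some t.

```python
from itertools import product

def group_completion_classes(elems, add):
    """Classes of pairs (a, b) -- read as formal differences a - b --
    under (a, b) ~ (c, d) iff a + d + t == c + b + t for some t.
    This is the usual group completion of a finite commutative monoid."""
    def equiv(p, q):
        (a, b), (c, d) = p, q
        return any(add(add(a, d), t) == add(add(c, b), t) for t in elems)
    classes = []
    for p in product(elems, repeat=2):
        for cls in classes:
            if equiv(p, cls[0]):
                cls.append(p)
                break
        else:
            classes.append([p])
    return classes

# Boolean monoid ({0, 1}, "or"): 1 + 1 = 1, so everything gets squashed.
bool_classes = group_completion_classes([0, 1], lambda a, b: a | b)
print(len(bool_classes))  # 1

# Cancellative monoid (Z/3, +): nothing is squashed, 3 classes survive.
z3_classes = group_completion_classes([0, 1, 2], lambda a, b: (a + b) % 3)
print(len(z3_classes))  # 3
```

The Boolean monoid collapses to a single class (taking t = 1 makes any two pairs equivalent), while the cancellative monoid ℤ/3 keeps all three of its elements.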
More generally, I'm also interested in any strategy that removes elements from a commutative ring and thereby produces a commutative rig that is not a ring.
Can you say why you're interested?
Mike Shulman said:
If the order is total, then 0 ≤ 1 is automatic, right? I would kind of expect it to be added as an extra axiom if you remove totality.
Yes: if 0 ≤ x, then 0 ≤ x·x = x² for every x. If the order is total, then either 0 ≤ x or x ≤ 0. If x ≤ 0, then 0 ≤ −x, thus 0 ≤ (−x)(−x) = x². Thus 0 ≤ x² in every case, and in particular 0 ≤ 1² = 1.
Another way to get a rig from a ring is to consider the sub-rig generated by some subset. For instance, ℕ is the sub-rig of ℤ generated by the empty set. However, ℚ≥0 and ℝ≥0 are not generated as sub-rigs of ℚ and ℝ by anything much smaller than themselves; I guess for ℚ≥0 you can get away with {1/p : p prime}. Although ℚ≥0 is generated by the empty set as a sub-division-rig (semi-field).
Todd Trimble said:
Beware that group completion doesn't always just "add in negatives": some things may get squashed together in order to make way for negatives.
Wow, I didn't expect that! I had assumed one could just "freely" add in negative versions of existing elements. This also illustrates that a left adjoint functor can involve some "squashing", which goes against the intuition I had.
Jean-Baptiste Vienney said:
I think you can prove that the set R≥0 = {x ∈ R : 0 ≤ x} is a rig for the operations induced from R if 0 ≤ 1.
Very cool! That also nicely handles the example of ℤ × ℤ I had in mind. This process tells us to keep just the first quadrant (where a and b are both non-negative), and that indeed works. (I'm using this ordering: (a, b) ≤ (a′, b′) iff a ≤ a′ and b ≤ b′.)
Mike Shulman said:
Another way to get a rig from a ring is to consider the sub-rig generated by some subset. For instance, ℕ is the sub-rig of ℤ generated by the empty set. However, ℚ≥0 and ℝ≥0 are not generated as sub-rigs of ℚ and ℝ by anything much smaller than themselves; I guess for ℚ≥0 you can get away with {1/p : p prime}. Although ℚ≥0 is generated by the empty set as a sub-division-rig (semi-field).
Nice! That sounds like a fruitful source of rigs!
Todd Trimble said:
More generally, I'm also interested in any strategy that removes elements from a commutative ring and thereby produces a commutative rig that is not a ring.
Can you say why you're interested?
A few reasons:
David Egolf said:
Wow, I didn't expect that! I had assumed one could just "freely" add in negative versions of existing elements. This also illustrates that a left adjoint functor can involve some "squashing", which goes against the intuition I had.
For this it's good to think about the left adjoint of the forgetful functor from commutative monoids to monoids (or from abelian groups to groups). These left adjoints only squash things down!
In general a left adjoint between algebraic categories (like the ones we're talking about) will freely throw in new elements generated by the extra operations and new equations between these elements, due to the extra equational laws.
New equations squash things down. Compared to "rig", the definition of "ring" has a new operation but also a new equation.
Jean-Baptiste Vienney said:
Let R be a ring which is at the same time a poset with order denoted ≤. We say that R is an ordered ring if the following are satisfied:
- if x ≤ y, then x + z ≤ y + z for every z;
- if 0 ≤ x and 0 ≤ y, then 0 ≤ xy.
An equivalent definition (adding in the presumption that 0 ≤ 1; i.e., that the values x with 0 ≤ x are closed under n-ary multiplication for all natural numbers n and not merely for positive n) is that an ordered ring is precisely a ring R along with a sub-rig P of values considered ≥ 0. (Then more generally, we say x ≤ y just in case y − x ∈ P.)
John Baez said:
For this it's good to think about the left adjoint of the forgetful functor from commutative monoids to monoids (or from abelian groups to groups). These left adjoints only squash things down!
In general a left adjoint between algebraic categories (like the ones we're talking about) will freely throw in new elements generated by the extra operations and new equations between these elements, due to the extra equational laws.
New equations squash things down. Compared to "rig", the definition of "ring" has a new operation but also a new equation.
Oh, that sounds like a helpful conceptual perspective! Freely throwing in not just elements but equations! To illustrate, if we started with some monoid with distinct elements ab and ba, when we try to build a commutative monoid from this we are forced to add in the equation ab = ba. And this "squashes" these two elements together.
I'm trying to think how this squashing can occur when we make a ring from a rig. We have a new operation of negation, and a new equation x + (−x) = 0 for all x in our rig.
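The ab = ba squashing can be seen very concretely. A small sketch (not from the discussion): abelianize the short words of the free monoid on {'a', 'b'} by identifying words with the same multiset of letters.

```python
from itertools import product

# Words of length <= 2 in the free monoid on {'a', 'b'}; imposing the
# new equation xy = yx identifies words with the same multiset of letters.
words = [''.join(w) for n in range(3) for w in product('ab', repeat=n)]
classes = {''.join(sorted(w)) for w in words}
print(len(words), len(classes))  # 7 words become 6 classes: 'ab' ~ 'ba'
```

Only 'ab' and 'ba' merge; every other word is alone in its class.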
Right! This perspective becomes more natural if you make a very simple but powerful conceptual move: think of any set as a category with only identity morphisms. The elements of the set correspond to objects; the equations between elements correspond to morphisms.
Then, "freely throwing in new elements and new equations" becomes "freely throwing in new objects and new morphisms".
This trick also clarifies other things. For example, a function between sets is onto iff it's surjective on objects (= elements) and one-to-one iff it's surjective on equations (= morphisms).
As a puzzle for anyone who enjoys it: use this perspective to make the concepts of faithful, full, and essentially surjective functor seem very similar to each other.
David Egolf said:
I'm trying to think how this squashing can occur when we make a ring from a rig. We have a new operation of negation, and a new equation x + (−x) = 0 for all x in our rig.
Here's a clue: a rig can contain nontrivial idempotents for addition, but a ring can't.
Todd has already given another clue, namely a nice small example:
Todd Trimble said:
Beware that group completion doesn't always just "add in negatives": some things may get squashed together in order to make way for negatives. For example, consider the Boolean rig {0, 1}, with "or" as addition (so 1 + 1 = 1). If you "group complete" this, then in the newly formed ring, 1 + 1 = 1 leads to 1 = 0: you have to squash 1 and 0 together. The group completion winds up having just one element.
John Baez said:
Right! This perspective becomes more natural if you make a very simple but powerful conceptual move: think of any set as a category with only identity morphisms. The elements of the set correspond to objects; the equations between elements correspond to morphisms.
That's pretty cool! I guess we can do something like this for any transitive and reflexive relation. For example, we can make a category of open sets for any topology (in the usual way) using this strategy with the relation ⊆.
Thinking about the case of requiring commutativity, we would add a morphism from ab to ba and from ba to ab. These will be inverses of one another. So I suppose that we can get the elements of our new algebraic structure by taking the isomorphism classes in the resulting category.
Mike Shulman said:
David Egolf said:
I'm trying to think how this squashing can occur when we make a ring from a rig. We have a new operation of negation, and a new equation x + (−x) = 0 for all x in our rig.
Here's a clue: a rig can contain nontrivial idempotents for addition, but a ring can't.
I see... if we have x + x = x in a ring, then x + x + (−x) = x + (−x) and so x = 0. Thus all idempotents for addition in a ring must be the zero element of the ring. So if we are starting in a rig with a nonzero element x such that x + x = x, then this element should get "squashed" to zero as soon as we add in negatives.
In the example that @Todd Trimble gave above, we have a nonzero idempotent. Namely 1, since 1 + 1 = 1. And so we expect 1 to get squashed to 0 when we make a ring from this Boolean rig.
John Baez said:
This trick also clarifies other things. For example, a function between sets is onto iff it's surjective on objects (= elements) and one-to-one iff it's surjective on equations (= morphisms).
I'm currently trying to see why a function between sets is one-to-one iff it is surjective on equations/morphisms. For example, let A be a set with two elements, where neither element is equal to the other. And let B be a set with a single element. Then a function f : A → B is not injective but it is surjective on morphisms/equations when viewed as a functor.
Interestingly there's also a way to see the Booleans as a ring, not just a rig: use 'exclusive or' as addition and 'and' as multiplication. This idea leads to the theory of [[boolean rings]].
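As a sanity check (a small sketch, not part of the original conversation), one can verify the ring axioms for ({0, 1}, xor, and) by brute force:

```python
# Brute-force check that ({0, 1}, xor, and) is a (Boolean) ring,
# whereas with "or" as addition the element 1 would have no negative.
B = [0, 1]
add = lambda x, y: x ^ y  # 'exclusive or' as addition
mul = lambda x, y: x & y  # 'and' as multiplication

for x in B:
    assert add(x, x) == 0  # every element is its own additive inverse
    assert mul(x, x) == x  # multiplicative idempotence (Boolean ring)
for x in B:
    for y in B:
        assert add(x, y) == add(y, x)
        for z in B:
            assert add(add(x, y), z) == add(x, add(y, z))
            assert mul(mul(x, y), z) == mul(x, mul(y, z))
            assert mul(x, add(y, z)) == add(mul(x, y), mul(x, z))
print("ring axioms hold")
```

The key difference from the "or" rig is the first assertion: x ^ x = 0 supplies the negatives that "or" cannot.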
By "surjective on morphisms" John actually meant [[full]].
John Baez said:
Interestingly there's also way to see the Booleans as a ring, not just a rig: use 'exclusive or' as addition and 'and' as multiplication. This idea leads to the theory of [[boolean algebras]].
Interesting! That makes me wonder if there might be a general strategy to adjust the addition of a rig to get a related rig without nonzero additive idempotent elements - which would then presumably support adding in negatives without as much squashing. (Analogous to moving from 'or' to 'exclusive or').
By 'boolean algebra' I meant [[boolean ring]] - I corrected this, but not fast enough.
That article, and the corresponding one on [[boolean algebras]], comes as close as I know to the "general strategy" you want.
Boolean algebras are certain rigs in which every element is additively idempotent; the corresponding boolean rings have no nonzero additive idempotents.
Interestingly, that nLab article says that the category of Boolean rings is equivalent to the category of Boolean algebras.
Right, that's what "corresponding" meant. :wink:
What do you call a rig where the addition is cancellable?
Because I think those rigs are the ones where one can add negatives to get a ring.
John Baez said:
Right, that's what "corresponding" meant. :wink:
By the way, the correct usage of "corresponding" in mathematics is tricky. I try to use it only when the map alluded to is invertible in some sense: a bijection, an equivalence of categories, etc. But this means not saying things like "given a rig, the corresponding ring formed by throwing in additive inverses..." or "given an operad, the corresponding symmetric monoidal category...". Some people say "corresponding" in cases where the map is not invertible!
Huh, that's interesting. I helplessly think of imaging/sensing, where different states can yield the same observation. I think in this case I would still be tempted to refer to the resulting observation as "corresponding" to the state that we are sensing. But I can also see wanting "corresponding" to refer to a situation where we can go back and forth between two things.
Madeleine Birchfield said:
What do you call a rig where the addition is cancellable?
I don't know a short name for that; people tend to discuss this issue at the level of the underlying additive monoid. So we say a commutative monoid is cancellative, or something like that, if x + y = x + z implies y = z.
Because I think those rigs are the ones where one can add negatives to get a ring.
I think a commutative monoid is cancellative iff its canonical map to its group completion is injective.
So yeah: I bet what you're saying is true.
I think I'm much looser with my use of "corresponding". I've never thought about it much, but in principle it seems not unreasonable to use it whenever there is a [[correspondence]].
Although if I were going to put a "the" in front of it, I'd expect the correspondence to be functional in that direction.
John Baez said:
Madeleine Birchfield said:
What do you call a rig where the addition is cancellable?
I don't know a short name for that; people tend to discuss this issue at the level of the underlying additive monoid. So we say a commutative monoid is cancellative, or something like that, if x + y = x + z implies y = z.
Because I think those rigs are the ones where one can add negatives to get a ring.
I think a commutative monoid is cancellative iff its canonical map to its group completion is injective.
So yeah: I bet what you're saying is true.
As mentioned before, I would just call it a cancellative rig (with a spelling option of removing an 'l', I guess).
While we're on the topic: there's a left adjoint to the forgetful functor from cancellative commutative rigs to commutative rigs. The unit of the adjunction, evaluated at a rig R, is by definition a rig map from R that is universal among maps from R to a cancellative rig. The map is surjective, and Schanuel calls it the Euler characteristic. It takes r to the equivalence class [r], where r ∼ s iff r + t = s + t for some t.
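Here is a sketch of that quotient for a finite commutative (additive) monoid; the greedy grouping below is valid because r ∼ s iff r + t = s + t for some t really is an equivalence relation in the commutative case. The example monoids are my own choices, not from the thread.

```python
def cancellative_quotient(elems, add):
    """Quotient of a finite commutative monoid by r ~ s iff
    r + t == s + t for some t; the quotient is cancellative, and the
    quotient map plays the role of Schanuel's 'Euler characteristic'."""
    def equiv(r, s):
        return any(add(r, t) == add(s, t) for t in elems)
    classes = []
    for r in elems:
        for cls in classes:
            if equiv(r, cls[0]):
                cls.append(r)
                break
        else:
            classes.append([r])
    return classes

# ({0, 1}, max): 0 + 1 = 1 + 1, so 0 ~ 1 and the quotient is trivial.
print(len(cancellative_quotient([0, 1], max)))  # 1

# Z/2 x ({0, 1}, max): the idempotent coordinate collapses, Z/2 survives.
pairs = [(a, b) for a in (0, 1) for b in (0, 1)]
componentwise = lambda p, q: ((p[0] + q[0]) % 2, max(p[1], q[1]))
print(len(cancellative_quotient(pairs, componentwise)))  # 2
```

In the second example the map is surjective but far from injective: four elements land on two classes, exactly the "squashing" discussed above.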
Isn't "cancellative rig" a bit ambiguous as to whether it's the addition or the multiplication that's cancellative?
A bit.
Although not really, since you essentially never have global cancellation for the multiplication on account of 0 · x = 0 for all x.
So there's just one rig that is multiplicatively cancellable, namely the trivial rig (for those who didn't get the conclusion).
Okay, but cancellation for nonzero elements is an interesting and important condition. In a ring, that's called being an integral domain.
Does the notion of ideals (maximal, prime, etc) make sense/play a significant role in rig theory?
I used to wonder what was 'integral' about an integral domain, but just now I read that a field used to be called a rational domain:
Hilbert mentions this in section 1 of the Zahlbericht, attributing it to Dedekind and/or Kronecker.
This makes the term 'integral domain' make a bit more sense.
Peva Blanchard said:
Does the notion of ideals (maximal, prime, etc) make sense/play a significant role in rig theory?
Prime ideals and maximal ideals play an important role in lattice theory, for example for Boolean algebras, distributive lattices, and so on. The nLab has a little bit on prime and maximal ideals in these generalized settings, for example [[prime ideal]] and [[prime ideal theorem]].
Returning to the original question, you can generalize from ordered fields to fields equipped with a "convex cone" of sorts: a subset that is closed under binary addition and multiplication, contains 1, and doesn't contain 0. Then you take these elements and throw in 0 to get a subrig.
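As an illustration (my own sketch; the thread doesn't single out an example, and ℚ(√2) is my choice of field): representing a + b√2 as a pair of Fractions, the elements that are positive under the embedding √2 ↦ 1.414… form such a cone, containing 1 and excluding 0, and a random spot-check suggests closure under addition and multiplication.

```python
from fractions import Fraction
import random

# Elements of Q(sqrt(2)) as pairs (a, b) standing for a + b*sqrt(2).
def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c + 2 * b * d, a * d + b * c)

def positive(x):
    """Exact test for a + b*sqrt(2) > 0 under sqrt(2) -> 1.414..."""
    a, b = x
    if a >= 0 and b >= 0:
        return (a, b) != (0, 0)
    if a >= 0:                 # b < 0: positive iff a > |b|*sqrt(2)
        return a * a > 2 * b * b
    if b >= 0:                 # a < 0: positive iff b*sqrt(2) > |a|
        return 2 * b * b > a * a
    return False               # a < 0 and b < 0

# The cone contains 1 and excludes 0...
assert positive((Fraction(1), Fraction(0)))
assert not positive((Fraction(0), Fraction(0)))

# ...and is closed under + and * on a random sample.
random.seed(0)
sample = [(Fraction(random.randint(-9, 9)), Fraction(random.randint(-9, 9)))
          for _ in range(60)]
cone = [x for x in sample if positive(x)]
assert all(positive(add(x, y)) and positive(mul(x, y))
           for x in cone for y in cone)
print("cone closed on sample")
```

Throwing 0 back in gives a sub-rig of ℚ(√2) that is not a ring, just as the message above describes for general fields.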