Let $C$ be a category that has some properties that make it suitable for doing logic in. For concreteness, let's say that $C$ is a Boolean algebra. So, $C$ has all finite products, all finite coproducts, and an appropriate notion of negation. That means we can talk about the "and", the "or", and the "not" of objects. I'm intuitively thinking of each object of $C$ as a predicate, where there is a morphism from $P$ to $Q$ under this condition: whenever some input to the predicate $P$ produces a true proposition, then providing that same input to the predicate $Q$ also yields a true proposition.
I'm interested in taking a setting like the above that is nice for doing logic, and obtaining a new setting that also lets us talk about the "and", "or" and "not" of objects. One idea that occurs to me is to consider sequences of objects in $C$, with the aim being to model different levels of confidence. For example, we might want to consider a category where objects are 3-tuples of objects in $C$, say $(c_1, c_2, c_3)$, where we are "not confident" in $c_1$, "fairly confident" in $c_2$ and "very confident" in $c_3$. This leads me to consider the functor category $C^J$, where $J$ is the thin category with three objects having this shape: $\bullet \to \bullet \to \bullet$.
I am wondering if the operations "and", "or", and "not" available in $C$ are also available in $C^J$. I would like to be able to apply these operations in this setting, in order to obtain a strategy for reasoning in the presence of multiple levels of confidence.
I'm also interested in a more general question. If $J$ is some small category, and $C$ is a Boolean algebra, then when (if ever) does the functor category $C^J$ inherit the operations of "and", "or" and "not" from $C$?
(By the way - I'm more interested in understanding how to analyze questions like this than knowing the answer for this specific case. So, general concepts and techniques related to this kind of question are of interest to me!)
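As a concrete reading of the setup above, here is a minimal sketch in Python. It assumes, purely for illustration, that the Boolean algebra $C$ is the powerset of a small "universe" of inputs, and that the arrows of $J$ point from lower to higher confidence, so that an object of $C^J$ is a nested chain of subsets.

```python
# Sketch: C = powerset Boolean algebra of a small universe of inputs.
# An object (predicate) is a subset; a morphism P -> Q exists exactly when
# every input satisfying P also satisfies Q, i.e. P ⊆ Q.
universe = {1, 2, 3, 4}   # hypothetical set of inputs

def has_morphism(P, Q):
    """There is a morphism P -> Q in C iff P is a subset of Q."""
    return P <= Q

# An object of C^J for the chain J = (1 -> 2 -> 3) is a functor J -> C:
# three predicates together with morphisms c1 -> c2 -> c3,
# i.e. a nested chain of subsets c1 ⊆ c2 ⊆ c3.
c1, c2, c3 = {1}, {1, 2}, {1, 2, 4}
assert has_morphism(c1, c2) and has_morphism(c2, c3)
```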
The operations of conjunction and disjunction in a Boolean algebra are examples of limits and colimits, respectively. If $C$ is any category with (co)limits of some shape then $C^J$ always has the same (co)limits, computed pointwise. You might have seen an argument like this in the case of functors into a bigger category like $\mathbf{Set}$.
Negation is more interesting! In about the simplest possible case, $C^{\bullet \to \bullet}$, we see that $C^{\bullet \to \bullet}$ no longer has negations, i.e. is no longer a Boolean algebra, in general. Indeed, given an arrow $x \le y$ in $C$, its negation must be an arrow $\neg x \le \neg y$ (since $\wedge$ and $\vee$ are computed levelwise, so such an arrow is the only thing that could intersect with it at $\bot$ and union with it to $\top$.) But such an arrow won't generally exist--in fact, $\neg x \le \neg y$ will only be true if $y \le x$! You have a better bet of this functor category being a Heyting algebra, but perhaps that's enough to say for the moment.
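To make that failure concrete, here is a quick check (a sketch, taking $C$ to be the powerset Boolean algebra of a two-element set): an arrow $x \le y$ in $C$ does not generally give an arrow $\neg x \le \neg y$, so negating levelwise does not even produce an object of $C^{\bullet \to \bullet}$.

```python
# Sketch: negation in the powerset Boolean algebra of {1, 2} is not covariant.
universe = {1, 2}

def neg(P):
    return universe - P

x, y = {1}, {1, 2}
assert x <= y                 # there is an arrow x -> y in C
print(neg(x) <= neg(y))       # False: {2} is not contained in {}, so no arrow ¬x -> ¬y
```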
Yes, not only does $C^J$ inherit all the limits and colimits that $C$ has, it also inherits all the $n$-ary covariant operations $C$ has, obeying all the same equations. And the idea is the same: you compute these things "pointwise".
By an $n$-ary covariant operation I just mean a functor $O \colon C^n \to C$.
If you play around a bit you can see how this gives a functor $(C^J)^n \to C^J$.
You're probably familiar with this in the case where $C$ is the set of real numbers and $J = \{1, \dots, k\}$ - both sets, which can be viewed as discrete categories. Then $C^J = \mathbb{R}^k$ is the set of $k$-tuples of real numbers. The real numbers have a binary operation called "addition", so we get a binary operation on $k$-tuples of real numbers, called "addition of vectors".
But it all works in a vastly more general way.
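In code, that pointwise lifting is just this (a sketch; `lift` is an illustrative name, not anything standard):

```python
# Sketch: lifting a binary operation on C to a pointwise operation on C^J,
# where J is the discrete category {1, ..., k}, so objects of C^J are k-tuples.
def lift(op):
    """Turn a binary operation on C into a componentwise operation on k-tuples."""
    return lambda xs, ys: tuple(op(x, y) for x, y in zip(xs, ys))

vector_add = lift(lambda a, b: a + b)
print(vector_add((1.0, 2.0, 3.0), (10.0, 20.0, 30.0)))   # (11.0, 22.0, 33.0)
```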
@Kevin Arlin pointed out the counterexample of negation. A Boolean algebra is a special kind of category, "and" and "or" are binary covariant operations, but negation is not a unary covariant operation.
The last statement is just a fancy way of saying that "P implies Q" does not imply "not(P) implies not(Q)". I suppose people who slip and act like it does are implicitly hoping that negation is a covariant functor. :upside_down:
Kevin Arlin said:
You have a better bet of this functor category being a Heyting algebra, but perhaps that's enough to say for the moment.
In case that isn't enough to say...
A Heyting algebra is a poset (with finite limits and colimits, I guess) that is a [[cartesian closed category]]. In logico-posetal terms this means there is an "implication" operation $\Rightarrow$ such that $P \le (Q \Rightarrow R)$ if and only if $P \wedge Q \le R$. From this you can define a "negation" as $\neg P = (P \Rightarrow \bot)$, where $\bot$ is the bottom element ("false"). But unlike in a Boolean algebra it won't satisfy $\neg \neg P = P$, so the "logic" you get is [[constructive logic]]. (Conversely, in a Boolean algebra you can define $P \Rightarrow Q = \neg P \vee Q$ to get a Heyting algebra.)
In the case of a functor category, $C^J$ is cartesian closed as long as $J$ is small and $C$ is complete and cartesian closed. So, for instance, if $J$ is a small category like your $\bullet \to \bullet \to \bullet$ and $C$ is a complete Boolean (or Heyting) algebra, then $C^J$ is a Heyting algebra. In this case the formula for the implication in $C^J$ becomes
$$(F \Rightarrow G)(j) \;=\; \bigwedge_{j \to k} \big( F(k) \Rightarrow G(k) \big),$$
where the meet ranges over all arrows $j \to k$ in $J$.
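Concretely, for the three-stage chain that formula can be computed as follows (a sketch in Python, assuming $C$ is a powerset Boolean algebra, so that $P \Rightarrow Q = \neg P \cup Q$; the function names are just illustrative):

```python
# Sketch: Heyting implication in C^J for the chain J = (1 -> 2 -> 3),
# with C the powerset Boolean algebra of a small universe.
from functools import reduce

universe = frozenset({1, 2, 3, 4})

def imp(P, Q):
    """Implication in C: P => Q is (complement of P) union Q."""
    return (universe - P) | Q

def imp_functor(F, G):
    """(F => G)(j) = meet over all stages k >= j of (F(k) => G(k))."""
    n = len(F)
    return tuple(
        reduce(lambda a, b: a & b, (imp(F[k], G[k]) for k in range(j, n)))
        for j in range(n)
    )

# Two objects of C^J, i.e. nested chains of subsets.
F = (frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3}))
G = (frozenset(),    frozenset({1}),    frozenset({1, 2, 3}))
print(imp_functor(F, G))
# (frozenset({3, 4}), frozenset({1, 3, 4}), frozenset({1, 2, 3, 4}))

# Note: the naive stage-by-stage implication would give {2, 3, 4} at stage 1,
# which is not contained in the stage-2 value {1, 3, 4}, so it would not even
# be an object of C^J; taking the meet over all later stages repairs this.
```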
Thanks everyone! It will take me some time to carefully read and respond to your responses, but I appreciate them very much!
Unfortunately, my chronic fatigue has been giving me a hard time the last few weeks. So, it's been tough for me to make speedy progress on understanding this stuff. To encourage myself, I thought I'd share my progress so far!
My first order of business is to try and understand this:
Kevin Arlin said:
The operations of conjunction and disjunction in a Boolean algebra are examples of limits and colimits, respectively. If $C$ is any category with (co)limits of some shape then $C^J$ always has the same (co)limits, computed pointwise. You might have seen an argument like this in the case of functors into a bigger category like $\mathbf{Set}$.
Specifically, I want to first understand the argument alluded to in the final sentence. I believe I found a good starting point in Proposition 8.7 of "Category Theory" by Steve Awodey. After changing the notation a bit, the first part of this proposition states:
For any locally small category $\mathcal{A}$, the functor category $\mathbf{Set}^{\mathcal{A}^{\mathrm{op}}}$ is complete.
I believe the idea here is that the category of functors from $\mathcal{A}^{\mathrm{op}}$ to $\mathbf{Set}$ inherits all small limits from $\mathbf{Set}$.
(The second part of Proposition 8.7 mentions the "evaluation functor", and so I spent an enjoyable time learning about that, the concept of "exponentials", and something very cool called the "bifunctor lemma". But I'm not sure that stuff is really needed to understand the proof of the first part of Proposition 8.7).
Now, let me try to understand the proof of the quoted proposition provided by Awodey. We want to show that $\mathbf{Set}^{\mathcal{A}^{\mathrm{op}}}$ has every small limit, assuming $\mathcal{A}$ is locally small. To do this, we consider an arbitrary small diagram in $\mathbf{Set}^{\mathcal{A}^{\mathrm{op}}}$, and try to show it has a limit. That is, we start by considering a functor $D \colon \mathcal{I} \to \mathbf{Set}^{\mathcal{A}^{\mathrm{op}}}$, where $\mathcal{I}$ is a small category.
If the limit of $D$ exists, it corresponds to some object $L$ in $\mathbf{Set}^{\mathcal{A}^{\mathrm{op}}}$, together with a bunch of morphisms from $L$ to the objects in the image of $D$. Note that $L$ is a functor $\mathcal{A}^{\mathrm{op}} \to \mathbf{Set}$.
The Yoneda lemma tells us that $L(a) \cong \mathrm{Hom}(y(a), L)$, for any object $a$ in $\mathcal{A}$, where $y \colon \mathcal{A} \to \mathbf{Set}^{\mathcal{A}^{\mathrm{op}}}$ is the Yoneda embedding functor. This functor sends an object $a$ to the representable functor $\mathrm{Hom}_{\mathcal{A}}(-, a)$.
Next, the proof makes use of the fact that "representable functors preserve limits". Understanding a proof of that will probably be my next step! But I should stop for now, because I'm getting tired.
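For reference, here is where that argument is heading, using the notation above and granting the two facts just mentioned (the Yoneda lemma and "representable functors preserve limits"):
$$L(a) \;\cong\; \mathrm{Hom}(y(a), L) \;=\; \mathrm{Hom}\big(y(a), \lim_i D(i)\big) \;\cong\; \lim_i \mathrm{Hom}\big(y(a), D(i)\big) \;\cong\; \lim_i D(i)(a).$$
So if the limit $L$ exists, its value at each object $a$ of $\mathcal{A}$ must be the limit in $\mathbf{Set}$ of the values $D(i)(a)$; in other words, limits in $\mathbf{Set}^{\mathcal{A}^{\mathrm{op}}}$ are computed pointwise.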
Here's something that someone may have told you already, but it's really good intuition for what's going on here:
If $R$ is a ring, then for any set $X$ the functions $X \to R$ also form a ring! Why? Because you can define the operations pointwise: $(f + g)(x) = f(x) + g(x)$, $(f \cdot g)(x) = f(x) \cdot g(x)$, and so on.
Indeed, for any algebraic structure $A$, the functions into $A$ form an algebra of the same type, just by working pointwise in this way.
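As a tiny illustration of the pointwise construction (a sketch, with hypothetical names):

```python
# Sketch: the functions X -> R form a ring, with operations defined pointwise.
# Here R is the ring of integers and functions are modelled as Python callables.
def pointwise_sum(f, g):
    return lambda x: f(x) + g(x)

def pointwise_product(f, g):
    return lambda x: f(x) * g(x)

f = lambda x: x + 1
g = lambda x: 2 * x
print(pointwise_sum(f, g)(3), pointwise_product(f, g)(3))   # 10 24
```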
Now, one enlightening way to view limits/colimits/etc is as certain "algebraic" structure on a category -- in much the same way that a "ring" is structure on a set. And with this intuition, you might guess that functors into a category $C$ will have all the same "algebraic structure" (read: limits/colimits/etc) that $C$ does. This intuition is true and the proof is exactly the same!
If $C$ has products, then we can compute the product in $C^J$ as $(F \times G)(j) = F(j) \times G(j)$. But now everything is a functor, so I also have to tell you what happens to the arrows. But it's the same thing! If $f \colon j_1 \to j_2$ then $(F \times G)(f) = F(f) \times G(f)$ works!
More generally, you can compute limits (and colimits) pointwise too: $(\lim_i F_i)(j) \cong \lim_i F_i(j)$, and dually for colimits.
The correct way to make this precise is with the language of "2-monads" but you can go pretty far just taking it as intuition, and proving whatever special cases you need by hand ^_^
Thanks @Chris Grossack (they/them) ! Your explanation makes a lot of sense to me. Thinking of limits (or colimits) as specifying some kind of algebraic operation seems like quite a nice perspective.
I'll still want to work through the details of the proposition I discuss above, but having a big-picture explanation should be helpful!
Absolutely! Take your time, and there's a ton of us here willing to help out whenever you get stuck ^_^
Reading over your comment again @Chris Grossack (they/them), I realize that I don't follow what you said about arrows in your example with products. You indicate that we can describe how $F \times G$ acts on an arrow $f$ by setting $(F \times G)(f) = F(f) \times G(f)$. Unfortunately, I don't know how to compute the right-hand side here, which is the product of the two arrows $F(f)$ and $G(f)$!
I know how to compute a kind of product of two arrows with the same source, in a category with products. If $f \colon A \to B$ and $g \colon A \to C$, then we can set $\langle f, g \rangle$ as the unique morphism to $B \times C$ from $A$ specified by the universal property of products.
But $F(f)$ and $G(f)$ aren't in general going to have the same source object. So I'm not sure what kind of product you had in mind for computing $F(f) \times G(f)$! I'm guessing this might be a kind of product I'm not familiar with.
If you could elaborate on this, I'd appreciate it!
Edit: I think the answer by Randall here might explain this.
That is, if we have a morphism $f \colon A \to B$ and a morphism $g \colon C \to D$, we can form a morphism $f \times g \colon A \times C \to B \times D$ as follows. We first need to make a morphism from $A \times C$ to $B$, and a morphism from $A \times C$ to $D$. The first of these can be acquired as $f \circ \pi_A$ and the second as $g \circ \pi_C$, where $\pi_A \colon A \times C \to A$ and $\pi_C \colon A \times C \to C$ are the projections. Then we can use the universal property of products to get a morphism from $A \times C$ to $B \times D$, which is defined in terms of $f \circ \pi_A$ and $g \circ \pi_C$.
This seems like it could be a good way to take the product of morphisms. Although, I'm not 100% sure this is what was meant above.
@David Egolf
Of course!
If you have maps $f \colon A \to B$ and $g \colon C \to D$, then we want a map $f \times g \colon A \times C \to B \times D$. This says that the product operation itself is a functor from $C \times C$ to $C$. Informally, what should this arrow do?
The only reasonable choice is $(a, c) \mapsto (f(a), g(c))$, and so we would like to express this using arrows. (There's a notion of "internal logic" for certain categories that lets you use this kind of set-theoretic manipulation rigorously, but let's not get into that here. For now we'll take it as a motivating example.) Unsurprisingly, when we express this operation in terms of arrows we get exactly the arrow that you built! ^_^
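To close the loop, here is that construction spelled out in $\mathbf{Set}$ (a sketch; the projection and pairing helpers are just illustrative names):

```python
# Sketch: the product of morphisms f : A -> B and g : C -> D in Set,
# built exactly as above: f × g = ⟨ f ∘ π1 , g ∘ π2 ⟩ : A × C -> B × D.
def pi1(pair):
    """Projection A × C -> A."""
    return pair[0]

def pi2(pair):
    """Projection A × C -> C."""
    return pair[1]

def pairing(u, v):
    """⟨u, v⟩: the unique map into a product induced by u and v."""
    return lambda x: (u(x), v(x))

def product_of_maps(f, g):
    """f × g = ⟨ f ∘ π1 , g ∘ π2 ⟩."""
    return pairing(lambda p: f(pi1(p)), lambda p: g(pi2(p)))

f = lambda a: a + 1     # f : A -> B  (numbers to numbers)
g = str.upper           # g : C -> D  (strings to strings)
print(product_of_maps(f, g)((3, "yes")))   # (4, 'YES')
```

And indeed this sends $(a, c)$ to $(f(a), g(c))$, as in the informal description above.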