You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Hello all! Here is the official thread for Mike Shulman's talk.
Title: Conservativity of duals.
When: Thursday, May 14, 12 noon EDT.
Youtube live stream: https://youtu.be/GwtChA9btgU
Zoom call: https://mit.zoom.us/j/280120646
Meeting ID: 280 120 646
Hello all! We start in 5 minutes.
30 seconds!
In the YouTube broadcast, the top right corner is covered by the webcam video
In motivating the conservative map - why do we want to prevent extra morphisms (traces etc.) in the first place? Is it because the (type) system gets more complicated and harder to control?
is that gluing construction a sketch of a proof or can this be carried out as a precise argument in some generality?
thank you !
what's a good reference for this fact about lax morphisms, colax morphisms and comma objects? (The abstract nonsense that you can sometimes use to prove the glueing category has the right structure)
this is the page on the comma-like construction he mentions but there are no references: https://ncatlab.org/nlab/show/colax%2Flax+comma+object
https://arxiv.org/abs/1104.2111
That's the Lack-Shulman paper he mentioned
Thanks!
the Laxhulman paper :)
great, thank you! I have wondered why the glueing category tends to have the structure you want, it would be nice to see a general reason for that
In those algebraic manipulations of Homs, I lost track of how this relates to the parallelism talked about in the motivation part. There we got the intuition that the star is about creating channel types. Does this approach relate to the existing pi-calculus approach to processes studied by some type theorists?
Is there a way to say something like "CT is conservative over PA" (or, perhaps an analogous statement that is actually true), by saying some category is conservative over another?
BTW here's the paper this talk is based on: https://arxiv.org/abs/2004.08487
I don't know about the categorical semantics of pi-calculus.
https://en.wikipedia.org/wiki/%CE%A0-calculus
(But I know it's used by some more or less academic approaches to distributed systems)
@Brian Pinsky His mentioning of Linear vs. Intuitionistic logic without negation indeed sounded like there's a logical reading like this to it.
Nikolaj Kuntner said:
I don't know about the categorical semantics of pi-calculus
I googled proof nets and pi-calculus and this paper came up:
https://perso.ens-lyon.fr/olivier.laurent/picppn.pdf
after the talk: does anyone have a link to Lafont's Phd thesis? It's not linked on his website: http://iml.univ-mrs.fr/~lafont/publications.html
thx
Max New said:
after the talk: does anyone have a link to Lafont's Phd thesis? It's not linked on his website: http://iml.univ-mrs.fr/~lafont/publications.html
The title of his thesis is "Logiques, catégories & machines - Implantation de langages de programmation guidée par la logique catégorique" (roughly: "Logics, categories & machines - implementation of programming languages guided by categorical logic"), so I suspect it might be in French.
Max New said:
after the talk: does anyone have a link to Lafont's Phd thesis? It's not linked on his website: http://iml.univ-mrs.fr/~lafont/publications.html
I've asked around before without much success... last time I asked someone got round to emailing Lafont, but got no response
Well Mike clearly has a copy because he references specific pages in the paper
conservative truth and peano arithmetic
or computational truth
I forget; it's PA plus a truth predicate that satisfies induction
I'm just thinking of the definition of conservativity I'm familiar with from model theory
Max New said:
Well Mike clearly has a copy because references specific pages in the paper
in Mike's paper there's page numbers in the bibliography back-referencing to where the citation is, but I can't see a page number in the thesis being referred to? (I might be missing something!)
the most precise citation of the Lafont argument I've seen is something like "Annexe C" in his thesis. It gets cited for all these kinds of arguments so I'd quite like to see what it actually says...
oh he was referring to his own paper. I asked in the zoom and he says he's never been able to find it himself
Yeah, if anyone can find a copy of Lafont's thesis I would love to see it.
Hey all, here's the video! https://youtu.be/uD1VGWJXX0M
If anyone is still around, I was wondering the following: if we have a D2-calculus that defines a unique morphism in the D1 category, is there a way of knowing what this morphism is exactly in D1?
For example, if the *-autonomous calculus defines a morphism in a closed SMC (using for example tools available in the *-autonomous setting such as linear distributors), is there a way of knowing what this morphism is in the closed SMC (where there are no such linear distributors)? Is this easy to figure out, or does it require us to go through the construction given in the paper?
I don't know of a generally applicable way to do it except to beta-reduce the construction in the paper. You don't have to go through the entire gluing construction though, you can just interpret your D2-syntax into the envelope and use the fact that the envelope is fully faithful (which is just a Yoneda-like argument, so going backwards is just "evaluating at identities"). The gluing argument is just to show that the result doesn't depend on the particular choice of the envelope (rather than some other fully-faithful embedding into a D2-category).
I wonder if we can come up with a concrete example of a situation where there is a fully faithful morphism to a D2 model but the unit is not fully faithful.
And another basic question: I've seen this trick a few times to restrict to presheaves that preserve certain limits in order for the Yoneda embedding to preserve certain colimits. Is this equivalent to using sheaves? Does every collection of colimits correspond to some Grothendieck topology? What about the converse?
There's two things "certain limits" could mean: either "certain shapes of limits", like finite products, or really some specific limits (maybe a finite list of them, but not necessarily).
are you asking me a question?
If you take a small category and pick some specific colimits in the second sense and look at those presheaves which take the specified colimits to limits, in general you get a locally presentable category.
Also, it doesn't really matter whether you start with specified cocones that are actually colimits, or just some arbitrary collection of cocones.
Any Grothendieck topos can arise in this way, but for the result to be a topos is a pretty strong additional condition--roughly speaking, you need the specified cocones to be stable under pullback. Lots of locally presentable categories are not topoi, for example, the category of rings.
https://ncatlab.org/nlab/show/sketch
Thanks @Mike Shulman for this really interesting talk! I was curious to know more about a small comment you made, linking duals in SMCs and logic programming. Could you say a bit more about this here? Is there a paper you could point me towards?
Robin Piedeleu said:
Thanks Mike Shulman for this really interesting talk! I was curious to know more about a small comment you made, linking duals in SMCs and logic programming. Could you say a bit more about this here? Is there a paper you could point me towards?
I don't know what the best reference is, but using the sequent calculus for multiplicative linear logic, duals allow one to shift objects between the antecedent and consequent part of the sequent. When an object changes sides it becomes its dual.
Here is a description of the sequent calculus:
https://plato.stanford.edu/entries/logic-linear/#SeqCal
So if you interpret objects in the antecedent as resources for the proof and the objects in the consequent as byproducts this gives a resource sensitive type system of sorts.
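To make that side-switching concrete, here is a sketch (my notation, not from the talk) of the two MLL rules that move a formula across the turnstile, together with their categorical reading via duals:

```latex
% Duals let a formula cross the turnstile, becoming its dual as it goes
% (\parr is from the cmll package):
\[
\frac{\Gamma, A \vdash \Delta}{\Gamma \vdash A^{\ast}, \Delta}
\qquad\qquad
\frac{\Gamma \vdash A, \Delta}{\Gamma, A^{\ast} \vdash \Delta}
\]
% Categorically, in a *-autonomous category this is the isomorphism
\[
\mathrm{Hom}(\Gamma \otimes A, \Delta) \;\cong\; \mathrm{Hom}(\Gamma, A^{\ast} \parr \Delta).
\]
```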
@Robin Piedeleu I was thinking of A typed foundation for directional logic programming by Uday Reddy. (Not duals in SMCs, but duals in -autonomous categories.)
Max New said:
I wonder if we can come up with a concrete example of a situation where there is a fully faithful morphism to a D2 model but the unit is not fully faithful.
Yes, I've pondered that too, but haven't gotten anywhere.
Just read that Reddy paper, very nice! Note that it leaves out additive units because they "seem to play no role in a programming language". I wonder if this is related to the conservativity theorem not holding for units! Maybe the semantics can actually be presented as a Chu construction
It also reminded me of the obvious question from a PL perspective: what would it take to extend your proof to (1) inductive/coinductive data and (2) exponentials ! and ?
That additive units play no role in PL seems like a premature optimization: why have empty types, since obviously they don't contain any values? Mizar made that error as well. But then there are control operators that don't type, and so on. [Also: which Reddy paper? I didn't obviously see a link in the above discussion, and I couldn't attend the seminar]
This one: "A typed foundation for directional logic programming" that Mike just mentioned.
I'm saying there might actually be a good reason in this case to leave it out. By "no relevance" he might not have meant "useless" but instead meant "I don't see how to extend my operational semantics"
Thanks (for the paper title and the potential reason).
Basically, the Chu construction should imply that certain predicates in the language of that paper (those that correspond to a sequent A1, ..., An |- B consisting of only intuitionistic formulae once you get rid of all the duals) are implementable as linear functions. This property does not hold if you also include the additive units! Though it seems like you would just need to put this in as a caveat of the interpretation, not that they should have no interpretation at all
so the problem is the additive units don't act like intuitionistic additive units, so you might want to leave them out
Max New said:
It also reminded me of the obvious question from a PL perspective: what would it take to extend your proof to (1) inductive/coinductive data and (2) exponentials ! and ?
For (2) I believe the answer is that it works just fine. I've half-written up a draft that does it, using a notion of "LNL polycategory" that's a semantic correspondent of split-context deduction systems for linear logic.
For (1) it's much less clear to me. One thing I've played around with a bit is saying: if I have an inductive datatype that's a fixed point of some functor that can be expressed in terms of the tensor products and nonempty (co)limits, then its image in the envelope should still be a fixed point of the corresponding functor, where we use an envelope that preserves binary coproducts and the cartesian monoidal structure. It won't be a least or greatest fixed point in the envelope, but I'm not sure exactly what sort of inductiveness Reddy is assuming for his recursive datatypes. If you're interested in talking about this more I would be too!
wow, LNL polycategory is something I'd be very interested in seeing! Since I'm partial to CBPV/LNL it's something I've asked people more familiar with classical LL for but never got anything
Re: the inductive/coinductive stuff, I don't see anything in the Reddy paper that indicates that the inductive types are a LFP/GFP, just that an isomorphism exists, so that interpretation might work fine
Oh but it does allow things to be defined recursively duh
Mike Shulman said:
Yeah, if anyone can find a copy of Lafont's thesis I would love to see it.
a small update on this. I asked Hugo Paquet, who did some asking around in turn. According to Lafont himself the thesis was never scanned so there are only physical copies, one of which should be in Paris Diderot's thesis archives.
So it looks like it's going to stay mysterious, unless someone fancies a trip to Paris for some scanning.
Well there are of course people at Paris Diderot that might be interested or have students that would do it (once the archives are physically accessible): https://www.irif.fr/~petrisan/ https://www.irif.fr/~mellies/
good idea - I'll email Paul-Andre once the lockdown in France loosens up
@Mike Shulman is your conservativity result constructive? My sticking point is the use of the adjoint functor theorem, which I've been informed on twitter in general uses axiom of choice
Well, if you heard it on twitter I suppose it must be true...
The definition of the adjoint in an AFT is pretty concretely given in terms of limits/colimits. The only context I can think of in which an AFT requires AC is if your metatheory is set-theoretic (as opposed to univalent) and the relevant limits/colimits in your categories "exist" only in the sense of a mere existence statement, rather than being specified by a function. That happens occasionally, but I believe the vast majority of complete/cocomplete categories arising in practice do have a function that specifies their limits/colimits even if you don't assume AC.
Thanks, I've been trying to understand the proof of theorem 6.3 part 1 that shows the (co)tensor preserving modules form a reflective subcategory, which uses the AFT and I've never learned the AFT before.
I think of the 3 properties (locally presentable, closed under limits, and accessibly embedded) I only understand why it's closed under limits.
Thanks for bringing this up. I realized after my previous post that what I wrote in the proof of that theorem is, while technically correct, not really helpful for an analysis of constructivity. The natural way (at least, for me) to see that the category in question is locally presentable is to view it as the category of models for a sketch. But the way we prove that the category of models for a sketch is locally presentable, in general, is to show that it's a reflective subcategory of a presheaf category, and in this case the relevant presheaf category is the one in question! So this is "morally" a circular argument.
If we fully beta-reduce things, the way we prove that the subcategory of models is reflective in the presheaf category is to construct a reflection by a more or less explicit inductive argument, successively adding in new elements that have to be there and identifying elements that have to be equal, over and over again. This is indeed something that we can't always do in constructive set theories like CZF or IZF (or even ZF), although one might argue that this isn't morally an issue of constructivity but rather an indictment of the impoverished tools for inductive constructions available in those theories -- for instance, we can always do it in HoTT with HITs.
In the specific case at hand, I think that because all the "operations" are finitary, we should be able to construct the reflection by a mere countable iteration, which should work in pretty much any constructive context. But it would be more of a problem if the doctrine contains infinite limits or colimits; in that case we might need either ZFC or HITs.
One last thing to note is that even the statement of 2-conservativity refers to "the free Y generated by an X". This is already something that generally requires some kind of inductive construction to produce! Again, if there are no infinite limits included in the doctrine I think that countable iteration should suffice. But in general, I would expect that any reasonably natural framework that has the tools to state a 2-conservativity theorem should also have the requisite tools to prove it.
I should probably add some discussion of this point to the paper, since there may be readers who care about constructivity. Thanks again for bringing it up.
Thanks Mike, do you have a reference I could look to for this explicit construction of the reflection for models of sketches?
Also, constructivity here is very important because you should be able to extract a normalization algorithm from this proof
Hmm, for some meaning of "normalization algorithm" I guess. I can see extracting an algorithm that computes (e.g.) an IMLL proof from an MLL proof, but it's not clear to me how to call that normalization; can you explain?
For a reference, I think it's described in Adamek-Rosicky "Locally presentable and accessible categories". The original reference might be Freyd-Kelly "Categories of continuous functors I".
I guess you could call it "cut elimination" as well, but basically it gives you a procedure to take a term of ILL type that uses CLL stuff internally and "reduce" it to only contain ILL stuff. The resulting term I think does end up being in some kind of $\beta$-short $\eta$-long form, but I meant it informally
How can you tell that it ends up $\beta$-short $\eta$-long? I thought we were only at the stage of wondering whether the proof is constructive, so that you get an algorithm in the first place?
Well I could be wrong, I'm still trying to understand the proof. I was trying to write out the proof as an explicit logical relation, and without accounting for the reflector it certainly looks that way: i.e., a proof of a tensor is a pair of proofs, etc. However, maybe once you account for the reflector all of that is lost in quotients
I'm not sure whether I understand the conversation here, but note that you can also constructively form the group described by a certain finite presentation by generators and relations, at least if you have quotients, but it doesn't mean you get out an algorithm to solve the word problem in that group.
@Max New Yes, the reflector is the main issue. I can definitely see an algorithm that's easy to extract in the case where no reflection is necessary. I suppose you could also try to formulate everything with setoids.
@Reid Barton I'm not certain, but I think your point is relevant and has to do with the reflector once again: when it gets into the picture the "normal forms" that come out will be elements of some quotient set. Again, you could try to use setoids, or the other tricks that people use to extract normalization algorithms from gluing proofs (e.g. https://arxiv.org/abs/1809.08646), but that would be a more substantial undertaking.
I've just learned that @Christine Tasson has uploaded a copy of Lafont's thesis. Now time for me to learn French :rolling_on_the_floor_laughing:
To continue reviving this thread: in PL applications we have not just coproducts but also inductive constructions such as NNOs/W-types. Does using the coproduct-preserving modules already work for 2-conservativity with NNOs, or is it more subtle? It seems that using coproduct-preserving modules would already ensure that you get a module $N$ satisfying $N \cong 1 + N$, but I'm not sure if you get that it is an NNO.
I don't think that works. $N \cong 1 + N$ isn't enough to make it an NNO. It's related to this question.
I'll read the Elephant section when I'm back at work, but does "standard" mean that it is the colimit of the $\omega$-chain of finite ordinals $0 \to 1 \to 2 \to \cdots$? If that's the case then I guess you can just require your modules to preserve this colimit as well.
I don't remember for sure, but I think standardness is weaker than that. That basically says that the NNO is the coproduct $\coprod_{n \in \mathbb{N}} 1$, but I think an NNO can be standard even if infinite colimits don't exist.
I didn't know about this "standardness", but it sounds interestingly reminiscent of the, umm, standard notion of "standard" model of the natural numbers.
In classical logic one sometimes sees a hand-wavy useless notion of "standard" model of the natural numbers, meaning something like "the true natural numbers according to God".
But there's also a more precise and useful notion, and that's what I'm talking about here: namely, an initial model of PA in the set theory we're using - say ZFC. We can define the ordinal $\omega$ as a set in ZFC, and this gives an initial model of PA.
This is reminiscent of the answer on MathOverflow, which says that a natural numbers object $N$ is standard if the numerals $\ulcorner n \urcorner : 1 \to N$ (for external natural numbers $n$) are jointly epimorphic.
In both cases we're invoking some external set theory to set the standard. But in quite different ways, it seems to me.
If I understand correctly, the "standard natural numbers" are a specific model of PA (in some model of ZFC), whereas this notion of "having standard natural numbers" is a property of a topos (which is like a model of ZFC) -- a property that relates to "the" ("standard"?) natural numbers in that topos.
It reminds me a bit of the notion of whether a model of set theory can contain extra ordinals.
And also of omega-inconsistency.
Mike Shulman said:
If I understand correctly, the "standard natural numbers" are a specific model of PA (in some model of ZFC), whereas this notion of "having standard natural numbers" is a property of a topos (which is like a model of ZFC) -- a property that relates to "the" ("standard"?) natural numbers in that topos.
That sounds right.
These things can get even more slippery; I spent a lot of time blogging with Michael Weiss about what happens when you take a model of PA in ZFC, and then look at a model of that in some set theory.
Since you mentioned PL applications, Max, I found the following observation (from the nLab) about CCCs with NNOs also helpful.
The free category with finite products and a parameterized NNO has only the primitive recursive maps $\mathbb{N} \to \mathbb{N}$. But any CCC with NNO has at least the $\epsilon_0$-terminating maps (e.g. Ackermann). So it's another reason that we can't hope to conservatively complete with function spaces in general while preserving NNOs. This is quite disappointing!
About sheaf categories. What is still surprising to me is that the O'Hearn-Riecke model of higher-order recursion is basically a sheaf model with NNO but still manages to obtain full abstraction. We explored it here but I still think it's mysterious.