Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: community: our work

Topic: Conor Mc Bride


view this post on Zulip Conor McBride (Mar 24 2021 at 23:40):

Which "indexed containers" are profunctors by construction?

view this post on Zulip Conor McBride (Mar 24 2021 at 23:52):

I mean, a big part of my work is (in a programming languages datatype declaration context) to notice and exploit what is already under my nose.

view this post on Zulip Tom Hirschowitz (Mar 25 2021 at 10:11):

I'm not sure I'm parsing the question correctly. Would you mind expanding a little?

view this post on Zulip Conor McBride (Mar 25 2021 at 11:05):

I don't mind expanding. My motivation comes from dependently typed programming, where our notion of "inductive datatype" amounts to "proof-relevant relation", i.e. data consistent with some explicit indexing discipline (e.g. terms which are well typed and in scope by construction). These data structures are fixpoints of polynomial functors, but not in an exciting enough way, to my taste. In particular, the types used as indices often have categorical structure giving rise to actions on the indexed data (e.g., scopes form a category with embeddings as morphisms, in a way which well scoped terms respect).

That's to say, these indexed data structures very often are intended by their designers to have nontrivial presheaf/profunctor structure, but the current technology allows that intent to be communicated laboriously, after the fact. I write my datatype of terms as if scopes have no structure; I implement the action of scope embedding on terms; I prove that it respects identity and composition; I die of boredom. I despair of pointing out that structure happens to arise as some sort of outrageous coincidence, when in fact I put it there deliberately but couldn't say so.

Our starting point is polynomial functors on Set. The data are given by diagrams O <- S <- P -> I, giving rise to functors between slices, Set/I -> Set/O. Very often, we take O to be I, and then take the initial algebra, giving us an indexed inductive family of sets, in Set/I, or Set^I if you like. That's a good start, but it's not alert to structure other than equality in I. If my indices have interesting categorical structure, I want to get my hands on nontrivial functors from curly I to Set, and I'm wondering how to refine the notion of the underlying polynomial functors to get that structure for the asking.
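To make the action of such a diagram concrete for finite data, here is a small Python sketch (my illustration only; all names are hypothetical, nothing here is from the thread). A diagram $O \leftarrow S \leftarrow P \rightarrow I$ of finite sets acts on an $I$-indexed family by "choose a shape over $o$, then fill each of its positions with data of the requested input sort":

```python
from itertools import product

def polynomial_apply(O, S, P, s_of, p_of, r_of, X):
    """Apply the polynomial functor given by O <- S <- P -> I to an
    I-indexed family of finite sets X, yielding an O-indexed family.

    s_of : S -> O   (which output sort each shape targets)
    p_of : P -> S   (which shape each position belongs to)
    r_of : P -> I   (which input sort each position requests)
    X    : dict mapping each i in I to a list of elements

    F(X)(o) = sum over shapes s with s_of(s) == o of the product,
    over positions p with p_of(p) == s, of X(r_of(p)).
    """
    result = {o: [] for o in O}
    for s in S:
        positions = [p for p in P if p_of(p) == s]
        # one tuple of fillers per way of filling every position of s
        for fillers in product(*(X[r_of(p)] for p in positions)):
            result[s_of(s)].append((s, dict(zip(positions, fillers))))
    return result
```

With one sort and shapes `leaf` (no positions) and `node` (two positions), this is $F(X) = 1 + X^2$: one layer of binary trees.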

view this post on Zulip Jacques Carette (Mar 25 2021 at 12:08):

Two (very obvious) comments:

  1. if your $I$ has structure, and you don't ask that that structure already be present in $P$ and preserved, then you're going to have to reconstruct all of that tediously. Same for $O$.
  2. Just like with identity-on-objects functors, which were discussed recently on the Category Zulip, $I = O$ is not some kind of post facto thing you discover. By then it's too late, and life is tedious. [Same thing arises when dealing with 2 monoid structures on the same carrier set; one way to say this brings much pain, and the other no pain. Just because we now have equalities that compute doesn't mean that that's the pleasant way to do things when you can avoid all of it altogether.]

I think the problem lies in the starting point: it's like starting with block-of-words as your building pieces. Way better than bits, that's for sure, but still way too unstructured. [Which is probably exactly what you're saying in your last line!]

If I were to attack this, I'd figure out all the nice properties I want in my functors $\mathsf{Set}/I \rightarrow \mathsf{Set}/O$, then figure out what diagrams give rise to those functors. Maybe it won't be quite polynomials that you'll end up with.

view this post on Zulip James Wood (Mar 25 2021 at 13:41):

Jacques Carette said:

Same thing arises when dealing with 2 monoid structures on the same carrier set

I suppose there must also be something like this for models of bunched implications, right?

view this post on Zulip Tom Hirschowitz (Mar 25 2021 at 13:57):

Taking the untyped $\lambda$-calculus as an example, we may construct it following the Fiore-Plotkin-Turi yoga, using the endofunctor on $[\mathbb{F},\mathbf{Set}]$ defined by

$$\Sigma(X)(n) = X(n)^2 + X(n+1).$$

This functor is polynomial in the general sense of familial, hence, I think, so is the free monad on it, say $\mathbf{T}$, and the $\lambda$-calculus is $\mathbf{T}(I)$, where $I(n) := n$.

Thinking of $\lambda$-terms as proofs in natural deduction (for very basic sequents, of the form $n \vdash$), one may think of $\mathbf{T}(X)(n)$ as partial proofs with axioms in $X$.

Does functoriality in $X$ and $n$ satisfy your needs, or do you want more?
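As a quick sanity check on this endofunctor, here is a Python sketch of my own (not part of the thread): iterating "variables $+ X(n)^2 + X(n+1)$" enumerates untyped $\lambda$-terms with $n$ free variables, cut off at a given tree height.

```python
from itertools import product

def terms(n, h):
    """All untyped lambda-terms with free variables 0..n-1 and height <= h,
    following Sigma(X)(n) = X(n)^2 + X(n+1), with variables as the
    generators I(n) = n."""
    result = [('var', i) for i in range(n)]  # the generators I(n) = n
    if h > 0:
        smaller = terms(n, h - 1)
        result += [('app', t, u) for t, u in product(smaller, repeat=2)]  # X(n)^2
        result += [('lam', t) for t in terms(n + 1, h - 1)]               # X(n+1)
    return result
```

With no free variables there is exactly one closed term of height at most 1 (the identity), and five of height at most 2.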

view this post on Zulip Conor McBride (Mar 25 2021 at 14:41):

Thanks! This is a good start, but I'm hoping for more. For this example, I'd be trying to get my hands on a functor in $[\Delta_+,\mathbf{Set}]$ (where by $\Delta_+$ I mean the category of order-preserving injections on finite sets), so that I can shift terms to anywhere that their support is in scope. Crucially, $\cdot + 1$ is still a functor in $\Delta_+$. And it's that sort of functoriality requirement that I'm interested in bottling, in general.

That is, where in polynomials on sets, $O \leftarrow S \leftarrow P \rightarrow I$, we have functions, we now need a richer functoriality, but I'm struggling (productively) to put my finger on precisely what.

There are lots of situations like this (type preservation explanations spring to mind) where we currently prove things but could potentially just enjoy their truth, if only we knew how to be more deliberately structure-respecting.

view this post on Zulip Conor McBride (Mar 25 2021 at 14:53):

I clearly need to read the markdown instructions more carefully. Sorry.

view this post on Zulip Jacques Carette (Mar 25 2021 at 15:44):

You need two dollars for LaTeX. You are able to edit your posts too.

view this post on Zulip Jacques Carette (Mar 25 2021 at 15:47):

@James Wood I was alluding to the bundling/unbundling "problem", that is common to any kind of structure where you may want to expose certain fields as fixed on the interface, some of the time but not all the time.

view this post on Zulip Jacques Carette (Mar 25 2021 at 15:57):

Reminder to @Conor McBride that those are exactly Joyal's $\mathbb{L}$-species, covered quite well in chapter 5 of the Bergeron, Labelle, Leroux textbook. It's no wonder you seek these, as they have all sorts of beautiful properties.

view this post on Zulip Conor McBride (Mar 25 2021 at 15:57):

Thanks!

view this post on Zulip Tom Hirschowitz (Mar 25 2021 at 16:17):

But can't you do that in the Fiore-Plotkin-Turi setting? Functoriality in $n$ yields in particular a map $\mathbf{T}(I)(n) \to \mathbf{T}(I)(n+1)$, doesn't it? I'm probably misunderstanding your post, sorry!

view this post on Zulip Cody Roux (Mar 25 2021 at 16:55):

Conor McBride said:

Our starting point is polynomial functors on Set. The data are given by diagrams O <- S <- P -> I, giving rise to functors between slices, Set/I -> Set/O.

I feel stupid, but can anyone explain the $O \leftarrow S \leftarrow P \rightarrow I$ yoga to me?

view this post on Zulip Spencer Breiner (Mar 25 2021 at 17:03):

Hi Cody! Start with a bundle $X \to I$ on the right. First pull back, then $\prod$, then $\sum$. That gives you a new bundle over $O$. That's a sum of products, hence "polynomial".

view this post on Zulip Spencer Breiner (Mar 25 2021 at 17:04):

Sorry, I think that's backwards. Start with the bundle over II.

view this post on Zulip Spencer Breiner (Mar 25 2021 at 17:05):

Fixed

view this post on Zulip Valeria de Paiva (Mar 25 2021 at 17:44):

Thanks for expanding, I have not read enough about containers (indexed, higher-order or any kind really) to make sensible observations. But I like the motivation and would like to understand it better. you say:

My motivation comes from dependently typed programming, where our notion of "inductive datatype" amounts to "proof-relevant relation", i.e. data consistent with some explicit indexing discipline (e.g. terms which are well typed and in scope by construction).

ok, this (proof-relevant relations) explains why profunctors.

These data structures are fixpoints of polynomial functors, but not in an exciting enough way, to my taste.

why? Being fixed points of a polynomial functor is good, and being so in a non-exciting way is also good: you're just setting up the scene, saying what's needed for whatever happens later. so, I don't understand

I despair of pointing out that structure happens to arise as some sort of outrageous coincidence, when in fact I put it there deliberately but couldn't say so.

but I guess I should go try to read the papers instead of simply going why, why, why like a broken record

view this post on Zulip Cody Roux (Mar 25 2021 at 18:16):

Oh I think I get it. Thanks @Spencer Breiner ! After all this time, I still find it hard to stare at diagrams instead of typing rules.

view this post on Zulip Cody Roux (Mar 25 2021 at 18:17):

@Valeria de Paiva @Conor McBride I think an example of the "extra work" that needs to be done might be revealing as well. Do you happen to have an Agda gist handy?

view this post on Zulip Conor McBride (Mar 25 2021 at 18:32):

I expect I could scrape something together. Watch this space.

When you live in type theory, you can turn lots of these arrows around and work dependently. Output sorts $O : \mathbf{Set}$, shapes $S : O \rightarrow \mathbf{Set}$ depend on what sort of container you're trying to build. Positions $P : (o : O) \rightarrow S\,o \rightarrow \mathbf{Set}$ say where you can put the stuff in the shape. And then you request $r : (o : O) \rightarrow (s : S\,o) \rightarrow P\,o\,s \rightarrow I$, the input sort of the data you want in each position. The pattern of dependency reflects the flow of time. You end up with a functor on indexed sets in $(I \rightarrow \mathbf{Set}) \rightarrow (O \rightarrow \mathbf{Set})$, mapping $X : I \rightarrow \mathbf{Set}$ and $o : O$ to $(s : S\,o) \times ((p : P\,o\,s) \rightarrow X\,(r\,o\,s\,p))$. That is, you choose one of the shapes available at sort $o$, then you fill up its positions with $X$ data of the input sorts requested.

What's bothering me is that the sorts $O$ and $I$ are here treated as having only discrete structure, but I want to build functors (then take fixpoints) between nontrivial presheaves.
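The dependent presentation can be sketched in Python too (my own illustration, hypothetical names): give $S$, $P$, $r$ pointwise, and the extension builds $(s : S\,o) \times ((p : P\,o\,s) \rightarrow X\,(r\,o\,s\,p))$, with the finite $\Pi$ encoded as a dict of fillers.

```python
from itertools import product

def extension(S, P, r, X, o):
    """Extension of the dependent container (S, P, r) at sort o:
    pairs of a shape s in S[o] and a filler assigning to each
    position p in P[o][s] an element of X[r(o, s, p)]."""
    out = []
    for s in S[o]:
        ps = P[o][s]
        for fillers in product(*(X[r(o, s, p)] for p in ps)):
            out.append((s, dict(zip(ps, fillers))))
    return out
```

With one sort, a positionless `leaf` shape, and a two-position `node` shape, this again computes one layer of binary trees, $1 + X^2$.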

view this post on Zulip Conor McBride (Mar 25 2021 at 19:54):

Here is an example in Agda of the presheaf structure of simply typed lambda terms (a functor from the dual of contexts with selection).

It turns out that in a past life, I made some progress. This rather gnarly and inscrutable lump of Agda gives a syntactic presentation of some polynomials which are presheaves by construction. But I'm keen to get my hands on the generic presentation.

view this post on Zulip John Baez (Mar 25 2021 at 20:09):

Conor McBride said:

I clearly need to read the markdown instructions more carefully. Sorry.

Just replace all your $'s with double dollars. Math is twice as expensive here.

view this post on Zulip Conor McBride (Mar 25 2021 at 20:12):

Looks like swearing in Asterix now.

view this post on Zulip Cody Roux (Mar 25 2021 at 20:27):

I hear some areas of category theory are only possible to understand with magic potions.

view this post on Zulip Jacques Carette (Mar 25 2021 at 21:52):

Conor McBride said:

Here is an example in Agda of the presheaf structure of simply typed lambda terms (a functor from the dual of contexts with selection).

At the bottom of that code, you say

But Agda doesn't let me construct functors
(Bwd Ty, <=) -> (Set, ->)/Ty
so I'm forced to play stupid, work in
|Bwd Ty| -> (Set, ->)/Ty
and then work hard to "discover" the functoriality
I already knew I wanted, in advance

But category theory wouldn't let you do that either; more precisely, it would, but give you exactly the same as Agda.

Question: why are you squishing all that information down into a category? Both on the left and on the right. This data all feels "higher dimensional" (whether it's monoidal, bicategories, double cat, whatever, I don't have the time to puzzle it out atm). And it seems that you're running into problems because you're squishing it down into something too small to make all the bits obvious.

view this post on Zulip Nathanael Arkor (Mar 25 2021 at 21:56):

Conor McBride said:

Here is an example in Agda of the presheaf structure of simply typed lambda terms (a functor from the dual of contexts with selection).

It turns out that in a past life, I made some progress. This rather gnarly and inscrutable lump of Agda gives a syntactic presentation of some polynomials which are presheaves by construction. But I'm keen to get my hands on the generic presentation.

I don't suppose you could elaborate what you feel is the shortcoming of the approach @Tom Hirschowitz described? It seems to me that this has the properties you are after.

view this post on Zulip Jacques Carette (Mar 25 2021 at 22:13):

@Nathanael Arkor I think you're using "this has all the properties you are after" very differently from @Conor McBride . Your notion seems to be that of "it's all obvious to a seasoned category theorist", while my understanding is that @Conor McBride wants "it's so obvious that even an interactive theorem prover plainly sees it without needing extra proofs." Those two are radically different things! [At least, that's my understanding...]

view this post on Zulip Conor McBride (Mar 25 2021 at 22:16):

Jacques is right. I want the machine to see this is obviously the right thing and has all these properties, without me lifting a finger beyond giving the structure of the data. Moreover, I want the machine to deploy all these properties, without me lifting a finger at all. And I shall make it so.

view this post on Zulip Cody Roux (Mar 25 2021 at 22:28):

I'm going crazy, but I remember a paper from years ago where you could prove univalence for specific types of inductives defined using polynomial functors. I think it was Coquand?

For the life of me, I can't find the paper though.

view this post on Zulip Cody Roux (Mar 25 2021 at 22:29):

It seems to me that one might want to apply univalence here, at least to transport isos. Not sure I'm making any sense though.

view this post on Zulip Conor McBride (Mar 25 2021 at 22:35):

Function extensionality is certainly indispensable in this cause. Univalence is not the only way to obtain function extensionality. But for sure, univalence would improve what's mechanically provable in this direction.

view this post on Zulip Cody Roux (Mar 25 2021 at 22:39):

I mean, in this case, univalence was a theorem: every "reasonable" datatype satisfies some form of transport.

view this post on Zulip Cody Roux (Mar 25 2021 at 22:40):

eh, maybe I'm mis-remembering.

view this post on Zulip Conor McBride (Mar 25 2021 at 22:42):

That sounds right. It's a crucial link in the story. Replace "match on refl" transport by per-type transport. I'm in the business of noticing when that's possible under weaker assumptions than on-the-nose equality.

view this post on Zulip Nathanael Arkor (Mar 25 2021 at 22:44):

@Jacques Carette: ah, I see, thanks. I remember being frustrated with similar shortcomings when I was using Coq, so I would be very happy to see progress on this front too :)

view this post on Zulip Jacques Carette (Mar 25 2021 at 22:45):

You still have to be careful. (Meant as a reply to @Cody Roux)

data Thing = Boom | Bam
data Other = Red | Bicycle

are entirely equivalent as types, and there's no good reason (in a univalent setting) to distinguish Thing from Other. But, well, there are two isomorphisms. And which you chose does matter. You have to be careful that you're not just moving the deck chairs around.

view this post on Zulip Cody Roux (Mar 25 2021 at 22:48):

Sure, but isn't the whole business of this brave new type theory to always keep the isos around?

view this post on Zulip Conor McBride (Mar 25 2021 at 22:49):

What's worse, there might be no reason whatsoever to consider these types cognate, despite the outrageous coincidence in the number of bits required to represent their values.

view this post on Zulip Conor McBride (Mar 25 2021 at 23:04):

@Nathanael Arkor There's so much room for improvement. In most proof assistants based on type theory, the typechecker doesn't even know that concatenation of lists is associative, so you have to prove it and point it out all the time.
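The fact Conor mentions really is a perfectly ordinary induction, yet it must be stated and proved by hand. A sketch of what that proof obligation looks like in Lean 4 (illustrative; Lean's library already provides `List.append_assoc`):

```lean
theorem appendAssoc {α : Type} :
    ∀ (xs ys zs : List α), (xs ++ ys) ++ zs = xs ++ (ys ++ zs)
  | [], _, _ => rfl
  | x :: xs, ys, zs => congrArg (x :: ·) (appendAssoc xs ys zs)
```

The typechecker happily *uses* this once told, but nothing in the declaration of lists lets it see associativity unprompted, which is exactly the drudgery under discussion.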

view this post on Zulip Conor McBride (Mar 25 2021 at 23:28):

My take on being an intensional type theorist is that I structure the division of labour between people and machines. For reasons of taste, I prefer if it is easy to form simple mental models of what is "machine work" (definitely no human intervention needed) and what requires humans to deliver bits. The division of labour is expressed via the judgements: people provide terms but machines check that the terms are correctly composed up to some weak but decidable equality (that's the "disappointing accident"). The engineering of that disappointing accident is a subtle task that most mathematicians cannot be bothered with, and why should they? What's the point in worrying about the difference between "true" and "so true even a stupid computer can see it"?

But as I am, at heart, a sideshow performer ever in search of clockwork monstrosities with which to captivate my punters, I'm fascinated by the business of walking that line, and finding theory that makes richer classes of theorem require minimal machine time and no human time.

view this post on Zulip James Wood (Mar 26 2021 at 00:01):

Jacques Carette said:

there's no good reason (in a univalent setting) to distinguish Thing from Other.

We still have a judgemental inequality between them, which in practice is one we notice a lot.

view this post on Zulip Jacques Carette (Mar 26 2021 at 00:54):

@Cody Roux Yes, absolutely. But many want to downplay that as much as possible. To me, that is an error.

@James Wood That's kind of the trouble, isn't it? We're told they're the same and yet they're also not. And indeed, the whole point is that (as Conor says), they might not be cognate at all. Intent is the important part, outrageous coincidence is something that the compiler can take advantage of post facto. Our theory shouldn't thrust that upon us with no way to escape. Damn it, we went out of our way to write down our intent by using 2 different names!

view this post on Zulip Tom Hirschowitz (Mar 26 2021 at 06:00):

Conor McBride said:

Jacques is right. I want the machine to see this is obviously the right thing and has all these properties, without me lifting a finger beyond giving the structure of the data. Moreover, I want the machine to deploy all these properties, without me lifting a finger at all. And I shall make it so.

Thanks, and sorry for taking so long to get it. @Ambroise has implemented a generalisation of the Fiore-Plotkin-Turi approach in Unimath, so technically I'd say you can just push a button to get all those properties. But the downside is that you don't get an inductive datatype, it's implemented as a directed colimit. I guess this is not good enough for you?

view this post on Zulip James Wood (Mar 26 2021 at 08:20):

Probably AACMM20, section 10.3, would be a good starting point for the discussion about the Fiore-Plotkin-Turi work.

view this post on Zulip Tom Hirschowitz (Mar 26 2021 at 10:35):

I don't understand this:

By contrast, we are working in an implemented type theory where the encoding can
be understood as its own foundation without appeal to an external mathematical semantics.

which I guess means I should go read the paper.

view this post on Zulip James Wood (Mar 26 2021 at 19:43):

I also don't understand some bits like this.

view this post on Zulip Valeria de Paiva (Apr 01 2021 at 02:01):

hi, I'm still trying to understand the gnarly bit of Agda. I'm not good at this,
but 138 locs is definitely a little less daunting than all the papers written about containers since 2003. so you say:

I'll need simple types...

and this is good, we always do.

...and right-growing (i.e., backward) lists to put them in, to make contexts. -

sure, also fine. (don't know if the capital B is for backward or for the base category of the fibration p:E--> B, but this is fine)
Then stuff starts getting interesting because

Now the semisimplicial category.

(and I always thought that the list of variables with their dependent types was just a list.) sure, we can concatenate new pairs and contexts always grow, but it hadn't occurred to me to think of this as a "semisimplicial category". but fine. why is this

(It's secretly a bit vector.)

I have no idea, but maybe it's about the efficiency of stuff, I don't care.

The two CLUEs show that context extension is on-the-nose covariantly functorial.

works for me. as does

Now, here are well scoped well typed terms.

after a half a second hesitation on S >> T being the function space from S to T.
but then when we get to

The above is the fixpoint of a polynomial with blah, blah

I am lost.

I want nothing to do with the Fiore-Plotkin-Turi approach, if I'm reading right the passage indicated by James Wood as 10.3 of AACMM20. Because yes, I want my indexed category with de Bruijn encoding on the syntax, not presheaves. but it looks like reading of the previous version of AACMM20 is required.

view this post on Zulip Tom Hirschowitz (Apr 01 2021 at 07:02):

Valeria de Paiva said:

I want nothing to do with the Fiore-Plotkin-Turi approach, if I'm reading right the passage indicated by James Wood as 10.3 of AACMM20. Because yes, I want my indexed category with de Bruijn encoding on the syntax, not presheaves. but it looks like reading of the previous version of AACMM20 is required.

You seem to understand what got me puzzled earlier: would you please mind explaining in a bit more detail? What do you mean by having your indexed category with de Bruijn encoding on the syntax? What would you get with presheaves, an indexed category with de Bruijn encoding on presheaves?

view this post on Zulip Conor McBride (Apr 01 2021 at 10:13):

(don't know if the capital B is for backward or for the base category of the fibration p:E--> B, but this is fine)

I write Bwd for "backward": by programming convention established in the 1950s, "normal" lists grow by extending on the left, but the convention in logic/type theory is to write contexts as lists whose "local" end is on the right. I make the data in my code work like the data in my head.

When I say that morphisms in the semisimplicial category are secretly bit vectors, I'm making a combinatorial observation: the rows of Pascal's triangle sum to powers of two. When you embed a short context into a longer one by making insertions, you're effectively selecting the short context from the long one by giving one bit of information (select or discard) for each of the long context's entries. The values $\binom{m}{n}$ in Pascal's triangle document the number of length-$m$ bit vectors with population count (the number of 1s) $n$. It is missing a trick merely to count them: they have delightful compositional structure, really useful for working with de Bruijn terms. That's the structure being ignored when we treat the indices of the container only as discrete.
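These bit-vector morphisms (often called thinnings) compose by a simple merge. A Python sketch of my own, just to make the combinatorics concrete (names hypothetical):

```python
from itertools import product
from math import comb

def compose(theta, phi):
    """Compose thinnings: theta embeds an n-entry context into an
    m-entry one, phi embeds that m-entry context into a k-entry one;
    each is a bit vector (1 = select, 0 = discard) of its target's
    length. The composite keeps, among the k entries, those that phi
    keeps and that theta then keeps in turn."""
    kept = iter(theta)
    return [next(kept) if b else 0 for b in phi]

def thinnings(n, m):
    """All embeddings of an n-entry context into an m-entry one."""
    return [list(v) for v in product([0, 1], repeat=m) if sum(v) == n]
```

Here `len(thinnings(n, m))` is $\binom{m}{n}$, and summing over `n` recovers $2^m$, the "rows of Pascal's triangle sum to powers of two" observation.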

And when it comes to the polynomial...

I am lost.

...I'm not surprised, because it's an inside-out mess. The $O \leftarrow S \leftarrow P \rightarrow I$ presentation of polynomial functors is a terrible way to compute with any specific polynomial functor. The merit of the "$P$-first" presentation is in the abstract: thinking about polynomials in general, with as few dependencies/fibrations as you can get away with. The detail is inscrutable but not important: I felt obliged to calculate the polynomial corresponding to the functor whose least fixpoint yields the rather more intelligible inductive datatype definition, only to show that genuine OSPI-polynomials are indeed at work here.

Returning to the big picture, I work with machines and thus to a finer degree of pedantry than mathematical demonstrability. I'm interested in which of us (me or the machine) has to do the work to establish mathematical demonstrability.

In particular, I am looking for two related things. Firstly, I'm looking for a suitable interpretation of the $O \leftarrow S \leftarrow P \rightarrow I$ story in functor categories, yielding polynomial functors between presheaves. Secondly, I'm looking for a complete syntactic presentation of the same notion, from which the such-thattery required to establish presheaf structure has been eliminated. Concretely, I want to codify which datatype declarations exhibit presheaf structure a priori, so that I can obtain it deliberately and for but the effort of asking. It's kind of funny: I'm not trying to prove that one or two specific constructions yield presheaves; I'm exactly trying to stop proving that and just see it instead.

view this post on Zulip James Wood (Apr 01 2021 at 12:07):

I suppose the O←S←P→I presentation is a bit “if this is the answer, what is the question?”. P describes where you might find premises of a rule, while S tells you which rule that premise is in.

view this post on Zulip James Wood (Apr 01 2021 at 12:08):

Maybe fun and arg would be clearer as app left and app right, or something.

view this post on Zulip Jacques Carette (Apr 01 2021 at 14:44):

Conor McBride said:

Returning to the big picture, I work with machines and thus to a finer degree of pedantry than mathematical demonstrability. I'm interested in which of us (me or the machine) has to do the work to establish mathematical demonstrability.

[...] Concretely, I want to codify which datatype declarations exhibit presheaf structure a priori, so that I can obtain it deliberately and for but the effort of asking. It's kind of funny: I'm not trying to prove that one or two specific constructions yield presheaves; I'm exactly trying to stop proving that and just see it instead.

This! So much this!

Computers are really good at eliminating drudgery, at least when we take the time to ask them nicely to do so. It does not happen by a miracle; we have to spot where things are fully automatable. But it's a question that I like to be asking constantly.

"What should I be getting for free here, if I were to ask in just the right way" yields all sorts of interesting dividends. Amusingly, it can sometimes force us to shift the focus back onto syntax: the semantics tells us there's automation to be had (the rest of @Conor McBride 's explanation is all about an important specific case), but if you don't set things up correctly, it won't be "free".

All the constructions of Universal Algebra are my current playground. Well, generalized UA - multi-sorted and indexed/parametrized. I'm endlessly amused that while the notion of "homomorphism" has been known since at least 1908, very few people seemed to bother with the fact that a "homomorphism" needs to live in a theory too. And while you can take a weird multi-sorted view where it can fit, it's very unnatural (right up there with the single-sorted view of $\mathbb{R}$-modules where you have to throw in a big infinity's worth of constants). And yet, in our theorem provers, writing down the definition of a homomorphism (say of Monoids) is super easy. But it 'lives' in a view of what a 'theory' is that is usually not spoken about. [Hmm, I should perhaps be starting my own thread in 'practice: our work' for this stuff...]

view this post on Zulip Valeria de Paiva (Apr 01 2021 at 16:39):

Tom Hirschowitz said:

You seem to understand what got me puzzled earlier: would you please mind explaining in a bit more detail? What do you mean by having your indexed category with de Bruijn encoding on the syntax? What would you get with presheaves, an indexed category with de Bruijn encoding on presheaves?

hi @Tom Hirschowitz . no deep understanding on my part, simply the shallow observation that I want *only* the categorical structure that I am forced to have, nothing more. My indexed categories come from work with Eike Ritter on categorical combinators for dependent type theories (https://www.sciencedirect.com/science/article/pii/0304397594001253) and there I need pullback for substitutions, $\Sigma$'s and $\Pi$'s, satisfying BCC (Beck-Chevalley) of course. the rest is up for grabs!

view this post on Zulip Valeria de Paiva (Apr 01 2021 at 17:25):

Many thanks @Conor McBride for indulging me with explanations!

you're of course right about the opposite traditions on the telescopes of dependent types, which I hadn't paid enough attention to.

and I learned lots from

I'm making a combinatorial observation: the rows of Pascal's triangle sum to powers of two. When you embed a short context into a longer one by making insertions, you're effectively selecting the short context from the long one by giving one bit of information (select or discard) for each of the long context's entries.[...] they have delightful compositional structure, really useful for working with de Bruijn terms.

I have survived so far without understanding de Bruijn terms properly and I hope to be able to carry on the same way. But I begin to see why you say

That's the structure being ignored when we treat the indices of the container only as discrete.

I do understand that a finer degree of pedantry is required for machines.

When we get to the crux of the matter

I'm looking for a suitable interpretation of the $O \leftarrow S \leftarrow P \rightarrow I$ story in functor categories, yielding polynomial functors between presheaves.

Great, as I am trying to understand exactly this. no one needs to tell me that polynomials are important in mathematics, this much I know. I am trying to understand why container morphisms are the way they are, as they're reminiscent of dialectica morphisms and I have a single-track mind.

Secondly, I'm looking for a complete syntactic presentation of the same notion, from which the such-thattery required to establish presheaf structure has been eliminated.

Excellent, as I thought this would be a great way of going about it! I have always held the belief that FPers have intuitions about syntax that mathematicians lack, and that uncovering these intuitions is the source of much real progress in maths.

view this post on Zulip Valeria de Paiva (Apr 02 2021 at 23:17):

and more thanks for

..I'm not surprised, because it's an inside-out mess. The $O \leftarrow S \leftarrow P \rightarrow I$ presentation of polynomial functors is a terrible way to compute with any specific polynomial functor. The merit of the "$P$-first" presentation is in the abstract: thinking about polynomials in general, with as few dependencies/fibrations as you can get away with. The detail is inscrutable but not important: I felt obliged to calculate the polynomial corresponding to the functor whose least fixpoint yields the rather more intelligible inductive datatype definition, only to show that genuine OSPI-polynomials are indeed at work here.

I'm afraid I had not noticed the O ← S ← P → I written out this way before, so now it makes much more sense!

view this post on Zulip David Spivak (Apr 07 2021 at 20:58):

I'm a little late here and haven't read all the discussion below; maybe this has been discussed. In his HoTTEST video, Garner showed that bimodules between polynomial functors are parametric right adjoints between the associated copresheaf categories. So for any categories $\mathcal{I}$ and $\mathcal{O}$, are the $(\mathcal{I},\mathcal{O})$-bimodules what you're looking for, @Conor? They are richer than profunctors $\mathcal{I} \to \mathcal{O}$. You can think of any such bimodule as an $\mathcal{O}$-indexed ducquery [disjoint union of conjunctive queries] on $\mathcal{I}$.

view this post on Zulip Conor McBride (Apr 07 2021 at 21:12):

This sounds good to me. I'll check it out. Inevitably, I'll be looking for a syntactification of the result, because my favourite proof is "by inspection", especially if I can teach a machine to do the inspection.

view this post on Zulip David Spivak (Apr 08 2021 at 15:15):

Another thing that might be useful is the notion of "full internal subcategory", which you can find in Jacobs 99. [Sorry if you're way ahead of me!] For polynomials p: Set-->Set, you consider the category In(p) whose objects are elements i : p(1) [positions/shapes] and whose morphisms are given by Hom(i,j) = Set( p[i], p[j] ), the set of functions from the [directions/positions] at i to those at j. This In(p) thing comes with a fully faithful functor to Set.

This thingy is fairly prominent in Clive's thesis about Natural Models, but I think perhaps it should be generalized to what I'm calling "p-concrete categories", which are categories C equipped with an identity-on-objects and faithful functor C-->In(p).

Anyway, if any of this looks like what you're looking for, let me know!
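For a finite polynomial, In(p) is small enough to write down directly. A Python sketch of my own (hypothetical names): represent p by mapping each shape to its set of directions, so objects are shapes and Hom(i, j) is the full function set p[i] → p[j].

```python
from itertools import product

def In(p):
    """Full internal subcategory of a finite polynomial p, given as a
    dict mapping each shape to its list of directions. Returns the
    objects and, for each pair (i, j), the hom-set: every function
    from p[i] to p[j], encoded as a dict of direction assignments."""
    objects = list(p)
    homs = {
        (i, j): [dict(zip(p[i], image))
                 for image in product(p[j], repeat=len(p[i]))]
        for i in objects for j in objects
    }
    return objects, homs
```

For p(X) = 1 + X², with a positionless shape `const` and a two-direction shape `pair`, Hom(pair, pair) has 2² functions, Hom(const, pair) is the unique empty function, and Hom(pair, const) is empty.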