Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.


Stream: theory: category theory

Topic: Constant dependent function


view this post on Zulip Bruno Gavranović (Mar 03 2023 at 19:30):

There's several ways to characterise a constant morphism $c : B \to C$ in category theory. For instance, as a morphism factoring through a terminal object, or as a morphism with the property that for every other $A$, at most one morphism $A \to C$ factors through it. In the category $\mathbf{Set}$ this reduces to a constant function.

I'm interested in the analogous property of dependent functions. That is, I want to understand when a dependent function $(a : A) \to B(a)$ is constant, for some suitable meaning of this word. I come at this question from studying irrelevance/erasure in type theory, where we study maps like $(0\, a : A) \to B(a)$ - meaning the $a$ can only influence the type of the codomain, but not the value.

Is something known about a possible characterisation of such functions, analogous to the characterisations of the non-dependent case?

view this post on Zulip Bruno Gavranović (Mar 03 2023 at 19:31):

I should also add that for $(0\, a : A) \to B(a)$ I'm specifically interested in cases when $B : A \to \mathbf{Set}$ itself is not constant - otherwise the whole situation trivialises.

view this post on Zulip Bruno Gavranović (Mar 03 2023 at 19:36):

Here's one example of such a function (implemented in Idris 2):

-- a may appear in the type of the result (Vect n a),
-- but the value (0 ** Nil) is built without ever inspecting a
f : (0 a : Type) -> (n : Nat ** Vect n a)
f _ = (0 ** Nil)

view this post on Zulip Bruno Gavranović (Mar 03 2023 at 20:04):

Another way to think about this is that in the non-dependent case, for all $b, b' : B$ it holds that $c(b) = c(b')$.
But generalising this to the dependent case grinds to a halt - the results $c(b) : B(b)$ and $c(b') : B(b')$ are of different types!

It looks like we might need to use some notion of heterogeneous equality. However, on nLab the part about its categorical semantics is unfortunately empty :face_with_diagonal_mouth:
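
A minimal sketch of one way to phrase this in Idris 2 (the name IsConstant is illustrative, not from the thread; it leans on the fact that the built-in Equal type former is heterogeneous, relating elements of two possibly different types, even though Refl only ever proves the homogeneous case):

IsConstant : (a : Type) -> (b : a -> Type) -> ((x : a) -> b x) -> Type
IsConstant a b f = (x : a) -> (y : a) -> Equal (f x) (f y)

-- stating constancy this way is easy; proving it for a given f is exactly
-- where the heterogeneity bites, since Refl only applies once b x and b y agree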

view this post on Zulip Ryan Wisnesky (Mar 03 2023 at 22:53):

the exact types for which axiom K (https://ncatlab.org/nlab/show/axiom+K+%28type+theory%29) contradicts univalence are studied in homotopy type theory. the categorical semantics of heterogeneous equality may just be under-developed; here's a setoid model of 'observational type theory', which admits axiom K: https://hal.inria.fr/hal-03367052/document

view this post on Zulip Josselin Poiret (Mar 04 2023 at 08:47):

Unfortunately, if you want to study the semantics of such functions, we need to understand what your categorical semantics for dependent types are first! Are you working with choices of fibrations in some category $C$?

view this post on Zulip Josselin Poiret (Mar 04 2023 at 09:30):

you might want to look into shape-irrelevance: basically, if you want to be able to form (0 a : A) → B a, B must already be shape-irrelevant in its argument. This means that for two unrelated inputs x and y, B x and B y are related, and for types in the universe this roughly means that they support some heterogeneous equality. Once you have that heterogeneous equality, you can state what being constant is!

view this post on Zulip Josselin Poiret (Mar 04 2023 at 09:31):

for some references, you can look into paramdtt (https://dl.acm.org/doi/10.1145/3110276) or Andreas Nuyts' thesis in general.

view this post on Zulip Josselin Poiret (Mar 04 2023 at 09:33):

see also https://arxiv.org/abs/1706.04383 for a model

view this post on Zulip Bruno Gavranović (Mar 04 2023 at 16:12):

Josselin Poiret said:

Unfortunately, if you want to study the semantics of such functions, we need to understand what your categorical semantics for dependent types are first! Are you working with choices of fibrations in some category $C$?

I'm working with what I thought was the categorical semantics for dependent types, which is the one given by slice categories, i.e. where objects in $\mathcal{C}/A$ are interpreted as types depending on $A$. Actually, the whole reason I'm asking this question is because I'm trying to understand categorical semantics of erased dependent types.
In other words, if a morphism between indexed types $X, Y : A \to \mathbf{Set}$, i.e. a map of type $(a : A) \to X(a) \to Y(a)$, corresponds to a morphism in $\mathcal{C}/A$, then what does an analogous zero-annotated morphism $(0\, a : A) \to X(a) \to Y(a)$ correspond to? Can it be expressed as a map in some slice category?

view this post on Zulip Reid Barton (Mar 04 2023 at 18:47):

I don't think it means anything in the absence of extra structure (perhaps on C, or on X and Y).

view this post on Zulip Reid Barton (Mar 04 2023 at 18:55):

For example take C = Set and A = 2. Suppose I have a random type family presented by a map $X \to 2$. What does it mean for a section of it to be "constant"?

view this post on Zulip Reid Barton (Mar 04 2023 at 18:56):

I can independently replace the two fibers with isomorphic sets without changing the original map (up to isomorphism), so there doesn't seem to be any nontrivial way to define it.

view this post on Zulip Reid Barton (Mar 04 2023 at 18:58):

On the other hand if you represent your family by a function from $2$ to the class of all ZFC sets, then you can say what it means: it means the two values of the function are equal (as ZFC sets).
Categorically this means we gave extra data: not just two sets, but also their intersection.
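
To make that concrete (the particular sets below are only an illustration): take $B(0) = \{\emptyset,\ \{\emptyset\}\}$ and $B(1) = \{\emptyset,\ \{\{\emptyset\}\}\}$, with the section $s(0) = s(1) = \emptyset$. This $s$ is "constant" in the global ZF sense because $B(0) \cap B(1) = \{\emptyset\}$. Replacing $B(1)$ by the isomorphic set $\{\{\{\emptyset\}\},\ \{\{\{\emptyset\}\}\}\}$, which shares no elements with $B(0)$, gives an isomorphic family over $2$ in which no section is constant in this sense.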

view this post on Zulip Reid Barton (Mar 04 2023 at 19:11):

I believe similar situations (e.g. "phase distinctions") have been analyzed using modalities; I'm not sure of the exact relationship to your question though.

view this post on Zulip Reid Barton (Mar 04 2023 at 19:22):

In Idris, can you do anything with one of these (0 a : A) -> B a functions that you can't do with an ordinary function?

view this post on Zulip Josselin Poiret (Mar 04 2023 at 20:00):

Bruno Gavranovic said:

Josselin Poiret said:

Unfortunately, if you want to study the semantics of such functions, we need to understand what your categorical semantics for dependent types are first! Are you working with choices of fibrations in some category $C$?

I'm working with what I thought was the categorical semantics for dependent types, which is the one given by slice categories, i.e. where objects in $\mathcal{C}/A$ are interpreted as types depending on $A$. Actually, the whole reason I'm asking this question is because I'm trying to understand categorical semantics of erased dependent types.
In other words, if a morphism between indexed types $X, Y : A \to \mathbf{Set}$, i.e. a map of type $(a : A) \to X(a) \to Y(a)$, corresponds to a morphism in $\mathcal{C}/A$, then what does an analogous zero-annotated morphism $(0\, a : A) \to X(a) \to Y(a)$ correspond to? Can it be expressed as a map in some slice category?

depending on who you ask, this is not well behaved: the action of substitutions on dependent types is not functorial! Also, you might ask for your dependent types to not just be morphisms $X \to A$ but fibrations in some specific model category/category of fibrations/tribes/etc.

view this post on Zulip Sam Speight (Mar 04 2023 at 20:01):

Maybe I am missing something, but if $X : \Gamma.X \rightarrow \Gamma$ is a dependent type, then isn't a term (section) $t : \Gamma \rightarrow \Gamma.X$ constant if there exists a morphism (substitution) $f : 1 \rightarrow \Gamma$ and a section $c : 1 \rightarrow 1 \times_\Gamma \Gamma.X$ of $f^*X : 1 \times_\Gamma \Gamma.X \rightarrow 1$ such that $t = \pi_2\, c\, !_\Gamma$.

view this post on Zulip Josselin Poiret (Mar 04 2023 at 20:02):

like @Reid Barton mentioned above, most categorical semantics of type theory is data, you have to choose how to represent the different moving parts of the type theory using category theory

view this post on Zulip Reid Barton (Mar 04 2023 at 20:04):

Sam Speight said:

Maybe I am missing something, but if $X : \Gamma.X \rightarrow \Gamma$ is a dependent type, then isn't a term (section) $t : \Gamma \rightarrow \Gamma.X$ constant if there exists a morphism (substitution) $f : 1 \rightarrow \Gamma$ and a section $c : 1 \rightarrow 1 \times_\Gamma \Gamma.X$ of $f^*X : 1 \times_\Gamma \Gamma.X \rightarrow 1$ such that $t = \pi_2\, c\, !_\Gamma$.

$\pi_2\, c\, !_\Gamma$ will not be a section (unless $\Gamma$ is a singleton). After composing with $X$, everything is sent to the image of $c$.

view this post on Zulip Josselin Poiret (Mar 04 2023 at 20:12):

Reid Barton said:

In Idris, can you do anything with one of these (0 a : A) -> B a functions that you can't do with an ordinary function?

I think it's rather the opposite: just like with parametricity, you have fewer irrelevant functions, and you can deduce theorems for them either meta-theoretically or internally if you have some internal parametricity theorem (which I don't think is the case in Idris). I guess the meta-theoretical results can be important to deduce optimizations like erasure when extracting code

view this post on Zulip Sam Speight (Mar 04 2023 at 20:30):

Reid Barton said:

Sam Speight said:

Maybe I am missing something, but if $X : \Gamma.X \rightarrow \Gamma$ is a dependent type, then isn't a term (section) $t : \Gamma \rightarrow \Gamma.X$ constant if there exists a morphism (substitution) $f : 1 \rightarrow \Gamma$ and a section $c : 1 \rightarrow 1 \times_\Gamma \Gamma.X$ of $f^*X : 1 \times_\Gamma \Gamma.X \rightarrow 1$ such that $t = \pi_2\, c\, !_\Gamma$.

$\pi_2\, c\, !_\Gamma$ will not be a section (unless $\Gamma$ is a singleton). After composing with $X$, everything is sent to the image of $c$.

Oh yea now I see the nuance of the problem.

view this post on Zulip Bruno Gavranović (Mar 05 2023 at 18:58):

Josselin Poiret said:

depending on who you ask, this is not well behaved: the action of substitutions on dependent types is not functorial! Also, you might ask for your dependent types to not just be morphisms $X \to A$ but fibrations in some specific model category/category of fibrations/tribes/etc.

Uh, yes that's what I meant! Sorry, I should've been more precise in saying that I'm thinking about not just the category $\mathcal{C}/A$ but the general assignment $A \mapsto \mathcal{C}/A$, which on morphisms is defined by pullback. Does that answer your question now?

view this post on Zulip Josselin Poiret (Mar 06 2023 at 09:08):

Bruno Gavranovic said:
Uh, yes that's what I meant! Sorry, I should've been more precise in saying that I'm thinking about not just the category $\mathcal{C}/A$ but the general assignment $A \mapsto \mathcal{C}/A$, which on morphisms is defined by pullback. Does that answer your question now?

Oh, but that's what I understood from what you initially said! If you do consider that there is just a set of types in a given context (like CwFs), this doesn't work because pullback is not strictly functorial: pulling back by $f$ and then $g$ is only isomorphic to pulling back by $gf$, so you get a pseudofunctor only. You can make this structure into something strict, but then the types are a little different; you can have a look at the last section of Awodey's natural models paper, where this splitting is mentioned.

view this post on Zulip Bruno Gavranović (Mar 06 2023 at 12:55):

Ah, I see! Does this strictness impact what the answer is about categorical semantics of constant dependent functions?

view this post on Zulip Bruno Gavranović (Mar 06 2023 at 12:56):

And thanks for all the links and answers! I need to go through them a bit more carefully, and think about how to respond

view this post on Zulip Bruno Gavranović (Mar 06 2023 at 12:59):

Reid Barton said:

In Idris, can you do anything with one of these (0 a : A) -> B a functions that you can't do with an ordinary function?

I pretty much agree with @Josselin Poiret's answer. The goal isn't to be able to do more - here (0 a : A) -> B a is strictly a subset of (a : A) -> B a, so anything we can do with erased maps, we can do with non-erased maps. But if we know some of our maps are erased, then it's very useful to be able to lift this information to the type-level. In my case, I understand the set-theoretic case of erasure, and I'm wondering how to generally express it in a model of dependent type theory.

I thought there was only one, given by the fibration of slice categories with pullback, but now I'm learning there's more, and that there's strictness issues as well.

view this post on Zulip Josselin Poiret (Mar 06 2023 at 13:12):

Bruno Gavranovic said:

Ah, I see! Does this strictness impact what the answer is about categorical semantics of constant dependent functions?

well, if you're looking for semantics of erasure in some specific model, that model has to exist first :) but in general, at the level of natural models/CwFs/etc... erasure most likely has to be added as additional stuff on top of it, so there might even be multiple possible semantics for constant dependent functions

view this post on Zulip Bruno Gavranović (Mar 06 2023 at 13:32):

I see. What's baffling me is that for non-dependent maps there's such a simple categorical property expressing constantness. I don't understand where all the complexity comes from in the dependent case.

I haven't yet given up on the possibility that there might be an analogously simple one for the dependent case.

view this post on Zulip Sam Speight (Mar 06 2023 at 13:56):

Isn't the crux of this that the (output) type of a constant term (function) does not vary with its input? So there's an oxymoron.

view this post on Zulip Reid Barton (Mar 06 2023 at 16:44):

Certainly that is the case if you believe that a "value" (whatever that is) can only have a single type.

view this post on Zulip Sam Speight (Mar 06 2023 at 16:57):

Right. Is that not the case with erased types?

view this post on Zulip Reid Barton (Mar 06 2023 at 16:59):

Bruno Gavranovic said:

The goal isn't to be able to do more - here (0 a : A) -> B a is strictly a subset of (a : A) -> B a, so anything we can do with erased maps, we can do with non-erased maps. But if we know some of our maps are erased, then it's very useful to be able to lift this information to the type-level. In my case, I understand the set-theoretic case of erasure, and I'm wondering how to generally express it in a model of dependent type theory.

It seems to me that it is not in fact tangibly useful. It is useful in some "meta" way.
If you have a nondependent function, you can define internally what it means for it to be constant. And if you have a function that you know is constant, you can deduce internally that its values on any two inputs are equal.
By contrast, if there is nothing special you can do internally with a "constant dependent function", that suggests that it cannot be defined internally, which is pretty close to saying that you cannot expect to define what it means for a dependent function to be constant in a general model of dependent type theory.

view this post on Zulip Reid Barton (Mar 06 2023 at 17:00):

Sam Speight said:

Right. Is that not the case with erased types?

I think the intuition that the question comes from is that there is some kind of data or program represented by the terms, such as a function in the untyped lambda calculus, that might happen to be literally equal for different values of the input, even though the terms that it represents have different types.

view this post on Zulip Reid Barton (Mar 06 2023 at 17:02):

Such as: the empty list, regarded as lists of various types.

view this post on Zulip Reid Barton (Mar 06 2023 at 17:04):

In math I guess there might be different opinions about whether "the empty list of integers" and "the empty list of reals" are the "same object" or not, or whether the question is sensible in the first place.

view this post on Zulip Spencer Breiner (Mar 06 2023 at 17:14):

I think there is also a good argument for, e.g., the zero vector of an f.d. vector space (now dependent over $\mathbb{N}$).

A more border-line case would be something like the uniform distribution on a finite set.

view this post on Zulip Spencer Breiner (Mar 06 2023 at 17:20):

My intuitions seem to revolve around naturality. Bumping $\mathbb{N}$ up to the simplex category makes $0_n \in V^n$ natural, but uniformity only seems to play nicely with the degeneracy maps (marginalization).

view this post on Zulip Ryan Wisnesky (Mar 06 2023 at 20:59):

As Josselin said, parametricity allows one to prove, for example, that there is only one inhabitant of the type "forall X, X -> X"... perhaps parametricity for dependent types would help here? cf https://libres.uncg.edu/ir/asu/f/Johann_Patricia_2014_A_Relationally_Parametric_Model_Of_Dependent_Type_Theory..pdf for such a model
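
For reference, a sketch of that type in Idris 2 with the type argument erased (the name polyId is mine; the uniqueness of its inhabitant is a meta-theorem about the language, not something provable inside it):

-- the erased type argument can shape the type, but the body can only return its input
polyId : (0 x : Type) -> x -> x
polyId _ v = v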

view this post on Zulip dusko (Mar 07 2023 at 20:32):

Bruno Gavranovic said:

There's several ways to characterise a constant morphism $c : B \to C$ in category theory. For instance, as a morphism factoring through a terminal object, or as a morphism with the property that for every other $A$, at most one morphism $A \to C$ factors through it. In the category $\mathbf{Set}$ this reduces to a constant function.

I'm interested in the analogous property of dependent functions. That is, I want to understand when a dependent function $(a : A) \to B(a)$ is constant, for some suitable meaning of this word. I come at this question from studying irrelevance/erasure in type theory, where we study maps like $(0\, a : A) \to B(a)$ - meaning the $a$ can only influence the type of the codomain, but not the value.

Is something known about a possible characterisation of such functions, analogous to the characterisations of the non-dependent case?

[i am probably going to demonstrate ignorance, but i always preach to the students that demonstrating ignorance is an OK path to learning, so let me practice what i preach.]

the way i would answer this question to myself is:

** a function $c : B \to C$ is constant if it factors through $! : B \to 1$.

** IF the types $B, C$ depend on $A$, ie if they are in the form $(x : A) \vdash B$ and $(x : A) \vdash C$
** THEN a function $(x : A) \vdash c : B \to C$ is constant if it factors through $(x : A) \vdash\ ! : B \to 1$.

semantically, the whole idea of dependent types was to consider

** indexed sets $\beta : A \to \mathcal{S}$ in $\mathcal{S}^A$
** their comprehensions $\{\beta\} \to A$ in $\mathcal{S}/A$

as the BHK-identification of

** $\beta(a) : \mathcal{S}$ = the set of proofs that $a : A$ satisfies the predicate $\beta$
**** (or brouwer would say "of constructions of $a : A$ satisfying $\beta$")

** the comprehension $\sum_{a : A}\{b(a) : \beta(a)\} \to A$ = collect the proofs of $\beta(a)$ in the fiber above $a : A$
**** (to get the constructive version of the comprehension $\{a : A \mid \ast : \beta(a)\} \hookrightarrow A$ from classical set theory).

martin-loef's point was that brouwer's point was that it only made sense talking about $a : A$ satisfying $\beta$ as a construction of a proof of $\beta(a)$, and there may be many different proofs, so the comprehension of a predicate $\beta$ is an $A$-indexed set of such constructions with proofs.

the IF-THEN definition of constant would then say that a function $f \in \mathcal{S}/A(B, C)$ is constant if it factors through the terminal object $id_A \in \mathcal{S}/A$. if $B \in \mathcal{S}/A$ interprets $(x : A) \vdash B$ and $id_A \in \mathcal{S}/A$ interprets $(x : A) \vdash 1$, then the interpretation of $(x : A) \vdash\ ! : B \to 1$ is the $B$ itself.

the fact that the substitutions along the pullback functors are not strict, ie that the indexed category corresponding to the fibration $Cod : \mathcal{S/S} \to \mathcal{S}$ is only a pseudofunctor, seems like a red herring, since the isomorphisms $(fg)^\ast \cong g^\ast f^\ast$ are unique, i.e. the transfer between the indexed sets and their comprehensions is coherent.

this coherence was the ground on which voevodsky stood when he claimed the univalence axiom.

i suppose i must have misinterpreted some of the notational updates which happened since i last looked. but it seems that the misinterpretation might at least be, erm... coherent? ou non?
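
(For what it's worth, the IF-THEN clause above can be written out in the slice $\mathcal{S}/A$ as follows, with the name $\bar{c}$ mine: $(x : A) \vdash c : B \to C$ is constant iff there is some $\bar{c} : 1 \to C$ in $\mathcal{S}/A$ with $c = \bar{c} \circ {!_B}$, where the terminal object $1$ of $\mathcal{S}/A$ is $id_A$ and $!_B : B \to 1$ is the unique map over $A$.)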

view this post on Zulip dusko (Mar 07 2023 at 20:47):

... and the way i (mis?)understood the irrelevance notations was that they were trying to enforce the uniqueness-of-proofs property on functions, which would capture the proofs $B(x) \to C(x)$ where the dependence on $x$ is irrelevant. which makes it
impossible to prove $(x : A) \vdash 1 \to 1$ irrelevantly
(where the singleton is martin-loef's truth). so even the unit type does not have irrelevant constants. but that was put forward as a feature, not a bug. so the problem of comprehending the irrelevant constants is the problem of having an irrelevant truth. (each instance of $(x : A) \vdash\ ! : 1$ remembers about which $x : A$ the truth is asserted, so it remains inexorably relevant. a nice property of the truth isn't it :)

view this post on Zulip Bruno Gavranović (Mar 08 2023 at 16:09):

dusko said:

the way i would answer this question to myself is:

** a function $c : B \to C$ is constant if it factors through $! : B \to 1$.

** IF the types $B, C$ depend on $A$, ie if they are in the form $(x : A) \vdash B$ and $(x : A) \vdash C$
** THEN a function $(x : A) \vdash c : B \to C$ is constant if it factors through $(x : A) \vdash\ ! : B \to 1$.

This is where I stopped. I meant to have $C$ depend on $B$, but $B$ itself in principle doesn't have to depend on $A$.
And then the question would be - what it means for the dependent function from $B$ to $C$ to be constant.

(I hope I accurately interpreted what you said)

view this post on Zulip dusko (Mar 08 2023 at 18:30):

Bruno Gavranovic said:

dusko said:

the way i would answer this question to myself is:

** a function $c : B \to C$ is constant if it factors through $! : B \to 1$.

** IF the types $B, C$ depend on $A$, ie if they are in the form $(x : A) \vdash B$ and $(x : A) \vdash C$
** THEN a function $(x : A) \vdash c : B \to C$ is constant if it factors through $(x : A) \vdash\ ! : B \to 1$.

This is where I stopped. I meant to have $C$ depend on $B$, but $B$ itself in principle doesn't have to depend on $A$.
And then the question would be - what it means for the dependent function from $B$ to $C$ to be constant.

i did read your question and noticed the asymmetry. that is why i thought it might be useful to remember the logical meaning of dependent types, which motivated their introduction, their use in programming, their inclusion in the theory of constructions. that is their meaning. the question "what is the suitable meaning of the word 'constant' for dependent types?" would normally be asked in the context of the standard meaning of dependent types.

if we change the meaning of a language, then it is entirely up to us what other meaning to replace it with. that is sometimes useful.

in this case, that is complicated by the fact that you specify the usual constant function from $B$ to $C$, and then the putative case with dependent types without $C$. the asymmetry of the question seems arbitrary. the type in which the putative constant function would output its constant value is not mentioned. whatever the answer to your question might be, a generalization of constant functions to $A$-dependent types should boil down to the special case of independent constant functions by taking $A = 1$. the first step might be to ask the question in a format that allows that.

view this post on Zulip Bruno Gavranović (Mar 09 2023 at 14:33):

Uh, maybe I misunderstood what you were saying. It sounded like you were talking about non-dependent functions $B \to C$ (with a source of dependence somewhere else), and I just wanted to point out that this is the function I wanted to be dependent. I don't care too much about maps into $B$, I'm fine with whatever makes sense for the theory.

view this post on Zulip Bruno Gavranović (Mar 09 2023 at 14:35):

Maybe I should add the main piece of motivation. I know that I can go back and forth between the category $\mathbf{Fam}(X)$ and $\mathbf{Set}/X$ (for a set $X$). (By $\mathbf{Fam}(X)$ I mean a category where the objects are indexed sets $A : X \to \mathbf{Set}$ and maps $A \to B$ are dependent functions $f : (x : X) \to A(x) \to B(x)$.)

Now, we've recently come across a similar category that's really important for us - one I'll call $\mathbf{Fam0}(X)$. It has the same objects, but a morphism $A \to B$ is now a zero-indexed map $f : (0\, x : X) \to A(x) \to B(x)$.

Now, we want to describe this category in settings other than $\mathbf{Set}$. We know how to do that for $\mathbf{Fam}$ - by turning it into a slice category - but we have no idea how to do the same for $\mathbf{Fam0}(X)$
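
For concreteness, a sketch of the two hom-types being compared, in Idris 2 (the names FamHom and Fam0Hom are mine):

-- Fam(X) and Fam0(X) have the same objects, families x -> Type;
-- only the multiplicity on the index in the hom-type changes
FamHom : (x : Type) -> (a : x -> Type) -> (b : x -> Type) -> Type
FamHom x a b = (i : x) -> a i -> b i

Fam0Hom : (x : Type) -> (a : x -> Type) -> (b : x -> Type) -> Type
Fam0Hom x a b = (0 i : x) -> a i -> b i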

view this post on Zulip Bruno Gavranović (Mar 09 2023 at 14:48):

Reid Barton said:

For example take C = Set and A = 2. Suppose I have a random type family presented by a map $X \to 2$. What does it mean for a section of it to be "constant"?

I can independently replace the two fibers with isomorphic sets without changing the original map (up to isomorphism), so there doesn't seem to be any nontrivial way to define it.

I follow the part about independently replacing two fibers with isomorphic sets. But I'm not sure how this gives rise to the conclusion "there doesn't seem to be any nontrivial way to define it"

view this post on Zulip Josselin Poiret (Mar 09 2023 at 14:51):

Bruno Gavranovic said:

Maybe I should add the main piece of motivation. I know that I can go back and forth between the category $\mathbf{Fam}(X)$ and $\mathbf{Set}/X$ (for a set $X$). (By $\mathbf{Fam}(X)$ I mean a category where the objects are indexed sets $A : X \to \mathbf{Set}$ and maps $A \to B$ are dependent functions $f : (x : X) \to A(x) \to B(x)$.)

Now, we've recently come across a similar category that's really important for us - one I'll call $\mathbf{Fam0}(X)$. It has the same objects, but a morphism $A \to B$ is now a zero-indexed map $f : (0\, x : X) \to A(x) \to B(x)$.

Now, we want to describe this category in settings other than $\mathbf{Set}$. We know how to do that for $\mathbf{Fam}$ - by turning it into a slice category - but we have no idea how to do the same for $\mathbf{Fam0}(X)$

what is this $\mathbf{Fam0}$ here?

view this post on Zulip Josselin Poiret (Mar 09 2023 at 14:51):

i mean, zero-indexed maps don't mean much to me, could you elaborate?

view this post on Zulip Bruno Gavranović (Mar 09 2023 at 14:55):

This is the same zero annotation that you used here:
Josselin Poiret said:

you might want to look into shape-irrelevance: basically, if you want to be able to form (0 a : A) → B a, B must already be shape-irrelevant in its argument. This means that for two unrelated inputs x and y, B x and B y are related, and for types in the universe this roughly means that they support some heterogeneous equality. Once you have that heterogeneous equality, you can state what being constant is!

view this post on Zulip Bruno Gavranović (Mar 09 2023 at 14:57):

We also have the definition of Fam0 in code (compare to the definition of ordinary Fam)

view this post on Zulip Reid Barton (Mar 09 2023 at 15:03):

That isn't a definition; you're saying "Fam0 is whatever (0 x : X) -> A x -> B x means", but also presumably using Fam0 to explain the meaning of (0 x : X)!

view this post on Zulip Reid Barton (Mar 09 2023 at 15:03):

It seems clear by now that there is no actual definition at hand here.

view this post on Zulip Reid Barton (Mar 09 2023 at 15:09):

I also suspect we may not agree on the meanings of some words like "set" and "category".

view this post on Zulip Reid Barton (Mar 09 2023 at 15:13):

I would agree with the following statement:
Suppose we are in some internal setting where, in addition to the standard formers of dependent type theory, we also have a primitive (= undefined) notion of "zero-indexed map".
Then, still internally to that setting, we can define a category Fam(X) whose objects are X-indexed families of sets and whose morphisms are said zero-indexed families of maps.
(Of course I would need to know what the rules are for manipulating zero-indexed maps to verify this statement, but I'm fine with taking this for granted.)

view this post on Zulip Reid Barton (Mar 09 2023 at 15:15):

In the "true" (ordinary, mathematical, classical) category of sets, there isn't any such notion of "zero-indexed map", but that doesn't prevent me from imagining other situations where there could be such a notion.

view this post on Zulip Reid Barton (Mar 09 2023 at 15:15):

Does that clarify some of the preceding discussion?

view this post on Zulip Bruno Gavranović (Mar 09 2023 at 15:34):

Reid Barton said:

It seems clear by now that there is no actual definition at hand here.

There is no categorical definition that works for any base category indeed - that's exactly what I'm after in this thread. But there is a definition internal to the category of Idris types and functions. It works, does interesting things, and it's the entire reason why I started this thread.

It sounds like the concept of zero-annotation is confusing, which I'd taken for granted it wasn't, since it's been discussed extensively in this thread already (and you provided some useful thoughts about it!).

What I mean when I write a map f : (0 x : X) -> Y x is that x can be used to form the type of the codomain, but it can't be used in the computation of the value of the codomain. A special case is when Y is itself constant (at some Z), then f : (0 x : X) -> Z reduces to a constant function.
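
A small sketch of both halves of that, in Idris 2 (the names are mine):

import Data.Vect

-- x (here n) may shape the type of the argument and result...
head0 : (0 n : Nat) -> Vect (S n) String -> String
head0 _ (v :: _) = v

-- ...and when the result type doesn't mention x at all,
-- (0 x : X) -> Z is just a constant function
five : (0 x : Nat) -> Nat
five _ = 5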

view this post on Zulip Bruno Gavranović (Mar 09 2023 at 15:34):

Reid Barton said:

In the "true" (ordinary, mathematical, classical) category of sets, there isn't any such notion of "zero-indexed map", but that doesn't prevent me from imagining other situations where there could be such a notion.

Sure there is? A function $f : (0\, x : X) \to Y$ in $\mathbf{Set}$ is simply a constant function - one which factors through the terminal object.

view this post on Zulip Reid Barton (Mar 09 2023 at 15:38):

I mean in the dependent case that you are interested in, obviously.

view this post on Zulip Josselin Poiret (Mar 09 2023 at 15:44):

Let's just step back for a bit: I don't think the zero-annotation is confusing for anyone here syntactically. What @Reid Barton and I have been saying is that semantically, you'll have to choose what zero-indexed pi types correspond to; there's no right answer here. The fact that for non-dependent functions the choice of "maps that factor through $1$" seems obvious doesn't mean it's the only one, nor that it would generalize nicely to dependent types.

view this post on Zulip Josselin Poiret (Mar 09 2023 at 15:44):

if you don't have anything special you can do internally with zero-indexed maps, then you can just take them to be usual maps semantically, nothing will break down

view this post on Zulip Josselin Poiret (Mar 09 2023 at 15:49):

what my initial messages were saying was: if you want to have some meaningful notion of dependently constant functions, you first need to have a notion of heterogeneous equality between different fibers, for any two points in the base. That means that you can't really take any dependent type, you need one which additionally has such structure. That's what shape-irrelevance is for: B : (@shirr A) → Set means that B is not a usual dependent type but rather shape-irrelevant, meaning that you can compare elements in B x and B y with no further assumption on x and y.

view this post on Zulip Josselin Poiret (Mar 09 2023 at 15:50):

this means that for (@0 a : A) → B a to even make sense, you need B : @shirr A → Set.

view this post on Zulip Josselin Poiret (Mar 09 2023 at 15:53):

this in turn means that to give semantic meaning to zero-indexed functions, you need to also give semantic meaning to shape-irrelevant dependent types! That's where the trouble starts

view this post on Zulip Bruno Gavranović (Mar 09 2023 at 15:58):

Reid Barton said:

I mean in the dependent case that you are interested in, obviously.

Is there not? Here's an example.
Let

$A : \mathbb{N} \to \mathbf{Set} \coloneqq n \mapsto \mathsf{Vect}\ n\ \mathbb{N}$

$B : \mathbb{N} \to \mathbf{Set} \coloneqq n \mapsto \mathsf{Vect}\ n\ 2$

Then I can write a function $f : (0\, n : \mathbb{N}) \to A(n) \to B(n)$ which doesn't use $n$ by choosing the implementation $\lambda\_ \mapsto \mathsf{map}\ \mathsf{isEven}$
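
The same example in Idris 2, for concreteness (Bool stands in for the two-element type, and isEven is defined by hand rather than assumed from a library):

import Data.Vect

isEven : Nat -> Bool
isEven Z = True
isEven (S k) = not (isEven k)

-- n is only used to form the types Vect n Nat and Vect n Bool;
-- the value is computed by map, which recurses on the vector itself
f : (0 n : Nat) -> Vect n Nat -> Vect n Bool
f _ = map isEven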

view this post on Zulip Bruno Gavranović (Mar 09 2023 at 16:02):

Josselin Poiret said:

if you don't have anything special you can do internally with zero-indexed maps, then you can just take them to be usual maps semantically, nothing will break down

The kind of setting I'm in is a dependent function $(a : A) \times \dots \to B(a)$ which has a number of arguments. I want to explicitly seize control over the values on which the output $B(a)$ depends. Sometimes I want to explicitly disallow the usage of $a$ in computing the value of the output, even if the type of the output depends on it.

view this post on Zulip Bruno Gavranović (Mar 09 2023 at 16:11):

Josselin Poiret said:

what my initial messages were saying was: if you want to have some meaningful notion of dependently constant functions, you first need to have a notion of heterogeneous equality between different fibers, for any two points in the base. That means that you can't really take any dependent type, you need one which additionally has such structure. That's what shape-irrelevance is for: B : (@shirr A) → Set means that B is not a usual dependent type but rather shape-irrelevant, meaning that you can compare elements in B x and B y with no further assumption on x and y.

I think we agree here. Heterogeneous equality is something I hypothesised about in the original post, indeed. I think at this point I understand this might be the key to going forward, but I'm stuck at understanding a) whether there even is an established categorical semantics to this, and if yes, b) where to find it.

You've provided many resources for me to read, but as I'm not well-versed in type theory I'm not having an easy time answering these questions by going through them - there's quite a lot of stuff! :)

So maybe I can ask directly: do these papers provide a categorical semantics here? Is there a particular theorem I should be looking at?

view this post on Zulip Sam Speight (Mar 09 2023 at 16:18):

You might be interested to read this paper of Atkey. The situation there is kind of opposite: they want to prevent types from depending on certain terms (this encompasses linear dependent type theories, where you do not want types to depend on linear variables, only cartesian ones). But maybe you will find the ideas useful. They present a semantics that takes a standard categorical semantics of dependent types (CwFs) and augments it, which it sounds like you'll need to do.

view this post on Zulip Bruno Gavranović (Mar 09 2023 at 16:29):

Ah, I know about this paper. I actually had some discussions with @Bob Atkey about this, but I'm not sure if we reached a conclusion. Last time he was telling me that erasure might be modellable with realisers a la Functors as Type Refinement Systems, but there were some issues in actually getting the machinery to work. This seems distinct from the ideas presented in the paper you linked, though, and it's still unclear to me what the connection is.

Of course, QTT has a lot of machinery to deal with arbitrary quantities, which isn't something that's needed for the purpose of studying only erasure. It's also unclear what the connection of these papers is with heterogeneous equality.

view this post on Zulip Josselin Poiret (Mar 09 2023 at 16:38):

Bruno Gavranovic said:

So maybe I can ask directly: do these papers provide a categorical semantics here? Is there a particular theorem I should be looking at?

what do you mean by categorical semantics? Let me provide an example to clarify the situation:

We don't know yet what kind of data is needed to interpret ParamDTT, but we do have a specific semantic model. If what you mean by categorical semantics is some structure on a category, like type-theoretic model categories, then I don't think we know.

[1] Awodey (2014) Natural models of homotopy type theory, arXiv.
[2] Shulman (2019) All $(\infty,1)$-toposes have strict univalent universes, arXiv.
[3] Nuyts (2018) A Model of Parametric Dependent Type Theory in Bridge/Path Cubical Sets, arXiv.

view this post on Zulip dusko (Mar 09 2023 at 20:36):

as far as i can tell, the process that we are following is something like this:
1) there is a syntactic framework (idris 2) which extends a standard language by nonstandard constructs.
2) we take a standard concept (constant functions) and ask what it means in the new syntactic framework and how to express that meaning.
3) we search literature of nonstandard constructs expected to explain each other.

if idris2 is replaced with an ontology, and constant functions with, say, "the nature of Trinity", you may notice that the process of reasoning described above is precisely the scholastic method of XIII century theology. which is what motivated francis bacon to suggest that we should occasionally go beyond syntactic frameworks and consult reality, by means of testing the existing assignments of meaning.

the slogan that motivated introducing the irrelevant typing was that it should characterize those terms where the typing can be erased without losing information. now it is possible to say that polymorphic constructs satisfy that requirement: an identity is an identity. but then the further details of the proposed syntax precluded the identity as an irrelevant term. (which is why i pointed out that 1 has no irrelevant inhabitants.) the next interpretation was that the irrelevantly typed terms might be those that diverge as untyped lambda-terms. that seemed like a sound interpretation. but then the question arises: which irrelevantly typed terms are constant? what is the vanishing point of all non-church-rosser terms?...

i am just pulling a possible theological interpretation off the top of my head. other theologists would pull other possible interpretations off the top of their heads, and prove them by providing authoritative references .

theological methods are fantastically appropriate when we look for meanings of unassigned signifiers. math is i think meant to do the opposite: to easily assign signifiers to some existing meanings which we are trying to better understand.

view this post on Zulip Reid Barton (Mar 10 2023 at 06:31):

Bruno Gavranovic said:

Reid Barton said:

I mean in the dependent case that you are interested in, obviously.

Is there not? Here's an example [...]

You gave an example of a formula and I agree that one can define what it means for a formula to not use a variable in value-relevant positions (subject to writing down precise rules of course).

But for 100+ years now, mathematicians have distinguished between formulas (syntax) and functions (semantics). The morphisms of the category of sets are functions, not formulas. What I'm claiming is that you cannot define anything sensible for functions.

view this post on Zulip Reid Barton (Mar 10 2023 at 06:37):

Of course, you can also think about semantics in some other category than the category of sets--and you had better do this, if you want a nontrivial meaning for (0 x : X) -> .

view this post on Zulip Mike Shulman (Mar 10 2023 at 07:22):

FWIW, if "set" means a set as in ZF, with a global membership predicate, then there is an induced notion of "heterogeneous equality" and we can define constant dependent functions that way. For instance, $f : (n : \mathbb{N}) \to \{ k \in \mathbb{N} \mid k \le n \}$ defined by $f(n) = 0$ for all $n$ would then be a "constant function" in the sense that the "0"s in each output set $\{ k \in \mathbb{N} \mid k \le n \}$ are literally equal in the global ZF sense. But this notion of global equality is not visible to the category of sets unless you equip it with extra structure.

view this post on Zulip Bruno Gavranović (Mar 10 2023 at 10:58):

Reid Barton said:

Bruno Gavranovic said:

Reid Barton said:

I mean in the dependent case that you are interested in, obviously.

Is there not? Here's an example [...]

You gave an example of a formula and I agree that one can define what it means for a formula to not use a variable in value-relevant positions (subject to writing down precise rules of course).

But for 100+ years now, mathematicians have distinguished between formulas (syntax) and functions (semantics). The morphisms of the category of sets are functions, not formulas. What I'm claiming is that you cannot define anything sensible for functions.

I'm not sure what you mean here by the distinction of formulas and functions. Are you saying $f$ isn't a function? I don't see how that can be, as it looks to me like a well-defined assignment of an output to every input.

view this post on Zulip Matteo Capucci (he/him) (Mar 10 2023 at 12:01):

dusko said:

as far as i can tell, the process that we are following is something like this:
1) there is a syntactic framework (idris 2) which extends a standard language by nonstandard constructs.
2) we take a standard concept (constant functions) and ask what it means in the new syntactic framework and how to express that meaning.
3) we search literature of nonstandard constructs expected to explain each other.

if idris2 is replaced with an ontology, and constant functions with, say, "the nature of Trinity", you may notice that the process of reasoning described above is precisely the scholastic method of XIII century theology. which is what motivated francis bacon to suggest that we should occasionally go beyond syntactic frameworks and consult reality, by means of testing the existing assignments of meaning.

the slogan that motivated introducing the irrelevant typing was that it should characterize those terms where the typing can be erased without losing information. now it is possible to say that polymorphic constructs satisfy that requirement: an identity is an identity. but then the further details of the proposed syntax precluded the identity as an irrelevant term. (which is why i pointed out that 1 has no irrelevant inhabitants.) the next interpretation was that the irrelevantly typed terms might be those that diverge as untyped lambda-terms. that seemed like a sound interpretation. but then the question arises: which irrelevantly typed terms are constant? what is the vanishing point of all non-church-rosser terms?...

i am just pulling a possible theological interpretation off the top of my head. other theologists would pull other possible interpretations off the top of their heads, and prove them by providing authoritative references .

theological methods are fantastically appropriate when we look for meanings of unassigned signifiers. math is i think meant to do the opposite: to easily assign signifiers to some existing meanings which we are trying to better understand.

This is just amazing Dusko :D

view this post on Zulip Matteo Capucci (he/him) (Mar 10 2023 at 12:04):

Bruno Gavranovic said:

I'm not sure what you mean here by the distinction of formulas and functions. Are you saying $f$ isn't a function? I don't see how that can be, as it looks to me like a well-defined assignment of an output to every input.

Every formula defining a function gives you an actual function (which is a morphism in $\mathbf{Set}$, i.e. a [[functional relation]]), but there are many formulas corresponding to the same function. We definitely care about this distinction in programming, where the same function can be represented by wildly different (both in intension and in runtime behaviour) programs.

view this post on Zulip Bruno Gavranović (Mar 10 2023 at 12:53):

Matteo Capucci (he/him) said:

Every formula defining a function gives you an actual function (which is a morphism in $\mathbf{Set}$, i.e. a [[functional relation]]), but there are many formulas corresponding to the same function. We definitely care about this distinction in programming, where the same function can be represented by wildly different (both in intension and in runtime behaviour) programs.

Okay, it sounds like formula here is referring to implementation details of a function. If so, then this is clear - there's many different ways to write a program implementing a function. But what does that have to do with the question of whether $f$ is zero-annotated?

view this post on Zulip Josselin Poiret (Mar 10 2023 at 13:36):

Bruno Gavranovic said:

I'm not sure what you mean here by the distinction of formulas and functions. Are you saying $f$ isn't a function? I don't see how that can be, as it looks to me like a well-defined assignment of an output to every input.

formulas are just part of a language used to talk about objects. You can dream up any language you want, with rules that constrain it, but as long as you don't have a way to map meaning onto this language, then it's nothing more than a pile of symbols

view this post on Zulip Josselin Poiret (Mar 10 2023 at 13:40):

Matteo Capucci (he/him) said:

Every formula defining a function gives you an actual function (which is a morphism in $\mathbf{Set}$, i.e. a [[functional relation]]), but there are many formulas corresponding to the same function. We definitely care about this distinction in programming, where the same function can be represented by wildly different (both in intension and in runtime behaviour) programs.

that's not even true, suppose your formula is in a language that doesn't have set semantics, eg. HoTT with ua

view this post on Zulip Matteo Capucci (he/him) (Mar 10 2023 at 13:48):

Bruno Gavranovic said:

Matteo Capucci (he/him) said:

Every formula defining a function gives you an actual function (which is a morphism in $\mathbf{Set}$, i.e. a [[functional relation]]), but there are many formulas corresponding to the same function. We definitely care about this distinction in programming, where the same function can be represented by wildly different (both in intension and in runtime behaviour) programs.

Okay, it sounds like formula here is referring to implementation details of a function. If so, then this is clear - there's many different ways to write a program implementing a function. But what does that have to do with the question of whether $f$ is zero-annotated?

There is no such thing as a 0-annotated function, only 0-annotated programs. The problem is that even if a function can be implemented by a 0-annotated function, it might also be implemented by ones that are not. For example the constant function $3 : \mathbb{N} \to \mathbb{N}$ can be implemented as $n \mapsto n^{10} \mapsto 3$, which is not a program of type $(0\, n : \mathbb{N}) \to \mathbb{N}$.

view this post on Zulip Matteo Capucci (he/him) (Mar 10 2023 at 13:48):

Josselin Poiret said:

Matteo Capucci (he/him) said:

Every formula defining a function gives you an actual function (which is a morphism in $\mathbf{Set}$, i.e. a [[functional relation]]), but there are many formulas corresponding to the same function. We definitely care about this distinction in programming, where the same function can be represented by wildly different (both in intension and in runtime behaviour) programs.

that's not even true, suppose your formula is in a language that doesn't have set semantics, eg. HoTT with ua

Sure... I was talking about a formula in a language which has been given a semantics in $\mathbf{Set}$

view this post on Zulip Josselin Poiret (Mar 10 2023 at 13:55):

Matteo Capucci (he/him) said:

There is no such thing as a 0-annotated function, only 0-annotated programs.

I'd rather say there's no 0-annotated function that corresponds to the specification OP has in mind

view this post on Zulip Matteo Capucci (he/him) (Mar 10 2023 at 13:55):

What do you mean?

view this post on Zulip Josselin Poiret (Mar 10 2023 at 13:55):

Matteo Capucci (he/him) said:

There is no such thing as a 0-annotated function, only 0-annotated programs. The problem is that even if a function can be implemented by a 0-annotated function, it might also be implemented by ones that are not. For example the identity function $\mathbb{N} \to \mathbb{N}$ can be implemented as $n \mapsto \sqrt{n^2}$, which is not a program of type $(0\, n : \mathbb{N}) \to \mathbb{N}$.

the identity function isn't 0-annotated by any means

view this post on Zulip Matteo Capucci (he/him) (Mar 10 2023 at 13:56):

Ah, of course lol

view this post on Zulip Matteo Capucci (he/him) (Mar 10 2023 at 13:56):

Let me edit

view this post on Zulip Josselin Poiret (Mar 10 2023 at 13:56):

Matteo Capucci (he/him) said:

What do you mean?

well Mike above outlined a naive interpretation of 0-annotated function, it's just not what the op is looking for

view this post on Zulip Josselin Poiret (Mar 10 2023 at 13:56):

you could also say that 0-annotated functions are just functions

view this post on Zulip Matteo Capucci (he/him) (Mar 10 2023 at 13:57):

Yeah

view this post on Zulip Matteo Capucci (he/him) (Mar 10 2023 at 14:06):

Perhaps what Bruno meant to ask is what are the categorical facilities allowing for interpreting graded dependent type theories. Additionally, what are the categorical properties we expect the class of 0-annotated morphisms to have.

For instance I expect that if our category is structured in some way (say, monoidal) then the structure morphisms are 0-annotated. Like we wouldn't expect the associator to be considered as 'using' its arguments. To me this reminds me of the zeroth Hochschild homology of categories (the endomorphisms of the identity)...

view this post on Zulip Bruno Gavranović (Mar 10 2023 at 14:12):

Matteo Capucci (he/him) said:

There is no such thing as a 0-annotated function, only 0-annotated programs. The problem is that even if a function can be implemented by a 0-annotated function, it might also be implemented by ones that are not. For example the constant function $3 : \mathbb{N} \to \mathbb{N}$ can be implemented as $n \mapsto n^{10} \mapsto 3$, which is not a program of type $(0\, n : \mathbb{N}) \to \mathbb{N}$.

That's a different matter. I'm talking about functional dependence. Denotational, if you will.
Your second function also doesn't use the $n$. You're doing something with the input, but then discarding the result, which is equivalent to discarding the input, by the property of the terminal object in $\mathbf{Set}$.

view this post on Zulip Bruno Gavranović (Mar 10 2023 at 14:13):

I'll grant you that there's a number of complexities that arise once you start thinking about operational matters, but that's not what I'm doing here. I'm merely asking whether jiggling the input jiggles the output. In both of your cases it doesn't.

view this post on Zulip Josselin Poiret (Mar 10 2023 at 14:15):

i think the notation 0 is unfortunate here, I'd rather talk about irrelevance. I don't think it has anything to do with graded stuff, at least not trivially, and I would expect the structure maps of some monoidal category to not be irrelevant in their domain

view this post on Zulip Josselin Poiret (Mar 10 2023 at 14:17):

this discussion has been going for quite a while, but I think everything I've mentioned is still relevant, it's just highly non-trivial. For irrelevant dependent function types, you first need shape irrelevance, and so you need to choose some semantics for both of these.

view this post on Zulip Josselin Poiret (Mar 10 2023 at 14:18):

(note that non dependent types would automatically be shape-irrelevant by whatever rules govern shape irrelevance, hence why your example seems to work well)

view this post on Zulip Bruno Gavranović (Mar 10 2023 at 14:37):

Josselin Poiret said:

this discussion has been going for quite a while, but I think everything I've mentioned is still relevant, it's just highly non-trivial. For irrelevant dependent function types, you first need shape irrelevance, and so you need to choose some semantics for both of these.

Right. I should probably answer your previous question
Josselin Poiret said:

what do you mean by categorical semantics?

What I mean is that you used things like B : (@shirr A) -> Set in your comment. What I'm trying to understand is what this means - preferably categorically. Can I express this as some property of a map $f : X \to \mathbf{Set}$, perhaps as some commuting square, or universal property? Or do I have to look at it in a completely different way?

I suppose what I'm first and foremost trying to establish is whether this is something people have figured out :) But from the looks of it, it seems the answer is no. Maybe that's what you're saying, that this is highly non-trivial

view this post on Zulip Josselin Poiret (Mar 10 2023 at 14:48):

Bruno Gavranovic said:

What I mean is that you used things like B : (@shirr A) -> Set in your comment. What I'm trying to understand is what this means - preferably categorically. Can I express this as some property of a map $f : X \to \mathbf{Set}$, perhaps as some commuting square, or universal property? Or do I have to look at it in a completely different way?

Ha, one more reason to get rid of the convention to call the universe Set :). You should've read that as B : (@shirr A) → Type instead, a dependent shape-irrelevant function into the universe of types. I did not mean to imply that there are set semantics for this

view this post on Zulip Jules Hedges (Mar 10 2023 at 16:38):

Mike Shulman said:

FWIW, if "set" means a set as in ZF, with a global membership predicate, then there is an induced notion of "heterogeneous equality" and we can define constant dependent functions that way. For instance, $f : (n : \mathbb{N}) \to \{ k \in \mathbb{N} \mid k \le n \}$ defined by $f(n) = 0$ for all $n$ would then be a "constant function" in the sense that the "0"s in each output set $\{ k \in \mathbb{N} \mid k \le n \}$ are literally equal in the global ZF sense. But this notion of global equality is not visible to the category of sets unless you equip it with extra structure.

I think this is quite close to what's happening in the Idris code, where there is a similar global notion of equality coming from the fact that runtime values are in the end implemented as bits in memory. Being able to erase types means something like you can take a bit-pattern coming from one type and interpret it as a value of a different type, and nothing will go horribly wrong

view this post on Zulip Matteo Capucci (he/him) (Mar 10 2023 at 17:46):

Bruno Gavranovic said:

Matteo Capucci (he/him) said:

There is no such thing as a 0-annotated function, only 0-annotated programs. The problem is that even if a function can be implemented by a 0-annotated function, it might also be implemented by ones that are not. For example the constant function $3 : \mathbb{N} \to \mathbb{N}$ can be implemented as $n \mapsto n^{10} \mapsto 3$, which is not a program of type $(0\, n : \mathbb{N}) \to \mathbb{N}$.

That's a different matter. I'm talking about functional dependence. Denotational, if you will.
Your second function also doesn't use the $n$. You're doing something with the input, but then discarding the result, which is equivalent to discarding the input, by the property of the terminal object in $\mathbf{Set}$.

This example was deceitfully simple so my point didn't get across, which is that this is only an operational distinction. The second program in my example does use n to compute $n^{10}$, but then also discards it (though of course what 'usage' means depends on how you actually realize these functions, which is my overarching point).

A more convincing example might be this: does the length map on vectors use its argument? There are two ways to define the map $n : \mathbb{N} \vdash \mathsf{len} : \mathsf{Vec}\, n \to \mathbb{N}$: one that simply outputs $n$ and another that counts through the length of a given vector. These programs 'compile' to the same function, but one can also be expressed in the context $0, n : \mathbb{N}$ while the other (should) not.
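
A sketch of those two definitions in Idris 2 (the names lenA and lenB are mine): the first reads the index, so its binding cannot be erased; the second recurses on the vector, so the index can carry multiplicity 0.

import Data.Vect

-- needs n at runtime, hence the unrestricted implicit {n : Nat}
lenA : {n : Nat} -> Vect n a -> Nat
lenA _ = n

-- never touches n at runtime, so it can be marked erased
lenB : {0 n : Nat} -> Vect n a -> Nat
lenB [] = Z
lenB (_ :: xs) = S (lenB xs)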

view this post on Zulip Reid Barton (Mar 10 2023 at 20:39):

Another issue: most functions aren't given by any formula at all.

view this post on Zulip dusko (Mar 10 2023 at 22:26):

Bruno Gavranovic said:

Josselin Poiret said:

this discussion has been going for quite a while, but I think everything I've mentioned is still relevant, it's just highly non-trivial. For irrelevant dependent function types, you first need shape irrelevance, and so you need to choose some semantics for both of these.

Right. I should probably answer your previous question
Josselin Poiret said:

what do you mean by categorical semantics?

What I mean is that you used things like B : (@shirr A) -> Set in your comment. What I'm trying to understand is what this means - preferably categorically. Can I express this as some property of a map f:XSetf : X \to \mathbf{Set}, perhaps as some commuting square, or universal property? Or do I have to look at it in a completely different way?

there is no chance that irrelevantly typed constant functions can be modeled over Set\mathbf{Set} as the base topos generated by 11, for the reason that i mentioned in the first posting. any irrelevantly typed constant function would have to factor through 11 and the identity on 11 is not irrelevantly typed. the lifting of 11 to Set/A\mathbf{Set}/A is relevantly dependent on AA.

but that does not imply that there is a problem with categorical semantics of irrelevantly typed constants. the untyped lambda-calculus cannot be modeled over Set\mathbf{Set} either, and its categorical semantics could hardly be more interesting...

come to think of it, there is a presheaf topos with a fully abstract model of untyped lambda-calculus, i.e. an object DD with DDDD\cong D^D. (i think the base category for the presheaves is the finite assignments to closed terms... something like that.) my theological idea of modeling irrelevantly typed terms as some typed terms that diverge when we erase types could probably be realized there, fully abstractly, by considering the kleisli category for D×()D\times(-). but this idea is theological, in the sense that it is open to "design decisions" about how to interpret it. pretty much everyone has been repeating that we need to specify the meaning of "meaning" before we ask about the meaning of something.

ironically, the semantics of irrelevantly typed functions is itself a function that is highly relevantly typed by both its domain and its codomain.

view this post on Zulip Mike Shulman (Mar 11 2023 at 03:23):

What are the formal rules of irrelevance? If we have those specified, then we can ask what it takes for a category to model them, and whether Set\rm Set can possibly do so.

view this post on Zulip dusko (Mar 11 2023 at 23:00):

the rules that i looked up (when i started apologizing that i would demonstrate ignorance) are from:
https://arxiv.org/abs/1203.4716
which came up on google. but i confess that i only looked up the rules for like 5 min and that my intuition is coming (unless i am hallucinating) from a lengthy thesis that i was asked to read, i think in the 2000s, but cannot find it anymore.

view this post on Zulip Mike Shulman (Mar 12 2023 at 17:49):

Thanks! The rules for IITT in that paper seem to be basically a modal type theory with an idempotent monad for which any two elements of a modal type are definitionally equal. That makes perfect sense in Set\rm Set with the (1)(-1)-truncation modality.

But it requires (as is normal for modal type theories) that the codomain type of an irrelevant function also depends irrelevantly on the argument. They have some remarks on p6 and p12 about some problems that arise if you want an irrelevant function to belong to a relevant codomain:

This is fine in an erasure semantics, but incompatible with our typed semantics in the presence of large eliminations; we will detail the issues in examples 2.3 and 2.8.

I guess "erasure semantics" refers to something like the ZFC representation that I mentioned earlier. I haven't completely grokked their Example 2.8, but it seems to argue that with irrelevant functions in relevant codomains and a unit type with an η\eta-rule it would be impossible to have decidable (transitive) definitional equality. This suggests that the prospects for a categorical semantics are also rather dim.

view this post on Zulip Dylan Braithwaite (Mar 12 2023 at 22:39):

Mike Shulman said:

Thanks! The rules for IITT in that paper seem to be basically a modal type theory with an idempotent monad for which any two elements of a modal type are definitionally equal. That makes perfect sense in $\rm Set$ with the $(-1)$-truncation modality.

But it requires (as is normal for modal type theories) that the codomain type of an irrelevant function also depends irrelevantly on the argument. They have some remarks on p6 and p12 about some problems that arise if you want an irrelevant function to belong to a relevant codomain:

I guess this paper is describing a sort of proof-irrelevance, whereas the notion in Idris is something more like value-irrelevant terms. Where the former forces equalities between irrelevant elements, the latter is just requiring some kind of parametricity in how irrelevant variables are used. I think the rules for it are based on Bob Atkey's QTT paper (which Sam also mentioned earlier in the thread).
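
To illustrate the distinction Dylan is drawing, here is a minimal Idris 2 sketch (the function and its name are mine, not from the thread): the 0 quantity below restricts how n may be used, but places no constraint on Nat itself, which has many distinct elements. So this is usage/erasure irrelevance rather than proof irrelevance in the IITT sense.

import Data.Vect

-- n may appear in the type of the result, but never in the returned value
noVector : (0 n : Nat) -> Maybe (Vect n Bool)
noVector _ = Nothing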

view this post on Zulip dusko (Mar 13 2023 at 03:55):

Mike Shulman said:

Thanks! The rules for IITT in that paper seem to be basically a modal type theory with an idempotent monad for which any two elements of a modal type are definitionally equal. That makes perfect sense in Set\rm Set with the (1)(-1)-truncation modality.

i seem to be misunderstanding some basic terminology. there seem to be just two nontrivial idempotent monads on Set\bf Set. if modal types are their images, then there are not enough for a lengthy paper.

and then neither google nor Sydney nor i know about (-1)-truncation modality. explain?

view this post on Zulip Mike Shulman (Mar 13 2023 at 04:25):

The paper isn't about semantics in Set! It's about syntax.

view this post on Zulip Mike Shulman (Mar 13 2023 at 04:26):

The (1)(-1)-truncation of a set is also known as its support: it's a subsingleton (= truth value = proposition) that's inhabited precisely when the set is. Equivalently it's the quotient of the maximal equivalence relation on the set. Classically, it's empty if the set is, otherwise it's terminal.
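
In symbols, for a set $X$ (the case split at the end being the classical description):

$\|X\| \;=\; \operatorname{im}(X \to 1) \;\cong\; X/(X \times X) \;\cong\; \begin{cases} 1 & X \neq \emptyset \\ \emptyset & X = \emptyset \end{cases}$

with unit $\eta_X : X \twoheadrightarrow \|X\|$ the evident quotient map; since $\|\|X\|\| \cong \|X\|$, this is the idempotent monad referred to above.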

view this post on Zulip Mike Shulman (Mar 13 2023 at 04:34):

@Dylan Braithwaite Thanks for pointing that out. It looks like their models use a sort of linear realizability to interpret irrelevance. That also suggests that there are unlikely to be nontrivial models in Set.

view this post on Zulip dusko (Mar 13 2023 at 08:31):

Mike Shulman said:

The paper isn't about semantics in Set! It's about syntax.

yes, and the paper didn't say "The rules for IITT in that paper seem to be basically a modal type theory with an idempotent monad for which any two elements of a modal type are definitionally equal. That makes perfect sense in Set". you said that. so i asked you which of the two idempotent monads you had in mind.

or maybe you rushed a little ;?

view this post on Zulip dusko (Mar 13 2023 at 08:41):

Mike Shulman said:

The (1)(-1)-truncation of a set is also known as its support: it's a subsingleton (= truth value = proposition) that's inhabited precisely when the set is. Equivalently it's the quotient of the maximal equivalence relation on the set. Classically, it's empty if the set is, otherwise it's terminal.

most people and most books on topos theory assume that the base topos Set\bf Set is boolean and generated by 1. it looks like the (1)(-1)-truncation monad requires something else to be nontrivial. is this the monad projecting everything to the partial map classifier? maybe we could understand what it is if you specify it in the language of an elementary topos? where does the name come from?

view this post on Zulip Morgan Rogers (he/him) (Mar 13 2023 at 09:11):

@dusko Mike explicitly says what the monad does; what are you asking for, exactly?

view this post on Zulip Josselin Poiret (Mar 13 2023 at 09:12):

Mike Shulman said:

What are the formal rules of irrelevance? If we have those specified, then we can ask what it takes for a category to model them, and whether Set\rm Set can possibly do so.

Well, the Degrees of Relatedness paper (https://anuyts.github.io/files/paper-reldtt.pdf) contains rules for irrelevance and shape irrelevance but also other modalities. I'm not sure if it can be weakened to only have those two, but maybe I'm wrong.

view this post on Zulip Josselin Poiret (Mar 13 2023 at 09:16):

dusko said:

most people and most books on topos theory assume that the base topos Set\bf Set set is boolean and generated by 1. it looks like (1)(-1)-truncation monad requires something else to be nontrivial. this monad projecting everything to the partial map classifier? maybe we could understand what it is if you specify it in the language of elementary topos? where does the name come from?

In the language of elementary topoi, the (-1)-truncation takes an object X X and gives you the proposition corresponding to the image of X1 X → 1 . It's called truncation because in homotopy type theory you have universes of n n -truncated types, which are basically types with non-trivial homotopical information up until n n , and for n=1 n = -1 you get exactly propositions, i.e. types which have at most one possible inhabitant.

view this post on Zulip Josselin Poiret (Mar 13 2023 at 09:18):

note that in HoTT usually the universe of n n -truncated types is constructed internally, and is not a universe of definitionally n n -truncated types. The subobject classifier in usual elementary topoi is a universe of definitionally 1 -1 truncated types.

view this post on Zulip Mike Shulman (Mar 14 2023 at 05:47):

dusko said:

yes, and the paper didn't say "The rules for IITT in that paper seem to be basically a modal type theory with an idempotent monad for which any two elements of a modal type are definitionally equal. That makes perfect sense in Set". you said that. so i asked you which of the two idempotent monads you had in mind.

I specified "the (1)(-1)-truncation monad". Then it became evident that not everyone knows what that meant, so I clarified:

The (1)(−1)-truncation of a set is also known as its support: it's a subsingleton (= truth value = proposition) that's inhabited precisely when the set is. Equivalently it's the quotient of the maximal equivalence relation on the set. Classically, it's empty if the set is, otherwise it's terminal.

view this post on Zulip Mike Shulman (Mar 14 2023 at 05:51):

Josselin Poiret said:

The subobject classifier in usual elementary topoi is a universe of definitionally 1 -1 truncated types.

While I know what you mean, I think this is a bit of a misleading way to phrase it, because adjectives like "definitional" and "propositional" only make sense in syntax, whereas an elementary topos is a semantic notion. I think the point you're making is that in an elementary topos (or, more generally, any 1-category with its subobject logic) the propositions are strictly (1)(-1)-truncated, in that any two of their elements are equal in the strongest possible way that anything can be equal.

This makes it possible to use such a category to interpret a type theory in which propositions are definitionally (1)(-1)-truncated, but it doesn't force us to do so; we might also interpret a type theory with propositionally-truncated propositions. One reason we might want to do the latter is that in dependent type theory, it sometimes happens that there is a type we can prove to be propositionally truncated, even though it isn't so "by definition", and it's quite hard to set up a well-behaved type theory in which such things end up being definitionally truncated.

view this post on Zulip Josselin Poiret (Mar 14 2023 at 10:02):

Mike Shulman said:

Josselin Poiret said:

The subobject classifier in usual elementary topoi is a universe of definitionally 1 -1 truncated types.

While I know what you mean, I think this is a bit of a misleading way to phrase it, because adjectives like "definitional" and "propositional" only make sense in syntax, whereas an elementary topos is a semantic notion. I think the point you're making is that in an elementary topos (or, more generally, any 1-category with its subobject logic) the propositions are strictly (1)(-1)-truncated, in that any two of their elements are equal in the strongest possible way that anything can be equal.

The way I think about it is that the semantic notion of a subobject classifier supports the syntactic notion of a universe of definitionally 1-1-truncated types. Maybe "implements" rather than "is" would have been a better phrasing?

view this post on Zulip Bruno Gavranović (Mar 14 2023 at 11:54):

Matteo Capucci (he/him) said:

A more convincing example might be this: does the length map on vectors use its argument? There's two ways to define the map n:Nlen:VecnNn : \N \vdash {\sf len} : {\sf Vec}\, n \to \N: one that simply outputs nn and another that counts through the length of a given vector. These programs 'compile' to the same function but one can be also expressed in context 0,n:N0, n:\N while the other (should) not.

I see what you mean. You're saying both of these functions

f1 : {n : Nat} -> Vect n Bool -> Nat
f1 {n} xs = n

f2 : {0 n : Nat} -> Vect n Bool -> Nat
f2 [] = 0
f2 (x :: xs) = 1 + f2 xs

map the same set of inputs to the same set of outputs, but one of them is obviously zero-annotated, while the other one is not. I agree - this is an interesting question, and it seems to get down to the crux of what it means for a map to be erased. As such, it's probably necessary to answer this first before expecting to find any more general formulations of erasure.

I'm trying to understand what's going on here. On one hand, I could imagine an argument saying that both of them actually do use the n. Could we say that f2 also uses the n because a pattern-match on Vect n Bool reveals some information about n? As a matter of fact, isn't that what dependently-typed pattern matching tells us? On the other hand, I'm not sure if that argument holds up to closer scrutiny.

Perhaps the argument could be simplified. @Reid Barton, you mentioned that most functions in Set\mathbf{Set} aren't given by any formula at all. What's an example of this? (I feel like I'll realise the answer to this question is also deceptively simple, once I hear the answer.)

view this post on Zulip Reid Barton (Mar 14 2023 at 16:04):

Well it would be a bit difficult to write down a formula for an example.

view this post on Zulip Reid Barton (Mar 14 2023 at 16:04):

There are uncountably many functions from Nat to Nat, and only countably many formulas, so a lot of the functions must get left out.

view this post on Zulip Mike Shulman (Mar 15 2023 at 04:06):

You can beta-reduce that argument a little. List all the formulas defining functions NN\mathbb{N}\to\mathbb{N}, let fnf_n be the function defined by formula nn, and define g(n)=fn(n)+1g(n) = f_n(n) + 1.

view this post on Zulip Josselin Poiret (Mar 15 2023 at 08:42):

Bruno Gavranovic said:

I'm trying to understand what's going on here. On one hand, I could imagine an argument saying that both of them actually are erased in n. Could we say that f2 also uses the n because a pattern-match on Vect n Bool reveals some information about n? As a matter of fact, isn't that what dependently-typed pattern matching tells us? On the other hand, I'm not sure if that arguments holds up to closer scrutiny.

Yes, this is one of the other complicated parts of irrelevance: you can't really pattern match like you did. In a way, this is because when pattern matching, you'd have to unify a non-irrelevant variable with an irrelevant one (the n n ), so it should be forbidden here as well.

view this post on Zulip Bruno Gavranović (Mar 15 2023 at 11:26):

Mike Shulman said:

You can beta-reduce that argument a little. List all the formulas defining functions NN\mathbb{N}\to\mathbb{N}, let fnf_n be the function defined by formula nn, and define g(n)=fn(n)+1g(n) = f_n(n) + 1.

Doesn't this make gg different than fnf_n? Besides, fn(n)+1f_n(n) + 1 looks like it should be in the list of formulas defining functions NN\mathbb{N} \to \mathbb{N}

view this post on Zulip Bruno Gavranović (Mar 15 2023 at 11:28):

Josselin Poiret said:

Yes, this is one of the other complicated parts of irrelevance: you can't really pattern match like you did. In a way, this is because when pattern matching, you'd have to unify a non-irrelevant varialbe to an irrelevant one (the n n ), so it should be forbidden here as well.

Right! But doesn't this prohibit us from computing the length of v : Vect n Bool? Because any kind of observation of this vector reveals information about its length. And since vectors are defined inductively, the only way to perform this observation is essentially to pattern match.

view this post on Zulip Mike Shulman (Mar 15 2023 at 15:02):

Bruno Gavranovic said:

Doesn't this make gg different than fnf_n? Besides, fn(n)+1f_n(n) + 1 looks like it should be in the list of formulas defining functions NN\mathbb{N} \to \mathbb{N}

Yes, that's the point: gg is different from fnf_n for all nn, so it can't appear anywhere in the list of "functions defined by a formula"; hence it can't be defined by any formula. g(n)=fn(n)+1g(n) = f_n(n)+1 is not a formula, because there is no formula computing fnf_n in terms of nn.

view this post on Zulip Mike Shulman (Mar 15 2023 at 15:03):

This is the basic idea of any "diagonalization argument", here the one showing that there are uncountably many functions NN\mathbb{N} \to \mathbb{N}.
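
Spelled out: if $g$ were on the list, say $g = f_m$, then evaluating at $m$ would give

$f_m(m) = g(m) = f_m(m) + 1,$

which is impossible, so $g$ differs from every $f_n$ and hence is not defined by any formula.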

view this post on Zulip Josselin Poiret (Mar 15 2023 at 18:08):

Bruno Gavranović said:

Right! But doesn't this prohibit us from computing the length of v : Vect n Bool? Because and kind of observation of information about this vector reveals information about its length. And since they're defined inductively, the only way to perform this observation is to essentially pattern match

Actually, I'm not too sure of what I said. I think it might actually be possible to pattern match on that. I was worried that by being able to compute the length of a vector you could extract n irrelevantly, but you can't build a generic Vec Unit n from @irr n, so it's fine.

view this post on Zulip Bruno Gavranović (Mar 15 2023 at 18:15):

Josselin Poiret said:

Actually, I'm not too sure of what I said. I think it might actually be possible to pattern match on that.

But doesn't pattern matching on a vector reveal some information about n?

I'm not sure why you would need to build this generic vector, though:

I was worried that by being able to compute the length of a vector you could extra n irrelevantly, but you can't build a generic Vec Unit n from @irr n so it's fine.

view this post on Zulip Bruno Gavranović (Mar 15 2023 at 18:17):

That is, I'm aware of the idea that erasing some information doesn't prohibit us from obtaining that information in other ways (the canonical example of this being the computation of vector length). What I'm now questioning is whether in this particular example erasing n ought to prevent us from pattern matching on the vector.

view this post on Zulip Josselin Poiret (Mar 15 2023 at 20:52):

Bruno Gavranović said:

That is, I'm aware of the idea that erasing some information doesn't prohibit us from obtaining that information in other ways (the canonical example of this being the computation of vector length). What I'm now questioning is whether in this particular example erasing n ought to prevent us from pattern matching on the vector.

What I didn't want to exist is a function @0 Nat → Nat that is the identity, but since you don't have a canonical function @0 n : Nat → Vec Unit n anyway (because of irrelevance), you won't get a problem with the length function.
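
To make the asymmetry concrete, here is a rough Idris 2 rendering (names mine; the rejected definition is shown commented out, since it does not typecheck):

import Data.Vect

-- fine: the erased length is recovered by walking the vector (this is f2 from above)
len : {0 n : Nat} -> Vect n a -> Nat
len [] = 0
len (_ :: xs) = 1 + len xs

-- not fine: producing a vector from an erased length would force a pattern match
-- on n itself, which its 0 quantity forbids, so Idris 2 rejects it
-- badReplicate : (0 n : Nat) -> Vect n Unit
-- badReplicate Z = []
-- badReplicate (S k) = () :: badReplicate k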

view this post on Zulip Jacques Carette (Mar 23 2023 at 21:01):

Reid Barton said:

Another issue: most functions aren't given by any formula at all.

This is where a conversation between classical mathematicians and computer scientists of a particular kind goes completely sideways. In many areas of computing, the only interesting things are those functions that are denotations of formulas (well, expressions; to me, formulas are boolean-valued) - the others basically don't exist.

view this post on Zulip Mike Shulman (Mar 23 2023 at 21:35):

I'm sure that's sometimes the case, but the OP in this thread was specifically about category theory, not about syntax.

view this post on Zulip Jacques Carette (Mar 23 2023 at 22:00):

The OP was weaving in and out between Idris 2 and category theory during the thread and was admonished for using the word 'function' when applied to a description.

view this post on Zulip Mike Shulman (Mar 24 2023 at 01:51):

Right -- because in the context of the question that was asked, "function" is a categorical notion.

view this post on Zulip Reid Barton (Mar 26 2023 at 07:46):

This conversation also gave me a new understanding of the "Hask is/is not a category" debate.
The problem appears to be about the word "Hask", but actually there's already a lack of agreement about the meaning of "category".

view this post on Zulip Bruno Gavranović (Mar 26 2023 at 11:32):

I suppose another way to state my confusion is: if being constant is a property in the non-dependent case (see the original post for info), why should I expect it now to become a structure (which is what I understand people are claiming), if I want to express it in the dependent case?

view this post on Zulip Bruno Gavranović (Mar 26 2023 at 11:34):

(I'm not disagreeing with this, merely expressing my lack of intuition here. Somehow I don't understand the jump here)

view this post on Zulip Reid Barton (Mar 26 2023 at 11:52):

The extra structure is not on the function in question, but rather on the type family that is its codomain.
(This is semi-tautological, since the structure could just be "what does it mean to give a constant section of this family".)
In the case of a constant family there is one obvious choice of this structure, but why should it happen for other families?

view this post on Zulip Reid Barton (Mar 26 2023 at 12:10):

Like if I have a constant vector bundle (or fiber bundle) F×MF \times M over a manifold MM then I can say what it means to give a constant section of the bundle (it means the projection to FF is a constant function), but if I just have a random bundle EE then I need some extra structure to talk about constant sections, like a connection or something.

view this post on Zulip Reid Barton (Mar 26 2023 at 12:13):

This extra structure on a type family isn't something you control directly (or, apparently, can use in any way internally at all). It is determined from the syntax of the type family, by some rules. And those rules are extra structure on the interpretation of the type theory.

view this post on Zulip Reid Barton (Mar 26 2023 at 12:13):

For example, one boring but apparently legal choice would be that all sections of all type families are "constant".

view this post on Zulip Bruno Gavranović (Mar 26 2023 at 12:13):

It sounds like you're saying that the fact we can think of constancy as a property in the non-dependent case is because in some way it's a canonically chosen structure.
Is this a consequence of some general categorical fact that every property can be thought of as a canonically chosen structure?

view this post on Zulip Josselin Poiret (Mar 26 2023 at 13:01):

Bruno Gavranović said:

It sounds like you're saying that the fact we can think of constancy as a property in the non-dependent case is because in some way it's a canonically chosen structure.
Is this a consequence of some general categorical fact that every property can be thought of as a canonically chosen structure?

As a general motto, "property is just degenerate structure", yes. For how to make it precise, well... I would appeal to HoTT, and define property as (-1)-truncation of some structure, and there it's tautological.

view this post on Zulip Josselin Poiret (Mar 26 2023 at 13:05):

@Reid Barton a connection is not enough i feel like, since a connection basically lifts paths in the base to morphisms of fibers, which we already have in type theory with propositional equality and J (iiuc). We want to be able to canonically compare fibers over two points without necessarily having any way to relate them.

view this post on Zulip Jacques Carette (Mar 26 2023 at 13:25):

I'm surprised no one has brought up [[weakly constant function]] yet. It's definitely related. There's at least one paper out there that defines all sorts of variants of being "constant" which are the same classically, but different constructively (unfortunately, I'm not finding it right now).

view this post on Zulip Jacques Carette (Mar 26 2023 at 13:29):

(Ah, that might be because there might not have been a paper, but merely discussions on various mailing lists.)

view this post on Zulip Josselin Poiret (Mar 26 2023 at 17:20):

@Jacques Carette this definition is only for non-dependent function types unfortunately

view this post on Zulip Jacques Carette (Mar 26 2023 at 18:46):

Yes, I know. But there are some 'related ideas' that ought to be useful on that page.

view this post on Zulip Ryan Wisnesky (Mar 26 2023 at 21:00):

There's an obvious generalization of weakly constant function using dependent equality, maybe that could be useful

Require Import Coq.Logic.JMeq.

(* weak constancy of a dependent function, stated with John Major (heterogeneous) equality *)
Definition weakConstDep (A : Type) (G : A -> Type)
  (f : forall a : A, G a) : Prop :=
  forall x y : A, JMeq (f x) (f y).

view this post on Zulip Josselin Poiret (Mar 27 2023 at 12:59):

for people who don't speak Coq, this is basically the "obvious" definition, but using heterogeneous equality, inductively defined. This won't do, because you can't internally prove that the function f n = [] from n : Nat to List n is constant, since List n and List m aren't comparable.

view this post on Zulip Reid Barton (Mar 27 2023 at 13:12):

In fact you can probably even internally prove that f isn't constant in that sense, by taking n = 0 and m > 0, assuming that we are talking about lists of some type that has more than one element

view this post on Zulip Josselin Poiret (Mar 27 2023 at 13:19):

if you want to do it all internally, your best bet is to go through some setoid and dependent setoid hell, since you will have to consider a more general equality than the propositional one. If you want to do it definitionally, you're going to need new modalities IMO.

view this post on Zulip Jacques Carette (Mar 27 2023 at 14:30):

Setoids are not hell. If you think of reasoning with setoids as being finicky set-theory, then sure, it is tedious. If you think of reasoning with setoids as particularly simple categorical reasoning, then it is indeed rather nice.

Setoids ~ 0-Groupoids. That's the clue, right there.

view this post on Zulip Jules Hedges (Mar 27 2023 at 15:03):

The mention of connections sent me down a small rabbit hole of trying to learn about what a connection is. Here's my naive attempt to write up my thought process:

Suppose we have a set-indexed family of sets F:XSetF : X \to \mathbf{Set}, arguably the simplest case where we don't have an obvious notion of what it means for a section f:xXF(x)f : \prod_{x \in X} F(x) to be constant. We have a group action of Aut(X)\mathbf{Aut} (X) on XX, but there's no natural way to extend that action to x:XF(x)\sum_{x : X} F(x). Suppose we did have an action \bullet of Aut(X)\mathbf{Aut} (X) on x:XF(x)\sum_{x : X} F(x) which is over the action on XX, i.e. if σ:XX\sigma : X \to X is a bijection then σ(x,x)=(σ(x),σ(x))\sigma \bullet (x, x') = (\sigma(x), \sigma(x')) where σ(x)\sigma (x') is some element of F(σ(x))F (\sigma (x)), and which is appropriately functorial. I think if we have this data then we can define what a "constant section" is.

My gut feeling is that group actions are too strong to be useful here, but everything probably works for the action of the monoid End(X)\mathbf{End} (X) too.

view this post on Zulip Mike Shulman (Mar 27 2023 at 17:55):

I think the extra structure that you need on a type family B:ATypeB : A \to \mathsf{Type} in order to talk about constancy of sections f:(x:A)B(x)f : (x:A) \to B(x) is that BB itself should be constant. As usual, by definition this means that BB factors through the unit type, i.e. there is a specified type CC and a family of equalities ex:B(x)=Ce_x : B(x) = C for all x:Ax:A. Given this, we can say that f:(x:A)B(x)f : (x:A) \to B(x) is constant if there exists a c:Cc:C and a family of equalities f(x)=cf(x) = c, or more precisely (ex)(f(x))=c(e_x)_*(f(x)) = c.

The reason constancy of B:ATypeB : A \to \mathsf{Type} is a structure rather than a property is that its codomain Type\mathsf{Type} is not 0-truncated.
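
A rough Idris 2 transcription of that description, just to fix the data involved (the names and the encoding are mine, and this is only a sketch that ignores all higher-dimensional subtleties):

-- a constancy structure on a family b: a chosen type together with an
-- identification of every fibre with it
record ConstFam (a : Type) (b : a -> Type) where
  constructor MkConstFam
  fibre : Type
  ident : (x : a) -> b x = fibre

-- relative to such a structure, a section f is constant when all of its values,
-- transported along the chosen identifications, agree on a single point
IsConstant : {a : Type} -> {b : a -> Type} ->
             ConstFam a b -> ((x : a) -> b x) -> Type
IsConstant (MkConstFam c e) f =
  (v : c ** (x : a) -> replace {p = \t => t} (e x) (f x) = v)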

view this post on Zulip Mike Shulman (Mar 27 2023 at 17:57):

Jacques Carette said:

There's at least one paper out there that defines all sorts of variants of being "constant" which are the same classically, but different constructively (unfortunately, I'm not finding it right now).

Are you thinking of Notions of Anonymous Existence in Martin-Löf Type Theory?

view this post on Zulip Jacques Carette (Mar 27 2023 at 18:13):

Mike Shulman said:

Are you thinking of Notions of Anonymous Existence in Martin-Löf Type Theory?

Yes! Thank you. I even had one of the authors in mind, but the paper title didn't "ring a bell" for me.

view this post on Zulip Mike Shulman (Mar 27 2023 at 18:15):

It should have been linked from the nLab page. I added it.

view this post on Zulip Bruno Gavranović (Mar 27 2023 at 18:35):

Mike Shulman said:

I think the extra structure that you need on a type family B:ATypeB : A \to \mathsf{Type} in order to talk about constancy of sections f:(x:A)B(x)f : (x:A) \to B(x) is that BB itself should be constant. As usual, by definition this means that BB factors through the unit type, i.e. there is a specified type CC and a family of equalities ex:B(x)=Ce_x : B(x) = C for all x:Ax:A. Given this, we can say that f:(x:A)B(x)f : (x:A) \to B(x) is constant if there exists a c:Cc:C and a family of equalities f(x)=cf(x) = c, or more precisely (ex)(f(x))=c(e_x)_*(f(x)) = c.

The reason constancy of B:ATypeB : A \to \mathsf{Type} is a structure rather than a property is that its codomain Type\mathsf{Type} is not 0-truncated.

Is this true? Let ASetA \coloneqq \mathbf{Set}, and consider a non-constant B(X)=ListXB(X) = \mathsf{List } X.
Then there is a constant map f:(X:Set)ListX_[]f : (X : \mathbf{Set}) \to \mathsf{List } X \coloneqq \_ \mapsto []

view this post on Zulip Bruno Gavranović (Mar 27 2023 at 18:37):

My intuition tells me that constancy has a deep relation to parametricity. Doesn't the "Theorems for Free" paper tell us that there is only one implementation of

id : forall a . a -> a

precisely because the argument a in forall a is erased, that is, we have to implement this function without looking at the input type, for every type.
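
For the record, the free theorem for that type (as in Wadler's "Theorems for Free!") says that for any $f : \forall a.\, a \to a$, any types $A, B$ and any relation $R \subseteq A \times B$,

$(x, y) \in R \;\Longrightarrow\; (f_A\, x, f_B\, y) \in R.$

Taking $R$ to be the singleton $\{(x_0, x_0)\}$ for a fixed $x_0 : A$ forces $f_A\, x_0 = x_0$, so the only implementation is the identity.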

view this post on Zulip Mike Shulman (Mar 27 2023 at 18:43):

Bruno Gavranović said:

Let ASetA \coloneqq \mathbf{Set}, and consider a non-constant B(X)=ListXB(X) = \mathsf{List } X.
Then there is a constant map f:(X:Set)ListX_[]f : (X : \mathbf{Set}) \to \mathsf{List } X \coloneqq \_ \mapsto []

I wouldn't call that map "constant".

view this post on Zulip Bruno Gavranović (Mar 27 2023 at 18:46):

Why not?

view this post on Zulip Mike Shulman (Mar 27 2023 at 18:47):

Because to say that ff is constant you have to say that f(x)=f(y)f(x) = f(y) for different x,yx,y, and in order to do that you have to have some way to identify the types of f(x)f(x) and f(y)f(y), which you don't in that case.

view this post on Zulip Mike Shulman (Mar 27 2023 at 18:48):

I'm speaking categorically/semantically rather than syntactically, of course, which I understood was the context of this entire thread.

view this post on Zulip Dylan Braithwaite (Mar 27 2023 at 18:53):

Even syntactically, if you take a definition of List in Agda for example, I think [][] won't actually be a constant. It's a function with an implicit type parameter giving a distinct term []X:List X[]_X : \mathrm{List}\ X for each X

view this post on Zulip Mike Shulman (Mar 27 2023 at 18:54):

Well, there's syntax and there's syntax.

view this post on Zulip Mike Shulman (Mar 27 2023 at 18:55):

I would say that raw un-elaborated terms are also a kind of "syntax".

view this post on Zulip Mike Shulman (Mar 27 2023 at 18:55):

But yes, it's true that even syntactically one often cares about the fully elaborated version.

view this post on Zulip Bruno Gavranović (Mar 27 2023 at 18:57):

Mike Shulman said:

Because to say that ff is constant you have to say that f(x)=f(y)f(x) = f(y) for different x,yx,y, and in order to do that you have to have some way to identify the types of f(x)f(x) and f(y)f(y), which you don't in that case.

It sounds like you're saying that to say that f(x)f(x) and f(y)f(y) are equal there first needs to be a way of identifying their types.
Is this necessarily true? I'm thinking of lists as coproducts, i.e. 1+X+X2+1 + X + X^2 + \dots, and in this way, for some values of lists, it looks like there's a meaningful way to identify the 11 of ListX\mathsf{List } X with 11 of ListY\mathsf{List } Y.

view this post on Zulip Bruno Gavranović (Mar 27 2023 at 18:59):

In other words, what I'm saying is that it feels like this ought to be a constant function, even though the types can't necessarily "fully" be identified here.

view this post on Zulip Bruno Gavranović (Mar 27 2023 at 19:01):

This example was first told to me by @Conor McBride here, who might have something to say about this

@bgavran3 @bentnib A is Set; a is X; B(a) is List (List X).

- Suk Lemon (@PTOOP)

view this post on Zulip Mike Shulman (Mar 27 2023 at 19:05):

From a semantic and category-theoretic viewpoint, yes, that is necessarily true. Essentially by definition, for a statement to be "category-theoretic" it must be invariant under equivalence of categories. And there is a theorem that a statement is so invariant precisely when it can be expressed in the dependently-typed language of categories without equality of objects, which in particular means that you cannot ask whether two morphisms are equal unless you already know that their domains and codomains are equal.

view this post on Zulip Mike Shulman (Mar 27 2023 at 19:06):

There are other viewpoints under which it does make sense to say that []List(X)[] \in {\rm List}(X) is equal to []List(Y)[] \in {\rm List}(Y), as have been mentioned previously in this thread: in un-elaborated syntax, or in membership-based set theory. But not in category theory.

view this post on Zulip Dylan Braithwaite (Mar 27 2023 at 19:11):

If you take Fin n instead of lists then I think the subtlety with just identifying the 11's becomes more obvious. For example if Fin 2 is 1+11 + 1 and Fin 3 is 1+1+11 + 1 + 1, then there are many ways to identify elements of those types/objects. In practice when you work with them you have a chosen copy of 1, namely the zero element, but this is extra structure!

view this post on Zulip Bruno Gavranović (Mar 27 2023 at 19:14):

I see, that helps.

Though, it sounds like we're here merely passive observers of what kind of representations our Fin 2 and Fin 3 have -- looking at their representations which happen to be 1+11 + 1 and 1+1+11 + 1 + 1. But we know how Fin is defined. Doesn't that give us a clue about what things should be identified, which is different than our attempt of identification of types that Fin 2 and Fin 3 end up being isomorphic to?

view this post on Zulip Bruno Gavranović (Mar 27 2023 at 19:17):

Mike Shulman said:

From a semantic and category-theoretic viewpoint, yes, that is necessarily true. Essentially by definition, for a statement to be "category-theoretic" it must be invariant under equivalence of categories. And there is a theorem that a statement is so invariant precisely when it can be expressed in the dependently-typed language of categories without equality of objects, which in particular means that you cannot ask whether two morphisms are equal unless you already know that their domains and codomains are equal.

This might be something I'll need a while to understand. What is meant by the "dependently-typed language of categories without equality of objects"?

view this post on Zulip Dylan Braithwaite (Mar 27 2023 at 19:27):

Bruno Gavranović said:

I see, that helps.

Though, it sounds like we're here merely passive observers of what kind of representations our Fin 2 and Fin 3 have -- looking at their representations which happen to be $1 + 1$ and $1 + 1 + 1$. But we know how Fin is defined. Doesn't that give us a clue about what things should be identified, which is different than our attempt of identification of types that Fin 2 and Fin 3 end up being isomorphic to?

To say that we know how Fin is defined is invoking both a syntax, and extra structure, which is how the syntax is modelled in our chosen category. This is fine, but we're again no longer doing a purely semantic construction

view this post on Zulip Mike Shulman (Mar 27 2023 at 22:14):

The dependently typed theory of categories consists of a type O\vdash O, a family of types x:O,y:OA(x,y)x:O, y:O \vdash A(x,y), identities x:Oix:A(x,x)x:O \vdash i_x :A(x,x), composition x:O,y:O,z:O,f:A(x,y),g:A(y,z)gf:A(x,z)x:O,y:O,z:O, f:A(x,y), g:A(y,z) \vdash g\circ f : A(x,z), and axioms like x:O,y:O,f:A(x,y)rf:Eq(fix,f)x:O, y:O, f:A(x,y) \vdash r_f : Eq(f \circ i_x,f) and so on. Stating these axioms requires some notion of "equality" but only for parallel morphisms, e.g. x:O,y:O,f:A(x,y),g:A(x,y)Eq(f,g)x:O, y:O, f:A(x,y),g:A(x,y) \vdash Eq(f,g).

One way to do this formally is to have a "logic-enriched type theory" over the above dependently typed signature, with atomic equality propositions only for parallel arrows. I sketched what this looks like in section 3 of Comparing material and structural set theories; you can also look at the original papers of Freyd ("Properties invariant within equivalence types of categories") and Blanc ("Equivalence naturelle et formules logiques en theorie des categories").

Another way is to formulate it as a [[FOLDS]] theory, although that requires turning the operations into functional relations. Makkai also proves a version of this theorem in his language, in his paper "First order logic with dependent sorts, with applications to category theory".

Finally, you can also use univalent categories in homotopy type theory. In this approach you don't discard the equality of objects entirely, but you say that it's a type (not just a proposition) that's equivalent to the type of isomorphisms. Thus, talking about equality of objects doesn't get you anything you couldn't already say by talking about isomorphisms.
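
To make the shape of that signature concrete, here is a hypothetical Idris 2 rendering (names mine; only the signature and the unit laws are shown, and the equality eqH is only ever available between parallel morphisms):

record CatSig where
  constructor MkCatSig
  obj   : Type
  hom   : obj -> obj -> Type
  idm   : (x : obj) -> hom x x
  cmp   : {x, y, z : obj} -> hom x y -> hom y z -> hom x z
  -- equality is only stated for morphisms sharing both endpoints
  eqH   : {x, y : obj} -> hom x y -> hom x y -> Type
  unitL : {x, y : obj} -> (f : hom x y) -> eqH (cmp (idm x) f) f
  unitR : {x, y : obj} -> (f : hom x y) -> eqH (cmp f (idm y)) f
  -- associativity and the axioms making eqH an equivalence relation are analogous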

view this post on Zulip Josselin Poiret (Mar 28 2023 at 08:23):

Jacques Carette said:

Setoids are not hell. If you think of reasoning with setoids as being finicky set-theory, then sure, it is tedious. If you think of reasoning with setoids as particularly simple categorical reasoning, then it is indeed rather nice.

Setoids ~ 0-Groupoids. That's the clue, right there.

setoids are hell, especially once you need dependent (or displayed) setoids, like here. I tried writing down an example of what I meant by internalizing everything, but it just took me too long so I started doing something else instead.

view this post on Zulip Josselin Poiret (Mar 28 2023 at 08:27):

i'm not saying they're a bad idea, just that in general if you can avoid working with them then it's a better deal

view this post on Zulip Josselin Poiret (Mar 28 2023 at 08:28):

also, unless your setoids are valued in prop, they don't correspond to 0-groupoids!

view this post on Zulip Josselin Poiret (Mar 28 2023 at 08:30):

the appeal of type theory (for me) is to have a synthetic language to manipulate the objects I want to manipulate. If all of a sudden I have to explicitly use setoids everywhere because I want all my objects to have a modifiable notion of equality, then that's a net loss in my book

view this post on Zulip Josselin Poiret (Mar 28 2023 at 08:31):

Mike Shulman said:

Because to say that ff is constant you have to say that f(x)=f(y)f(x) = f(y) for different x,yx,y, and in order to do that you have to have some way to identify the types of f(x)f(x) and f(y)f(y), which you don't in that case.

or you could have a notion of heterogeneous equality on the fibers of your dependent type! (and not the one that lies over paths in the base, but one that can canonically compare any fibers)

view this post on Zulip David Corfield (Mar 28 2023 at 10:41):

Mike Shulman said:

I think the extra structure that you need on a type family B:ATypeB : A \to \mathsf{Type} in order to talk about constancy of sections f:(x:A)B(x)f : (x:A) \to B(x) is that BB itself should be constant...

This is all very much the terrain of counterparts in modal logic, the provision of a standard by which to locate a counterpart across fibres. Mike's MO answer is worth understanding.

There's a local version of this pulled back standard/ gauge idea in seeing partial differential equations as coalgebras over the jet comonad, a way to compare infinitesimally close fibres.

view this post on Zulip Notification Bot (Mar 28 2023 at 13:06):

22 messages were moved from this topic to #theory: category theory > h-levels, equalities, and other things by Matteo Capucci (he/him).

view this post on Zulip Matteo Capucci (he/him) (Mar 28 2023 at 13:06):

(Sorry for the slicing but it felt the conversation went a bit OT)

view this post on Zulip Mike Shulman (Mar 29 2023 at 04:03):

Josselin Poiret said:

Mike Shulman said:

Because to say that ff is constant you have to say that f(x)=f(y)f(x) = f(y) for different x,yx,y, and in order to do that you have to have some way to identify the types of f(x)f(x) and f(y)f(y), which you don't in that case.

or you could have a notion of heterogenous equality on the fibers of your dependent type! (and not the one that lies over paths in the base, but one that can canonically compare any fibers)

All notions of heterogeneous equality in type theory that I'm aware of include an identification of types. Sometimes that identification is a parameter of the heterogeneous equality, and other times it's a consequence, but it's always there.

view this post on Zulip dusko (Mar 29 2023 at 07:57):

Mike Shulman said:

From a semantic and category-theoretic viewpoint, yes, that is necessarily true. Essentially by definition, for a statement to be "category-theoretic" it must be invariant under equivalence of categories. And there is a theorem that a statement is so invariant precisely when it can be expressed in the dependently-typed language of categories without equality of objects, which in particular means that you cannot ask whether two morphisms are equal unless you already know that their domains and codomains are equal.

this "semantic and category theoretic point of view" sounds like like category theory is some sort of Latin where we cannot say bad things. or those famous programming languages where they tried to prevent buffer overflows by syntax. category theory is a living language and if we want to say something, we shouldn't need to go to other languages to say it. if two morphisms live in two different hom-sets, but they have something in common that makes me want to ask whether they are equal, i can do that, just like i can do it in english. if C\cal C is the category 2×M2\times M where MM is some monoid, then C\cal C comes with two projections, and i can certainly ask whether a morphism aC(0,0)a\in{\cal C}(0,0) is equal to bC(1,1)b\in{\cal C}(1,1). unless someone says that this is prohibited, in which case i'll do it when they are not looking. if people feel that the empty list should be the same for all types (like martin-loef felt when he defined the list types in his book), they can certainly express that in category theory (including martin-loef's). oh and the way the univalence axiom was presented at least initially was that it allows us to treat X0X^0 and Y0Y^0 as the same type, as long as they are in the same universe.

view this post on Zulip Reid Barton (Mar 29 2023 at 08:09):

Of course in a specific category it might make sense to talk about equalities of morphisms between different objects. It's just not something one can do meaningfully in a generic category, so in a situation where it does make sense, one has a category with extra structure. This extra structure was the point of the thread since the beginning.

view this post on Zulip Josselin Poiret (Mar 29 2023 at 08:40):

Mike Shulman said:

All notions of heterogeneous equality in type theory that I'm aware of include an identification of types. Sometimes that identification is a parameter of the heterogeneous equality, and other times it's a consequence, but it's always there.

The idea here is instead to add a notion of relatedness of types: when two types are related, there exists a notion of heterogeneous equality between them. You then get propositional equality by saying that every type is related to itself. This way you can compare types which aren't propositionally equal (but you need a lot more type-theoretic machinery).

view this post on Zulip Josselin Poiret (Mar 29 2023 at 08:42):

i'm probably doing RelDTT/RelMTT a disservice by summarizing it so succinctly, you should have a look at Andreas' paper [1] and PhD thesis chapter [2] on the subject.
[1] https://anuyts.github.io/files/paper-reldtt.pdf
[2] https://lirias.kuleuven.be/retrieve/581985

view this post on Zulip Josselin Poiret (Mar 29 2023 at 08:44):

then, you can just say that shape irrelevant dependent types send arbitrary elements x and y to related types, hence you're able to compare them heterogeneously

view this post on Zulip Mike Shulman (Mar 29 2023 at 19:04):

Unfortunately I don't have time to plow through those papers. My initial reaction is that now what you're calling "relatedness" is just a stand-in for an identification of types, even if you don't reify it in the syntax as a propositional equality. And in a categorical semantics, you would still have to interpret this "relatedness" by some kind of equivalence or isomorphism.

view this post on Zulip Mike Shulman (Mar 29 2023 at 19:10):

Usually when I hear the word "relatedness" in a context like this I think of parametricity. But parametricity relatedness isn't something that I would think of calling "heterogeneous equality". For instance, in that situation an element of one type can be related to more than one distinct element of the other type.