Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.


Stream: theory: type theory & logic

Topic: Active-inert factorizations


James Deikun (Jan 11 2026 at 12:42):

James Deikun said:

David Jaz Myers said:

The difference between the inerts and the actives is reflected in this type theory as follows: the inerts are, roughly speaking, the weakenings: the maps between contexts which reflect forgetting about some of the variables in them. The actives correspond to the substitutions which use all of the variables in the context. I don't know when type theories admit active-inert factorization systems on their categories of contexts, but I think it's an interesting question.
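To make the quoted distinction concrete, here is a small worked example (mine, not from the thread), in the category of contexts of the theory of monoids: the substitution $(x,y,z) \to (w)$ sending $w \mapsto x \cdot x$ only uses $x$, so it factors as

$$(x,y,z) \xrightarrow{\;\text{inert: forget } y,z\;} (x) \xrightarrow{\;\text{active: } w \mapsto x\cdot x\;} (w).$$

The inert leg is a weakening that forgets the unused variables; the active leg uses every variable of its source context.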

For essentially algebraic theories, or their $\infty$-analogues, this basically happens when there is a presentation as a monad of the sort called "strongly cartesian", or sometimes "polynomial". (Neither of these terms is really ideal--both of them could mean too many things.)
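A basic 1-categorical example to keep in mind here (my addition, with the caveat that "polynomial" is overloaded): the free monoid monad

$$T(X) \;=\; \coprod_{n \ge 0} X^{n}$$

is cartesian/polynomial, with one $n$-ary operation for each $n$, whereas the free commutative monoid monad $X \mapsto \coprod_{n\ge 0} X^n/\Sigma_n$ already fails to preserve pullbacks. This lines up with the "rigidity" discussed next.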

Syntactically, this reflects "rigidity"--in the 1-categorical POV this is simply the assertion that equations between terms don't make use of any structural operations (including weakening) to align their contexts. This means that every term-in-context has a "trace" of which structural rules have been used on it, and this "trace" is invariant under permissible rewritings.
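Standard examples of this (not specific to the thread): the monoid axioms

$$(x\cdot y)\cdot z = x\cdot(y\cdot z), \qquad x\cdot e = x, \qquad e\cdot x = x$$

use every variable of the shared context exactly once, in the same order, on both sides, so no weakening, contraction, or exchange is needed to align them; by contrast

$$x\cdot y = y\cdot x \quad(\text{exchange}), \qquad x\cdot x^{-1} = e \quad(\text{contraction on the left, weakening on the right})$$

do need structural operations, and commutative monoids and groups are the usual non-examples.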

What this corresponds to in the higher POV is less clear to me. The "rigidity" is definitely less strict in that situation, as witnessed by the fact that $E_\infty$ is a "polynomial" $\infty$-monad presented by an algebraic pattern.
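If the pattern in question is (something like) pointed finite sets $\mathbb{F}_*$, whose Segal objects are commutative ($E_\infty$-)monoids, the inert-active factorization there is the familiar one: a map $\langle m\rangle \to \langle n\rangle$ factors as an inert map that merely discards the elements sent to the basepoint, followed by an active map whose only preimage of the basepoint is the basepoint. That is formally parallel to "forget the unused variables, then substitute".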

This might be a fun question to work on a bit more, actually.

James Deikun (Jan 11 2026 at 12:50):

Syntactically, the inert part of the factorization system corresponds to a renaming. So what we want is for each substitution to canonically be a renaming of an "active" or "generic" substitution determined only by the operations occurring in it. So, first off, if we blank out the context and all the references to variables in a substitution, we want to be able to reconstruct a substitution from just that, and the substitutions so reconstructed will be "active". Needless to say, the same will then apply to all the parts of a substitution: types, terms, and contexts.
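Here is a minimal executable sketch of that "blank out and reconstruct" step for a bare first-order term language (all of the names, such as `Tm`, `Subst`, `genericize`, are made up for illustration, and types and dependency are ignored entirely):

```haskell
import Control.Monad.State (State, evalState, state)

type Var = String

data Tm = V Var | Op String [Tm]
  deriving (Show, Eq)

-- A substitution: one term for each variable of the target context.
type Subst = [(Var, Tm)]

-- The inert part: each fresh variable paired with the variable it replaced.
type Renaming = [(Var, Var)]

fresh :: State Int Var
fresh = state (\n -> ("v" ++ show n, n + 1))

-- Replace every variable occurrence by a fresh one, remembering where it
-- came from.  The blanked term uses each of its variables exactly once.
blankTm :: Tm -> State Int (Tm, Renaming)
blankTm (V x)     = do
  y <- fresh
  pure (V y, [(y, x)])
blankTm (Op f ts) = do
  pairs <- mapM blankTm ts
  pure (Op f (map fst pairs), concatMap snd pairs)

-- The "active"/"generic" substitution determined only by the operations,
-- together with the renaming (the inert part) that recovers the original.
genericize :: Subst -> (Subst, Renaming)
genericize s = evalState (go s) 0
  where
    go []            = pure ([], [])
    go ((v, t) : vs) = do
      (t', r)   <- blankTm t
      (vs', rs) <- go vs
      pure ((v, t') : vs', r ++ rs)

-- Example (ghci):
--   genericize [("w", Op "mul" [V "x", V "x"])]
--   = ([("w", Op "mul" [V "v0", V "v1"])], [("v0","x"),("v1","x")])
```

The reconstructed substitution uses each of its fresh variables exactly once, and composing it with the recorded renaming recovers the original substitution, which is the inert-active factorization in this toy setting.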

James Deikun (Jan 11 2026 at 12:55):

(All of these reconstructions are idempotent except for the one for contexts, which I can't see a way to make idempotent. Maybe the one for contexts isn't actually important, though.)

James Deikun (Jan 11 2026 at 13:03):

So in order for this to make sense as a type theory, there are several conditions that need to hold. Probably the simplest one is that, for any computation rule or other rule producing a definitional equality, the left and right sides of the resulting equality have to reconstruct the same context for their respective active type or term judgements.
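A 1-categorical toy instance of that condition (continuing the monoid example from above): for the associativity rewrite, blanking out variables reconstructs the same context on both sides,

$$(x\cdot y)\cdot z \;\rightsquigarrow\; x\cdot(y\cdot z): \qquad (v_0,v_1,v_2) \;=\; (v_0,v_1,v_2),$$

while for a group-style inverse law it does not,

$$x\cdot x^{-1} \;\rightsquigarrow\; e: \qquad (v_0,v_1) \;\neq\; (\,),$$

so the latter kind of rule is exactly what the condition rules out.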

James Deikun (Jan 11 2026 at 13:06):

Another condition we don't need to insist on, but can without loss of generality (and which will probably simplify things if we do), is that formation, introduction, and elimination rules are active as they stand. (I think they will always be active up to an invertible renaming, and so active in the semantics.)
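If I'm reading "active as they stand" correctly (this gloss is mine), the point is that the generic instance of such a rule already uses every piece of data its premises introduce, so blanking it out reproduces the rule on the nose rather than a renaming of it; for example, pair introduction

$$\frac{\Gamma \vdash a : A \qquad \Gamma \vdash b : B}{\Gamma \vdash (a,b) : A \times B}$$

mentions each of $A$, $B$, $a$, $b$ in its conclusion.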

James Deikun (Jan 11 2026 at 13:15):

There's still quite a bit to work out, but I think I'll stop here for now. Anybody, please feel free to join in with contributions or thoughts!