You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
This is kind of a weird question - I'm not sure how to really make it precise. But what do we actually mean when we write something like
f(x) = g(x) + 1
or
fact(x) = if x==0 then 1 else x*fact(x-1)
?
The first one kind of seems to make sense in terms of function composition - I can see the function as post-composed with a function that increments its argument.
But the second one is a bit weirder. We could see the definition as an equation for the function fact, which the compiler solves in order to work out what the function is. But in practice, at least conceptually, that's not really what happens. Instead, when we call fact with a positive argument, the function gets called multiple times until we hit the base case x=0, and then values get passed back up the call stack again in order to generate the final result.
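The forward/backward traffic being described can be made explicit by evaluating fact with a hand-rolled stack instead of actual recursion; a minimal sketch (names are mine):

```python
def fact_with_stack(x):
    """Evaluate fact(x) using an explicit stack instead of recursion."""
    stack = []
    # "Forward" phase: each invocation saves its input and passes x-1 on.
    while x != 0:
        stack.append(x)
        x = x - 1
    # Base case reached: now pass a result "backwards" up the stack.
    result = 1
    while stack:
        x = stack.pop()
        result = x * result  # combine the saved input with the returned value
    return result

print(fact_with_stack(5))  # → 120
```

The first loop is the chain of "forward" outputs x-1; the second loop is the results being passed back up the stack.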
This has a very lens-like feel - each invocation of fact gets an input (x), then passes a "forward" output (x-1) to the next function, then gets the result from that and passes something "backwards" up the stack that depends both on its original input and on the result that was passed backwards to it (x*fact(x-1)).
So really my question is, do function calls and call stacks have something to do with lenses in some way? Or maybe with dependent lenses, since recursion might give rise to a branching tree instead of just a linear chain.
Has anything like that been worked out? Or if not, is there some other way to get a handle on "what's really going on" with recursion and call stacks from a category-theoretic point of view?
This is an inductive definition, and it makes sense/is well-defined because the natural numbers are an inductive type. That is, we can construct the natural numbers with an introduction rule for 0 and an inference rule that gives us x+1 from x. As a consequence, we can define a function on the natural numbers inductively (the natural numbers are formally taken to be the smallest set of expressions in the language built from the alphabet 0 and S, say, which is closed under the rules of the inductive definition, so it is necessary and sufficient to explain what happens to the output of these rules given the value on their input).
That might not necessarily be helpful, but the takeaway is that whatever you can say about this example using lenses should extend to inductive definitions more broadly.
I guess what you're describing is continuation-passing style (CPS), which is kind of lensy indeed.
AFAIU, in CPS returns are replaced with calls to continuations, for instance
fact(x, k) = if x == 0 then k(1) else fact(x-1, λy. k(x * y))
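For concreteness, here's that CPS factorial as a runnable (Python) sketch, with the identity continuation playing the role of "just return the answer":

```python
def fact_cps(x, k):
    """Continuation-passing factorial: instead of returning, call k on the result."""
    if x == 0:
        return k(1)
    # Pass x-1 "forward"; the continuation closes over x and combines on the way back.
    return fact_cps(x - 1, lambda y: k(x * y))

print(fact_cps(5, lambda r: r))  # → 120
```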
Such a function is in fact a monadic function (= a map in the Kleisli category) for the [[continuation monad]] C_R Y = (Y → R) → R, which indeed amounts to a map X × (Y → R) → R (for fact, X = Y = R = ℕ).
This monad is related to J_R X = (X → R) → X (the selection monad) by a monad morphism J_R ⇒ C_R. I learned about these monads from Escardó and Oliva (they actually wrote many papers doing very cool things using this technology, so I suggest a quick scholar search to the interested reader).
Anyway, I'm telling you this because we've recently proven that lenses induce morphisms (this is even true in a bit more generality!). So there's definitely something going on.
Moreover, the backward part of a lens is a map X × R → S, so not quite the same as X × (Y → R) → S, but of course if you have the former (together with get : X → Y) you get the latter by:
f(x, k) = put(x, k(get x))
which is definitely something that deserves more attention...
(tagging @Jules Hedges here, and @Bruno Gavranovic which recently entertained a version of lenses/optics where the residual is actually a whole continuation)
I don't know about the link to lenses, but recursive function definitions have been thoroughly studied in domain theory.
I'm not sure I've managed to make sense of this yet, but here's one thing we can note, following on from Matteo's post.
I found it a bit odd that the return type (ℕ in this case) is baked in to the continuation monad. It looks kind of like lenses, but every lens has to be (X, R) → (Y, R) for some fixed R. We can get rid of that by defining a more lens-like category instead, as follows:
objects are pairs of sets (or pairs of objects in some other Cartesian closed category), written (X, S).
a morphism (X, S) → (Y, R) is a function X × (Y → R) → S
composition (hopefully I'll get this right): the composite of f : (X, S) → (Y, R) and g : (Y, R) → (Z, Q) is the function X × (Z → Q) → S given by
λ(x, k). f(x, λy. g(y, λz. k z))
where the types in that lambda expression are x : X, k : Z → Q, y : Y and z : Z. (Maybe this can be simplified somehow.)
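In case a concrete rendering helps, here's this category sketched in Python (the helper names and example morphisms are mine; a morphism (X, S) → (Y, R) is modelled as a two-argument function):

```python
def compose(f, g):
    """Composite of f : (X,S) -> (Y,R) and g : (Y,R) -> (Z,Q):
    a function taking x : X and k : Z -> Q, returning an element of S."""
    return lambda x, k: f(x, lambda y: g(y, lambda z: k(z)))

def identity(x, k):
    """Identity morphism on (X, S): just apply the continuation."""
    return k(x)

# Two toy morphisms on (int, int):
f = lambda x, k: x + k(2 * x)   # built lens-style from get = double, put = add
g = lambda y, k: y * k(y + 1)

h = compose(f, g)
print(h(3, lambda z: z))  # → 45

# The identity laws hold definitionally:
assert compose(f, identity)(3, lambda z: z) == f(3, lambda z: z)
assert compose(identity, f)(3, lambda z: z) == f(3, lambda z: z)
```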
The composition rule is very similar to Kleisli composition in the continuation monad (which Jules explained here), just generalised so that S, R and Q can be different types. The identity morphisms are similarly reminiscent of the unit of the continuation monad.
Having written that down I still don't have much intuition for what a function of type X × (Y → R) → S really is or how it relates to call stacks and recursive functions. It will come to me eventually. But the last equation in Matteo's post shows that this is a generalisation of lenses, in that if we have a lens we can convert it into one of these things. Presumably that forms a functor by which lenses embed into this category.
One question I have is how this would be combined with effectful computation. One thing I'm interested in is probabilistic programs, where we might want to define a probability distribution by sampling recursively. (Simple example: flip a coin until it comes up tails and count the number of heads.) Is there a monadic version of the continuation monad, or a version that would work in a Markov category somehow, instead of a Cartesian closed one?
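The coin example, written in the same recurse-then-combine shape as fact (plain Python sampling, just to fix ideas):

```python
import random

def count_heads():
    """Flip a fair coin until tails; return the number of heads seen."""
    if random.random() < 0.5:    # tails: base case
        return 0
    return 1 + count_heads()     # heads: recurse, then combine on the way back

random.seed(0)
samples = [count_heads() for _ in range(10000)]
# The count is Geometric(1/2) on {0, 1, ...}, so the sample mean should be close to 1.
print(sum(samples) / len(samples))
```

The open question is what the backwards-passing story looks like when each call also samples.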
Very cool!
I wonder if morphisms shouldn't also have a 'forward' part, something of type X → Y?
Also have you heard of grates? My gut feeling says they are related, perhaps dual
I think the forward part is sort of secretly already there.
Or rather, if you have a lens consisting of get : X → Y and put : X × R → S then you can form a function f : X × (Y → R) → S using the same manoeuvre you did:
f(x, k) = put(x, k(get x))
So the information about both get and put is already included in the function f.
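Here's that manoeuvre spelled out, plus the partial inverse (Python, names mine): a lens gives one of these functions, and feeding it a constant continuation recovers put (though not get):

```python
def lens_to_periscope(get, put):
    """The manoeuvre: f(x, k) = put(x, k(get(x)))."""
    return lambda x, k: put(x, k(get(x)))

def periscope_to_put(f):
    """Recover put by using a constant continuation."""
    return lambda x, r: f(x, lambda y: r)

# Toy lens focusing on the first component of a pair:
get = lambda x: x[0]
put = lambda x, r: (r, x[1])

f = lens_to_periscope(get, put)
print(f((1, 'a'), lambda y: y + 10))      # → (11, 'a')
print(periscope_to_put(f)((1, 'a'), 99))  # → (99, 'a')
```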
I've heard of grates but I don't know anything about them - I'll have to look into it.
Well you surely can go from lenses to 'these things' (please christen them with your favorite variety of cut glass - I propose periscopes) but I don't see how you can define a lens given one of those. In X × (Y → R) → S, Y is in negative position, whereas in get : X → Y it is in positive position.
Btw, if we take seriously that X × (Y → R) → S is the backwards part of a mixed optic, then you can read off the action generating it, namely (M, K) ∙ R = M × (K → R).
Suppose C is cartesian closed. Then
(M, K) ∙ ((N, L) ∙ R) = M × (K → (N × (L → R))) ≅ (M × (K → N)) × ((K × L) → R)
Hence in order to make ∙ into an action, you have to equip the acting category with the product:
(M, K) ⊗ (N, L) = (M × (K → N), K × L)
I'd call this exponential affine, since it's basically the same as the product used to get affine traversals, which is the composition product of coefficients of affine functions (composing a + m·x with b + n·x gives (a + mb) + mn·x).
For ⊗ to be associative, it suffices to have a distributive law (K → N × P) ≅ (K → N) × (K → P), which we indeed have by cartesianity of C.
Now a bit of coend calculus (basically following Prop 3.26 here) yields:
∫^(M,K) C(X, M × (K → Y)) × C(M × (K → R), S) ≅ C(X × ((X → Y) → R), S)
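For what it's worth, here is one route the computation could take (my reconstruction: eliminate M exactly as in the lens case, then apply coYoneda in K):

```latex
\begin{aligned}
\int^{(M,K)} &\; \mathcal{C}(X,\, M \times Y^K) \times \mathcal{C}(M \times R^K,\, S) \\
&\cong \int^{K} \mathcal{C}(X,\, Y^K) \times \mathcal{C}(X \times R^K,\, S)
  && \text{(eliminate $M$ as for lenses)} \\
&\cong \int^{K} \mathcal{C}(K,\, Y^X) \times \mathcal{C}(X \times R^K,\, S)
  && \text{(currying)} \\
&\cong \mathcal{C}(X \times R^{Y^X},\, S)
  && \text{(coYoneda in $K$)}
\end{aligned}
```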
To get X × (Y → R) → S out of this seems quite hard... I'll have a thought about it
I think "continuation lenses" would be a logical name, but I like "periscopes" also. I agree that you can't get a lens from a periscope, that makes sense.
(I'll have to think about the rest properly later on, it's late here)
Lots of interesting stuff to think about here! I've considered this exact category a little bit before, but I never wrote anything down and I don't remember what I concluded about it, if anything
I have a bunch more thoughts on this, but it might take me a while to get them organised / worked out enough to write down. For now I just want to note that the composition rule for periscopes can be simplified a bit. If we have f : (X, S) → (Y, R) and g : (Y, R) → (Z, Q), then their composite turns out to be the function
λ(x, k). f(x, λy. g(y, k))
where the types are x : X, k : Z → Q and y : Y. Which I guess is what you'd get from "type-driven programming", but I got to it by simplifying the definition I gave above.
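The simplification is essentially an eta-reduction of the innermost continuation; a quick Python check that the two composition rules agree (the example morphisms are mine):

```python
def compose_simplified(f, g):
    """Simplified composite: hand g (with the final continuation k) to f directly."""
    return lambda x, k: f(x, lambda y: g(y, k))

def compose_original(f, g):
    """The earlier rule, with the continuation eta-expanded as lambda z: k(z)."""
    return lambda x, k: f(x, lambda y: g(y, lambda z: k(z)))

f = lambda x, k: x + k(x + 1)
g = lambda y, k: y * k(y)

for x in range(5):
    assert compose_simplified(f, g)(x, lambda z: z) == compose_original(f, g)(x, lambda z: z)
print("the two composites agree")
```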
I just had a chat with @Matteo Capucci (he/him) about this, where he explained to me that this structure is a composition of a lens and a grate.
But this kind of thing already has a name: glass. It's defined in Profunctor Optics, a categorical update (Section 3.3.2)
I hope Google develops this and calls it Google Glass.
Bruno Gavranovic said:
But this kind of thing already has a name: glass. It's defined in Profunctor Optics, a categorical update (Section 3.3.2)
Thanks! I find that paper really hard - I never seriously coded Haskell so the motivation is really different from where I'm coming from - so I've never gone through it.
Is it really the same though? Their definition looks to me like X × ((X → Y) → R) → S, which seems related to X × (Y → R) → S but not obviously the same.
Here's the thought I keep having, which might be related. But I am also not sure if it will make sense to anyone besides me.
Take a periscope (glass?) f : X × (Y → R) → S. The input type looks kind of like a decision problem, if we think of R as a utility. You get an input in X and a map from outputs to utilities, Y → R. The thing we return, an element of S, looks like a coutility being passed back to a previous player.
But this is a bit weird, because we're taking a decision problem as input and passing back a coutility, but we're skipping over the part where a decision is actually made. I think this is why Matteo says they should also have a forward map X → Y. But what makes more sense to me is if the decision in Y is a function of the decision problem that we receive as input. In that case it should be a map X × (Y → R) → Y. Then there should also be a map X × Y × R → S saying how the coutility depends on the input, output and utility, and we should demand that the function X × (Y → R) → S factors through these somehow. I didn't work out the details of that yet though. (I was actually trying to do something a bit more ambitious, involving a monadic version, so that it would let us talk about probability.)