Recursive data structures can be defined as initial algebras or final coalgebras of endofunctors (depending on whether you want to include a possibly-infinite amount of data in your structure). Recursive/corecursive functions can be defined using the universal property of an initial algebra or final coalgebra.
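For concreteness, here is a minimal Haskell sketch of that setup; the names `Fix`, `cata`, `ana`, `ListF` are the usual folklore ones, chosen here for illustration rather than taken from the discussion:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Fixed point of a functor (Haskell conflates least and greatest).
newtype Fix f = In { out :: f (Fix f) }

-- Fold: the unique algebra morphism out of the initial algebra.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . out

-- Unfold: the unique coalgebra morphism into the final coalgebra.
ana :: Functor f => (a -> f a) -> a -> Fix f
ana coalg = In . fmap (ana coalg) . coalg

-- Lists of naturals as the fixed point of F(X) = 1 + N x X.
data ListF a x = NilF | ConsF a x deriving Functor
type NatList = Fix (ListF Integer)

-- A recursive function defined via the universal property: summing a list.
total :: NatList -> Integer
total = cata alg where
  alg NilF        = 0
  alg (ConsF n s) = n + s
```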
However, I sometimes find myself wondering about something a bit more primitive than recursive functions, namely functions that only do "a set number of steps" of calculation - the sort of thing you could do in a single step of pattern matching, without recursing through the whole data structure.
For example, suppose our data structure is lists of natural numbers, so the initial algebra of the functor $F(X) = 1 + \mathbb{N} \times X$. Examples of such "bounded computation" functions would be returning the first element of the list (if there is one), checking whether the list is empty, or swapping the first two elements.
Some non-examples would be computing the length of the list, summing its entries, or reversing it.
The idea is that the positive examples only ever need to look at the first few elements of the list and the number of elements they need to look at doesn't grow with the length of the list, whereas the negative ones have to recurse over the whole thing.
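In the Haskell sketch above, the contrast looks roughly like this (the specific functions are illustrative choices, not anything fixed by the question):

```haskell
-- Bounded: a single pattern match, no recursion.
safeHead :: NatList -> Maybe Integer
safeHead (In NilF)        = Nothing
safeHead (In (ConsF n _)) = Just n

isEmpty :: NatList -> Bool
isEmpty (In NilF) = True
isEmpty _         = False

-- Not bounded: these have to recurse over the whole list,
-- e.g. via the fold `cata` from above.
len :: NatList -> Integer
len = cata alg where
  alg NilF        = 0
  alg (ConsF _ k) = 1 + k
```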
The question is whether there is a nice way to characterise such functions, in terms of initial algebras or final coalgebras or similar. I don't mind exactly how they should be defined, since the question is partly whether there's a nice definition that captures the idea. It seems like they should be related to natural transformations between the underlying functors somehow, but I haven't quite seen how to do it.
Sometimes when you write down a question, the answer becomes obvious-ish.
Suppose my structures are the initial algebra of some functor $F$, say $\alpha : F(A) \to A$. Then $\alpha$ is actually an isomorphism (Lambek's lemma). If I have a natural transformation $\varphi : F^n \Rightarrow F^m$ for some $n, m \in \mathbb{N}$, I think I can regard the composite $A \cong F^n(A) \xrightarrow{\varphi_A} F^m(A) \cong A$ (unfolding with $\alpha^{-1}$ $n$ times and folding back with $\alpha$ $m$ times) as being the kind of function I was asking about.
I don't know how to state that claim as something that could be proved, but it seems to make sense in the example. Intuitively, it's because a component $\varphi_X : F^n(X) \to F^m(X)$ of a natural transformation has to treat the set $X$ as a black box, so it can't do anything that explicitly depends on its contents. This means that in the list example, $\varphi_A$ can't depend on anything past the first $n$ entries in the list.
This construction didn't actually depend on $(A, \alpha)$ being an initial algebra, it only depended on $\alpha$ being an isomorphism, so the same thing works for final coalgebras as well. (Since the structure map of a final coalgebra is also an isomorphism.)
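Here is a hedged Haskell rendering of that construction in the list example, with $n = m = 2$ and the natural transformation chosen (purely for illustration) to swap the first two entries. Naturality shows up as polymorphism in `x`, which is exactly what stops the function from looking deeper than two layers:

```haskell
-- A natural transformation F^2 => F^2 for F(X) = 1 + N x X:
-- swap the first two entries when both exist. The polymorphism
-- in x is the Haskell counterpart of naturality.
swapFirstTwo :: ListF Integer (ListF Integer x)
             -> ListF Integer (ListF Integer x)
swapFirstTwo (ConsF a (ConsF b rest)) = ConsF b (ConsF a rest)
swapFirstTwo other                    = other

-- Unfold two layers with the inverse structure map, apply the
-- component at A, and fold back:
--   A ~ F(F(A)) --phi_A--> F(F(A)) ~ A
boundedSwap :: NatList -> NatList
boundedSwap = In . fmap In . swapFirstTwo . fmap out . out
```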
I guess this could be developed a bit further, especially if you wanted to consider functions with multiple inputs and outputs, possibly of different types, so I'm still happy if someone has a reference for that.
Are the bounded functions the ones that don't use recursion?
This reminds me of the distinction, in recursion theory, between primitive recursive functions and general recursive functions. Intuitively, a primitive recursive function is one that can be defined using only "for loops" (loops whose number of iterations is fixed before the loop starts), nested to some fixed finite depth. So it is a form of boundedness. On the other hand, there are general recursive functions (e.g. the Ackermann function) that are not primitive recursive: no finite-depth nesting of for loops suffices to define them.
But I'm not sure how the recursion discussed here relates to these concepts.
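As a rough sketch of that for-loop intuition in Haskell (combinator names chosen here for illustration): each level of primitive recursion is one bounded loop, and the Ackermann function is precisely what fails to fit into any fixed nesting depth.

```haskell
-- A bounded "for loop": iterate a body n times, with n known up front.
for :: Integer -> (a -> a) -> a -> a
for n f x
  | n <= 0    = x
  | otherwise = for (n - 1) f (f x)

-- One loop per level of primitive recursion:
add, mul :: Integer -> Integer -> Integer
add m n = for n (+ 1) m
mul m n = for n (add m) 0
```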
[edit: I fixed the wording (primitive instead of primary), thanks John for pointing it out]
You meant "primitive" recursive functions, not "primary" recursive functions. By the way, there's a wonderful connection between primitive recursive functions and parametrized natural numbers objects in a category with finite products, which is discussed here:
The functions which are constructible out of the structure of a category with finite products and such a "parametrized NNO" are precisely the primitive recursive ones. Specifically, the unique structure-preserving functor from the free such category into Set yields a bijection between $\mathrm{Hom}(1, N)$ and the actual natural numbers, as well as surjections from $\mathrm{Hom}(N^k, N)$ onto the primitive recursive functions of arity $k$ for each finite $k$. With cartesian closure, however, this identification no longer holds, since some non-primitive recursive functions (such as the Ackermann function) become definable as well.
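A hedged Haskell sketch of the recursor a parametrized NNO provides, and of how cartesian closure (letting `b` be a function type) makes the Ackermann function definable; the names `prec`, `plus`, `ack` are illustrative choices:

```haskell
-- The map supplied by a parametrized NNO: given f : A -> B and
-- g : A -> B -> B, there is a unique h : A -> Nat -> B with
--   h a 0 = f a   and   h a (n+1) = g a (h a n).
prec :: (a -> b) -> (a -> b -> b) -> a -> Integer -> b
prec f _ a 0 = f a
prec f g a n = g a (prec f g a (n - 1))

-- With finite products only, this yields exactly the primitive
-- recursive functions on Nat, e.g. addition:
plus :: Integer -> Integer -> Integer
plus = prec id (\_ b -> b + 1)

-- With exponentials, b may itself be a function type, and the
-- (non-primitive-recursive) Ackermann function becomes definable:
ack :: Integer -> Integer -> Integer
ack = prec (\() n -> n + 1)                               -- ack 0 = succ
           (\() f -> prec (\() -> f 1) (\() r -> f r) ()) -- ack (m+1) iterates ack m
           ()
```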
Matteo Capucci (he/him) said:
Are the bounded functions the ones that don't use recursion?
Yes, exactly - the functions that operate in a "uniform" way on recursive data structures (whatever that exactly means) but don't use any recursion at all (whatever that exactly means).
So quite a bit more primitive than primitive recursive - a primitive recursive function can do an amount of computation that's bounded in a way that can depend on its input, whereas I'm interested in functions where the amount of computation is bounded by a constant.
But the reason to think about them is kind of similar. I'm thinking about models of computation like Turing machines and Minsky register machines, where there's a function that does "one step of computation", and we just iterate that until some termination condition is met. (For a Turing machine, that's the function that takes in the state of the tape and the state machine and then does one step of updating the state machine, writing to the tape, and moving the head.) I'm idly wondering if there's a nice way to define what "one step of computation" means, in a general way.
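For what it's worth, a minimal sketch of that shape of model in Haskell (names chosen here, not taken from any particular formalism): the step function is the "bounded" part, and all the recursion lives in the driver that iterates it until it halts.

```haskell
-- One step either halts with a result or yields the next configuration.
data Step config result = Halt result | Continue config

-- The driver supplies all the recursion: iterate the bounded step
-- function until it signals termination.
run :: (config -> Step config result) -> config -> result
run step c = case step c of
  Halt r      -> r
  Continue c' -> run step c'
```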
@Nathaniel Virgo I recommend the paper "Algebras and Coalgebras in the Light Affine Lambda Calculus" which goes into this question - there's a class of logics called "bounded linear logics" which restrict the use of contraction (but do not eliminate it). This allows writing functions that fall within specific complexity classes. The paper then looks at how to recover initial algebras and final coalgebras in this setting.