Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.


Stream: learning: questions

Topic: Functions that do "bounded computation"


Nathaniel Virgo (Jul 30 2024 at 06:50):

Recursive data structures can be defined as initial algebras or final coalgebras of endofunctors (depending on whether you want to include a possibly-infinite amount of data in your structure). Recursive/corecursive functions can be defined using the universal property of an initial algebra or final coalgebra.
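In Haskell terms, that looks something like this (a standard sketch, not from the post; note that because of Haskell's laziness, `Fix f` plays the role of both the initial algebra and the final coalgebra):

```haskell
-- The fixed point of a functor f, with structure map In and inverse out.
newtype Fix f = In { out :: f (Fix f) }

-- Fold: the unique algebra morphism out of the initial algebra.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . out

-- Unfold: the unique coalgebra morphism into the final coalgebra.
ana :: Functor f => (a -> f a) -> a -> Fix f
ana coalg = In . fmap (ana coalg) . coalg
```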

However, I sometimes find myself wondering about something a bit more primitive than recursive functions, namely functions that only do "a set number of steps" of calculation - the sort of thing you could do in a single step of pattern matching, without recursing through the whole data structure.

For example, suppose our data structure is lists of natural numbers, i.e. the initial algebra of the functor $(\mathbb{N} \times -) + 1$.

The idea is that a "bounded computation" function only ever needs to look at the first few elements of the list, and the number of elements it needs to look at doesn't grow with the length of the list, whereas a non-example has to recurse over the whole thing.
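To make that concrete, here are a couple of illustrative picks of my own (written on ordinary Haskell lists rather than on the initial algebra itself):

```haskell
-- Bounded: inspects at most a fixed-size prefix, however long the list is.
firstTwo :: [Int] -> Maybe (Int, Int)
firstTwo (a : b : _) = Just (a, b)
firstTwo _           = Nothing

-- Unbounded: has to recurse through the entire list.
total :: [Int] -> Int
total []       = 0
total (x : xs) = x + total xs
```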

The question is whether there is a nice way to characterise such functions, in terms of initial algebras or final coalgebras or similar. I don't mind exactly how they should be defined, since the question is partly whether there's a nice definition that captures the idea. It seems like they should be related to natural transformations between the underlying functors somehow, but I haven't quite seen how to do it.

Nathaniel Virgo (Jul 30 2024 at 09:41):

Sometimes when you write down a question, the answer becomes obvious-ish.

Suppose my structures are the initial algebra of some functor $F$, say $\iota : FI \to I$. Then $\iota$ is actually an isomorphism (by Lambek's lemma). If I have a natural transformation $\eta : F^n \to F^m$ for $n, m \in \mathbb{N}$, I think I can regard the function $I \xrightarrow{(\iota^{-1})^n} F^n(I) \xrightarrow{\eta_I} F^m(I) \xrightarrow{\iota^m} I$ as being the kind of function I was asking about.

I don't know how to state that claim as something that could be proved, but it seems to make sense in the example. Intuitively, it's because a component $\eta_X$ of a natural transformation has to treat the set $X$ as a black box, so it can't do anything that explicitly depends on its contents. This means that in the list example, $(\iota^{-1})^n \mathrel{;} \eta_I \mathrel{;} \iota^m$ can't depend on anything past the first $n$ entries in the list.
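Here's a Haskell rendering of the construction for the list functor, with $n = 2$ and $m = 1$ (the particular $\eta$ is an illustrative choice of mine):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- F X = 1 + (N x X); its initial algebra is lists of naturals.
data F x = Nil | Cons Int x deriving Functor

newtype Fix f = In { out :: f (Fix f) }

-- A natural transformation F^2 -> F^1: it can rearrange the two exposed
-- layers, but must treat the variable x as a black box.
eta :: F (F x) -> F x
eta Nil                 = Nil
eta (Cons _ Nil)        = Nil             -- drop a trailing singleton
eta (Cons a (Cons b x)) = Cons (a + b) x  -- combine the first two entries

-- (iota^{-1})^2 ; eta_I ; iota^1: peel off two layers, apply eta, rewrap.
bounded :: Fix F -> Fix F
bounded = In . eta . fmap out . out
```

By construction, `bounded` can only ever inspect the first two entries, exactly as the naturality argument predicts.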

This construction didn't actually depend on $(I, \iota)$ being an initial algebra, it only depended on $\iota$ being an isomorphism, so the same thing works for final coalgebras as well. (Since the structure map for a final coalgebra is also an isomorphism.)

I guess this could be developed a bit further, especially if you wanted to consider functions with multiple inputs and outputs, possibly of different types, so I'm still happy if someone has a reference for that.

Matteo Capucci (he/him) (Jul 31 2024 at 06:29):

Are the bounded functions the ones that don't use recursion?

Peva Blanchard (Jul 31 2024 at 09:28):

This reminds me of the distinction, in recursion theory, between primitive recursive functions and general recursive functions. Intuitively, a primitive recursive function is one that can be defined with nested "for" loops of finite depth, so it is a form of boundedness. On the other hand, there are general recursive functions (e.g. the Ackermann function) that are not primitive recursive: you cannot use a finite-depth nesting of for loops to define them.
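For concreteness, here is the Ackermann function in Haskell (a standard definition, not from the post):

```haskell
-- Total on the naturals, but not primitive recursive: its growth
-- eventually dominates every fixed-depth nesting of bounded loops.
ack :: Integer -> Integer -> Integer
ack 0 n = n + 1
ack m 0 = ack (m - 1) 1
ack m n = ack (m - 1) (ack m (n - 1))
```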

But I'm not sure how the recursion discussed here relates to these concepts.

[edit: I fixed the wording (primitive instead of primary), thanks John for pointing it out]

John Baez (Jul 31 2024 at 09:53):

You meant "primitive" recursive functions, not "primary" recursive functions. By the way, there's a wonderful connection between primitive recursive functions and parametrized natural numbers objects in a category with finite products, which is discussed here:

The functions which are constructible out of the structure of a category with finite products and such a "parametrized NNO" are precisely the primitive recursive ones. Specifically, the unique structure-preserving functor from the free such category $F$ into $\mathrm{Set}$ yields a bijection between $\mathrm{Hom}_F(1, \mathbb{N})$ and the actual natural numbers, as well as surjections from $\mathrm{Hom}_F(\mathbb{N}^m, \mathbb{N})$ onto the primitive recursive functions of arity $m$ for each finite $m$. With cartesian closure, however, this identification no longer holds, since some non-primitive recursive functions (such as the Ackermann function) become definable as well.
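In programming terms, the recursor a parametrized NNO provides looks roughly like this (a Haskell sketch; the names `pr` and `add` are mine):

```haskell
import Numeric.Natural (Natural)

-- The parametrized NNO's universal property as a recursor: given
-- g :: a -> b and h :: (a, b) -> b, pr g h is the unique map with
--   pr g h (a, 0)     = g a
--   pr g h (a, n + 1) = h (a, pr g h (a, n))
pr :: (a -> b) -> ((a, b) -> b) -> (a, Natural) -> b
pr g h (a, 0) = g a
pr g h (a, n) = h (a, pr g h (a, n - 1))

-- Addition, built primitive-recursively from the recursor.
add :: (Natural, Natural) -> Natural
add = pr id (\(_, s) -> s + 1)
```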

Nathaniel Virgo (Jul 31 2024 at 12:55):

Matteo Capucci (he/him) said:

Are the bounded functions the ones that don't use recursion?

Yes, exactly - the functions that operate in a "uniform" way on recursive data structures (whatever that exactly means) but don't use any recursion at all (whatever that exactly means).

Nathaniel Virgo (Jul 31 2024 at 12:55):

So quite a bit more primitive than primitive recursive - a primitive recursive function can do an amount of computation that's bounded in a way that can depend on its input, whereas I'm interested in functions where the amount of computation is bounded by a constant.

But the reason to think about them is kind of similar. I'm thinking about models of computation like Turing machines and Minsky register machines, where there's a function that does "one step of computation", and we just iterate that until some termination condition is met. (For a Turing machine, that's the function that takes in the state of the tape and the state machine and then does one step of updating the state machine, writing to the tape, and moving the head.) I'm idly wondering if there's a nice way to define what "one step of computation" means, in a general way.
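For the Turing machine case, that one-step function is completely concrete. A hypothetical Haskell sketch (all names here are my own, not from any particular formalisation):

```haskell
data Move = L | R

-- The tape as a zipper: cells left of the head (nearest first),
-- the scanned cell, and cells to the right.
data Tape s = Tape [s] s [s]

-- One step of computation: consult the transition function, write a
-- symbol, move the head, growing the tape with blanks at the edges.
step :: (q -> s -> (q, s, Move)) -> s -> (q, Tape s) -> (q, Tape s)
step delta blank (q, Tape ls x rs) =
  case delta q x of
    (q', x', L) -> case ls of
      []      -> (q', Tape [] blank (x' : rs))
      l : ls' -> (q', Tape ls' l (x' : rs))
    (q', x', R) -> case rs of
      []      -> (q', Tape (x' : ls) blank [])
      r : rs' -> (q', Tape (x' : ls) r rs')
```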

Zanzi (Jul 31 2024 at 13:41):

@Nathaniel Virgo I recommend the paper "Algebras and Coalgebras in the Light Affine Lambda Calculus" which goes into this question - there's a class of logics called "bounded linear logics" which restrict the use of contraction (but do not eliminate it). This allows writing functions that fall within specific complexity classes. The paper then looks at how to recover initial algebras and final coalgebras in this setting.