Hi folks, I'm interested in the problem of computing whether two functions are extensionally equal. I don't know much type theory, so I'm trying to establish an intuition about the problem, and what is possible.
I know that the problem is undecidable for Turing-complete programming languages, but I'm not interested in those. I'm interested in suitably restricted languages. In particular, I'm interested in (purely functional) languages that include:
But (perhaps unconventionally), I only want to consider languages without inductive/recursive types. (Integers are built-in, so they need not be treated as an inductive type.) I've recently been considering alternative approaches to structuring data wherein functions also fulfil the role of finite maps (a.k.a. dictionaries, associative arrays...), and I believe this can serve as an adequate alternative to inductive types for an everyday programming language. (Lookups would be statically-checked.)
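To illustrate the functions-as-finite-maps idea, here is a minimal Haskell sketch. It is my own toy encoding, not a worked-out design: it uses an explicit default value for missing keys rather than the statically-checked lookups described above, and all names in it are made up.

```haskell
-- Toy sketch: a "dictionary" from keys to values is just a function;
-- lookup is application, and insertion wraps the old function with a
-- new case.  (Hypothetical encoding, for illustration only.)
module FunMaps where

type Map k v = k -> v

-- An empty map needs a default value to return for absent keys.
emptyWith :: v -> Map k v
emptyWith def = \_ -> def

-- Insertion shadows any earlier binding for that key.
insert :: Eq k => k -> v -> Map k v -> Map k v
insert k v m = \k' -> if k' == k then v else m k'

-- Lookup is just function application.
lookup' :: Map k v -> k -> v
lookup' m k = m k

example :: Int
example = lookup' (insert "b" 2 (insert "a" 1 (emptyWith 0))) "b"  -- 2
```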
As for recursive functions, they might exist in a very constrained form, or be banned entirely. (Is it reasonable to have this discussion without establishing what kind of recursion is permitted?)
So, back to our problem: determining extensional equality of functions. Thinking about this for a while, I've noticed there are actually several related problems:
My questions include:
I've been utterly entranced by this problem lately. I've stumbled onto type-theoretic discussions about extensional equality such as the recent buzz around cubical type theory, but:
I guess you have to avoid being able to state all Diophantine equations, otherwise equality of open terms (i.e., polynomials) will be undecidable. Probably the most natural restriction is to linear integer arithmetic, but of course this may be a strong restriction.
Let me see if I understand this problem. We can write a function that tests whether a given n-tuple of integers is a solution to a Diophantine equation (returning a Boolean), and we can write a function that always returns false. An algorithm that told us whether the two functions were equal, for any such pair of functions (containing only integer addition, multiplication, and equality tests), would be solving an undecidable problem.
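To make the reduction concrete, here is a minimal Haskell sketch; the particular polynomial is just a placeholder I picked for illustration.

```haskell
-- `isSolution` uses only integer addition, multiplication and an
-- equality test.  Deciding, for an *arbitrary* polynomial in place of
-- `p`, whether `isSolution` and `alwaysFalse` are extensionally equal
-- is exactly Hilbert's tenth problem, hence undecidable.
p :: Integer -> Integer -> Integer -> Integer
p x y z = x*x + y*y + z*z + 1        -- example polynomial (no integer roots)

isSolution :: (Integer, Integer, Integer) -> Bool
isSolution (x, y, z) = p x y z == 0

alwaysFalse :: (Integer, Integer, Integer) -> Bool
alwaysFalse _ = False
```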
Is that right?
Sounds right to me. By the way, we can take n = 9. That's the current smallest known value that works.
What's the largest known n that doesn't work?
It seems like most serious work has been done for natural number solutions of Diophantine equations, not integer solutions. Matiyasevich showed the best upper bound: there's no decision procedure for Diophantine equations in 9 natural number variables. Wikipedia says even 3 cannot now be excluded as a possible upper bound. I assume that means for natural number solutions there's a decision procedure for 2-variable Diophantine equations.
The difficulty of settling Fermat's last theorem must have something to do with how problems kick in at n = 3.
Ok, it seems I can throw extensional equality in the dustbin then. It seems utterly unrealistic even for "basic" functions :sweat_smile:.
The only way forward (apart from having no concept of function equality whatsoever) seems to be treating functions as pieces of syntax, and comparing them for syntactic equality. I think type theorists call this "definitional equality". I suppose that must suffice!
Yes, that's the reason why, as you said, a programmer or mathematician has to manually prove extensional equality of functions. The only kind of equality check for functions that we can expect a computer to do is intensional.
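As a toy illustration of the intensional alternative (a made-up expression type, not anyone's actual proposal), syntactic comparison distinguishes function bodies that happen to denote the same function:

```haskell
-- Intensional (syntactic) equality: compare function *bodies* as syntax
-- trees, not their input/output behaviour.
data Expr
  = Var String
  | Lit Integer
  | Add Expr Expr
  | Mul Expr Expr
  deriving (Eq, Show)            -- the derived Eq is exactly syntactic equality

double1, double2 :: Expr
double1 = Add (Var "x") (Var "x")    -- \x -> x + x
double2 = Mul (Lit 2) (Var "x")      -- \x -> 2 * x

-- Extensionally these denote the same function on integers, but the
-- syntactic check (rightly, from the intensional point of view) says no:
sameSyntax :: Bool
sameSyntax = double1 == double2      -- False
```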
Does the quick appearance of undecidability have something to do with (non-)continuity? If my language hypothetically had only rational arithmetic (no ℤ or ℕ types, nor any inductive type), could we have more expressive languages whilst retaining decidable equality?
Or are there other simple counter-examples...
Having only real arithmetic would work, rational not so much. The theory of a real closed field is decidable. You can't have trigonometric functions though.
That result about real closed fields is due to Tarski; it's good to read this to see what he did.
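A textbook example of the kind of statement Tarski's quantifier-elimination procedure handles (not taken from his paper): a universally quantified claim about a real quadratic reduces to a quantifier-free condition on its coefficients.

```latex
% Quantifier elimination over the reals: the quantified statement on the
% left is equivalent to the quantifier-free condition on the right.
\[
  \forall x.\ a x^{2} + b x + c > 0
  \iff
  \bigl(a > 0 \wedge b^{2} - 4ac < 0\bigr)
  \vee
  \bigl(a = 0 \wedge b = 0 \wedge c > 0\bigr).
\]
```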
It seems that while it's decidable, the best decision procedure is very slow. I don't know how slow! But if I had to wildly guess, I'd say double exponential time, something like 2^(2^n). So this raises the question: is decidability useful if you have to wait billions of years to decide the equality of two functions?
The really bad slowdown is in the quantifier alternation depth, though, so results like this are quite practical for many purposes...
If you stick to the world of arithmetic, indeed things go sideways quite quickly. But if you instead look at holonomic functions (functions that satisfy a linear ODE with polynomial coefficients, which almost all special functions do), then things look bright again. See for example this paper by Joris van der Hoeven.
I don't understand... how are holonomic functions not "sticking to the world of arithmetic"? It seems like they're still defined in terms of addition, multiplication, etc.
Perhaps I'm missing the "big picture" of how holonomic functions are different to traditional functions.
They are 'different' because they are defined by a finite amount of data with a great many regularity properties, even though they are 'infinite' objects defined over continuous spaces. The resulting solutions are analytic with well-behaved, locatable singularities.
One thing to remember is that these are defined implicitly, as the solution of an LODE. So they really live in the world of analysis, not arithmetic.
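For example (standard textbook instances, not drawn from the linked material), the "finite amount of data" is just a linear ODE with polynomial coefficients together with finitely many initial values:

```latex
% Two standard holonomic functions, each determined by a linear ODE with
% polynomial coefficients plus finitely many initial values.
\begin{align*}
  \exp    &:\quad y' - y = 0,                 & y(0) &= 1,\\
  \arctan &:\quad (1+x^2)\,y'' + 2x\,y' = 0,  & y(0) &= 0,\ y'(0) = 1.
\end{align*}
```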
They do seem to be pretty exciting, from what I can comprehend. It's a shame that there is not a drop of introductory material about them. Every resource I can find assumes you are a mathematician. It's hard for me to discern whether they are a useful construct for more than just niche applications.
From what I can tell, some operations on holonomic functions might also be computationally intractable...
For example, it seems like you need to compute Gröbner bases, which is doubly exponential:
The complexity of Gröbner bases has been the object of extensive studies. It is well-known that in the worst-case, the complexity is doubly exponential in the number of variables.
Most operations on holonomic functions don't need Gröbner bases, in fact they can mostly be done via straightforward linear algebra. Furthermore, Gröbner bases are quite remarkable in that their running time is often nowhere near the worst case (we do know that in degenerate cases, Gröbner bases exactly correspond to Gaussian Elimination and to the Euclidean algorithm). The work on understanding when Gröbner bases are fast and when they are not is still ongoing, and appears to be extremely deep.
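For what it's worth, here is the usual linear-algebra argument for closure under addition, sketched under the assumption that f and g satisfy equations of order r and s respectively (my summary, not a quote from the references below):

```latex
% Every derivative of f lies in the Q(x)-vector space spanned by
% f, f', ..., f^{(r-1)}, and similarly for g; hence, for all k >= 0,
\[
  (f+g)^{(k)} \in
  \operatorname{span}_{\mathbb{Q}(x)}
    \bigl\{\, f, f', \dots, f^{(r-1)},\; g, g', \dots, g^{(s-1)} \,\bigr\},
\]
% a space of dimension at most r + s.  So the r + s + 1 functions
% (f+g), (f+g)', ..., (f+g)^{(r+s)} are linearly dependent over Q(x), and
% that dependency -- a linear ODE satisfied by f + g -- is found by
% solving a linear system, with no Groebner bases involved.
```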
I find Bruno Salvy's talk that goes along with the paper Linear Differential Equations as a Data Structure to be a really good entry point. The audience were mathematically-oriented computer scientists.
Manuel Kauers's The Holonomic Toolkit is rather good, but was written for an audience of people who do Symbolic Computation. YMMV.
Thank you for these references! I'll see if they help me :smile:. I am very much on board with treating equations as data structures. It's an approach I've been investigating for a while now...