You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Thursday the 2nd of September, 17:00 UTC
Abstract:
UniFormal is the idea of representing all aspects of knowledge uniformly, including narration, deduction, computation, and databases. Moreover, it means abstracting from the multitude of individual systems, which not only often focus on just one aspect but also do so in mutually incompatible ways, thus creating a universal framework of formal knowledge.
MMT is a concrete representation language to that end. It systematically abstracts from assumptions typically inherent in the syntax and semantics of concrete systems, and focuses on language-independence, modularity, and system interoperability. While constantly evolving in order to converge towards UniFormal, its design and implementation have become very mature. It is now a readily usable high-level platform for the design, analysis, and implementation of formal systems.
This talk gives an overview of the current state of MMT, its existing successes and its future challenges.
Zoom: https://topos-institute.zoom.us/j/5344862882
YouTube: https://youtu.be/uJ3psQ-Dkpw
A reminder, this is in less than two hours, unless I made a big mistake in my hour calculations. (OTOH, delighted to discover after @Jade Master's talk that everyone else also feels paranoid about getting the times wrong!)
the countdown timer on the webpage should help to solve this problem somewhat! https://topos.site/topos-colloquium/
Valeria de Paiva said:
A reminder, this is in less than two hours, unless I made a big mistake in my hour calculations. (OTOH, delighted to discover after Jade Master's talk that everyone else also feels paranoid about getting the times wrong!)
Yes! time zones are very confusing to me.
I'm glad you can just google things like "6 pm UTC in California" or "10 pm California in England". However, if you're asking not about today but the future, you need to beware changes due to daylight savings time!
I like subscribing to researchseminars.org, which sticks the event right into my Google calendar, though I've been burned by this before. If that's not available, when I go to make a calendar event I change the time zone for the event itself, and then set the time as it's presented in its local time. For myself, I think this is the method least likely to screw up.
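For what it's worth, here is a minimal sketch of the DST-safe way to do this conversion programmatically, using java.time from Scala (the year 2021 is my assumption from context; it is not stated above). Converting through named zones rather than fixed offsets is what makes future dates come out right across daylight-saving changes.

```scala
import java.time.{LocalDate, LocalTime, ZoneId, ZonedDateTime}

object TalkTime {
  def main(args: Array[String]): Unit = {
    // The announced time: Thursday, 2 September (2021 assumed), 17:00 UTC.
    val talkUtc = ZonedDateTime.of(
      LocalDate.of(2021, 9, 2), LocalTime.of(17, 0), ZoneId.of("UTC"))

    // Named zones (not fixed offsets) make java.time apply the
    // daylight-saving rules in force on that particular date.
    val inCalifornia = talkUtc.withZoneSameInstant(ZoneId.of("America/Los_Angeles"))
    val inEngland    = talkUtc.withZoneSameInstant(ZoneId.of("Europe/London"))

    println(s"UTC:        $talkUtc")        // 17:00 UTC
    println(s"California: $inCalifornia")   // 10:00 PDT on that date
    println(s"England:    $inEngland")      // 18:00 BST on that date
  }
}
```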
Hi. I just watched on YouTube. It's a very interesting though very challenging project. I know from browsing major, widely used software projects (at least operating systems) like the Linux kernel that there is an enormous amount of code to detect what version or quirks an external component (hardware, software, or firmware) has, and either adapt to it or fail with an incompatibility warning. The Linux kernel build system also uses a wide-ranging configuration input system that restricts or automatically makes future configuration choices based on what was entered or decided automatically before. Developers must maintain their subsystems' configuration scripts in addition to their code. Configuration responses are often implemented by conditional inclusion of code. Such systems evolved out of a long lineage beginning with build-step economizers (Unix make was probably the first general tool for this; in the Java world, this kind of tool is called Ant) and continuing with combinations of tools such as GNU autotools.
So I would think that surveying such technologies and the corresponding software engineering research (not just C/Linux and GNU!) may give some clues for your project, particularly what to do about users making inconsistent choices of things to combine.
However, after looking at the MMT website I now understand (perhaps) that the various formalizations and their implementations are recoded, in which case no interface to externally supplied systems is used.
Just watched the video. I'm very much in sympathy with the idea of doing all the peripheral stuff "once and for all"; it's been a frustration for me for a long time that every new type theory seems to have to be implemented from the ground up. However, what I really want when I'm experimenting with a new type theory is a proof assistant for it, so it's disappointing to me to hear you say that MMT is not a proof assistant. But you did mention that you have a general algorithm for type inference and solving implicit arguments. What in particular do you have in mind that's lacking when you say that MMT is not a proof assistant? If I have a new type theory that I want to play around with, and I implement it in MMT and try to use MMT as a proof assistant, what will I feel is missing as compared to using a "real" proof assistant? Is it methods for progressive construction of terms, like tactics in Coq or holes in Agda?
Hmm, actually, is Florian on this Zulip? I don't see him on the users list.
@Seth Chaiken and @Mike Shulman I do not think that Florian Rabe subscribes to this Zulip. I can send him email, but so can you; it might make more sense. His email is on his webpage https://kwarc.info/people/frabe/. His talk is part of the Topos Institute discussion on the formalization of mathematics, which so far has encompassed Larry Paulson's and Kevin Buzzard's talks. We plan to continue the series.
But trying to answer your question, @Mike Shulman: yes, tactics would be missing, but also the big libraries of basic facts to build from. As Florian says, unlike the Lean guys, he's not interested in building libraries, as he prefers to build agnostic infrastructure. Florian gave another cool talk last year in Andrej Bauer's "Every proof assistant" series: https://vimeo.com/421123419
Thanks, I watched that talk too and came away with the same questions. I don't think the missing libraries of basic facts are relevant for my use case, where I have a new type theory that I want to experiment with, so there's no chance anyone could have built a library of facts about it already in any case. So would you say that tactics/holes are all that's missing? (Of course, Agda does okay without tactics.)
Well, this is not my expertise, but I believe that apart from tactics/tacticals (or whatever plays this role), maybe one would still be missing unit-test-like stuff. (I'm saying that because this is what I wanted to create when trying to have a proof assistant for linear logic, and I then used Girard's translation on the usual intuitionistic propositional theorems.) But maybe this is so obvious that you didn't think it needed mentioning...
@Valeria de Paiva Thanks!
That's so far from obvious (to me) that I don't even know what you mean. (-:
Do you mean MMT wouldn't come with tests verifying that your representation of your new type theory is correct?
Also, if I email Florian, where can I point him to get a working invitation to join the Zulip?
Fresh invite links appear in this thread.
Well, say you want to eventually (only eventually) machine-learn the theorems of your shiny new type theory. Wouldn't you need these unit-test-like things? (Actually I only mentioned the positive ones, but you need both: for what you want your type theory to say and for what you actually want it NOT to say.)
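To make the unit-test idea above concrete, here is a minimal sketch in Scala; the names (Formula, Prover, LinearLogicTests) are hypothetical placeholders and not MMT's API. It records both positive tests (theorems the theory should prove, e.g. obtained via Girard's translation) and negative tests (things it must NOT prove, e.g. contraction in linear logic).

```scala
// Hypothetical formula syntax and prover interface, for illustration only.
sealed trait Formula
case class Atom(name: String) extends Formula
case class Lolli(a: Formula, b: Formula) extends Formula   // linear implication
case class Tensor(a: Formula, b: Formula) extends Formula  // multiplicative conjunction

trait Prover {
  /** Returns true if the goal is derivable in the object theory. */
  def proves(goal: Formula): Boolean
}

object LinearLogicTests {
  def run(p: Prover): Unit = {
    val A = Atom("A")

    // Positive test: identity A -o A should be provable.
    assert(p.proves(Lolli(A, A)), "identity should be provable")

    // Negative test: contraction A -o (A (x) A) must NOT be provable.
    assert(!p.proves(Lolli(A, Tensor(A, A))), "contraction must fail")

    println("all tests passed")
  }
}
```

A concrete Prover instance would come from whatever implementation of the type theory is being tested.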
Still, because of your question, Mike, I decided to watch W. Bowman's talk on Cur. He seems to want many more things. He says: "We evaluate this design [of a non-devious proof assistant] by building a proof assistant that features a small dependent type theory as the core language and implementing the following extensions in small user-defined libraries:
Is there something about MMT that would prevent you from writing unit tests for a type theory implemented therein?
Valeria de Paiva said:
I decided to watch W. Bowman's talk on Cur.
Which talk is that? Was it in "Every proof assistant"?
Mike Shulman said:
Is there something about MMT that would prevent you from writing unit tests for a type theory implemented therein?
No, there's nothing that prevents you from writing these things in MMT! But this is how I understood Florian: he wishes he had the time to make MMT a proof assistant, but since he doesn't, he'd rather work on the infrastructure than on the niceties of making it work as a proof assistant. YMMV
And yes, it's another talk in the "Every proof assistant" series: https://vimeo.com/432569820.
@Florian Rabe
Mike Shulman said:
it's disappointing to me to hear you say that MMT is not a proof assistant. But you did mention that you have a general algorithm for type inference and solving implicit arguments. What in particular do you have in mind that's lacking when you say that MMT is not a proof assistant? If I have a new type theory that I want to play around with, and I implement it in MMT and try to use MMT as a proof assistant, what will I feel is missing as compared to using a "real" proof assistant? Is it methods for progressive construction of terms, like tactics in Coq or holes in Agda?
I agree. I also want it to do more automated proving, and the only hold-up is the resources available to add it.
You can already do decent proofs in MMT by writing the proof terms manually. Agda-like holes and interactive term completion are available too.
But MMT still lacks automated proof search and user-defined tactics.
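To illustrate the distinction being discussed (this is Lean syntax, used here only because it shows both styles side by side; it is not MMT syntax): writing the proof term manually versus building it with tactics.

```lean
-- The proof term written by hand (the style already possible):
theorem compose_apply {A B C : Prop} (f : A → B) (g : B → C) (a : A) : C :=
  g (f a)

-- The same statement proved with tactics (the style MMT lacks):
theorem compose_apply' {A B C : Prop} (f : A → B) (g : B → C) (a : A) : C := by
  apply g
  exact f a
```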
So is the experience significantly worse in MMT than in Agda, then? Agda doesn't have tactics, and I've never used any automated proof search therein.
Mike Shulman said:
I don't think the missing libraries of basic facts are relevant for my use case, where I have a new type theory that I want to experiment with, so there's no chance anyone could have built a library of facts about it already in any case.
Indeed, so far MMT has mostly been used to
For deep libraries (i.e., large developments in a fixed type theory), more proof automation would help a lot.
We're working on that, but we currently don't have funding to prioritize it.
Mike Shulman said:
Is there something about MMT that would prevent you from writing unit tests for a type theory implemented therein?
I'm not sure what you mean by unit tests. But I suspect there's nothing preventing it.
If writing in MMT is not that different from writing in Agda without fancy features, then maybe you shouldn't sell it short by saying that it's not a proof assistant! Indeed, there's other experimental software that is even less fully featured than Agda that people don't hesitate to call proof assistants.
I have a couple of other questions that I was waiting for you to show up to ask, @Florian Rabe . I don't understand why you needed to back off an extra step from a logical framework and implement multiple different LFs. Can you give an example of a theory that you weren't able to implement in ordinary LF? In my experience a dependently typed LF is very flexible.
The primary sorts of theories that I've had trouble representing in an ordinary LF are modal and substructural ones. Is that what you had in mind? Some of your slides gave the impression that you needed a separate LF for almost every substantial object theory, and the theories of most existing proof assistants are not modal or substructural.
Separately, I'd also like to know whether you've implemented any modal or substructural theories in MMT?
Mike Shulman said:
I have a couple of other questions that I was waiting for you to show up to ask, Florian Rabe . I don't understand why you needed to back off an extra step from a logical framework and implement multiple different LFs. Can you give an example of a theory that you weren't able to implement in ordinary LF? In my experience a dependently typed LF is very flexible.
Generally speaking, there are certain features where, if the logical framework doesn't already support them, it gets very awkward to formalize them.
Binding is one such feature, and LF has it. So representing binders is elegant in LF.
Examples of such features that LF doesn't have are rewriting, soft typing, subtyping, undefinedness, and records.
You can encode these in LF, but it requires encoding artifacts. For example, one can use explicit injection functions to encode subtyping.
That usually only works in toy examples, and if you try to process an entire proof assistant library, the encoding artifacts get out of hand.
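As a rough illustration of the injection-function encoding just mentioned (the signature below is made up for this example, not taken from an actual MMT library): instead of a subtyping judgment $\mathsf{nat} <: \mathsf{int}$, one declares in LF

$$\begin{aligned}
&\mathsf{tp} : \mathsf{type} \qquad \mathsf{tm} : \mathsf{tp} \to \mathsf{type}\\
&\mathsf{nat} : \mathsf{tp} \qquad \mathsf{int} : \mathsf{tp}\\
&\mathsf{inj} : \mathsf{tm}\ \mathsf{nat} \to \mathsf{tm}\ \mathsf{int}\\
&\mathsf{plus} : \mathsf{tm}\ \mathsf{int} \to \mathsf{tm}\ \mathsf{int} \to \mathsf{tm}\ \mathsf{int}
\end{aligned}$$

so every natural number used at type $\mathsf{int}$ has to be wrapped explicitly, as in $\mathsf{plus}\ (\mathsf{inj}\ x)\ (\mathsf{inj}\ y)$. In a toy example this is harmless; in a whole library the wrappers, and the lemmas needed to move results across them, are exactly the kind of encoding artifact that gets out of hand.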
Here are some examples of general features beyond LF that people have added (not necessarily to build logical frameworks, but sometimes just to prototype type theories):
The Italian group around LLFP has also done extensive work on extensions of LF.
I've implemented LLFP in MMT as well.
This paper has some examples: https://kwarc.info/people/frabe/Research/MR_prototyping_19.pdf
Here are some examples of concrete logics where an encoding in plain LF was not feasible:
You can find more on this in https://kwarc.info/people/frabe/Research/KR_oafexp_20.pdf (JAR 2021).
Mike Shulman said:
The primary sorts of theories that I've had trouble representing in an ordinary LF are modal and substructural ones. Is that what you had in mind? Some of your slides gave the impression that you needed a separate LF for almost every substantial object theory, and the theories of most existing proof assistants are not modal or substructural.
Those are also difficult. We've done experiments, and it's not always obvious which extra features come in handy:
Mike Shulman said:
Separately, I'd also like to know whether you've implemented any modal or substructural theories in MMT?
Modal logics via Hilbert calculi and Kripke models work nicely in plain LF.
For more natural deduction-style calculi, see the experiment above.
Substructural logics have not been a priority. I did the experiment above as a one-off, and it works quite well.
But there are other routes that could be taken, including special rules for checking occurrences of variables.
I think LLFP can also be applied to substructural logics and those solutions would carry over to the implementation of LLFP in MMT mentioned above, but I haven't looked into that.
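For readers following along, here is a sketch of the kind of Hilbert-style encoding in plain LF that Florian refers to above, for the modal logic K (a standard judgments-as-types presentation; the particular identifier names are mine):

$$\begin{aligned}
&\mathsf{prop} : \mathsf{type} \qquad \mathsf{pf} : \mathsf{prop} \to \mathsf{type}\\
&\supset\ : \mathsf{prop} \to \mathsf{prop} \to \mathsf{prop} \qquad \Box : \mathsf{prop} \to \mathsf{prop}\\
&\mathsf{mp} : \Pi A, B{:}\mathsf{prop}.\ \mathsf{pf}(A \supset B) \to \mathsf{pf}(A) \to \mathsf{pf}(B)\\
&\mathsf{K} : \Pi A, B{:}\mathsf{prop}.\ \mathsf{pf}\big(\Box(A \supset B) \supset (\Box A \supset \Box B)\big)\\
&\mathsf{nec} : \Pi A{:}\mathsf{prop}.\ \mathsf{pf}(A) \to \mathsf{pf}(\Box A)
\end{aligned}$$

For closed theorems this encoding is adequate, while natural-deduction-style modal calculi need finer control of hypotheses, which is what makes them harder in plain LF.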
@Florian Rabe Thanks! I'll have a look at those papers. If many or most object theories require their own customized LF, what is the advantage of including a framework level at all? Couldn't you just use the "meta-meta-language" in which you describe the logical frameworks to describe the object theories directly?
Mike Shulman said:
Florian Rabe If many or most object theories require their own customized LF, what is the advantage of including a framework level at all? Couldn't you just use the "meta-meta-language" in which you describe the logical frameworks to describe the object theories directly?
Good question. There is no alternative to logical frameworks (of some sort) for meta-reasoning about the languages, and they are more trustworthy for proof checking.
If one only wants rapid prototyping, one can indeed skip the framework and do it directly in MMT.
In practice, we often use a mixed approach in MMT: a logical framework as an intermediate level where possible, direct MMT-level rules where necessary.
For example, for Coq, we declared the syntax in LF except for induction, and skipped LF for the induction-related expressions.
@Florian Rabe, is the new formal underpinning of Scala 3 (e.g. here) going to help MMT?
Hmm. Is it that because the logical framework is simpler than the object-theory, it's easier to establish metatheoretically that its raw syntax is meaningful, and then you can work within the meaningful syntax of the framework to establish the meaning of the object theory?
Henry Story said:
Florian Rabe, is the new formal underpinning of Scala 3 (e.g. here) going to help MMT?
1) Regarding the question of representing a rule in a logical framework vs. implementing it directly in MMT: I suspect if you can't represent a rule R elegantly in a logical framework, writing the specification for a Scala implementation of R will also be difficult. So formal Scala won't help as much.
2) Regarding the meta-theory of the (a) general algorithms implemented in MMT as well as (b) the rules for LF, which must be implemented in MMT to get off the ground: Formal Scala might allow verifying the behavior of MMT's type reconstruction algorithm.
An alternative route for (2b) is to specify those relatively simple rules in LF itself and compile them to Scala code in order to bootstrap MMT's type checking rules.
Mike Shulman said:
Hmm. Is it that because the logical framework is simpler than the object-theory, it's easier to establish metatheoretically that its raw syntax is meaningful, and then you can work within the meaningful syntax of the framework to establish the meaning of the object theory?
I'm not sure what you mean.
I'm trying to understand your statement that
There is no alternative to logical frameworks (of some sort) for meta-reasoning about the languages, and they are more trustworthy for proof checking.
The "alternative" under consideration would be to implement an object language directly in MMT. Why is that not an "alternative" for meta-reasoning about the language, and why is it less trustworthy for proof checking? My guess was that since the core constructs of MMT are less structured, it's harder to formally prove things about a language implemented directly in terms of them; so you want to only do that for as simple a language as possible, namely the logical framework, and then the step from the logical framework to the more complicated object theory is easier because the logical framework is more structured (e.g. intrinsically well-typed).
Mike Shulman said:
I'm trying to understand your statement that
There is no alternative to logical frameworks (of some sort) for meta-reasoning about the languages, and they are more trustworthy for proof checking.
The "alternative" under consideration would be to implement an object language directly in MMT. Why is that not an "alternative" for meta-reasoning about the language, and why is it less trustworthy for proof checking? My guess was that since the core constructs of MMT are less structured, it's harder to formally prove things about a language implemented directly in terms of them; so you want to only do that for as simple a language as possible, namely the logical framework, and then the step from the logical framework to the more complicated object theory is easier because the logical framework is more structured (e.g. intrinsically well-typed).
Yes. Implementing a language in MMT requires writing little snippets of Scala code (one for each typing rule).
Even though a small fragment [*] of Scala is sufficient for that, any meta-reasoning would have to reason about Scala code.
Notably the fragment is a bit bigger than one might expect: In practice, the code should include at least commands for logging and error reporting.
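For a rough sense of what such a snippet might look like, here is a purely illustrative Scala sketch; the Term and InferenceRule types below are made up for this example and are not MMT's actual classes, which also handle contexts, implicit arguments, and more.

```scala
// Illustrative term syntax for a simply typed object language.
sealed trait Term
case class Const(name: String) extends Term
case class App(fun: Term, arg: Term) extends Term
case class Arrow(dom: Term, cod: Term) extends Term

/** One typing rule, implemented as a small piece of Scala code. */
trait InferenceRule {
  /** Returns Some(inferredType) if the rule applies to tm, None otherwise. */
  def infer(tm: Term, inferType: Term => Option[Term],
            log: String => Unit): Option[Term]
}

/** Application rule: from f : A -> B and a : A, infer (f a) : B. */
object ApplicationRule extends InferenceRule {
  def infer(tm: Term, inferType: Term => Option[Term],
            log: String => Unit): Option[Term] = tm match {
    case App(f, a) =>
      inferType(f) match {
        case Some(Arrow(dom, cod)) if inferType(a).contains(dom) =>
          Some(cod)
        case other =>
          log(s"application rule does not apply, function type was $other")
          None
      }
    case _ => None
  }
}
```

The point of the conversation above is that any meta-reasoning about the object language would then have to reason about code like this, including its logging plumbing.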
Thanks @Florian Rabe. Are your logical frameworks sufficiently expressive to encode the normalization algorithm of object-languages without needing to descend to Scala code again? How does that work? Most logical frameworks I've seen can encode the equality judgment of object-languages, or perhaps some small- or big-step operational semantics relation, but I never thought of being able to extract an actual implementation of a normalization algorithm from such an encoding without additional coding.
Mike Shulman said:
Thanks Florian Rabe. Are your logical frameworks sufficiently expressive to encode the normalization algorithm of object-languages without needing to descend to Scala code again? How does that work? Most logical frameworks I've seen can encode the equality judgment of object-languages, or perhaps some small- or big-step operational semantics relation, but I never thought of being able to extract an actual implementation of a normalization algorithm from such an encoding without additional coding.
This is not so much an MMT question as a logical framework question. If you have such a logical framework, it should be straightforward to prototype it in MMT.
If you want to extract externally runnable code, it depends on the similarities/differences between
For example:
In any case, the extractor could be implemented as an MMT exporter.
I'm not talking about extraction to runnable code (sorry that I confused things by using the word "extract"), I'm talking about normalization in the proof assistant in the process of typechecking. It's an MMT question because I'm saying that I don't know how to do such things with LFs, while you said that MMT encodes all object-languages with LFs.
Forgot to notify @Florian Rabe on that reply.
Mike Shulman said:
I'm not talking about extraction to runnable code (sorry that I confused things by using the word "extract"), I'm talking about normalization in the proof assistant in the process of typechecking. It's an MMT question because I'm saying that I don't know how to do such things with LFs, while you said that MMT encodes all object-languages with LFs.
I now think you mean executing the algorithms specified in the logical framework from within the logical framework.
Most logical frameworks allow specifying the algorithm but a meta-level/extra-linguistic step is needed to trigger it.
For example, Twelf allows the user to run an algorithm specified as a logic program in LF, but the command to run it is sent interactively by the user.
This is mostly a logical-framework design issue: most frameworks need the distinction between object and meta-level to distinguish the function space of the framework, which is used to represent binding via HOAS, from the function space in which syntax-driven algorithms like normalization live.
MMT has no problem representing any solution to that issue. But it doesn't automatically solve that issue either.
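To spell out the two function spaces Florian distinguishes above, here is the textbook HOAS encoding of the untyped lambda calculus with a big-step evaluation judgment (standard Twelf-style material, not a quote from the talk):

$$\begin{aligned}
&\mathsf{tm} : \mathsf{type}\\
&\mathsf{app} : \mathsf{tm} \to \mathsf{tm} \to \mathsf{tm} \qquad \mathsf{lam} : (\mathsf{tm} \to \mathsf{tm}) \to \mathsf{tm}\\
&\mathsf{eval} : \mathsf{tm} \to \mathsf{tm} \to \mathsf{type}\\
&\mathsf{ev\_lam} : \Pi F.\ \mathsf{eval}\ (\mathsf{lam}\ F)\ (\mathsf{lam}\ F)\\
&\mathsf{ev\_app} : \Pi E_1, E_2, F, V_2, V.\ \mathsf{eval}\ E_1\ (\mathsf{lam}\ F) \to \mathsf{eval}\ E_2\ V_2 \to \mathsf{eval}\ (F\ V_2)\ V \to \mathsf{eval}\ (\mathsf{app}\ E_1\ E_2)\ V
\end{aligned}$$

The framework's function space $\mathsf{tm} \to \mathsf{tm}$ in the argument of $\mathsf{lam}$ only represents binding, so substitution in $\mathsf{ev\_app}$ is just application $F\ V_2$. The evaluation algorithm itself lives in the judgment $\mathsf{eval}$, which a system like Twelf can execute as a logic program, but only when the user explicitly asks it to, which is the extra-linguistic step mentioned above.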