You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
I have several related (and quite loaded) questions for the community:
- What software would be useful in your research that you don't have the ability/resources to create?
- How would your research be different if you could collaborate with a programmer?
- Would you be interested in some forum where category theorists and interested programmers can virtually meet and discuss?
- Would you be interested in "category theory hackathons" where mixed teams of programmers and mathematicians build mathematical software during a virtual or in-person event?
(I think this is mostly of interest to applied category theorists, but the existence of homotopy.io made me think that some ordinary category theorists might also be interested)
If you're a programmer I'd also like to hear your thoughts from the opposite perspective!
Jules Hedges said:
If you're a programmer I'd also like to hear your thoughts from the opposite perspective!
Interesting question. Since I joined this community I've seen a few instances of new members with software developer backgrounds; count myself among them, actually. I'm still a beginner in CT, but I can see the potential of a space where programmers and mathematicians are able to meet
For what it's worth, Python 3.9 was released today, and it moves some type-hinting support into the built-in collection types:
my_list: list[int] = [3, 4, 5]
A module for topological sorting can now be imported from the standard library too, among other new features.
Not extremely categorical, but at least things get conquered by types.
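For anyone who wants to try the two features just mentioned, a minimal sketch (requires Python 3.9+; the dependency data is made up):

    from graphlib import TopologicalSorter

    # PEP 585: built-in collection types can be used directly in type hints,
    # no need to import List/Dict from the typing module.
    my_list: list[int] = [3, 4, 5]
    deps: dict[str, set[str]] = {"g": {"f"}, "f": {"id_A"}}  # "g depends on f"

    # graphlib is new in 3.9: topological sorting of a dependency graph.
    ts = TopologicalSorter(deps)
    print(list(ts.static_order()))  # ['id_A', 'f', 'g']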
Here is some Python code that I use for calculations with very basic presheaf toposes, in the joint work with Morgan Rogers. It works, but as you can see the code is not very clean and it probably contains some bugs.
It would be great if a more experienced programmer could give some ideas on how to improve the code, to the point that it can reliably be used by other people as well.
Here is the code: pdf file, ipynb file (jupyter notebook).
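(Not Jens's actual code, just an illustration of what "calculations with very basic presheaf toposes" can start from: presheaves on a one-object category are sets with a monoid action, so a first building block is a finite monoid given by its multiplication table, together with a check that a candidate action really is one. The monoid and the action below are made-up toy data.)

    from itertools import product

    # A two-element monoid {1, e} with e*e = e, as a multiplication table.
    M = ["1", "e"]
    mul = {("1", "1"): "1", ("1", "e"): "e", ("e", "1"): "e", ("e", "e"): "e"}

    # A candidate right M-set: e collapses everything onto the point a.
    X = ["a", "b"]
    act = {("a", "1"): "a", ("b", "1"): "b", ("a", "e"): "a", ("b", "e"): "a"}

    def is_right_action(X, M, mul, act):
        unit = all(act[(x, "1")] == x for x in X)                    # x * 1 == x
        assoc = all(act[(act[(x, m)], n)] == act[(x, mul[(m, n)])]   # (x*m)*n == x*(m*n)
                    for x, m, n in product(X, M, M))
        return unit and assoc

    print(is_right_action(X, M, mul, act))  # True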
Jens Hemelaer said:
Here is the code: pdf file, ipynb file (jupyter notebook).
Wow!
I could give several examples of my own but I'll start with the top one. I would like a graphical editor in which you can draw a string diagram over a monoidal signature, push a button, and it will output a JSON file (or whatever) describing the denoted morphism in the term language of $\circ$ and $\otimes$
Jules Hedges said:
I could give several examples of my own but I'll start with the top one. I would like a graphical editor in which you can draw a string diagram over a monoidal signature, push a button, and it will output a JSON file (or whatever) describing the denoted morphism in the term language of $\circ$ and $\otimes$
I could give this a shot
(Statebox already have a currently closed-source editor along these lines. It's almost but not quite robust enough to be useful)
Doesn't homotopy.io do something quite similar already?
If I understand correctly, the file format of homotopy.io is very different
I'm not sure about the export format, but changing that sounds like by far the easiest part of such a project
I doubt it. homotopy.io is based on a new definition of higher categories based on foliations of diagrams, and the file format is directly based on that definition. Translating back and forth into the obvious format for the special case of monoidal categories looks nontrivial to me
I don't see how it could be nontrivial for monoidal 1-categories specifically. For higher-monoidal higher categories, sure.
From eyeballing it, I imagine the translation would use topological sort in one direction, and some min-cut sort of stuff going the other way. In any case, I continue to believe it's beyond my ability. But this is exactly the sort of thing I would like somebody to do. (But #general on here is not the best place to have this conversation)
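To make the topological-sort direction a bit more concrete, here is a rough sketch of that direction only, over an invented input format (each box mapped to the boxes feeding into it); it is nothing like the homotopy.io format, and it ignores identities and wire symmetries, which is where the real work would be:

    from graphlib import TopologicalSorter

    # Hypothetical diagram: f, g in parallel, then h consuming both outputs.
    inputs = {"f": set(), "g": set(), "h": {"f", "g"}}

    ts = TopologicalSorter(inputs)
    ts.prepare()
    layers = []
    while ts.is_active():
        ready = sorted(ts.get_ready())   # boxes whose inputs are all available
        layers.append(ready)
        ts.done(*ready)

    # Tensor within a layer, compose between layers (diagrammatic order).
    term = " ; ".join("(" + " ⊗ ".join(layer) + ")" for layer in layers)
    print(term)  # (f ⊗ g) ; (h)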
This thread seems a better fit for #practice: software.
Yes, I wanted to ask the question itself on general, but any conversations that spawn from it belong elsewhere
A suggestion: a Firefox extension that changes the names of mathematical variables on web pages. Suppose you want to look up the formula for the Beck–Chevalley condition on the nLab, but your commutative diagram names its maps differently from the way the nLab does. Then the extension would change the variable names on the nLab to yours (without editing the page for other people of course).
I don't know how long it takes to make a Firefox extension, so I'm not sure if it is worth the effort.
Would you be interested in "category theory hackathons" where mixed teams of programmers and mathematicians build mathematical software during a virtual or in-person event?
that sounds sick
although i resent the implicit drawing of a line between 'programmers' and 'mathematicians' :upside_down:
Jules Hedges said:
I could give several examples of my own but I'll start with the top one. I would like a graphical editor in which you can draw a string diagram over a monoidal signature, push a button, and it will output a JSON file (or whatever) describing the denoted morphism in the term language of $\circ$ and $\otimes$
Catlab does half of this. Specifically, given a wiring diagram, it can convert it to a symbolic expression in terms of $\circ$ and $\otimes$. This could be hooked up to a graphical editor for wiring diagrams or even to a hacked-up editor for graphs. As proof of concept, I once wrote an importer for yFiles, but it would be better to have a dedicated editor for wiring diagrams.
Jens Hemelaer said:
Here is some Python code that I use for calculations with very basic presheaf toposes, in the joint work with Morgan Rogers. It works, but as you can see the code is not very clean and it probably contains some bugs.
It would be great if a more experienced programmer could give some ideas on how to improve the code, to the point that it can reliably be used by other people as well.
Jens Hemelaer said:
Here is the code: pdf file, ipynb file (jupyter notebook).
Hi Jens,
I've been working with presheaf toposes of the form Set^D, where D is a finite poset of width at most 2, and my library for doing calculations with them is able to display all the objects using both ascii art and LaTeX... to take a look at the LaTeX output, go here. I am interested in testing your code. Let's discuss this in private!
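(Not Eduardo's library, but as a small illustration of the kind of object being displayed: for covariant diagrams Set^D on a finite poset D, the subterminal objects correspond to the up-closed subsets of D (down-closed ones under the contravariant presheaf convention), and enumerating those takes a few lines. The poset below is a made-up example.)

    from itertools import combinations

    # A toy poset on {a, b, c, d}: a <= c <= d and b <= c (reflexive pairs included).
    elements = ["a", "b", "c", "d"]
    leq = {(x, x) for x in elements} | {("a", "c"), ("b", "c"), ("c", "d"), ("a", "d"), ("b", "d")}

    def is_up_closed(S):
        # if x is in S and x <= y, then y must be in S
        return all(y in S for x in S for y in elements if (x, y) in leq)

    up_sets = [set(S) for r in range(len(elements) + 1)
               for S in combinations(elements, r) if is_up_closed(set(S))]
    print(len(up_sets), up_sets)  # 6 of them for this example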
Jules Hedges said:
I have several related (and quite loaded) questions for the community:
- What software would be useful in your research that you don't have the ability/resources to create?
- How would your research be different if you could collaborate with a programmer?
- Would you be interested in some forum where category theorists and interested programmers can virtually meet and discuss?
- Would you be interested in "category theory hackathons" where mixed teams of programmers and mathematicians build mathematical software during a virtual or in-person event?
Short answer: yes! Are you going to start with a topic here in the ACT Zulip chat?
Eduardo Ochs said:
I've been working with presheaf toposes of the form Set^D, where D is a finite poset of width at most 2, and my library for doing calculations with them is able to display all the objects using both ascii art and LaTeX... to take a look at the LaTeX output, go here. I am interested in testing your code. Let's discuss this in private!
Displaying all the objects of Set^D would be quite a feat! From the linked page I gather that you mean the subterminal objects/"truth values". Very cool stuff :smiley:
Eduardo Ochs said:
Hi Jens,
I've been working with presheaf toposes of the form Set^D, where D is a finite poset of width at most 2, and my library for doing calculations with them is able to display all the objects using both ascii art and LaTeX... to take a look at the LaTeX output, go here. I am interested in testing your code. Let's discuss this in private!
Great news!
Thanks, @[Mod] Morgan Rogers!!! =) I usually draw non-subterminal objects of Set^D like this, but at this moment I draw them "by hand" using [dednat6](http://angg.twu.net/dednat6.html)...
Eduardo Ochs said:
Short answer: yes! Are you going to start with a topic here in the ACT Zulip chat?
Probably. We have the #practice: software channel already for this. In the end it might be better to start on a different platform, depending on whether Real Hackers find this zulip too intimidating/distracting/annoying/whatever
Probably the best thing would be to start organizing such an event, see how much traction it gets, and go from there.
I'm up for giving a hand organization-wise, if needed. :smile:
Yes. I agree. It's just I hate online events, so I'm not very enthusiastic right now
Especially for hackathons, having the event in person is really important imho
Especially in the first iterations of it. Once we develop a workflow it will be easier to port everything to an online format
This means that probably nothing can happen until at least next summer
On the other hand, an online event can be run with a budget of 0
True. The possible drawback is that it sucks so much that we lose motivation to organize the in person one xD
As a front-end web developer and a beginner student of category theory, I would love to find people who want to work on a web app that does interactive diagrammatic visualizations of categorical concepts.
I have many ideas and aborted projects with the main goal being to compile web documents which are prettier and more interactive than the usual pdfs LaTeX yields without adding too much complexity on the .tex side. Here is an idea that might interest you.
When writing down proofs using (commutative) diagrams, I find that losing the ability to point at certain parts of a diagram makes everything harder to understand. Thus, I would like to 1) display commutative diagrams on the page and 2) highlight the relevant parts of the diagram when I hover over some equations in the text of the proof.
Avi Craimer said:
As a front-end web developer and a beginner student of category theory, I would love to find people who want to work on a web app that does interactive diagrammatic visualizations of categorical concepts.
I think @Spencer Breiner is interested in this. Are you familiar with Quantomatic and homotopy.io? I think a number of researchers would love help from a front-end web developer.
There are probably many other systems out there... but still room for more, doing different things!
Hi @Avi Craimer. Thanks for the ping @John Baez.
I am quite interested in user interface and interaction with CT, as I think this is one of the biggest barriers to entry. For one, better display (and authoring) tools (e.g., along the lines noted by @Ralph Sarkis ) would make it easier to learn in the first place. Even more important, even moderately sized examples are usually a pain to keep track of using scripts/code and text input. User interfaces can build in defaults, cues and guided interactions to make exploring model-design spaces much easier.
I've got lots to say on this topic, but I've got to run for the moment. First, though:
One thing I think would be nice: a framework to display mathematical text and diagrams side-by-side, with scrollable text and diagrams that build as the text progresses, like working on the board while you give a lecture.
There is so much room for improvement in the way that we interact with mathematical documents.
Spencer Breiner said:
User interfaces can build in defaults, cues and guided interactions to make exploring model-design spaces much easier.
One thing I think would be nice: a framework to display mathematical text and diagrams side-by-side, with scrollable text and diagrams that build as the text progresses, like working on the board while you give a lecture.
I like that idea a lot, having a way to coordinate text and diagrams is important and it's a non-trivial problem in terms of layout and design (especially to support mobile screen sizes).
I've also had the idea of visualizing different diagrammatic views on the same ideas. I often find that following arguments in category theory requires juggling a lot of different contexts in your head. e.g., switching between seeing functors as mappings between categories, or as morphisms in cat, or as objects in a functor category. What would be really cool would be to be able to visualize all three of these interpretations simultaneously, and have interactive graphical highlighting that crosses the different views. So for example, if you mouse over an arrow in a diagram in cat, it highlights the corresponding dot in the functor category.
I think this sort of thing would be incredibly helpful for pedagogy, and possibly also for research (although I can't really speak to that).
As a first step towards all these ideas I would really love to see progress on migrating out of LaTeX and PDF. But I am not really sure what format should be used instead - as we saw in the "finding a paper on premonoidal categories" topic, archival is important.
Antonin Delpeuch said:
As a first step towards all these ideas I would really love to see progress on migrating out of LaTeX and PDF. But I am not really sure what format should be used instead - as we saw in the "finding a paper on premonoidal categories" topic, archival is important.
I'm not sure what the best solution is for published documents, but for the web I think SVG would definitely be the way to go. They are very easy to manipulate and allow full graphical control as well as interactivity. SVG (with a library called D3) is the standard used for infographics by most journalism organizations.
Avi Craimer said:
I often find that following arguments in category theory requires juggling a lot of different contexts in your head. e.g., switching between seeing functors as mappings between categories, or as morphisms in cat, or as objects in a functor category.
This is definitely one of the trickiest parts for newcomers, and the details are complex enough to benefit from management even for those who are comfortable switching contexts.
I think this sort of thing would be incredibly helpful for pedagogy, and possibly also for research (although I can't really speak to that).
Definitely for research in applications, where specifying and managing all the details is a big problem. For example, a user interface on top of the CSets or WiringDiagrams APIs in the Catlab library would make it easier to set up new models inside these frameworks.
Antonin Delpeuch said:
As a first step towards all these ideas I would really love to see progress on migrating out of LaTeX and PDF. But I am not really sure what format should be used instead - as we saw in the "finding a paper on premonoidal categories" topic, archival is important.
100% agree. We need to make it much easier for mathematicians, scientists and engineers to create interactive displays and models. Any such project should also aim to include hooks for semantic analysis, like attaching mathematical types (Group, Ring, ...) to notation variables to ease a path towards formal analysis and verification.
In my head, I call it SemanTeX
The New York Times and some other journalistic websites have these fancy things where as you scroll down, images appear and information is revealed. It's a bit hard to explain, but here is an example: https://www.nytimes.com/interactive/2015/10/27/world/greenland-is-melting-away.html
I imagine it could be nice to do something like this, but then the image is a diagram where different parts get highlighted as you scroll down
Ralph Sarkis said:
I have many ideas and aborted projects with the main goal being to compile web documents which are prettier and more interactive than the usual pdfs LaTeX yields without adding too much complexity on the .tex side. Here is an idea that might interest you.
When writing down proofs using (commutative) diagrams, I find that losing the ability to point at certain parts of a diagram makes everything harder to understand. Thus, I would like to 1) display commutative diagrams on the page and 2) highlight the relevant parts of the diagram when I hover over some equations in the text of the proof.
I played a bit with highlighting and coloring nodes and arrows of diagrams years ago... but I did that using dednat6, that people find too outlandish...
There are some existing JavaScript libraries for doing 2D geometry. Like https://github.com/alexbol99/flatten-js
D3 can run force simulations to layout network diagrams dynamically. I've played around with it a bit and it's not too hard to set up.
Here is an example: https://www.d3-graph-gallery.com/network.html
With something like this you could build a graphical interface to let people build network diagrams for CT. If we build in some basic semantic awareness, it could connect diagram elements across different views.
@Avi Craimer For figures, sure (our new renderer https://wetneb.github.io/sheetshow/ outputs SVG), but I would not write an entire paper in an SVG image. We need a document processor to go with it :)
Avi Craimer said:
There are some existing JavaScript libraries for doing 2D geometry. Like https://github.com/alexbol99/flatten-js
D3 can run force simulations to layout network diagrams dynamically. I've played around with it a bit and it's not too hard to set up.
Here is an example: https://www.d3-graph-gallery.com/network.html With something like this you could build a graphical interface to let people build network diagrams for CT. If we build in some basic semantic awareness, it could connect diagram elements across different views.
Our library pyzx uses d3 in combination with Jupyter notebooks to visualise ZX-diagrams. It works quite well I must say.
We've also used a d3 force-graph to map out all the people working on the ZX-calculus for those interested: http://zxcalculus.com/map.html
Antonin Delpeuch said:
Avi Craimer For figures, sure (our new renderer https://wetneb.github.io/sheetshow/ outputs SVG), but I would not write an entire paper in an SVG image. We need a document processor to go with it :)
I think we are talking about (at least) two different potential projects here. As you highlight, one need is a better way to make static documents such as PDFs for academic publication. The second project would be an interactive web app that could be used as a teaching and/or research tool.
The sheetshow example you posted is awesome!
There might be some overlap between the projects. As you say, maybe figures can be generated as SVGs and then embedded in PDFs while using other means for general document layout.
Ok, so I think there's definitely enough interest to justify some kind of organising here. That's something I'm definitely willing to do in theory, although with the long-awaited sequel Lockdown 2 having just started, my heart's not exactly in it at the moment. I think @Fabrizio Genovese is also willing to be involved in some organising, at least we talked about it. Slightly longer term I'd like to try to get some money for holding hackathons or other kinds of software events, although since nobody can travel at the moment there's probably no point doing it right now
Let's do it!
In the meantime, before any organisation happens, I'd encourage talking about this stuff to happen over on #practice: software
You think right, I'm totally onboard with this.
For sure there are already 2-3 frameworks of categorical software that we may consider as a starting point to build on in a hypothetical hackathon. I'm mainly thinking about Catlab, idris-ct and agda-categories. We could organize some hackathons where the goal is building on top of these platforms. Orthogonally, we could look at categorical software which is not necessarily built on top of category theory libraries, e.g. hacking on diagram editors, the open games software and so on
For instance - and I'm just wild guessing since this is not my field of expertise - there's been quite a lot of work on probability theory from a categorical perspective. Probability is nice, among other things, because it's quite computation-friendly, at least in its basic form. A hackathon around the idea of building software based on categorical probability theory would be dope as well. :smile:
Also Catlab.jl! @Evan Patterson did an awesome job implementing a SIR model using structured cospans of Petri nets
I'd be onboard to participate in such a hackathon too! Of course, in the bright post-apocalyptic future
(Plus MSP feels like a good place for this kind of stuff :laughing: )
Thanks for saying so, although @Micah Halter and @James Fairbanks did most of the modeling work; I mainly worked on the underlying infrastructure for structured cospans.
Organizing hackathons for CT software sounds great. Having a solid, open source graphical editor for directed and undirected wiring diagrams would benefit everyone who works on CT tools, no matter what their preferred language/framework.
Started this topic in the software stream:
https://categorytheory.zulipchat.com/#narrow/stream/229125-practice.3A-software/topic/category.20theory.20software
I'm curious, has anyone used category theory as inspiration or basis for broader software architecture design and decisions? For example, perhaps having to write object-oriented code, but using patterns or concepts from category theory to inspire the design?
The place I'd suggest to look is the overlap of strongly typed functional programming (where there are many design patterns that are literally category theory, like mapping and folding and so forth) with architecture
Kale Evans said:
I'm curious, has anyone used category theory as inspiration or basis for broader software architecture design and decisions? For example, perhaps having to write object-oriented code, but using patterns or concepts from category theory to inspire the design?
We at Statebox do. This is also the way I personally teach category theory. Applying category theory "strictly" is often not possible for many devs, either because it is too time-consuming or because the needed theory is not there yet. Still, I always encourage devs to embrace "categorical thinking", that is, recognizing, transforming and treating patterns in a given problem like we do in category theory.
This seems an obvious thing to do for those of us who work with cats all day long, but it is absolutely not obvious for people coming from other fields. Up to now I've received a lot of good feedback about this; many people say that starting to think about problems "categorically" improved their workflow by a lot.
Do you have any resources on how to think "categorically"? I'm exactly in this situation of being a dev, I have a strong love for functional programming, and I'm reading through Seven Sketches in Compositionality. But I still feel I don't have the right jumping-off points to take a problem and think about it categorically
'Conceptual Mathematics' by Lawvere and Schanuel is an incredible book I think. Extremely introductory but written by two masters of the field.
In my experience it's hard for beginners to read and understand the whole thing - Conceptual Mathematics, I mean. But if you can, it must mean you're starting to think categorically.
Yeah it's not a 'read on the beach' book, but it's worth the effort. I can't really think of any other books which are easier though.
Some of it you can read on the beach.
Depends how much you know going in - I meant for a total beginner
Well it's dark and cold in Dublin. I'll read it in the comfort of my home :sweat_smile:
Ok, so it sounds like there is this notion of thinking categorically lending itself to architecturally sound software. I'm very interested in this. I'm also reading Dr. Fong and Dr. Spivak's book, "An Invitation to..", and am hoping that upon finishing a decent portion of it, things will come together for me.
@Fabrizio Genovese any examples of categorical thinking applied to software outside of a functional programming language context? I'm looking to apply categorical thinking to a C++ and Obj-C/Swift environment, where the data types will be passed and shared between the two languages
@Kale Evans something that's interesting in the space of category theory and compilers is "compiling to categories". I believe the basic idea is that your programs compile to cartesian closed categories that can then be compiled to C code for example. Conal Elliott started the work here but a friend of mine at Kitty Hawk is working on a more fleshed-out version. I believe they'll open source it some time next year.
Your use case reminded me of it because it sounds like a shared intermediate representation :grinning_face_with_smiling_eyes:
@Fintan Halpenny wow that's super cool. Can't wait for it to be open sourced, I'd love to contribute to and learn from that project. I've thought about something like that recently. I worked on Simulink in the past, which is a graphical block-based editor, meant to represent a set of differential equations essentially, that ultimately compiles into C/C++ code, with optimizations for embedded processors.
However, I'm sure that if the system you are looking to model doesn't rely on numerical analysis, a block-diagram approach to building categories would work well. Now I know that Statebox enables use of a graphical editor to build categories. Does it also compile into target languages like C? @Fabrizio Genovese
Also, is there any utility in Statebox support on mobile?
Hey everyone, sorry for having disappeared. I'm literally flooded with things to do at an unprecedented level. I hope I'll find the time to reply in a bit. There's no Statebox for mobile. There are backends but not C yet.
Kale Evans said:
Fabrizio Genovese any examples of categorical thinking applied to software outside of a functional programming language context? I'm looking to apply categorical thinking to a C++ and Obj-C/Swift environment, where the data types will be passed and shared between the two languages
Finally I have a bit of time. So, one of the most insightful examples of "categorical thinking" for me is distinguishing between modularity and compositionality. I wrote about it here. Many devs think modularly, meaning that they design their systems as "modules" that interface with each other. Still, not many devs think about which properties of the modules are preserved when they are combined. For instance, if the communication interface between two software components exposes too much or too little of the components' features, you could get emergent behavior, meaning that the components do not interact as you expect. A very infamous example of this is WordPress plugins: you load a bunch of them and it's almost guaranteed that your blog will explode. This is mainly due to miscommunication problems, or to different modules accessing the same global variables at the same time, etc.
As opposed to this, compositional thinking is about designing how modules are supposed to communicate by focusing on which properties should be preserved by the composition. You may call it "clever API design" if you prefer, but the idea is writing a suitable communication protocol between your components so that emergent behavior is minimized as much as possible. Compositional thinking is quintessentially category-theoretic, while modular thinking is not. :smile:
You do not need to know category theory to think compositionally. Still, it helps a great deal, especially when the communication follows a non-obvious pattern. One of the most powerful features that categorical thinking gives you is the capacity to think in terms of universal properties. You may think about universal properties as a way to canonically build some information from the ingredients you have, so that some constraints are guaranteed. Some universal properties are well-known: for instance, if you have a mode of communication that behaves like a logical or on your input (if your input is A then do this, otherwise do that), it may make sense to phrase this using coproducts. Again, knowing what all those things formally mean gives you quite a bunch of tools in your skillset. They are not just useful to prove theorems, you can also apply them "informally" in your working practice. :smile:
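To make the coproduct remark concrete in code: the universal property says that to map out of A + B it is enough to give one map out of A and one out of B, and the combined map [f, g] is then determined. A minimal Python sketch (the types and handlers are invented for illustration):

    from dataclasses import dataclass
    from typing import Callable, Generic, TypeVar, Union

    A = TypeVar("A")
    B = TypeVar("B")
    C = TypeVar("C")

    @dataclass
    class Left(Generic[A]):
        value: A

    @dataclass
    class Right(Generic[B]):
        value: B

    Either = Union[Left[A], Right[B]]  # the coproduct A + B

    def copair(f: Callable[[A], C], g: Callable[[B], C]) -> Callable[[Either], C]:
        # The mediating map [f, g] : A + B -> C given by the universal property.
        def h(x: Either) -> C:
            return f(x.value) if isinstance(x, Left) else g(x.value)
        return h

    # "if the input is A then do this, otherwise do that"
    handle = copair(lambda n: f"number {n}", lambda s: f"string {s!r}")
    print(handle(Left(3)), handle(Right("hi")))  # number 3 string 'hi'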
That is fantastically put! I'm gonna have to mull over this :grinning_face_with_smiling_eyes:
A beautiful example of this, by the way, is optics. Optics in functional programming are bidirectional data accessors, that is, ways to interact with data in both directions (e.g. read and write). If I want to read something that is inside something else, or write something that is inside something else, I need to compose those things. Obviously I don't want the composition to explode in my face; I actually want it to give me the right notion of read/write for a nested kind of data.
Optics, and in particular lenses, were created in functional programming first, and have proved useful from the beginning, because they are really well-behaved. Not surprisingly, category theorists did a lot of work on this, generalized things further, and in the end proved that optics satisfy a universal property, expressed using a thing called end/coend calculus. This is relevant: the reason why optics are so well behaved is that they are indeed canonical, even if proving this came some time after the definition was given and libraries were implemented. :smile:
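A bare-bones sketch of why lens composition is so well-behaved: a lens is a (get, set) pair, and composing two lenses gives exactly the right read/write access to nested data. The record types below are made up for illustration:

    from dataclasses import dataclass, replace
    from typing import Callable, Generic, TypeVar

    S = TypeVar("S")
    A = TypeVar("A")
    B = TypeVar("B")

    @dataclass
    class Lens(Generic[S, A]):
        get: Callable[[S], A]
        set: Callable[[S, A], S]

        def then(self, inner: "Lens[A, B]") -> "Lens[S, B]":
            # Composition: read through both getters, write by rebuilding from the outside in.
            return Lens(
                get=lambda s: inner.get(self.get(s)),
                set=lambda s, b: self.set(s, inner.set(self.get(s), b)),
            )

    @dataclass(frozen=True)
    class Address:
        city: str

    @dataclass(frozen=True)
    class Person:
        name: str
        address: Address

    address = Lens(get=lambda p: p.address, set=lambda p, a: replace(p, address=a))
    city = Lens(get=lambda a: a.city, set=lambda a, c: replace(a, city=c))
    person_city = address.then(city)      # composite lens: read/write a city inside a person

    p = Person("Ada", Address("London"))
    print(person_city.get(p))             # London
    print(person_city.set(p, "Glasgow"))  # Person(name='Ada', address=Address(city='Glasgow'))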
There's 2 blog posts I like to share on this topic, "compositional thinking" as distinct from category theory. One by me: https://julesh.com/2017/04/22/on-compositionality/ and a more recent one by Fabrizio: https://blog.statebox.org/modularity-vs-compositionality-a-history-of-misunderstandings-be0150033568
@Jules Hedges I just read through your blog post on compositionality. I agree that compositionality is contrary to emergence, and that if we admit emergent patterns, we won't be able to study them via their parts, but I don't know if that means that talk of emergence must be banished from science altogether.
If we come across parts which we want to put together to form some system, that should obviously be done in a compositional way to avoid emergent effects. On the other hand, if we come across a system which we want to study, I don't know if it's always possible (at least in practice) to study it by breaking it down and reconstructing it compositionally.
In a sense, category theory is also well-adapted to dealing with such cases because we can abstract away the micro-level patterns which gave rise to the macro-level and work instead with this emergent macro-behaviour synthetically.
@Fabrizio Genovese Thank you so much for your detailed response. As @Fintan Halpenny said, this is something I'm definitely going to mull over. Also, thank you @Jules Hedges for linking the blog posts!
My beach reading arrived
I don't think my copy is red!
I believe this one is a 2nd edition :thinking:
I have a Petri net describing some system I'd like to build. I was wondering if anyone knows of any documentation for a methodology for converting a Petri net into code?
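Several tools are recommended below; as a back-of-the-envelope complement, the most direct encoding is a marking as a multiset of tokens, transitions as input/output multisets, and a step function that fires an enabled transition. Everything here (places, transitions) is an invented toy example:

    from collections import Counter
    import random

    # Places: "order", "stock", "shipped". Each transition consumes and produces tokens.
    transitions = {
        "ship": {"in": Counter({"order": 1, "stock": 1}), "out": Counter({"shipped": 1})},
        "restock": {"in": Counter(), "out": Counter({"stock": 1})},
    }

    def enabled(marking, t):
        return all(marking[p] >= n for p, n in transitions[t]["in"].items())

    def fire(marking, t):
        assert enabled(marking, t), f"{t} is not enabled"
        return marking - transitions[t]["in"] + transitions[t]["out"]

    marking = Counter({"order": 2, "stock": 1})
    for _ in range(4):
        choices = [t for t in transitions if enabled(marking, t)]
        if not choices:
            break
        t = random.choice(choices)   # in a real system, firing t would trigger the actual work
        marking = fire(marking, t)
        print(t, dict(marking))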
Section 25.4 of this book discusses software packages for implementing Petri nets:
- John Baez and Jacob Biamonte, Quantum Techniques for Stochastic Mechanics.
I know that's not quite what you asked for, but maybe it will help.
Have a look at
https://www.algebraicjulia.org/
cc @Evan Patterson
Yes, that too!
John Baez said:
Section 25.4 of this book discusses software packages for implementing Petri nets:
- John Baez and Jacob Biamonte, Quantum Techniques for Stochastic Mechanics.
I know that's not quite what you asked for, but maybe it will help.
Wow. I hadn't seen this :heart_eyes:
Kale Evans said:
I have a Petri net describing some system I'd like to build. I was wondering if anyone knows of any documentation for a methodology for converting a Petri net into code?
www.statebox.io does exactly this.
Our software is still experimental, but the whole point is that you draw a Petri net, and we execute it for you in the cloud. Clearly each transition can correspond to some other thing that has to happen, e.g. running code or exchanging data
A good idea could be subscribing to our Telegram channel and asking to have early access. We're letting a very small number of users try our software atm.
Thank you so much for your responses @John Baez , @Eric Forgy , and @Fabrizio Genovese . All great stuff. It's such an awesome place to get started.
@Fabrizio Genovese I would love to subscribe to telegram and check that out actually. We have what I believe is a very good use case for Petri nets, and being able to prototype something relatively quickly will def help me prove a business case for it, which is where I am now.
Is there a link you could provide?
Alternatively, you can send a mail to info@statebox.io
We're very eager to connect with people that are using or plan to use Petri nets to do stuff. Our software is still in development but it is very close to being stable enough that we can open it up to users. At the moment, the way it works is that if you wanna try it we can manually create an account on our cloud for you. As I said, we plan to replace this with an invitation-based thing (like gmail was ages ago) in a few weeks
Thanks! I just joined. I'll request in the chat.
Yeah, feel free to ask! Also we are from Europe and it's 3am here (I still don't know precisely why I'm not sleeping atm) so it's likely you'll get answers in a few hours. :smile:
Lol, I'm not sure why you're not sleeping either, but much appreciated! Thanks. We'll talk soon
I am interested in these kinds of meetings and events. Has there been any progress in organizing or is it still preliminary?
my main interest: I've seen several implementations of presheaves, but I don't know what each can really do. I want to use the full higher-order dependent type theory of a presheaf topos. so, the language at least needs dependent types, which limits the options. to what extent has the internal language of presheaf toposes been implemented in Idris, Agda, and Coq?
I'm hoping that idris-ct can do this, maybe even efficiently; @Fabrizio Genovese what do you think?
Christian Williams said:
I am interested in these kinds of meetings and events. Has there been any progress in organizing or is it still preliminary?
All this organization just started today, I think. I hope you can join those other folks in discussing these matters.
Christian Williams said:
I'm hoping that idris-ct can do this, maybe even efficiently; Fabrizio Genovese what do you think?
We've never moved in that direction as of now, because what we need is mainly monoidal categories
In my limited experience, I can safely say that implementing category theory in a formally verified setting is a can of worms at every step. If you just need theorem proving then you could get away by using cubical agda or so, but that is stuff that typechecks, and does not compile
In general, defining e.g. presheaves in idris is not difficult. Building them (that is, proving that some other thing fits your definition) is
E.g. Defining symmetric monoidal categories is reasonably easy, building the free SMC is a nightmare, and it took us a few months to do it.
@Fabrizio Genovese can you point me to a repo where some of those difficult things are? I've found doing things in agda-categories relatively straightforward, once a few foundational decisions were settled on. There are indeed paths that are 'a nightmare'.
Topos theory is one place where things are tricky. The usual theory is very classical (relies on excluded middle rather pervasively). My hope is that the constructive versions of a topos are actually good enough to do most of what people actually want to do.
https://github.com/statebox/idris-ct/tree/master/src
I do agree that making the right foundational choices is paramount
I knew where the repo was... I was asking for pointers to those things that gave you the biggest headaches.
Oh, the worst one was the implementation of free ssmcs
One sec, let me find it
This was particularly nasty
Fabrizio Genovese said:
In my limited experience, I can safely say that implementing category theory in a formally verified setting is a can of worms at every step. If you just need theorem proving then you could get away by using cubical agda or so, but that is stuff that typechecks, and does not compile
Yeah. in your experience so far, what are the main reasons for the difficulty?
Jacques Carette said:
Topos theory is one place where things are tricky. The usual theory is very classical (relies on excluded middle rather pervasively). My hope is that the constructive versions of a topos are actually good enough to do most of what people actually want to do.
Huh? What are you talking about? Topos theory literature is very self-aware about precisely where and when non-constructive principles are employed, more so than anything else I've read.
I was also surprised by this comment, especially since my own introduction to topos theory was in part through the works of Erik Palmgren who explicitly uses MLTT as a foundation...
In general “Dutch” and “Swedish” topos theorists are certainly very careful about constructiveness. I do not know enough beyond that. :)
What examples are there of non-avoidable non-constructive reasoning in topos theory?
"every coherent topos has enough points" is non-constructive, for example, but the non-constructiveness in Deligne's original proof can be reduced to the requirement of a well-ordering of a site presenting a given coherent topos. The non-constructiveness doesn't live in the meta-theory; the well-ordering principle (or the weaker necessary and sufficient condition of the Boolean Prime Ideal Theorem) can be expressed as a property of the base topos over which one is working.
Are there any examples of non-constructive reasoning then where the subject matter is constructive (i.e. no classical toposes, etc..)
Maybe saying non-constructive is a bit too much about topos theory, but there is a clear gap between what we are able to implement well in proof assistants and topos theory: the latter pervasively uses extensionality principles (function extensionality, propositional extensionality, equality reflection) as well as some weak choice principles (unique choice for instance) and quotients that are not straightforward to get in the multiple implementations of Intensional Type Theory.
As a result, you will need to work hard to obtain examples of toposes even if you were to formalize them (because types and functions between them do not form a topos since the metatheory lacks a few properties holding inside a topos).
Of course there are some (quite recent) implementations of proof assistants that help with those aspects (cubical agda for instance).
If you're unwilling to add axioms for basic things like function extensionality then I think your "tool" might be better called a proof hamperer rather than a proof assistant.
But if you insist on working in such a spartan theory then you can just define an E-topos or something instead, right?
One thing I never understood here is what it even means for example for a category to have finite products, and for a functor to preserve them
Reid Barton said:
If you're unwilling to add axioms for basic things like function extensionality then I think your "tool" might be better called a proof hamperer rather than a proof assistant.
backs away slowly
Well usually ‘have’ means equipped with some appropriate adjoint right?
And preserve is just up to isomorphism
Where the codomain category also has to ‘have’ such structure
This can all be then expressed in a finite limit meta theory
Well actually I’m not sure about expressing the existence of an isomorphism. I guess you can adjoin more function symbols
What I really mean is that Kenji seemed to use "proof assistant" as a stand-in for working in a specific flavor of MLTT without axioms. But in fact most proof assistants aren't based on any sort of dependent type theory at all (e.g. Mizar, Isabelle, Metamath) or aren't allergic to adding axioms (e.g. Lean) or can prove the missing axioms (e.g. proof assistants for cubical type theory).
You can of course add axioms (I often do for practical purposes), but I am not sure how constructive that approach is: you easily lose all kinds of witness extraction properties, for instance.
But it's right that I was considering only dependent type theories (which was the setting of the discussion, e.g. idris and agda, if I'm not confused)
But the point in the end is that the statement you want (Set is a topos) is just not provable in your background theory, and it has nothing to do with proof assistants or not.
It just leads to more confusion on all sides if people conflate "working in a proof assistant" and "working in ITT", since in actuality they are independent of each other. That's all I wanted to say.
Kenji Maillard said:
there is a clear gap between what we are able to implement well in proof assistants and topos theory: the latter pervasively uses extensionality principles (function extensionality, propositional extensionality, equality reflection) as well as some weak choice principles (unique choice for instance) and quotients that are not straightforward to get in the multiple implementations of Intensional Type Theory.
External function extensionality, that morphisms are equal if their applications to all global elements of the domain object agree, is false in most non-trivial toposes, so we don't use that. The internal one, that two morphisms are equal if their composites with all "generalized elements" (ie composable morphisms) are equal, is trivially true from the fact that a 1-topos is a 1-category. So sure, if you work in a meta-theory/over a topos without a strict equality predicate, then this principle might not be valid, but it shouldn't be difficult to replace it with the weaker notion of equality that you prefer.
I have less to say about unique choice, but also don't know that I've seen unavoidable applications of it in topos theory proofs. Maybe in some logical applications of toposes that I've spent less time studying this is more important.
I don't think that's what he means by function extensionality
In type theory, function extensionality means that agreement on all variable elements implies equality
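(For readers following along, the principle being discussed is usually written as follows; this is a standard formulation, not a quote from the thread:)

$$\mathsf{funext} : \prod_{f, g : A \to B} \Big( \prod_{x : A} f(x) =_B g(x) \Big) \to (f =_{A \to B} g)$$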
Yes? How is that different from what I said?
Yeah my bad, I misunderstood what you meant.
I think that intensional type theory has nicer computational properties so that's why its used in proof assistants right?
Things like decidability of type checking, which doesn't hold when you enforce function extensionality
Morgan Rogers (he/him) said:
The internal one, that two morphisms are equal if their composites with all "generalized elements" (ie composable morphisms) are equal, is trivially true from the fact that a 1-topos is a 1-category.
To clarify, when I say "is trivially true" I mean it reduces to "two functions are equal if and only if their respective composites with the identity on the domain are equal", which I'm pretty sure should hold in any type theory too :upside_down:
Sorry @Morgan Rogers (he/him) I was sloppy. What I meant, and has been echoed by others, is that although the internal language of a topos is indeed constructive, the external construction of some toposes often requires classical axioms.
Case in point: Set is a topos, but its more constructive cousin Setoid is not, it is only a $\Pi W$-pretopos.
Ooh interesting! What are the morphisms in Setoid? And what's missing from a $\Pi W$-pretopos compared to a topos? (the nLab page is not very enlightening)
Morphisms in Setoid are functions that respect the underlying equivalence relation.
The pages on predicative topos and pretopos are much more instructive indeed.
One issue is the classifying map of the subobject classifier, as explained in the last comment. It's unlikely to be the only issue.
In settings where you have UIP (uniqueness of identity proofs), this part goes through because proofs don't bump up sizes, as they are unique, by fiat.
I was aware of the issues of elementary topoi wrt predicativity, but I was also under the impression that the relation of predicativity and constructiveness is somewhat ambiguous, in that one can reject LEM and accept impredicative definitions. On the other hand, your original comment mentioned specifically reliance on excluded middle. Do you have any cases in mind that are strictly non-constructive as well as impredicative?
Excellent question. Hmm, off the top of my head, I don't. I might have mixed up two different places where "things go sideways" (the other being building (n-1)-categories as (n-2)-category-enriched categories) where excluded middle does play an important role.
By the way what I had in mind earlier was this paper by Palmgren showing that one can get a perfectly fine constructive theory of Grothendieck topoi with a constructive completeness theorem for first-order intuitionistic logic, so the problems do seem to arise only from the impredicativity of subobject classifiers (which of course are part of the definition of elementary topoi and needed for higher-order logic...)
But it's already hard to do higher-order logic predicatively :grimacing:
Our plan is to 'implement' a whole zoo of pretopos definitions, to make it easier to explore this space, to see where things are odd. Including both elementary topos and Grothendieck topos.
Woah, that's such a computer science approach! :open_mouth: My own approach to comparing these things would have been to export the construction of Setoid from Set to an arbitrary topos, to provide a nice big supply of such categories, check that the properties one expects do in fact lift, and then use those to "see where things are odd".
Going back to where the conversation with @Fabrizio Genovese was yesterday: yes, you are in for a lot of pain when doing 'strict' anything in a type theory like those of Idris/Agda (but not in cubical Agda, just a lot of work). That's why I've avoided strict completely in agda-categories. It just goes against the grain of the host type theory.
Not that I haven't played with categories of permutations (as morphisms) - that's the whole topic of the (fully formalized) proof that they are a 'symmetric rig groupoid', as proved in Computing with Semirings and Weak Rig Groupoids. To obtain full reversibility of all coherences needed... 168 of them, if I remember well. That was definitely tedious to prove in full.
And yes, I am now thoroughly a CS person, my Ph.D. in pure math notwithstanding.
Fawzi Hreiki said:
Things like decidability of type checking, which doesn't hold when you enforce function extensionality
I don't think this is correct. Maybe you are thinking about extensional type theory, i.e., turning propositional equalities into judgmental ones? An axiom can just be thought of as implicitly added to every context, so it can't affect decidability of type checking.
I agree that strict anything is likely to be a headache, at least if you use the dependent type-style definition of a category. (If you use the "internal"-style definition with source and target as functions, there will be other and bigger headaches.) For example I think it's actually easier to formalize bicategories than strict 2-categories.
If you formulate the internal language of a topos as a dependent type theory (e.g. as in this), then, for the reasoning to be sound and complete, you need to have extensionality in the sense of turning propositional equalities into judgmental ones.
I guess it depends if you want to implement the topos rules as its own type theory or internally (i.e. within the existing type theory)
I can report that, for agda-categories, it was definitely easier to do bicategories (defined cat-enriched categories + some coherences) than strict 2-categories. In fact, I think the latter only exist in a branch, as what we had was too hard to work with.
@Fawzi Hreiki that link leads to a malformed PDF.
Ah, well it’s meant to be a link to this https://pdfs.semanticscholar.org/fc4e/4155e4246811003c168d60897a296c2c56b9.pdf
I need to find the time to work my way through that paper. I've been trying to find a definition of things exactly like this that 'fit' in MLTT. Categories with Families... doesn't. Or, it does, but things explode when you try to put in Pi types.
my experience with these languages is limited, so my perspective is still naive. but what confuses me about the difficulty of implementing presheaf toposes is that Coq uses the Calculus of Constructions, which is almost exactly the same as the full higher-order dependent type theory of a topos. so it just seems like it should be completely natural to use Coq to work in the language of a certain topos.
Fawzi Hreiki said:
If you formulate the internal language of a topos as a dependent type theory (e.g. as in this), then, for the reasoning to be sound and complete, you need to have extensionality in the sense of turning propositional equalities into judgmental ones.
Surely that's only the case if you want "complete" to mean that equal things in the topos are judgmentally equal in the type theory? Wouldn't a more sensible notion of completeness be that equal things in the topos are propositionally equal in the type theory?
@Christian Williams It's easy to use Coq to work in the language of a certain topos. Just add whatever axioms happen to hold in that topos and away you go. I think what people are talking about here is formalizing reasoning about toposes externally, rather than in its internal language. (But I haven't been following closely, so I could be wrong.)
Yes, that is indeed what some of us have been discussing, reasoning about toposes, what things are provably toposes, etc.
@Mike Shulman Yeah that's a good point.
Mike Shulman said:
Fawzi Hreiki said:
If you formulate the internal language of a topos as a dependent type theory (e.g. as in this), then, for the reasoning to be sound and complete, you need to have extensionality in the sense of turning propositional equalities into judgmental ones.
Surely that's only the case if you want "complete" to mean that equal things in the topos are judgmentally equal in the type theory? Wouldn't a more sensible notion of completeness be that equal things in the topos are propositionally equal in the type theory?
well, this worries me. In particular I don't understand this paper https://www.math.unipd.it/~maietti/papers/ch.pdf well at all, but it does seem to indicate that something is not working well between the syntax of the type theory as transformed into CT and the more relaxed semantics. but yes, I'd like to understand it better
By analogy, it doesn't seem reasonable to require that the laws of a group hold up to definitional equality; there would be very few examples, and you also wouldn't be able to abstract over a variable group.
This paper out today on the arXiv seems relevant to the discussion. It proposes a variant of MLTT where judgmental equality is completely eliminated. The authors say:
Clearly, such a system is weaker than the standard systems, so it is natural to wonder how much weaker it is. We claim that this system still suffices for doing all of constructive mathematics; indeed, we conjecture that in such a system it should be possible to formalise most of the HoTT book.
Mike Shulman said:
Christian Williams It's easy to use Coq to work in the language of a certain topos. Just add whatever axioms happen to hold in that topos and away you go. I think what people are talking about here is formalizing reasoning about toposes externally, rather than in its internal language. (But I haven't been following closely, so I could be wrong.)
I'd be curious to see this applied to realisability toposes. Has anyone seen anything like this? How about extraction? I suspect that when reasoning in a realisability topos, you'd expect to be able to extract programs in the underlying PCA (at least for the part living in modest sets). Would Coq let you do this in any way?
Amar Hadzihasanovic said:
This paper out today on the arXiv seems relevant to the discussion. It proposes a variant of MLTT where judgmental equality is completely eliminated.
I had somehow (naïvely and ignorantly!) hoped that removing the judgmental equalities might give you a version of homotopy type theory that would have genuinely infinity-categorical models (as opposed to describing certain strict categories with weak equivalences and fibrations). In this paper they still get an ordinary category though. I might well be too ignorant to understand the answer, but why would composition be strictly associative if you don't have judgmental equality? Shouldn't associativity now be witnessed by a term in an identity type instead of holding "on the nose"?
@Rune Haugseng be careful about what your morphisms are: when working with syntax a morphism is usually a substitution, and composition of substitutions is "always" associative on syntax
You can weaken the substitution structure to get, say, bicategorical structure, but the syntax becomes correspondingly more complex.
Does that mean a term of type X -> Y doesn't in some sense give a "morphism from X to Y" in this syntactic/contextual category?
Nathanael Arkor said:
You can weaken the substitution structure to get, say, bicategorical structure, but the syntax becomes correspondingly more complex.
Right, the "always" meant as long as you don't consider explicit substitutions as part of your syntax (also an interesting research direction)
Rune Haugseng said:
Does that mean a term of type X -> Y doesn't in some sense give a "morphism from X to Y" in this syntactic/contextual category?
The morphisms are the same, but you replace implicit substitution of variables for terms with an explicit substitution operator that is not strictly associative or unital (so that composition is different to the usual interpretation).
I think you're answering a much more advanced question than the stupid one I'm asking :-)
If I'm reading right, @Rune Haugseng you were asking about the standard syntactic category, rather than the weakened one Nathanael is referring to? In that case I think any term $t : X \to Y$ is interpreted by a morphism $[\,] \to [X \to Y]$. https://ncatlab.org/nlab/show/syntactic+category says that when the theory has function types, the syntactic category is cartesian closed. Maybe then we have something like $[X \to Y]$ is isomorphic to $[Y]^{[X]}$?
Rune Haugseng said:
Does that mean a term of type X -> Y doesn't in some sense give a "morphism from X to Y" in this syntactic/contextual category?
As I understand it, yes, that's right—it does not. In this category the objects are contexts, and the morphisms are—I can't think of the right word to state this, but—updates? to those contexts, updates that are a result of the substitutions.
I think I'm maybe starting to see the point: it's not that "f : X -> Y gives a morphism from X to Y" but that there's a morphism [x:X, f:X->Y] -> [y:Y] corresponding to (x:X, f:X->Y) |- f(x) : Y. And then "composition of morphisms" as I imagined it is rather [x:X, f:X->Y, g:Y->Z] -> [y:Y, g:Y->Z] -> [z:Z] given by "substituting y=f(x) in g(y)". Definitely not so clear what an infinity-categorical version of that could mean - I suppose I'd have to insist that I can't just "substitute y=f(x)" because that's a strict equality, which is not allowed, but then these are just formal variables...
Rune Haugseng said:
I suppose I'd have to insist that I can't just "substitute y=f(x)"
That sounds similar to the bicategory structure that Nathanael was describing. I imagine generalising that to infinity categories is a pretty big step though.
For what it's worth, since you originally mentioned models, they're generally given in terms of some additional structure on top of the category of contexts, such as a "category with families", which I think is where the higher category structure usually comes in with HoTT
Valeria de Paiva said:
well, this worries me. In particular I don't understand this paper https://www.math.unipd.it/~maietti/papers/ch.pdf well at all, but it does seem to indicate that something is not working well between the syntax of the type theory as transformed into CT and the more relaxed semantics. but yes, I'd like to understand it better
On a quick glance, it seems to me that paper is just saying that you can't prove choice or unique choice when existence is formulated with respect to a notion of "Prop" that's underspecified, as it is in CoC. That's equally true in categorical models.
Mike Shulman said:
Valeria de Paiva said:
well, this worries me. In particular I don't understand this paper https://www.math.unipd.it/~maietti/papers/ch.pdf well at all, but it does seem to indicate that something is not working well between the syntax of the type theory as transformed into CT and the more relaxed semantics. but yes, I'd like to understand it better
On a quick glance, it seems to me that paper is just saying that you can't prove choice or unique choice when existence is formulated with respect to a notion of "Prop" that's underspecified, as it is in CoC. That's equally true in categorical models.
Thanks for coming in to help, much appreciated! but if you cannot prove choice for CoC doesn't this mean that our implementations in Coq are not helpful at all?
Implementations of what? Not helpful for what?
Mike Shulman said:
Implementations of what? Not helpful for what?
Implementations of mathematical proofs; the theory is (I believe) that we can extract programs from proofs and their witnesses. But if the choice rule doesn't hold, does that mean that the dream of transforming specifications into programs does not work for Coq? That was my question.
Direct-style type theoretic accounts of unique choice (and other things) where computing programs can actually be extracted is a somewhat _new_ development for type theory --- available in languages like cubical type theory, for instance. Non-direct-style approaches (i.e. approaches based on an encoding) where programs can be extracted have also existed for a lot longer, such as the setoid construction.
Coq's Prop is, I believe, designed to be "irrelevant" in that proofs of types in Prop can always be discarded when extracting programs. This is "obviously" somewhat in conflict with the idea of using an axiom like (unique) choice to prove a proposition and then extracting a program from it. If you want programs to be extractable from proofs, I think the proofs need to stay in Type and not descend into Prop. You don't need to go all the way to cubical type theory for that; you can just use the HoTT notion of "proposition" in Coq -- although then your propositions will never be discarded when extracting code, so I guess the result may be more verbose or inefficient than otherwise. I believe (correct me if I'm wrong, @Jonathan Sterling ) that what cubical type theory adds to this is the ability to use extensionality axioms, like function extensionality and univalence, still without disrupting the ability to extract code, which Coq has no way to do.
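To make the "HoTT notion of proposition" option concrete, here is a minimal Lean 4 sketch, standing in for Coq; the names are made up for illustration. A HoTT-style proposition is just a type with at most one element, so it can live in Type, where extraction keeps it, rather than in the erased Prop universe.
-- HoTT-style: a type is a proposition when any two of its elements are equal.
def IsHProp (A : Type) : Prop := ∀ a b : A, a = b
-- Unit is such a proposition; inhabitants of types like this stay in Type,
-- so they survive extraction (hence the extra verbosity mentioned above).
example : IsHProp Unit := fun _ _ => rfl
-- By contrast, anything placed in Coq's/Lean's Prop (e.g. ∃ k, n = k + k)
-- is discarded when extracting code, which is why proving something there
-- and then trying to compute from it is problematic.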
Jonathan Sterling said:
Direct-style type theoretic accounts of unique choice (and other things) where computing programs can actually be extracted is a somewhat _new_ development for type theory --- available in languages like cubical type theory, for instance. Non-direct-style approaches (i.e. approaches based on an encoding) where programs can be extracted have also existed for a lot longer, such as the setoid construction.
Thanks for also helping, Jonathan! But lots of things existed before cubical type theory and before HoTT.
I'm a little bit familiar with the idea of irrelevant types and, while I agree that they're useful (to avoid the dreadful verbosity of the terms we get), I always thought that people were working on better ways of simplifying the unnecessary verbosity, while keeping the essence of the proofs, to make the whole Propositions-as-Types program work.
And I still don't fully understand your point about the "ability to use extensionality axioms", @Mike, but I will come back to it later. Thanks!
I think people are working on that. The innovations of HoTT and cubical TT are one contribution towards that goal, giving a better-behaved way of identifying the "propositions" while still keeping the computational content. Then one needs a way to decide which propositions can be ignored on extraction. For instance, Agda has a notion of "irrelevant argument" that I don't understand completely, but it's different from Coq's Prop and may have advantages in this regard.
I think it's used in cases where you don't care about its computation, only the constraints it implies for whatever you are trying to prove.
That is, its normal form isn't a detail needed to prove your goal, but the proposition the type represents is.
@Valeria de Paiva I am not sure what you are referring to with "lots of things existed before..." but obviously that's true. What didn't exist beforehand was a direct-style account of these notions that is compatible with computation and obeys the laws needed for them to be usable to formalize standard mathematics, which was the point of my message. :rolling_eyes:
There seems to be a somewhat fundamental incongruity between "irrelevance" qua "delete during extraction" and "irrelevance" qua "well-behaved notion of proposition". HoTT (and its computational variants, like cubical) unabashedly emphasizes the latter, but one consequence of this approach is that we don't actually know how to strip out propositional things during extraction for such languages (at least, I am unsure how to do it); on the bright side, they don't interrupt computation.
Another approach, one of those things that "existed before", is to design your logic in such a way that the propositions can be totally deleted during extraction. Systems of this kind often support computation at the expense of failing to validate certain laws involving propositions that are necessary for them to be practical to use for mathematics. Other times, axioms are added to support these mathematical use-cases, but this interrupts the extraction of programs.
What I'm trying to convey to you is that the trade-offs are complicated, and we are working on it, and no, we are not ignorant of what existed before. We are deeply aware of the old work's shortcomings, and of the (different) shortcomings of what we are doing now.
Jonathan Sterling said:
Valeria de Paiva[...]
What I'm trying to convey to you is that the trade-offs are complicated, and we are working on it, and no, we are not ignorant of what existed before. We are deeply aware of the old work's shortcomings, and of the (different) shortcomings of what we are doing now.
Well, we all know that "the trade-offs are complicated", and this is why we're having this conversation.
But no, it's not clear to me that everyone (or even any single person!) is deeply aware of previous and present shortcomings.
It's part of our culture that we don't discuss the shortcomings of our "shining new rocks" very much, so anyone who wants to understand the pros and cons of the approaches has to extract them from relentlessly positive descriptions. HoTT is not even ten years old yet, Linear Logic is more than 30 years old, and Curry-Howard is close to a hundred years old (if you go by Schoenfinkel's combinators), and all of them are works in progress; all have limitations, shortcomings, etc.
So instead of "We are deeply aware of the old work's shortcomings, and of the (different) shortcomings of what we are doing now", I'd much prefer a reasoned list of pros and cons, especially because pros and cons, like shortcomings, depend on who is assessing them: your pro might be a con for me and vice versa.
Mike Shulman said:
I think people are working on that. The innovations of HoTT and cubical TT are one contribution towards that goal, giving a better-behaved way of identifying the "propositions" while still keeping the computational content. Then one needs a way to decide which propositions can be ignored on extraction. For instance, Agda has a notion of "irrelevant argument" that I don't understand completely, but it's different from Coq's Prop and may have advantages in this regard.
Yes, I can understand that the goal is a better-behaved way of identifying the "propositions" while still keeping the computational content, and sure, "one needs a way to decide which propositions can be ignored on extraction."
Many thanks, @Mike Shulman, for bringing the argument back to something I can understand.
This is much clearer.
But what I still want to hear is how this impinges on the categorical models that are at issue. Because the gap between implementations and syntactic descriptions is huge, right? Even for the most studied system, simply typed lambda-calculus, the gap is still huge, and attempts along the lines of the Categorical Abstract Machine seem to have fallen into disuse. So it's fine to say: we have this nice system (MLTT/CIC/whatever) that we want to implement, our mathematics of it works better for X, but our implementations work better for Y, and hence we're trying Z, etc. That is the level of description I was hoping for; thanks @dan pittman for a stab at that.
Valeria de Paiva said:
I'd much prefer a reasoned list of pros and cons, especially because pros and cons, like shortcomings, depend on who is assessing them: your pro might be a con for me and vice versa.
Isn't that exactly what we've been discussing? E.g. in the two paragraphs of Jon's that came just before the one you quoted?
Mike Shulman said:
Isn't that exactly what we've been discussing? E.g. in the two paragraphs of Jon's that came just before the one you quoted?
I'm sure this is what we're trying to do, @Mike Shulman.
@Valeria de Paiva I think that you will find more satisfactory answers to your questions if you take a moment to be more receptive to those who are trying to explain things to you, and then ask follow-up questions that can be given precise answers. Respectfully, your behavior in this thread has been to send "eye-roll" emojis to people who are trying to help you, and then to question my claim that even a single person is "deeply aware of previous and present shortcomings"; in effect, you are claiming that I am lying when I say I am deeply aware of previous and present shortcomings. Why would I have any interest in answering questions about shortcomings asked by someone who just said they think I do not have any deep awareness of them?
I am happy to discuss these things in more detail if you can commit to using a respectful mode of communication that assumes good faith, and to being open-minded to the possibility that experts in the field do have something relevant to say that could help you.
Valeria de Paiva said:
But what I still want to hear is how this impinges on the categorical models that are at issue.
In a categorical model there are roughly two ways to represent "propositions". They can be arbitrary subobjects in your category, or they can be some other fibration of posets, e.g. the strong subobjects in a quasitopos. The former corresponds to the HoTT notion of proposition, and the latter to a CoC-like "logic over a type theory" notion. Choosing subobjects has all the advantages of HoTT propositions, e.g. provability of unique choice, which is why they're usually what category theorists choose. I don't know anything about attempts to represent computational-irrelevance in categorical semantics, although I'd be surprised if they didn't exist.
Because the gap between implementations and syntactic descriptions is huge, right? even for the most studied system, simply typed lambda-calculus, the gap is still huge.
I'm not sure what gap you're referring to here.
Jonathan Sterling said:
Valeria de Paiva I think that you will find more satisfactory answers to your questions if you take a moment to be more receptive to those who are trying to explain things to you, and then ask follow-up questions that can be given precise answers. Respectfully, your behavior in this thread has been to send "eye-roll" emojis to people who are trying to help you, and then to question my claim that even a single person is "deeply aware of previous and present shortcomings"; in effect, you are claiming that I am lying when I say I am deeply aware of previous and present shortcomings. Why would I have any interest in answering questions about shortcomings asked by someone who just said they think I do not have any deep awareness of them?
I think you're reading too much into a single emoji. I meant that I don't understand that statement. But thanks, I think I have had enough expert explanation for now.
Mike Shulman said:
I don't know anything about attempts to represent computational-irrelevance in categorical semantics, although I'd be surprised if they didn't exist.
I'd be very interested to see this, as well as a categorical account of what it means for a type theory to "compute", in the sense of the proof-assistants community. I asked a few people about it some time ago but none seemed interested (I don't remember who they were, so no need for you to ask!). If it turns out not to exist, I might be interested in contributing!
Mike Shulman said:
Valeria de Paiva said:
But what I still want to hear is how this impinges on the categorical models that are at issue.
In a categorical model there are roughly two ways to represent "propositions". They can be arbitrary subobjects in your category, or they can be some other fibration of posets, e.g. the strong subobjects in a quasitopos. The former corresponds to the HoTT notion of proposition, and the latter to a CoC-like "logic over a type theory" notion. Choosing subobjects has all the advantages of HoTT propositions, e.g. provability of unique choice, which is why they're usually what category theorists choose. I don't know anything about attempts to represent computational-irrelevance in categorical semantics, although I'd be surprised if they didn't exist.
Because the gap between implementations and syntactic descriptions is huge, right? even for the most studied system, simply typed lambda-calculus, the gap is still huge.
I'm not sure what gap you're referring to here.
What I had in mind for (traditionally) representing computational irrelevance in categorical semantics is the paper Awodey, Steven; Bauer, Andrej (2018): Propositions as [Types]. Carnegie Mellon University. Journal contribution. https://doi.org/10.1184/R1/6492500.v1, which is 10 years old, so I was expecting many further developments.
The gap I referred to is hard to pin to a single paper. I can see it described in the introduction for laypeople to Pierre-Louis's recent prize, but the point is not clearly made there. I need to find another source that shows it better; working on it.
"In the ‘80s and ‘90s, his research had an impact on the design and implementation of programming languages, leading most notably to the creation of virtual machines, such as the CAM (categorical abstract machine). This machine was a central component of the first compiler for the CAML programming language, which would later become OCaml. “The change in perspective was quite striking: it turned out that the highly abstract structures used for interpreting λ-calculus were a ‘low-level’ language, meaning that it was close to a machine language, and interpretation became compilation”, comments the researcher."
I would say -- and I think Steve and Andrej would agree -- that the HoTT notion of propositional truncation is the "modern successor" of Propositions As [Types]. IIRC the rules in that paper aren't quite right in the intensional / higher-categorical case, but now in HoTT we have the correct rules and a general justification.
Mike Shulman said:
I would say -- and I think Steve and Andrej would agree -- that the HoTT notion of propositional truncation is the "modern successor" of Propositions As [Types]. IIRC the rules in that paper aren't quite right in the intensional / higher-categorical case, but now in HoTT we have the correct rules and a general justification.
I hear you. (trying to avoid emojis, as I'm not very competent in that language)
I hear you about emojis. (-: I generally restrict myself to thumbs-ups.
@Mike one more question on this
IIRC the rules in that paper aren't quite right in the intensional / higher-categorical case,
Is that written somewhere explicitly? What was the issue? I don't need the full story; just an impressionistic view would be great.
Their rule for eliminating out of [A] into a type B requires that you have f : A -> B such that for any x, y : A we have f(x) = f(y). This essentially defines [A] to be the coequalizer of the two projections A × A -> A, which is a correct definition of the propositional truncation in a 1-category, but not in a higher category (in general, that coequalizer need not be a proposition). So in intensional/homotopy type theory, instead of that assumption on f you need to assume that B is a proposition, i.e. for all u, v : B we have u = v.
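For the record, here is a hedged Lean 4 rendering of the two elimination rules just described; PTrunc, elimAB and elimHoTT are made-up names, and the truncation is merely postulated rather than constructed.
universe u v

-- A postulated propositional truncation / bracket type [A].
axiom PTrunc : Type u → Type u
axiom PTrunc.mk {A : Type u} : A → PTrunc A

-- Awodey-Bauer-style elimination: any f : A -> B that is constant
-- (f x = f y for all x, y) factors through the bracket. This pins [A]
-- down as the coequalizer of the two projections A × A -> A, which is
-- right in a 1-category but not in a higher category.
axiom PTrunc.elimAB {A : Type u} {B : Type v}
    (f : A → B) (hconst : ∀ x y : A, f x = f y) : PTrunc A → B

-- HoTT-style elimination: instead require the target B to be a
-- proposition, i.e. any two of its elements are equal.
axiom PTrunc.elimHoTT {A : Type u} {B : Type v}
    (hB : ∀ b1 b2 : B, b1 = b2) (f : A → B) : PTrunc A → B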
Many thanks for the explanation! :+1:
Mike Shulman said:
I would say -- and I think Steve and Andrej would agree -- that the HoTT notion of propositional truncation is the "modern successor" of Propositions As [Types]. IIRC the rules in that paper aren't quite right in the intensional / higher-categorical case, but now in HoTT we have the correct rules and a general justification.
I agree