In topos theory, classically there are a handful of fragments of (infinitary, intuitionistic, first-order) logic which are studied, including regular, coherent and geometric. A common feature of the fragments I just listed is that they contain Horn logic: finite conjunctions are always allowed, and this is justified in the semantics because these logics are always interpreted in categories with finite limits, and moreover their categories of models in toposes are identified with categories of geometric morphisms into classifying toposes (whose inverse-image functors preserve finite limits).
Are there naming conventions for fragments of logic that don't include Horn logic? For example, are there named and studied "purely disjunctive" logics? Or has there not been enough motivation to study these independently?
If yes: what kinds of category are the corresponding syntactic categories?
I'm not sure there are naming conventions, but there are plenty of examples of logics without finite limit structure: first-order equational logic gives you cartesian categories; various linear logics give rise to variations on monoidal categories (e.g. *-autonomous categories); bunched logic gives forms of duoidal category; and so on.
I'm not sure to what extent you still have a logic if you only allow disjunction, not conjunction.
It's just logic that favors "or" over "and".
It's less studied for some reason. I was shocked when I first heard that a Boolean algebra, in addition to being a cartesian category where each operation $x \wedge -$ has a right adjoint (namely internal implication), is also a cocartesian category where each operation $x \vee -$ has a left adjoint.
Of course it's obvious because a Boolean algebra is its own opposite: what shocked me is that I'd never thought about "coimplication".
And what really shocked me is that there are bi-Heyting algebras (cartesian and cocartesian posets where each operation $x \wedge -$ has a right adjoint and each operation $x \vee -$ has a left adjoint) that aren't Boolean algebras.
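A concrete instance of that last point, assuming only the standard definitions of Heyting implication and co-Heyting subtraction as adjoints (this worked example is mine, not from the thread): every finite distributive lattice is bi-Heyting, and already the three-element chain fails to be Boolean.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Heyting implication and co-Heyting subtraction as adjoints:
\[
  x \wedge a \le b \iff x \le (a \Rightarrow b),
  \qquad
  (b \setminus a) \le x \iff b \le a \vee x .
\]
% In the three-element chain 0 < m < 1 both adjoints exist
% (any finite distributive lattice is bi-Heyting), but m has no complement:
\[
  m \Rightarrow 0 = 0, \qquad m \vee (m \Rightarrow 0) = m \ne 1 ,
\]
% so the chain is bi-Heyting without being a Boolean algebra.
\end{document}
```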
John Baez said:
It's just logic that favors "or" over "and".
I would have thought this was studied under something like "cologics" instead.
Very cological of you.
One of the presentations of (full) linear logic is pretty biased toward disjunction. You put everything to the right of the turnstile. So e.g. instead of the variable rule you use the 'excluded middle' look-alike $\vdash A, A^{\perp}$.
Of course, in that scenario you could also do the exact opposite.
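For reference, here is a minimal sketch of the one-sided presentation for the multiplicative fragment (standard rules, written in LaTeX; the cmll package is assumed for the par connective):

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{cmll}   % provides \parr for the multiplicative disjunction
\begin{document}
% Everything sits to the right of the turnstile, so the identity/axiom rule
% looks like an instance of excluded middle.
\[
  \frac{}{\vdash A,\ A^{\perp}}\ (\text{ax})
  \qquad
  \frac{\vdash \Gamma, A \qquad \vdash \Delta, B}{\vdash \Gamma, \Delta,\ A \otimes B}\ (\otimes)
  \qquad
  \frac{\vdash \Gamma, A, B}{\vdash \Gamma,\ A \parr B}\ (\parr)
\]
\end{document}
```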
Thanks to all for the input! Ultimately I'm trying to reverse-engineer the logic given a class of (non-monoidal) categories, so I wanted to check whether I could work more efficiently by working "from both ends at once".
It sounds like someone by the name of Kruckman took the idea of (first-order) cologic seriously, and gave a very enlightening math.se answer explaining the motivation for them; it amounts to 'first-order logic interpreted in the opposite category', which is certainly one way to get a theory which swaps out "ands" for "ors" in some sense.
John Baez said:
And what really shocked me is that there are bi-Heyting algebras (cartesian and cocartesian posets where each operation $x \wedge -$ has a right adjoint and each operation $x \vee -$ has a left adjoint) that aren't Boolean algebras.
Tristan Crolard wrote a thesis on this; he calls this logic with co-implication "subtractive logic".
Tristan Crolard: Subtractive logic. Theor. Comput. Sci. 254(1-2): 151-185 (2001)
Speaking of Heyting, it occurred to me recently that there's barely any text or book on Heyting arithmetic. Say, a book on number theory in logic that would summarize which propositions aren't provable when you remove LEM, and why.
One fact to know is that every $\Pi^0_2$ sentence provable in PA is also provable in HA (by Friedman's translation). I'm not familiar with much number theory, but it may be that most of it is not sensitive to the difference between the two systems.
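For context, this is standard material rather than something claimed in the thread: the Gödel–Gentzen negative translation embeds PA into HA, and Friedman's A-translation sharpens this to conservativity for Pi^0_2 sentences.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Goedel-Gentzen negative translation (one standard formulation):
\begin{align*}
  P^{N} &\equiv \lnot\lnot P \quad (P \text{ atomic}) &
  (\varphi \wedge \psi)^{N} &\equiv \varphi^{N} \wedge \psi^{N} \\
  (\varphi \to \psi)^{N} &\equiv \varphi^{N} \to \psi^{N} &
  (\forall x\, \varphi)^{N} &\equiv \forall x\, \varphi^{N} \\
  (\varphi \vee \psi)^{N} &\equiv \lnot(\lnot\varphi^{N} \wedge \lnot\psi^{N}) &
  (\exists x\, \varphi)^{N} &\equiv \lnot\forall x\, \lnot\varphi^{N}
\end{align*}
% PA proves a sentence iff HA proves its translation, and Friedman's
% A-translation then gives agreement on Pi^0_2 sentences:
\[
  \mathrm{PA} \vdash \varphi \iff \mathrm{HA} \vdash \varphi^{N},
  \qquad
  \mathrm{PA} \vdash \forall x\, \exists y\, \theta(x,y)
  \iff \mathrm{HA} \vdash \forall x\, \exists y\, \theta(x,y)
  \ \ (\theta \text{ primitive recursive}).
\]
\end{document}
```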
Oh, I should mention the 2-volume "Constructivism in Mathematics" by Troelstra and van Dalen, which covers quite a bit about HA, e.g. that the higher-order version doesn't prove more statements (barring further axioms)!
Thanks, Cody.
I had that book in hands a few times. Hope the library lets me touch their shelves soon again.
Cody Roux said:
Oh, I should mention the 2-volume "Constructivism in Mathematics" by Troelstra and van Dalen, which covers quite a bit about HA, e.g. that the higher-order version doesn't prove more statements (barring further axioms)!
hey, we should also mention Troelstra's "Metamathematical Investigation of Intuitionistic Arithmetic and Analysis" (Lecture Notes in Mathematics 344), 1973, re-issued by Springer.
Is there any difference between an absolutely free algebra and a free algebra? For example, the formula algebra corresponding to the wffs of classical propositional logic is said to form an absolutely free algebra, as stated here. But it looks to me like it is just a free algebra.
This StackExchange question explains the distinction. Absolutely free algebras are free algebras for algebraic signatures (with no equations).
I was wondering whether there is any notion of valuations as relations instead of functions. I haven't come across anything like this yet. But if someone has, please give me some references.
I have not seen quite that formulation, but it would make sense in the context of the non-deterministic semantics of Avron and collaborators. The idea is that where you would usually use a semantics in which valuations go into algebras over the same signature as your logic, you now look at valuations (still functions) going into multi-algebras over the same signature. A multi-algebra is like an algebra, but the operations take values in subsets instead of single elements (you can think of them as non-deterministic operations, because there can be several outcomes). Phrased differently, the notion of multi-algebra arises from that of algebra by no longer demanding that the operations are functions, and instead being content with relations.
Quite a few logics allow slick semantics with tiny non-deterministic matrices, where with deterministic matrices you would not get a semantics with finitely many finite matrices at all.
Now you can also think of multialgebras as places where you compute with subsets instead of elements: to compute an operation on a subset, just take the union of all the outcomes on elements of that set. This restores a bit of the algebraic feeling: you have a usual algebra again, but its underlying set is now the power set of your structure. So it would make sense to talk about subset-valued valuations. But that is the same as taking relations for valuations.
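To make the power-set lifting concrete, here is a small toy sketch in Python (the particular non-deterministic negation table and the names nd_neg and lift are made up for illustration; they are not taken from Avron's papers):

```python
from itertools import chain

# A toy multi-algebra on the truth values {0, 1}: each operation returns a
# *set* of possible outcomes instead of a single element.

def nd_neg(x):
    """Non-deterministic negation: negating 1 may yield either 0 or 1.

    This table is only an illustration of the idea; Avron's papers use
    tables tailored to specific logics.
    """
    return {0, 1} if x == 1 else {1}

def lift(op, subset):
    """Lift a multi-operation to the power set: union of outcomes over the set."""
    return set(chain.from_iterable(op(x) for x in subset))

# On subsets the lifted operation is an ordinary (deterministic) operation,
# so we recover a usual algebra whose carrier is the power set of {0, 1}.
print(lift(nd_neg, {1}))       # {0, 1}
print(lift(nd_neg, {0, 1}))    # {0, 1}
print(lift(nd_neg, set()))     # set()
```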
Here are two approachable articles with further references: Avron: Logical Non-determinism as a Tool for Logical Modularity: An Introduction and Avron: Non-deterministic Matrices and Modular Semantics of Rules.
Here is a tutorial on non-deterministic matrix semantics.
Everything containing the term "swap structure" on the page of Marcelo Coniglio is also about non-deterministic semantics.
Many thanks @Peter Arndt for the references. My motivation was something like the following:
Let $L$ be a set of wffs for a propositional logic (I am being purposefully vague here) and let $\varphi \in L$. Let $v \subseteq L \times \{0,1\}$ be a relation. Then we say,
- $\varphi$ is true if $(\varphi, 1) \in v$ and $(\varphi, 0) \notin v$.
- $\varphi$ is false if $(\varphi, 0) \in v$ and $(\varphi, 1) \notin v$.
- $\varphi$ is both true and false if $(\varphi, 1) \in v$ and $(\varphi, 0) \in v$.
- $\varphi$ is neither true nor false if $(\varphi, 1) \notin v$ and $(\varphi, 0) \notin v$.
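A tiny sketch of how such a relational valuation sorts formulas into the four cases (the encoding of v as a set of pairs and the helper name status are mine, just for illustration):

```python
# Classify a formula under a relational valuation v, where v is a subset of
# L x {0, 1} represented as a set of (formula, truth_value) pairs.

def status(v, phi):
    """Return 'true', 'false', 'both', or 'neither' for the formula phi."""
    told_true = (phi, 1) in v
    told_false = (phi, 0) in v
    if told_true and not told_false:
        return "true"
    if told_false and not told_true:
        return "false"
    if told_true and told_false:
        return "both"
    return "neither"

v = {("p", 1), ("q", 0), ("r", 1), ("r", 0)}   # "s" is not mentioned at all
for phi in ["p", "q", "r", "s"]:
    print(phi, status(v, phi))   # p true, q false, r both, s neither
```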
Ah, so you even allow _partial_ non-deterministic valuations (not every formula needs to get assigned a truth-value).
Sayantan Roy said:
Many thanks Peter Arndt for the references. My motivation was something like the following:
Let $L$ be a set of wffs for a propositional logic (I am being purposefully vague here) and let $\varphi \in L$. Let $v \subseteq L \times \{0,1\}$ be a relation. Then we say,
- $\varphi$ is true if $(\varphi, 1) \in v$ and $(\varphi, 0) \notin v$.
- $\varphi$ is false if $(\varphi, 0) \in v$ and $(\varphi, 1) \notin v$.
- $\varphi$ is both true and false if $(\varphi, 1) \in v$ and $(\varphi, 0) \in v$.
- $\varphi$ is neither true nor false if $(\varphi, 1) \notin v$ and $(\varphi, 0) \notin v$.
but then don't you get to Belnap's 4-valued logic stuff?
I really don't know much about this, so this is a question, not a rhetorical one
Another aspect of your very general notion is that the valuations don't need to respect the structure of the formula. E.g. $v(\varphi \wedge \psi)$ need not be computable in terms of $v(\varphi)$ and $v(\psi)$. That is the business of non-truth-functional logics. The article What Is a Non-Truth-Functional Logic? by João Marcos is something you might enjoy.
There was a project (running 2013-2016) on "generalized truth-functionality" which tried to get a more fine-grained view on how much a logic is truth-functional (rather than just answering yes or no). On the home page of this GeTFun project you can find some articles that might be down your alley. And this connects to what I wrote before: Non-truth-functional logics can be treated with multi-algebra semantics, which gives them back some degree of (non-deterministic) truth-functionality...
Truth-functionality also relates to many-valuedness. You can have a truth-functional many-valued logic for which there is no truth-functional 2-valued semantics (but there always is a 2-valued, possibly non-truth-functional, semantics, according to Suszko's thesis). I think you can for example see Belnap's logic either via a 2-valued multialgebra semantics, or alternatively via a usual algebra semantics using an algebra structure on the 4-element set $\mathcal{P}(\{0,1\}) = \{\emptyset, \{0\}, \{1\}, \{0,1\}\}$.
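For concreteness, here is one way to present Belnap's four values as the subsets of {0, 1}, with the usual "told true / told false" reading (a sketch of the standard Dunn/Belnap semantics; the encoding and function names are my own choices):

```python
from itertools import product

# Belnap's four values encoded as subsets of {0, 1}:
#   frozenset()       -> neither (no information)
#   frozenset({1})    -> true
#   frozenset({0})    -> false
#   frozenset({0, 1}) -> both (conflicting information)
VALUES = [frozenset(s) for s in (set(), {0}, {1}, {0, 1})]

def neg(a):
    # "told true" and "told false" swap under negation
    return frozenset(1 - x for x in a)

def conj(a, b):
    told_true = 1 in a and 1 in b      # told true iff both are told true
    told_false = 0 in a or 0 in b      # told false iff one is told false
    return frozenset(([1] if told_true else []) + ([0] if told_false else []))

def disj(a, b):
    told_true = 1 in a or 1 in b
    told_false = 0 in a and 0 in b
    return frozenset(([1] if told_true else []) + ([0] if told_false else []))

# De Morgan duality holds on all 16 pairs of values:
assert all(neg(conj(a, b)) == disj(neg(a), neg(b)) for a, b in product(VALUES, VALUES))
```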
Good question @Valeria de Paiva!
Maybe one can recast a semantics given by a set of valuations in Sayantan's sense into a possible-translations semantics for a bunch of translations into Belnap's logic. A possible-translations semantics of a logic is a set of translations $t_i$ into other logics $\mathcal{L}_i$ such that $\Gamma \vdash \varphi$ in the given logic iff $t_i(\Gamma) \vdash_{\mathcal{L}_i} t_i(\varphi)$ for every $i$.
These would probably have to be translations in the loosest sense, not necessarily respecting any connectives, because Sayantan's valuations don't respect connectives either. Just taking arbitrary translations, the answer is yes by Jerabek's work. The interesting question is whether one can find a translation that somehow uses the truth value sets from Sayantan's valuations...
Peter Arndt said:
These would probably have to be translations in the loosest sense, not necessarily respecting any connectives, because Sayantan's valuations don't respect connectives either. Just taking arbitrary translations, the answer is yes by Jerabek's work. The interesting question is whether one can find a translation that somehow uses the truth value sets from Sayantan's valuations...
hmm, Jerabek's abstract reads "[...] hence in a sense, (almost) any two reasonable deductive systems can be conservatively translated into each other." This seems to me to show that the notion of 'conservatively translated' is not doing its job properly. translations really ought to depend on the components of the formulas and their connectives compositionally, I believe. don't you?
Yes, Jerabek's result definitely shows that arbitrary translations don't reflect anything about the involved logics. That's why the question about the connection to Belnap's logic, if one tries to understand it via translations, should probably become something like this: Is there a (loose) translation $t$ from the given logic to Belnap's logic such that every Sayantan-valuation (from some set of valuations giving a complete semantics) arises as $t$ followed by a Belnap-valuation?
Even such a $t$ is probably not very informative about the given logic. But I don't think connective-preserving translations can be used here, since Sayantan didn't suppose anything about the signature of his logic.
Valeria de Paiva said:
but then don't you get to Belnap's 4-valued logic stuff?
I really don't know much about this, so this is a question, not a rhetorical one
I didn't know about Belnap's 4-valued logic at the time of asking this question. My motivation was to provide a single framework in which we can talk about both "gappy" (4.) and "glutty" (3.) formulas. But won't the answer to your question depend on the definition of the consequence relation?
Peter Arndt said:
Yes, Jerabek's result definitely shows that arbitrary translations don't reflect anything about the involved logics. That's why the question about the connection to Belnap's logic, if one tries to understand it via translations, should probably become something like this: Is there a (loose) translation $t$ from the given logic to Belnap's logic such that every Sayantan-valuation (from some set of valuations giving a complete semantics) arises as $t$ followed by a Belnap-valuation?
In other words, are you asking whether every relational valuation is factorizable through a translation into Belnap's logic?
Sayantan Roy said:
In other words, are you asking whether every relational valuation is factorizable through a translation into Belnap's logic?
Yes. That's one way I saw of making Valeria's question precise.
Sayantan Roy said:
Valeria de Paiva said:
but then don't you get to Belnap's 4-valued logic stuff?
I really don't know much about this, so this is a question, not a rhetorical one
I didn't know about Belnap's 4-valued logic at the time of asking this question. My motivation was to provide a single framework in which we can talk about both "gappy" (4.) and "glutty" (3.) formulas. But won't the answer to your question depend on the definition of the consequence relation?
Sure, you need to have a sensible definition of consequence relation, but my point is that this is not enough. this is the problem I have with institution theory: its model-theoretic intuition assumes that consequence relations are enough to get you semantics. they're not! you need to preserve more than the truth-values of theorems; you need to know how to preserve proofs themselves, I believe. this is the problem with my paper with Goguen, Rabe, Schroeder, and Mossakowski, "An institutional view on categorical logic": https://kwarc.info/people/frabe/Research/GMPRS_catlog_07.pdf
If anyone has a free pdf copy of Makinson's "Bridges from Classical to Nonmonotonic Logic", could you please send it to me?
I don't. But the Polish version "Od Logiki Klasycznej do Niemonotonicznej" is on Russian websites, and Google Translate produces something ugly but intelligible from it: No_fun.png
It's no fun to read this way, but if you are after something particular, maybe it helps.
সায়ন্তন রায় said:
If anyone has a free pdf copy of Makinson's "Bridges from Classical to Nonmonotonic Logic", could you please send it to me?
There appears to be a full pdf linked here: https://www.researchgate.net/publication/262934388_Bridges_from_Classical_to_Nonmonotonic_Logic
Nope, my mistake: it is just the contents and preface.
In Knowledge Representation in Bicategories of Relations @Evan Patterson shows how Regular logic is the internal language of Bicategories of Relations. Regular logic has conjunction, True, and existential quantification. Full first-order logic adds to that negation, universal quantification and disjunction. The string diagrams show the duality between the positive and the negative very nicely.
I was just wondering: if there is a fragment for conjunction, True and existential quantification (which is what RDF uses), would the other fragment, i.e. a logic with only disjunction, False and universal quantification, be useful?
@Bob Atkey I think the pdf you linked only contains the table of contents and acknowledgements.
Henry Story said:
I was just wondering: if there is a fragment for conjunction, True and existential quantification (which is what RDF uses), would the other fragment, i.e. a logic with only disjunction, False and universal quantification, be useful?
Semantically speaking, it's hard to avoid having conjunction. If predicates are subobjects, and if a category has pullbacks of monomorphisms (so that your logic has substitution, which most do), then it automatically has intersections of subobjects. Similarly, if predicates are subobjects, you always have True, since the identity morphism is always a subobject. For analogous reasons, as soon as you have universal quantification, you also have implication.
Examples of categories that have disjunction and False (and, of course, conjunction and True) but not existential quantification seem a bit rare, but they do exist; a couple were discussed at this MO question. I haven't checked whether either of these examples have universal quantification (and implication).
Of course, you could always just dualize and interpret predicates by cosubobjects, in which case everything would turn around. (-:
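One compact way to write out the three facts above (standard categorical-logic facts, in notation chosen by me, with $[\varphi]$ denoting the subobject of $X$ interpreting a predicate $\varphi$):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Conjunction of predicates-as-subobjects is the pullback (intersection):
\[ [\varphi] \wedge [\psi] \;=\; [\varphi] \times_X [\psi] \;\hookrightarrow\; X \]
% True is the maximal subobject, i.e. the identity mono:
\[ \top_X \;=\; \bigl(\mathrm{id}_X : X \hookrightarrow X\bigr) \]
% Implication comes from universal quantification along i : [\varphi] -> X,
% since U <= \forall_i(i^*[\psi])  iff  i^*U <= i^*[\psi]  iff  U /\ [\varphi] <= [\psi]:
\[ [\varphi \Rightarrow \psi] \;=\; \forall_i\bigl(i^{*}[\psi]\bigr) \]
\end{document}
```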
Mhh, perhaps that is actually what people like Popper were arguing for: scientific logic makes universally quantified statements that can be falsified; there is no truth, only falsification... Not sure if he had anything clear to say about conjunction or disjunction though...
Henry Story said:
In Knowledge Representation in Bicategories of Relations Evan Patterson shows how Regular logic is the internal language of Bicategories of Relations. Regular logic has conjunction, True, and existential quantification. Full first-order logic adds to that negation, universal quantification and disjunction. The string diagrams show the duality between the positive and the negative very nicely.
I was just wondering: if there is a fragment for conjunction, True and existential quantification (which is what RDF uses), would the other fragment, i.e. a logic with only disjunction, False and universal quantification, be useful?
hi Henry, this is probably not what you want, but if I wanted to make RDF deal with contexts, I'd start from https://www.aclweb.org/anthology/S18-2013.pdf myself. let me know if you run into issues.
Thanks, that article "Named Graphs for Semantic Representations" is both short and very clear, giving a very good explanation as to why regular logic with conjunction, Truth and existential quantifiers is so well suited to making factual assertions. It also makes a very good case for how a certain type of negation can still be expressed there, by allowing one to express, for example, that two sets don't overlap, i.e. two concepts are distinct; so, knowing the positive fact that Fred is a cat, one can know that Fred is not a dog. (This should match up with Patterson's dual to bicategories of relations, getting one to geometric logic...)
The article then goes on to speak of named graphs, showing how they allow one to put graphs in a context, giving stronger forms of negation and the ability to start making modal logical statements such as doxastic and epistemic assertions, and indeed even emotional ones regarding love.
The reason I feel Category Theory has to be able to provide some insights into this is that these are such simple structural changes that bring such powerful capabilities with them.
Now it seems that there may be a bit of a limitation in the concept of Named Graphs as they were defined, at least in the RDF 1.1 spec, and that is that they don't seem to allow one to define named graphs within named graphs. I.e. I could not say that @Valeria de Paiva asked @Henry Story to ask @Corina Cirstea if she wanted to be put down on the list of women mathematicians. It is not surprising that this did not come out of a W3C standards committee, because the use case for named graphs is one agent fetching data from the web at different locations, not one agent going through a proxy or signing a claim signed by someone else already, as folks want to do now. Luckily Tim Berners-Lee, who tends to look more at the big picture, developed N3, where graphs within graphs are recursively available. (But there the quad structure is not visible for some reason!) On the Web Cats thread I just made the interesting discovery that such named graphs in named graphs could be very nicely modelled as CoFree structures, which would I think make them coalgebraic, so observational, and so linked to modal logic...
Let be a set. Suppose that and let be such that,
Then is it always true that for all ?
Well yeah, I think so. Without real loss of generality you may as well assume , and the third item becomes redundant by the fourth item. If contained an element besides , then would be violated.
I was just wondering whether it could be deducible easily from standard results. That would save us the trouble of proving it separately in our paper.
But how do we rule out the cases where, e.g., say and and where are all distinct?
You look at , for example. There you would get , which violates the fourth condition.
I see. Many thanks. I think I can see a proof.