Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: deprecated: logic

Topic: axiom systems


Nikolaj Kuntner (Jun 01 2020 at 00:51):

According to this MO thread, the Axiom of Replacement is needed for at least some constructions, such as (cardinally larger) (co-)limits and, as discussed in the comments of the question, also bigger functor categories:
https://mathoverflow.net/questions/41118/axiom-of-replacement-in-category-theory
This makes me wonder about Mac Lane set theory, which has fewer tools at hand than Z, which itself already lacks F (Fraenkel's Replacement).
Does this mean some of those constructions are generally not available in those theories (and the corresponding topoi), e.g. if you want to investigate bigger objects in functor categories? Or maybe it depends on the coding?
(PS: this is not about CS-style classes per se, but I didn't find a better-suited channel.)

Todd Trimble (Jun 01 2020 at 01:29):

Yes. There are some arguments in category theory where Mac Lane set theory doesn't cut it. The types of examples I have in mind include algebraic versions of small object arguments, and transfinite constructions of free monads.

This used to freak me out a little. Not as much anymore. Categorical formulations of Replacement were described quite some time ago by McLarty, and Mike Shulman subsequently took a deep dive into the matter which eventually bore fruit in his stack semantics. (I link to some slides rather than his matured papers, because it was through slides that I began to see light.) I found it a good exercise to try to work through this by considering the construction of $\sum_{n: \mathbb{N}} P^n(\mathbb{N})$, which is not possible within Mac Lane set theory. Even now I find this subtle and tricky.
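
For readers following along, here is a rough sketch (my notation, not taken from the slides) of where Replacement enters in that construction:

\[
F(0) = \mathbb{N}, \qquad F(n+1) = P(F(n)), \qquad
\sum_{n:\mathbb{N}} P^n(\mathbb{N}) \;=\; \{\, (n,x) \mid n \in \mathbb{N},\ x \in F(n) \,\}.
\]

Each individual $F(n)$ already exists in Mac Lane set theory (by Power Set), but collecting the image $\{ F(n) \mid n \in \mathbb{N} \}$ into a single set, so that the union and hence the sum can be formed, is exactly what Replacement provides.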

Nikolaj Kuntner (Jun 01 2020 at 02:07):

Thanks for clarifying.
Pointed trees = Rooted trees in the slides?
Would you know any reasons for a set theory not to adopt Replacement, other than it maybe not being necessary because one only works with low-rank objects?

Cody Roux (Jun 01 2020 at 02:28):

Todd Trimble said:

Yes. There are some arguments in category theory where Mac Lane set theory doesn't cut it. The types of examples I have in mind include algebraic versions of small object arguments, and transfinite constructions of free monads.

This used to freak me out a little. Not as much anymore. Categorical formulations of Replacement were described quite some time ago by McLarty, and Mike Shulman subsequently took a deep dive into the matter which eventually bore fruit in his stack semantics. (I link to some slides rather than his matured papers, because it was through slides that I began to see light.) I found it a good exercise to try to work through this by considering the construction of $\sum_{n: \mathbb{N}} P^n(\mathbb{N})$, which is not possible within Mac Lane set theory. Even now I find this subtle and tricky.

Quite relevant to the "Sheaves in Geometry and Logic" reading group, might I add.

Todd Trimble (Jun 01 2020 at 02:43):

Nikolaj Kuntner said:

Thanks for clarifying.
Pointed trees = Rooted trees in the slides?
Would you know any reasons for a set theory not to adopt Replacement, other than it maybe not being necessary because one only works with low-rank objects?

Ultimately, not really, except that (as you suggest) most of mathematics doesn't really require replacement to get by. My guess is that all pre-20th century mathematics had no need of it at all.

I didn't see mention of pointed trees in Mike's slides, but if I missed it, I'd guess they were the same as rooted trees (I mean, they are the same: pick a node in a tree regarded as an undirected graph, and then you can orient all edges towards that node, etc.)

সায়ন্তন রায় (Jun 01 2020 at 04:24):

Anyone here familiar with the program of Universal Logic?

John Baez (Jun 01 2020 at 04:34):

No. It sounds like a good idea. They should use category theory to help organize things, the way universal algebra now does.

সায়ন্তন রায় (Jun 01 2020 at 04:39):

The people who work in institution theory do.

Peter Arndt (Jun 01 2020 at 05:36):

Sayantan Roy said:

Anyone here familiar with the program of Universal Logic?

Yes, I'm a regular at the Unilog conference series. And of course I do use category theory for thinking about it.

সায়ন্তন রায় (Jun 01 2020 at 05:56):

I would very much like to know the current status of the research on Universal Logic using Béziau's definition of logical structures as it is (i.e., as simply a pair of the form $(L,\vdash)$ where $L$ is a (preferably nonempty) set and $\vdash \subseteq \mathcal{P}(L)\times L$); for details see this MathOverflow question.

Peter Arndt (Jun 01 2020 at 14:59):

I don't think very much has been done. I think this kind of structure is just a bit too general - not necessarily because that makes it too difficult to study, but because one misses some interesting behaviour of logics in such a loose framework.

One example: D'Ottaviano and Feitosa here studied logics and translations in the following sense: a logic is a set $X$ with a consequence relation $\vdash$ such that the operator given by $X \supseteq S \mapsto C(S) := \{x \in X \mid S \vdash x\}$ is a closure operator. This corresponds to three classical axioms on consequence relations proposed by Tarski, but he only proposed to consider this on algebraic structures, whereas D'Ottaviano and Feitosa consider it on mere sets. They study conservative translations, i.e. maps preserving and reflecting the consequence relation. Later Jerabek showed that with this notion of logic and translation every logic over a countable set can be conservatively translated into almost every other logic. This would probably still be true if one dropped the three axioms on the consequence relation. This does not do justice to what is interesting about the double negation translation and things like it: they preserve the structure of the language, e.g. in the sense that they commute with substitution.
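
For reference, "closure operator" here amounts to the usual Tarski conditions on $C(S) = \{x \in X \mid S \vdash x\}$, spelled out:

\[
S \subseteq C(S), \qquad S \subseteq T \ \Rightarrow\ C(S) \subseteq C(T), \qquad C(C(S)) = C(S),
\]

i.e. reflexivity, monotonicity and idempotence, for all $S, T \subseteq X$.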

Another example: Combining logics is an interesting topic. Basically it is about constructing and studying colimits in categories of logics and translations -- merging several logics into a bigger one. If you take the category of logics and consequence-preserving maps in Béziau's sense, the coproducts there for example are totally boring: You just take the disjoint union of the sets and of the consequence relations. It only gets interesting if from your two given logics you generate a formal language containing both given ones as fragments, and take e.g. the smallest consequence relation on that language that still has some properties, like corresponding to a closure operator or being substitution invariant.
It seems you just need to demand some properties to have interesting behaviour.

Of course you could be flexible about these properties and carry them around as a variable. This is sort of what Coniglio does in this article. I also did it in upcoming work where I just give some conditions on properties of consequence relations to ensure my results (which are: there is a structure of cofibration category on the cat of logics and translations which is super useful)...

Valeria de Paiva (Jun 01 2020 at 19:28):

Peter Arndt said:

Sayantan Roy said:

Anyone here familiar with the program of Universal Logic?

Yes, I'm a regular at the Unilog conference series. And of course I do use category theory for thinking about it.

I am also familiar with Universal Logic, with Abstract Algebraic Logic and with institution theory. I am not a regular at the Universal Logic meetings, but I have been to two. I don't like the work Peter is mentioning above, because it does not pay enough attention to proofs, only to consequence relations, and consequence relations are not enough for the work in CT that I want to do. Also, this work tends to insist on Tarski's conditions on the consequence relation, which include monotonicity and so do not cover Linear Logic.

সায়ন্তন রায় (Jul 11 2020 at 13:52):

Does anyone here have a free copy of this paper?

Nikolaj Kuntner (Sep 15 2020 at 21:30):

In this paper, author Nik Weaver warns that there could be questions about the $\Sigma_1$-validity of ZFC. He suggests that the axioms of a set theory might be strong enough to decide PA-undecidable statements about Turing machines, and moreover decide them wrongly, in the sense that e.g. ZFC might prove, for a particular machine $t$, that there "exists" an $n\in \mathbb{N}$ such that it halts at step $n$, but "in reality" the machine never actually halts. This wouldn't be a problem w.r.t. consistency, since it affects a statement on which the PA axioms were too weak to make a call.

I know that there are conservativity results of PA over HA at, I think, that level of the hierarchy, and similarly for PA_2/HA_2 or ZF over IZF. I.e. we know that for $\exists n.(f(n)=0)$-statements of HA, we won't see anything new in PA (PA can't fail the world of HA on that level, so to speak). But do we know anything more about ZFC speaking about PA statements? Assuming PA is right about the world, can ZFC be wrong about the world in this "wrong promise/wait forever" kind of way?
Is this issue discussed somewhere at that level of the hierarchy, where validity issues in connection with "the real world" would still be "hands on" in this sense (i.e. not stuff about Turing jumps somewhere removed from recursive machines)?
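
For what it's worth, the property at issue is usually called $\Sigma_1$-soundness; in my phrasing of the standard definition:

\[
\text{if } \mathrm{ZFC} \vdash \exists n\,\theta(n) \text{ for a bounded } (\Delta_0) \text{ formula } \theta, \text{ then } \mathbb{N} \models \exists n\,\theta(n).
\]

Equivalently: whenever ZFC proves that a particular Turing machine halts, the machine really does halt. The worry described above is that this could fail even while ZFC remains consistent.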

James Wood (Sep 19 2020 at 21:02):

Are you basically asking whether there are any results the author might have overlooked that contradict his conjecture?

Dan Doel (Sep 19 2020 at 21:57):

Are there formulas of PA that don't translate to $\Delta_0$ formulas of ZFC? Isn't the 'obvious' translation going to make all the quantifiers bounded (by the naturals)?

Nikolaj Kuntner (Sep 19 2020 at 22:26):

I tried to summarize it into a question here, but the relation between ZFC proofs and PA proofs is not clear enough to me to understand what the paper warns about.

Mike Stay (Oct 02 2020 at 23:38):

Sayantan Roy said:

The people who work in institution theory do.

Am I right in reading the "such that" clause on this page as a naturality condition? That is, letting $i\colon{\rm Set} \hookrightarrow {\rm Rel}$ and reading $|C|$ as the underlying set of $C$, satisfaction is a natural transformation
$\models\colon i\circ |-| \circ Mod \Rightarrow i \circ sen$?

Or is there something more to it?

সায়ন্তন রায় (Oct 03 2020 at 08:14):

I think there is a typographical error in your question. The domains of $i\circ |-| \circ Mod$ and $i\circ sen$ are different.

সায়ন্তন রায় (Oct 03 2020 at 09:05):

But otherwise, yes.
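
For readers following along, the "such that" clause in the usual definition of an institution is the satisfaction condition, which in standard notation (with $Mod$ contravariant on signatures) reads:

\[
Mod(\sigma)(M') \models_{\Sigma} \varphi \quad\Longleftrightarrow\quad M' \models_{\Sigma'} sen(\sigma)(\varphi)
\]

for every signature morphism $\sigma\colon \Sigma \to \Sigma'$, every $\Sigma'$-model $M'$ and every $\Sigma$-sentence $\varphi$.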

সায়ন্তন রায় (Jan 26 2021 at 14:31):

In this SEP article it is stated that the finiteness axiom implies monotonicity. This implies, in particular, that for a non-monotonic logic there exist a formula $\alpha$ and an infinite set $\Gamma$ such that $\Gamma\vdash\alpha$ but no finite subset of $\Gamma$ entails $\alpha$. Is there some constructive way to find such a $\Gamma$ and $\alpha$ for some well-known and nice enough non-monotonic logic?

সায়ন্তন রায় (Feb 06 2021 at 05:40):

Does there exist any non-monotonic logic where it is possible to apply Lindenbaum's lemma to obtain completeness?

Valeria de Paiva (Feb 07 2021 at 19:26):

Sayantan Roy said:

Does there exist any non-monotonic logic where it is possible to apply Lindenbaum's lemma to obtain completeness?

yes, plenty of them, including Linear Logic. (Assuming you mean by non-monotonic logic that $\Gamma\vdash B$ implies $\Gamma, A\vdash B$ in their sequent calculus.)

সায়ন্তন রায় (Feb 08 2021 at 06:57):

yes, plenty of them, including Linear Logic. (Assuming you mean by non-monotonic logic that $\Gamma\vdash B$ implies $\Gamma, A\vdash B$ in their sequent calculus.)

I am a bit confused @Valeria de Paiva. If $\Gamma\vdash B$ implies $\Gamma, A\vdash B$ in the sequent calculus of a logic then shouldn't it be monotone? (Admittedly, I don't know anything about sequent systems, so I may be wrong.)

Just to be clear, by non-monotonic logic I mean a logic where the following property does not hold: For all $\Gamma, \Sigma$, $\Gamma\subseteq \Sigma$ implies that $C(\Gamma)\subseteq C(\Sigma)$.

Valeria de Paiva (Feb 08 2021 at 13:19):

Sayantan Roy said:

yes, plenty of them, including Linear Logic. (Assuming you mean by non-monotonic logic that $\Gamma\vdash B$ implies $\Gamma, A\vdash B$ in their sequent calculus.)

I am a bit confused Valeria de Paiva. If $\Gamma\vdash B$ implies $\Gamma, A\vdash B$ in the sequent calculus of a logic then shouldn't it be monotone? (Admittedly, I don't know anything about sequent systems, so I may be wrong.)

Just to be clear, by non-monotonic logic I mean a logic where the following property does not hold: For all $\Gamma, \Sigma$, $\Gamma\subseteq \Sigma$ implies that $C(\Gamma)\subseteq C(\Sigma)$.

Sorry, indeed, the "not" was dropped from my description of a non-monotonic logic. And yes, you can say for monotonic that $\Gamma, \Gamma'\vdash B$ implies $\Gamma, A, \Gamma'\vdash B$, to not worry about commutativity.

সায়ন্তন রায় (Feb 08 2021 at 13:26):

In that case, I am very much interested to see a proof of this result. Where should I start? (I don't have any familiarity with Linear Logic at all.)

Peter Arndt (Feb 09 2021 at 16:05):

You may find something in Restall: Introduction to substructural logics.

James Wood (Feb 09 2021 at 18:37):

A proof that linear logic is non-monotonic? Here's a self-contained proof: $\vdash 1$ is cut-free derivable, whereas $\top \vdash 1$ is not. This is because the right rule for $1$ is $\overline{\vdash 1}$ and there is no left rule for $\top$. The only structural rule is exchange, but that cannot apply. The key is that there is no weakening, which is how you'd usually prove monotonicity for sufficiently classical logics.
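
For reference, the two rules in play, in a standard two-sided presentation (the rule names are mine), are:

\[
\frac{}{\ \vdash 1\ }\ (1R) \qquad\qquad \frac{}{\ \Gamma \vdash \top, \Delta\ }\ (\top R)
\]

There is no left rule for $\top$, so a cut-free derivation of $\top \vdash 1$ would have to end in $1R$ (which needs an empty antecedent), in the identity axiom, or in exchange, and none of these applies.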

Valeria de Paiva (Feb 10 2021 at 00:05):

Sayantan Roy said:

In that case, I am very much interested to see a proof of this result. Where should I start? (I don't have any familiarity with Linear Logic at all.)

Troelstra's old book Lectures on Linear Logic has one, and many, many other places have them.
I wasn't going to mention Greg's book as I didn't know you could find it on the web (thanks @Peter Arndt). The point is that Lindenbaum's idea applies fairly generally (I think I read somewhere that Rasiowa was the first one who realized that you could simply use it everywhere, but it would take some work to discover who told me so). I think Nick Galatos' book also has the proof, in the non-commutative case.

Valeria de Paiva (Feb 10 2021 at 00:16):

James Wood said:

A proof that linear logic is non-monotonic? Here's a self-contained proof: $\vdash 1$ is cut-free derivable, whereas $\top \vdash 1$ is not. This is because the right rule for $1$ is $\overline{\vdash 1}$ and there is no left rule for $\top$. The only structural rule is exchange, but that cannot apply. The key is that there is no weakening, which is how you'd usually prove monotonicity for sufficiently classical logics.

I think @Sayantan Roy wanted to see a proof of Lindenbaum's lemma for a non-monotonic logic. (Books by relevance logicians like Mike Dunn also have them.) The point I think is interesting is that the so-called "abstract algebraic logic" originally of Blok and Pigozzi did not accept consequence relations that are not monotonic and I don't know why. I don't even know if, after the "algebraic proof theory" of Ciabattoni, Galatos, Okada and others, this changed or not.

Peter Arndt (Feb 11 2021 at 07:32):

Valeria de Paiva said:

The point I think is interesting is that the so-called "abstract algebraic logic" originally of Blok and Pigozzi did not accept consequence relations that are not monotonic and I don't know why.

For a class of algebraic structures $\mathcal{C}$ there is an equational consequence relation, where $\{t_1=s_1, \ldots, t_n=s_n\}\vdash_{\mathcal{C}} t = s$ means that any algebra in $\mathcal{C}$ satisfying the equations $t_1=s_1, \ldots, t_n=s_n$ must also satisfy the equation $s=t$. This equational consequence relation is clearly monotonic.
A logic $(Fm, \vdash)$ has an algebraic semantics in the sense of Blok/Pigozzi, if there is a faithful translation to the equational consequence relation of some class $\mathcal{C}$ of algebras over the same signature. I.e. there must be a map $f\colon Fm \to Eq$ from the formulas of our logic to the set of equations (with properties, not getting too technical here ...) such that $\Gamma\vdash \phi \Leftrightarrow f(\Gamma)\vdash_{\mathcal{C}} f(\phi)$. So such logics must also be monotonic: If $\Gamma \vdash \phi$, then $f(\Gamma)\vdash_{\mathcal{C}} f(\phi)$, hence $f(\Gamma), f(\Delta)\vdash_{\mathcal{C}} f(\phi)$, hence $\Gamma, \Delta \vdash \phi$.

I guess that explains it to some extent. It could still be true, though, that there are protoalgebraic logics that don't have an algebraic semantics, and there is the somewhat independent direction of asking whether logical equivalence constitutes a congruence relation on the algebra of formulas ("self-extensional logics"). Both situations also fall under "abstract algebraic logic" nowadays and still make sense for non-monotonic logics, I think.

Valeria de Paiva (Feb 12 2021 at 17:43):

Peter Arndt said:

Valeria de Paiva said:

The point I think is interesting is that the so-called "abstract algebraic logic" originally of Blok and Pigozzi did not accept consequence relations that are not monotonic and I don't know why.

For a class of algebraic structures $\mathcal{C}$ there is an equational consequence relation, where $\{t_1=s_1, \ldots, t_n=s_n\}\vdash_{\mathcal{C}} t = s$ means that any algebra in $\mathcal{C}$ satisfying the equations $t_1=s_1, \ldots, t_n=s_n$ must also satisfy the equation $s=t$. This equational consequence relation is clearly monotonic.
A logic $(Fm, \vdash)$ has an algebraic semantics in the sense of Blok/Pigozzi, if there is a faithful translation to the equational consequence relation of some class $\mathcal{C}$ of algebras over the same signature. I.e. there must be a map $f\colon Fm \to Eq$ from the formulas of our logic to the set of equations (with properties, not getting too technical here ...) such that $\Gamma\vdash \phi \Leftrightarrow f(\Gamma)\vdash_{\mathcal{C}} f(\phi)$. So such logics must also be monotonic: If $\Gamma \vdash \phi$, then $f(\Gamma)\vdash_{\mathcal{C}} f(\phi)$, hence $f(\Gamma), f(\Delta)\vdash_{\mathcal{C}} f(\phi)$, hence $\Gamma, \Delta \vdash \phi$.

I guess that explains it to some extent. It could still be true, though, that there are protoalgebraic logics that don't have an algebraic semantics, and there is the somewhat independent direction of asking whether logical equivalence constitutes a congruence relation on the algebra of formulas ("self-extensional logics"). Both situations also fall under "abstract algebraic logic" nowadays and still make sense for non-monotonic logics, I think.

Hmm, I don't think this explains it, @Peter Arndt. They've chosen not to accept non-monotonic consequence relations as part of their definition of logical consequence, and sure, in the 70s that was maybe a reasonable choice; they mention Scott IIRC. But yes, there are too many different things being called "abstract algebraic logic", I think. Some writing cleaning it up from the CT perspective would be a good thing, I believe.

সায়ন্তন রায় (Feb 13 2021 at 05:04):

@Valeria de Paiva , there is something called "Categorical Abstract Algebraic Logic" initiated by George Voutsadakis.

Valeria de Paiva (Feb 13 2021 at 05:37):

Sayantan Roy said:

Valeria de Paiva , there is something called "Categorical Abstract Algebraic Logic" initiated by George Voutsadakis.

Does it look good to you? If so, I want a reference.

সায়ন্তন রায় (Feb 13 2021 at 06:30):

Well, I never had the chance to go through this line of research in detail. But you can take a look at his PhD thesis and decide this for yourself.

But I must admit that I am not a great fan of the notion of institution. It seems too difficult for me to grasp.

Valeria de Paiva (Feb 17 2021 at 22:37):

Sayantan Roy said:

Valeria de Paiva , there is something called "Categorical Abstract Algebraic Logic" initiated by George Voutsadakis.

This seems strangely redundant, does it not? Algebraic Logic is fine, Abstract Algebraic Logic too. Even Abstract Categorical Logic will work for me, but "Categorical Abstract Algebraic Logic" seems odd. (Thanks for the link, though, and sorry for the delay; I could've sworn that I had replied.)

সায়ন্তন রায় (Apr 19 2021 at 16:03):

Let $(L,\vdash)$ be a logic such that whenever $C(\Sigma)=L$ for some set $\Sigma$, there exists a finite subset $\Sigma_0\subseteq \Sigma$ such that $C(\Sigma_0)=L$. Is there any name for logics satisfying this particular property?

John Baez (Apr 20 2021 at 01:59):

I don't know one, but maybe "compact" would do.

Peter Arndt (Apr 20 2021 at 08:24):

I don't know a standard name. "Compact" is sometimes used for saying that the logic satisfies your property for all sets of consequences, not just for the whole set of formulas (although "finitary" is more standard).

In the context of paraconsistent logics, where they really separate the notions of "contradictory" and "allows one to infer anything", a set of formulas that allows one to infer anything is called "explosive". So you could call your property "compactly explosive", maybe?
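
Spelled out in the notation of the question above, the two properties being contrasted are:

\[
\Gamma \vdash \alpha \ \Rightarrow\ \Gamma_0 \vdash \alpha \text{ for some finite } \Gamma_0 \subseteq \Gamma \qquad \text{(finitary / compact)}
\]
\[
C(\Sigma) = L \ \Rightarrow\ C(\Sigma_0) = L \text{ for some finite } \Sigma_0 \subseteq \Sigma \qquad \text{(the property asked about)}
\]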

সায়ন্তন রায় (Apr 20 2021 at 12:04):

I looked around and I found that this property is called $1$-compactness in Wójcicki's book.

সায়ন্তন রায় (May 05 2021 at 15:04):

Does there exist a two-sided sequent calculus representation of Linear Logic that doesn't involve multisets? If so, could anyone please point out some references?

Chad Nester (May 05 2021 at 15:17):

Does the usual presentation involve multisets? I guess we could think of the lists of formulae on either side of the turnstile as multisets, but I don't think that's how it's usually presented.

Chad Nester (May 05 2021 at 15:18):

More formally: multisets are commutative lists, and linear logic has the exchange rule.

Chad Nester (May 05 2021 at 15:18):

So in a sense I guess the answer is "no", but in another sense you don't need multisets to talk about the sequent calculus so my answer would be "yes".

সায়ন্তন রায় (May 05 2021 at 15:20):

The exchange rules are probably for the one-sided presentation. I am only aware of this presentation.

Chad Nester (May 05 2021 at 15:24):

The non-multiset version is the same, but has a rule allowing you to permute things on either side. I'm not sure how to typeset sequents here but the rule is that if I can derive $\Delta_1,A,B,\Delta_2 \vdash \Gamma$ then I can derive $\Delta_1,B,A,\Delta_2 \vdash \Gamma$, and symmetrically that if I can derive $\Delta \vdash \Gamma_1,A,B,\Gamma_2$ then I can derive $\Delta \vdash \Gamma_1,B,A,\Gamma_2$, where the big letters are sequences.

সায়ন্তন রায় (May 05 2021 at 15:24):

I think you were talking about this presentation @Chad Nester .

Chad Nester (May 05 2021 at 15:24):

No, my point is that these are the same thing (multisets vs. sequences with exchange).

Chad Nester (May 05 2021 at 15:25):

This is one way to define what a multiset is.

সায়ন্তন রায় (May 05 2021 at 15:26):

I see. That makes sense.

Mike Shulman (May 05 2021 at 15:28):

But it's perhaps worth pointing out that if by a "multiset" you really mean an element of a free commutative monoid, then using multisets breaks the usual categorical semantics in $\ast$-autonomous categories, because it prevents you from distinguishing which copy of a type a rule is applied to. This is the same problem as arises when relating Petri nets with symmetric monoidal categories (as opposed to commutative monoidal categories).

সায়ন্তন রায় (May 05 2021 at 15:29):

Chad Nester said:

The non-multiset version is the same, but has a rule allowing you to permute things on either side. I'm not sure how to typeset sequents here but the rule is that if I can derive $\Delta_1,A,B,\Delta_2 \vdash \Gamma$ then I can derive $\Delta_1,B,A,\Delta_2 \vdash \Gamma$, and symmetrically that if I can derive $\Delta \vdash \Gamma_1,A,B,\Gamma_2$ then I can derive $\Delta \vdash \Gamma_1,B,A,\Gamma_2$, where the big letters are sequences.

But how then can we talk about sequents of the form $\Delta_1, A, \Delta_2\vdash\Gamma$?

Chad Nester (May 05 2021 at 15:30):

I don't understand what you don't understand :\

সায়ন্তন রায় (May 05 2021 at 15:31):

Oops! Sorry. I think I was still thinking of the symbols as sets!

সায়ন্তন রায় (May 05 2021 at 15:31):

Thanks @Chad Nester .

Chad Nester (May 05 2021 at 15:32):

:)

সায়ন্তন রায় (May 05 2021 at 15:33):

@Mike Shulman This is way over my head :sweat_smile: . (I just came to know about the sequent calculus presentation of LL, not more than 45 minutes ago!).

Chad Nester (May 05 2021 at 15:33):

That's an interesting point Mike.

সায়ন্তন রায় (Jul 14 2021 at 14:28):

Does anyone know any good class of examples of logics satisfying cautious monotony where the union of two nontrivial sets is again nontrivial? (Here by a nontrivial set I mean that it doesn't entail everything.)