Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive, refer to the same person.


Stream: community: events

Topic: Binary Operation


view this post on Zulip Morgan Rogers (he/him) (Mar 04 2023 at 08:59):

Morgan Rogers (he/him) said:

William Troiani and I set up a Twitch stream where we have maths research discussions live; here is our channel. We're keeping our discussion to "side projects" which enable us to learn stuff and for which we don't mind taking the "risks" of open discussion; our first project is about machine learning and cryptography.
We're planning another one this Saturday morning; I expect we'll start at around 10am (Paris time) although we're still ironing out the technical set-up so it might be later. Drop by if you like!

We're still setting up for this but will be live in a short while!

view this post on Zulip Morgan Rogers (he/him) (Mar 04 2023 at 09:25):

[technical issues ensue]

view this post on Zulip Morgan Rogers (he/him) (Mar 04 2023 at 11:56):

It was fun! We spent the stream understanding SHA-2 so that we can mess around with variations of it next time. Thanks to those who joined!

view this post on Zulip Morgan Rogers (he/him) (Mar 11 2023 at 12:01):

Looks like @William Troiani and I will be doing another stream tomorrow morning (again from around 11am Paris time). Take a peek if you have a chance.

view this post on Zulip Morgan Rogers (he/him) (Mar 12 2023 at 10:22):

Starting soon (I would apologise for lateness, but our audience isn't quite big enough to hold us accountable yet).

view this post on Zulip Morgan Rogers (he/him) (Mar 18 2023 at 12:30):

Late notice that @William Troiani and I will be streaming from approximately 3:30pm Paris time today. We read a couple of papers last time and decided how we were going to implement our miniature version of SHA-2. In the meantime we have implemented that in Python. This week we will be discussing neural nets and potential pitfalls of our project overall. See you there!
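(For readers curious what "a miniature version of SHA-2" could look like structurally: this is a hedged toy sketch, not the stream's actual code. Real SHA-2 uses eight 32/64-bit state words, a message schedule, and standardized constants; the sketch below keeps only the Merkle-Damgård skeleton that SHA-2 follows, with invented 16-bit parameters.)

```python
# Toy sketch only - not the stream's implementation. All constants here are
# invented for illustration; only the pad-then-fold structure mirrors SHA-2.
import struct

def toy_compress(state: int, block: int) -> int:
    """Mix one 16-bit block into the 16-bit state over a few rounds."""
    for _ in range(8):
        state = ((state << 3) | (state >> 13)) & 0xFFFF  # rotate left by 3
        state ^= block                                   # inject the block
        state = (state + 0x9E37) & 0xFFFF                # invented constant
    return state

def toy_hash(data: bytes) -> int:
    """Pad to whole 2-byte blocks, then fold them through toy_compress."""
    data += b"\x80" + b"\x00" * ((-len(data) - 1) % 2)   # minimal padding
    state = 0x6A09                                       # invented IV
    for i in range(0, len(data), 2):
        (block,) = struct.unpack(">H", data[i:i + 2])
        state = toy_compress(state, block)
    return state

print(f"{toy_hash(b'hello'):04x}")
```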

view this post on Zulip Morgan Rogers (he/him) (Mar 26 2023 at 07:18):

In spite of neither of us having had time to complete the promised programming from last time, @William Troiani and I will be doing another research stream from 1pm Paris time today (beware that clocks changed this morning so that might be an hour sooner than you expect). Please do come along even if you haven't been following, we can always use feedback ;)

view this post on Zulip Morgan Rogers (he/him) (Apr 01 2023 at 20:54):

To mix things up, Will and I are going to try out Monday morning (we'll start shortly after 9am Paris time I expect; I'll post here if it looks like it will be later). If that gives you an excuse to check it out, then see you there!

view this post on Zulip Morgan Rogers (he/him) (Apr 08 2023 at 10:13):

Hi everyone, I moved posts about the Binary Operation Twitch stream to #general: events from the topic on my work. We now have a Mastodon account and an Instagram account, although we're still deciding how to use both of these; please follow if you're on these platforms!

The next stream will be tomorrow (Sunday) from 9:30am Paris time. We'll be diving deeper into finding out what asymptotic invariance of the distribution of stuff in the universe would look like. If that's a strange enough sentence to make you curious, please do drop by... It will be a physics-inspired examination of how symmetry interacts with complexity classes/asymptotic behaviour of functions.

If you don't have a Twitch account (you don't need one to watch, but you would need one to leave a comment during the stream), feel free to send questions either here or through the platforms mentioned above.

view this post on Zulip Morgan Rogers (he/him) (Apr 15 2023 at 16:32):

Hi folks, the next stream will be tomorrow at 4pm Paris time, on the usual channel.
Last time we talked about asymptotic equations, but when I tried to look these up on stream, the kinds of equation that came up were not of the form that we were considering. I wonder if anyone here has thought about asymptotic analysis and could lend us some insight?

view this post on Zulip Morgan Rogers (he/him) (Apr 16 2023 at 14:01):

(starting soon)

view this post on Zulip Morgan Rogers (he/him) (Apr 22 2023 at 11:26):

Hi folks! Relatively short notice, but we will be streaming today starting at 4pm Paris time, on the usual channel. Last time we figured out what it means to solve an asymptotic equation in one dimension; this time we'll be jumping up to three! I expect this will be the last session on asymptotic analysis (unless we make an exciting breakthrough that we can't help following up on), so do send us some suggestions for what you'd like to see us think about next.

Also, I'd like to give a shout out to @Ralph Sarkis, who has managed to attend every stream up to now and is our all-round biggest supporter. Thanks Ralph!

view this post on Zulip Ralph Sarkis (Apr 22 2023 at 12:04):

Thanks for the shout-out! I'll repeat what I said in private: I have enjoyed your streams very much, and they inspire me to follow your lead in sharing the process of doing research (I have many ideas, but I'll let them mature a bit).

Unfortunately, I can't join today.

view this post on Zulip Morgan Rogers (he/him) (May 07 2023 at 08:53):

Hi everyone. We skipped a week, but we'll be back today from 3pm Paris time. See you there!

This week will probably be a recap of what we've been up to on the stream, plus some spitballing of ideas of what to do next, so come along if you haven't had a chance to yet!

view this post on Zulip Morgan Rogers (he/him) (May 07 2023 at 12:21):

Bonus! It looks like we will have Giti Omidvar, a PhD student in proof theory, as a guest this week. A ternary operation for a change! ( @William Troiani would say "trinary", but you must forgive him. )

view this post on Zulip David Egolf (May 07 2023 at 17:30):

I'm enjoying listening to the recording of this last one! It's quite a different experience from reading a book, in a refreshing way.

I especially appreciate how you are all taking time and effort to explain the notation being introduced. For example, that horizontal line syntax (natural deduction syntax?) has scared me away from several books before, so it's awesome to see it discussed and not just introduced without comment!

I also enjoyed learning a new word: "meta-notation"! Although I confess I don't understand precisely what it means, the general idea of notation that doesn't live in some specific formal context (but instead in some "meta" context?) is an interesting one.

view this post on Zulip John Baez (May 07 2023 at 17:50):

I don't know what "meta-notation" is, but it reminds me of "meta-variables" in logic. In the traditional approach to first-order logic we often want to say stuff like "for any variable x and any formula P(x), there is a formula \forall x P(x)". But variables are specific symbols, so "for any variable x" doesn't make sense unless x is a meta-variable - a symbol that can stand for any variable!

view this post on Zulip John Baez (May 07 2023 at 17:51):

I remember being delighted by this when I was first learning logic.

view this post on Zulip David Egolf (May 07 2023 at 18:03):

John Baez said:

I don't know what "meta-notation" is, but it reminds me of "meta-variables" in logic. In the traditional approach to first-order logic we often want to say stuff like "for any variable x and any formula P(x), there is a formula \forall x P(x)". But variables are specific symbols, so "for any variable x" doesn't make sense unless x is a meta-variable - a symbol that can stand for any variable!

Oh, that's interesting! Although I'm not sure I follow.

If "usual" variables are used to denote an arbitrary real number (for example), is the idea that "meta" variables are used to denote an arbitrary usual variable? For example, let x be some real number and y be some real number and then let t be either x or y. Would t be a meta variable?

Presumably you could have multiple meta variables used to denote arbitrary elements of different collections of usual variables. Then I suppose we could have a meta-meta variable, which we could use to denote an arbitrary element of a collection of meta variables!

I'm not sure this is the right idea, though.

view this post on Zulip John Baez (May 07 2023 at 18:09):

In traditional logic variables are specific symbols x, y, z, \ldots, which you use in "formulas" like

P(x)

(here P is a "predicate") or

\forall x P(x)

view this post on Zulip John Baez (May 07 2023 at 18:10):

We need to know exactly what these symbols are, or we don't know what counts as a formula. For example

P(:car:)

is not a formula unless you decreed that :car: is a variable. Get it? Once you've set up your system of logic, you don't get to scrawl arbitrary junk and call that a formula.

view this post on Zulip John Baez (May 07 2023 at 18:12):

But now say I want to state a rule of logic, like

\forall x P(x)

----
P(y)

view this post on Zulip John Baez (May 07 2023 at 18:13):

This is my feeble attempt to write a "deduction rule" which means "from \forall x P(x) you can deduce P(y)". That horizontal line thing is the thing that was scaring you earlier.

view this post on Zulip John Baez (May 07 2023 at 18:14):

The problem though is that I just wrote this rule with a specific variable symbol x! So this rule doesn't help me at all with \forall z P(z)!

view this post on Zulip John Baez (May 07 2023 at 18:15):

What I want to say is that this rule works for any variable symbol x - and any predicate symbol P.

view this post on Zulip John Baez (May 07 2023 at 18:16):

So I should write something like

\forall \textbf{x} \textbf{P}(\textbf{x})
-----
\textbf{P}(\textbf{y})

view this post on Zulip John Baez (May 07 2023 at 18:18):

Here the guys in boldface are "meta-variables": \textbf{x} and \textbf{y} stand for an arbitrary variable symbol, and \textbf{P} stands for an arbitrary predicate symbol.

view this post on Zulip John Baez (May 07 2023 at 18:19):

From this rule I can get zillions of specific rules like

\forall z P(z)

----
P(x)

etc.

view this post on Zulip John Baez (May 07 2023 at 18:20):

This "meta" stuff is showing up because we are trying not just to use logic, but to talk about logic in a logical way.

view this post on Zulip David Egolf (May 07 2023 at 18:25):

If I understand correctly, \forall z P(z) and \forall x P(x) are different formulas because they use different symbols. However, intuitively it seems like they should mean the same thing: for any input provided to P, P evaluates to true when fed that input.

The meta stuff then comes in because we want to notice a pattern for rules that is valid across the change of symbols, I think (?). So, for a rule in terms of a meta-variable, any specific choice of symbol substituted in for our meta-variable yields a rule in terms of that symbol.

(And you've gone the extra mile and also used a meta-variable for the predicate, so that we need to substitute in two specific things (a choice of symbol and a choice of predicate) to get a rule in terms of specific symbols and predicates.)

I think it's making more sense now. Thanks for explaining!

view this post on Zulip John Baez (May 07 2023 at 18:39):

David Egolf said:

If I understand correctly, \forall z P(z) and \forall x P(x) are different formulas because they use different symbols. However, intuitively it seems like they should mean the same thing: for any input provided to P, P evaluates to true when fed that input.

Yes, they mean the same thing. Any respectable system of logic would let you show

\forall z P(z) \iff \forall x P(x)

and have deduction rules that imply the deduction rule

\forall z P(z)
----
\forall x P(x)

and

\forall x P(x)
----
\forall z P(z)

view this post on Zulip John Baez (May 07 2023 at 18:41):

But they are different strings of symbols written on the page. Traditional logic focuses a lot on 'syntax': that is, roughly, expressions you write on the page, and rules for manipulating them.

view this post on Zulip John Baez (May 07 2023 at 18:42):

The goal was to figure out whether, or how, we can capture everything we know about mathematics in some system for manipulating strings of symbols written on the page.

view this post on Zulip John Baez (May 07 2023 at 18:42):

Nowadays some people focus more on computer code rather than symbols on the page, but the ideas are similar: we're trying to reduce mathematics to completely formal rules for manipulating data.

view this post on Zulip John Baez (May 07 2023 at 18:45):

David Egolf said:

The meta stuff then comes in because we want to notice a pattern for rules that is valid across the change of symbols, I think (?). So, for a rule in terms of a meta-variable, any specific choice of symbol substituted in for our meta-variable yields a rule in terms of that symbol.

Right! Or in other words, to state general rules for manipulating symbols, we need symbols that stand for these symbols!
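(That idea can be made concrete with a tiny sketch - hypothetical names, not any standard logic library: a rule schema containing meta-variables is just a template, and each assignment of concrete symbols to the meta-variables stamps out one specific rule.)

```python
# Toy illustration (not a real logic library): meta-variables are placeholders
# in a rule schema; substituting concrete symbols for them yields specific
# deduction rules. Assumes concrete symbols don't clash with meta-variables.

def instantiate(schema: str, assignment: dict) -> str:
    """Replace each meta-variable in the schema by its assigned symbol."""
    for meta, concrete in assignment.items():
        schema = schema.replace(meta, concrete)
    return schema

# The schema "from forall X. P(X) deduce P(Y)", with meta-variables X, Y, P:
SCHEMA = "forall X. P(X) |- P(Y)"

# Each assignment of object-level symbols gives one concrete rule:
print(instantiate(SCHEMA, {"X": "x", "Y": "y", "P": "Q"}))  # forall x. Q(x) |- Q(y)
print(instantiate(SCHEMA, {"X": "z", "Y": "x", "P": "R"}))  # forall z. R(z) |- R(x)
```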

view this post on Zulip David Egolf (May 07 2023 at 19:13):

Back to the twitch stream, I would also like to say that the discussed mapping from formulas to graphs is a lot of fun to hear about! Although it's surprising to me that a formula and its negation are sent to isomorphic graphs.

Really fun and interesting to watch, thanks for doing this!

view this post on Zulip Morgan Rogers (he/him) (May 07 2023 at 21:00):

Connecting the stream with this discussion, we were looking at Natural Deduction (one of the formal systems for expressing proofs in classical logic). The definition of the system introduces proofs and "discharging" formulas, expressed in the definitions using "meta-notation" to stand for hypothetical proofs and an implicitly performed operation (of removing a hypothesis), respectively.
We all concluded that it's a horrible system, but I'm really glad our talking about it was interesting :star_struck:

By the way @David Egolf were you the mystery commenter during the stream?

view this post on Zulip David Egolf (May 07 2023 at 21:28):

Morgan Rogers (he/him) said:

By the way David Egolf were you the mystery commenter during the stream?

No, I was not the mystery commenter. I was watching the archived video, well after you finished streaming. I'm glad that option was available, as the time zones wouldn't have worked out otherwise! (I'm on the west coast of Canada.)

view this post on Zulip John Baez (May 07 2023 at 22:09):

It's fun to read about the origins of natural deduction:

Natural deduction grew out of a context of dissatisfaction with the axiomatizations of deductive reasoning common to the systems of Hilbert, Frege, and Russell (see, e.g., Hilbert system). Such axiomatizations were most famously used by Russell and Whitehead in their mathematical treatise Principia Mathematica. Spurred on by a series of seminars in Poland in 1926 by Łukasiewicz that advocated a more natural treatment of logic, Jaśkowski made the earliest attempts at defining a more natural deduction, first in 1929 using a diagrammatic notation, and later updating his proposal in a sequence of papers in 1934 and 1935. His proposals led to different notations such as Fitch-style calculus (or Fitch's diagrams) or Suppes' method, for which Lemmon gave a variant called system L.

Natural deduction in its modern form was independently proposed by the German mathematician Gerhard Gentzen in 1933, in a dissertation delivered to the faculty of mathematical sciences of the University of Göttingen. The term natural deduction (or rather, its German equivalent natürliches Schließen) was coined in that paper.

view this post on Zulip Mike Shulman (May 08 2023 at 04:44):

Changing \forall x. P(x) to \forall z. P(z) is called "\alpha-equivalence", and is indeed usually something that no one bothers about. In fact there are ways to make logic precise in such a way that this doesn't even arise, such as using "De Bruijn indices" instead of named variables. I think the more important meta-variable is P, which could stand not only for a variable but for an arbitrary formula, so that P(x) could mean for instance (x>0) \wedge ((x \neq 3) \to (x^2 = 17)).

view this post on Zulip John Baez (May 08 2023 at 16:20):

Well, while in some sense "no one bothers about" \alpha-equivalence, logicians and computer programmers do have to deal with it, one way or another - including tricks like De Bruijn indices, which I believe are implemented in some software. (Is that right?)

view this post on Zulip dan pittman (May 08 2023 at 16:48):

Yeah, that's right. They're a fairly common way to implement parsers, interpreters, &c. for functional languages.
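(For the curious, the trick is compact enough to sketch - toy term encoding, hypothetical names: each bound variable is replaced by the number of binders between it and the one that binds it, so \alpha-equivalent terms become literally identical.)

```python
# Toy De Bruijn conversion (illustrative encoding, not a real library):
# terms are ("var", name) | ("lam", name, body) | ("app", fn, arg).

def to_debruijn(term, binders=()):
    """Replace each bound-variable name by its distance to its binder."""
    tag = term[0]
    if tag == "var":
        return ("var", binders.index(term[1]))  # 0 = innermost binder
    if tag == "lam":
        return ("lam", to_debruijn(term[2], (term[1],) + binders))
    if tag == "app":
        return ("app", to_debruijn(term[1], binders),
                       to_debruijn(term[2], binders))
    raise ValueError(f"unknown term: {term!r}")

# Alpha-equivalent terms collapse to the same nameless form:
id_x = ("lam", "x", ("var", "x"))
id_z = ("lam", "z", ("var", "z"))
print(to_debruijn(id_x))                       # ('lam', ('var', 0))
print(to_debruijn(id_x) == to_debruijn(id_z))  # True

# lambda x. lambda y. x y  becomes  lam lam (1 0):
app = ("lam", "x", ("lam", "y", ("app", ("var", "x"), ("var", "y"))))
print(to_debruijn(app))  # ('lam', ('lam', ('app', ('var', 1), ('var', 0))))
```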

view this post on Zulip dan pittman (May 08 2023 at 16:51):

Also, fwiw, in block-style math on Zulip you can write deductions / typing rules using \cfrac, if you're interested:

\cfrac{\forall x\ P(x)}{\forall z\ P(z)}

view this post on Zulip John Baez (May 08 2023 at 21:43):

Thanks - that's the best thing I've learned all day!

view this post on Zulip John Baez (May 08 2023 at 21:44):

\cfrac{P \implies Q, Q \implies R}{P \implies R}

view this post on Zulip Leopold Schlicht (May 09 2023 at 11:01):

FWIW, I would separate the premises by \quad or \qquad instead of a comma:
\cfrac{P \implies Q\qquad Q \implies R}{P \implies R}

view this post on Zulip Leopold Schlicht (May 09 2023 at 11:06):

(Or \and when using \inferrule.)

view this post on Zulip Damiano Mazza (May 09 2023 at 13:11):

Does \inferrule work here on Zulip? I'm trying it and I can't get it to work. What's the syntax?

view this post on Zulip Damiano Mazza (May 09 2023 at 14:04):

Anyway, to stay on topic, I guess I must reply to this:

Morgan Rogers (he/him) said:

we were looking at Natural Deduction [...] We all concluded that it's a horrible system.

I wouldn't say that it is so horrible. Of all the available systems in proof theory, it is still the most "natural" in the sense that it reflects quite closely the flow of reasoning that we develop when we prove something. And it is very well structured: it has symmetries (introduction/elimination), a notion of analytic proof, etc., not to mention that the correspondence with the \lambda-calculus, known as the Curry-Howard isomorphism, is really an isomorphism only for natural deduction. Other proof systems may be related to computation, but only the proofs of natural deduction are exactly \lambda-terms. This may not be fundamental (some people, who prefer sequent calculus, may even see it as a limitation), but I wouldn't say it is horrible! :big_smile:
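(To make the correspondence tangible, here is a hedged toy sketch - hypothetical encoding, not any standard implementation: annotated \lambda-terms double as natural-deduction proofs of implicational formulas, with \lambda-abstraction as implication-introduction and application as modus ponens.)

```python
# Toy Curry-Howard checker for the implication fragment (illustrative only).
# Formulas: a string atom or ("->", A, B).  Proof terms:
# ("var", name) | ("lam", name, hypothesis, body) | ("app", fn, arg).

def infer(term, ctx=()):
    """Return the formula this term proves under context ctx, else None."""
    tag = term[0]
    if tag == "var":                       # an undischarged hypothesis
        for name, formula in reversed(ctx):
            if name == term[1]:
                return formula
        return None
    if tag == "lam":                       # implication introduction
        _, name, hyp, body = term
        concl = infer(body, ctx + ((name, hyp),))
        return ("->", hyp, concl) if concl is not None else None
    if tag == "app":                       # implication elimination
        fn, arg = infer(term[1], ctx), infer(term[2], ctx)
        if fn is not None and fn[0] == "->" and fn[1] == arg:
            return fn[2]
        return None

# lambda x:A. x  is a proof of  A -> A:
print(infer(("lam", "x", "A", ("var", "x"))))  # ('->', 'A', 'A')

# lambda f:A->B. lambda x:A. f x  proves  (A->B) -> (A->B):
mp = ("lam", "f", ("->", "A", "B"),
      ("lam", "x", "A", ("app", ("var", "f"), ("var", "x"))))
print(infer(mp))  # ('->', ('->', 'A', 'B'), ('->', 'A', 'B'))
```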

view this post on Zulip Damiano Mazza (May 09 2023 at 14:09):

From my perspective, natural deduction is much better, for instance, than the proof system presented in chapter D1 of the Elephant. That is a horrible system! For example, I don't see any way of extracting from it a notion of analytic proof, that is, something satisfying the property that, in order to prove a formula A, you only use subformulas of A. This property is extremely important in proof theory and both natural deduction and sequent calculus admit complete subsystems which verify it (this is basically Gentzen's cut-elimination theorem).

view this post on Zulip Damiano Mazza (May 09 2023 at 14:11):

But of course I suppose that if your viewpoint is purely model-theoretic/category-theoretic, then of course the Elephant's system makes more sense and might be more appealing :smile:

view this post on Zulip dan pittman (May 09 2023 at 14:32):

I believe Curry-Howard has been downgraded to a “correspondence”. To me, this is because, if you’re proving an isomorphism between two formal systems, what system do you use to do your proof?

view this post on Zulip John Baez (May 09 2023 at 14:45):

You use whatever system of logic you want. I don't think that's why the Curry-Howard correspondence is not considered an isomorphism by some: I think it's because they don't know a category in which the two things in the correspondence are isomorphic objects. I.e. they don't know a general kind of thing, such that the correspondence is an isomorphism between two of these things.

view this post on Zulip Mike Shulman (May 09 2023 at 15:31):

John Baez said:

Well, while in some sense "no one bothers about" \alpha-equivalence, logicians and computer programmers do have to deal with it, one way or another - including tricks like De Bruijn indices, which I believe are implemented in some software. (Is that right?)

Yes, agreed. I just meant that I don't think dealing with \alpha-equivalence is the primary application of metavariables.

view this post on Zulip dan pittman (May 09 2023 at 15:34):

John Baez said:

You use whatever system of logic you want. I don't think that's why the Curry-Howard correspondence is not considered an isomorphism by some: I think it's because they don't know a category in which the two things in the correspondence are isomorphic objects. I.e. they don't know a general kind of thing, such that the correspondence is an isomorphism between two of these things.

Proving something about a formal system in that formal system feels very paradoxical to me.

view this post on Zulip Mike Shulman (May 09 2023 at 15:34):

dan pittman said:

Proving something about a formal system in that formal system feels very paradoxical to me.

Why more so than making statements about English in English?

view this post on Zulip Mike Shulman (May 09 2023 at 15:36):

Regarding natural deduction, I'm not sure if this is exactly what you meant, but I agree that it looks pretty bizarre when you write it with hypotheses at the top of proof trees, and have to "discharge" them when you do hypothetical reasoning like so:

\cfrac{\cfrac{\cfrac{[A]}{\vdots}}{B}}{A\to B}

However, there's a much nicer way to write it, by making the context of every judgment explicit. Writing \Gamma \vdash A to mean the judgment that A is true in context \Gamma, the above rule becomes

\cfrac{\Gamma, A \vdash B}{\Gamma \vdash A\to B}

In other words, when proving B in the premise, we get to assume an additional hypothesis of A relative to the assumptions used for the conclusion A\to B.

view this post on Zulip dan pittman (May 09 2023 at 16:08):

Mike Shulman said:

Why more so than making statements about English in English?

So my answer here is more philosophical, and probably veers into personal belief territory, but, I would say the same about English to English to be honest. Bear with me—I go back to my understanding of what Russell's original intention was with type theory, that because it was predicative, it was therefore not susceptible to the "set of all sets" problem. This of course was found not to be true either—what we now call Girard's paradox—out of which comes (eventually) Pure Type Systems which have an infinite hierarchy of universes, staving off that paradox on into infinity. It seems to me, that to link two entirely independent systems, you would have to step beyond that infinity. Now, that said, going back to your question about English to English, I agree with you in some ways, because what other choice do we have?! Maybe y'all are saying the same thing about the CHC, but w/r/t personal belief I find myself thinking often about the (mis)quoted epigraph from Le Guin's *Lathe of Heaven*:

To let understanding stop at what cannot be understood is a high attainment. Those who cannot do it will be destroyed on the lathe of heaven.

view this post on Zulip dan pittman (May 09 2023 at 16:10):

This also feels related to type theory's initiality problem. I remember reading a thread on the HoTT mailing list years ago where someone said (maybe it was even you, @Mike Shulman!) that to do type theory in type theory we'd have to formalize pen and paper!

view this post on Zulip Morgan Rogers (he/him) (May 09 2023 at 17:03):

Mike Shulman said:

Regarding natural deduction, I'm not sure if this is exactly what you meant, but I agree that it looks pretty bizarre when you write it with hypotheses at the top of proof trees, and have to "discharge" them when you do hypothetical reasoning like so:

\cfrac{\cfrac{\cfrac{[A]}{\vdots}}{B}}{A\to B}

However, there's a much nicer way to write it, by making the context of every judgment explicit. Writing $\Gamma \vdash A$ to mean the judgment that $A$ is true in context $\Gamma$, the above rule becomes

\cfrac{\Gamma, A \vdash B}{\Gamma \vdash A\to B}

In other words, when proving $B$ in the premise, we get to assume an additional hypothesis of $A$ relative to the assumptions used for the conclusion $A\to B$.

Isn't that the translation into sequent calculus? The proof tree syntax was exactly our complaint about natural deduction.

view this post on Zulip Mike Shulman (May 09 2023 at 17:37):

@Morgan Rogers (he/him) Well, people do use language in different ways, and in particular some people call anything that looks like "\Gamma \vdash A" a "sequent". But I've been taught (by type theorists like Dan Licata) that the difference between "natural deduction" and "sequent calculus" lies in the treatment of contexts and cut vs substitution, rather than in how you write the rules on the page. For instance, the natural deduction rule for eliminating disjunction, written with explicit contexts, is

\cfrac{\Gamma \vdash A\vee B \qquad \Gamma,A \vdash C \qquad \Gamma,B \vdash C}{\Gamma\vdash C}

whereas the sequent calculus rule is

\cfrac{\Gamma, A \vdash C \qquad \Gamma,B \vdash C}{\Gamma, A\vee B \vdash C}

In general, each connective in sequent calculus comes with "left rules" that introduce that connective on the left of the turnstile and "right rules" that introduce it on the right, while each connective in natural deduction comes with "introduction rules" that introduce it on the right (often the same as the sequent-calculus right-rules) and "elimination rules" that contain it on the right in a premise.

The guiding principle of natural deduction is that the conclusion of every rule should have an arbitrary context. This allows substitution / composition to be defined by a simple induction, preserving the form of all rules. But it results in "redexes", proofs where a connective is introduced and then immediately eliminated, and that leads us to introduce a process of \beta-reduction that "normalizes" a proof.

By contrast, the guiding principle of sequent calculus is that the formulae in the premises are always simpler than those in the conclusion. This is great for automated proof search, because you can successively pick formulas to "focus" on and apply their rules, trusting that the search space is constrained by the complexity of the formulas you started with. This relies in particular on a theorem that every theorem has a proof not using the "cut rule"

\cfrac{A\vdash B \qquad B \vdash C}{A\vdash C}

because it brings in a completely arbitrary formula B in the premises. The proof that cut in sequent calculus is "eliminable" or "admissible" involves more or less the same computations as the proof that any natural deduction proof has a normal form without redexes, so you can sort of think of sequent calculus as a way of directly presenting only the normal forms of proofs.
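(A hedged sketch of that normalization step - toy named-term encoding, hypothetical names, and it assumes bound names are distinct so no variable capture can occur: contracting a redex replaces an introduction-then-elimination pair by a substitution.)

```python
# Toy beta-reduction of a top-level redex (illustrative, not a real library).
# Terms: ("var", name) | ("lam", name, body) | ("app", fn, arg).

def subst(term, name, arg):
    """Replace free occurrences of `name` by arg (assumes no capture)."""
    tag = term[0]
    if tag == "var":
        return arg if term[1] == name else term
    if tag == "lam":
        if term[1] == name:              # name is rebound here: stop
            return term
        return ("lam", term[1], subst(term[2], name, arg))
    return ("app", subst(term[1], name, arg), subst(term[2], name, arg))

def contract(term):
    """If term is a redex (lam introduced, then applied), contract it."""
    if term[0] == "app" and term[1][0] == "lam":
        _, (_, name, body), arg = term
        return subst(body, name, arg)
    return term

# (lambda x. x) y  contracts to  y:
redex = ("app", ("lam", "x", ("var", "x")), ("var", "y"))
print(contract(redex))  # ('var', 'y')
```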

view this post on Zulip Mike Shulman (May 09 2023 at 17:38):

In real life things get much more complicated than that, and lines get blurry, but in principle that's the idea of the distinction as I understand it.

view this post on Zulip John Baez (May 09 2023 at 19:07):

Mike Shulman said:

John Baez said:

Well, while in some sense "no one bothers about" \alpha-equivalence, logicians and computer programmers do have to deal with it, one way or another - including tricks like De Bruijn indices, which I believe are implemented in some software. (Is that right?)

Yes, agreed. I just meant that I don't think dealing with α\alpha-equivalence is the primary application of metavariables.

Agreed. I was just illustrating a random law of first-order logic that involves variables, and explaining to @David Egolf that to state such a law we often use metavariables to stand for arbitrary variables. I should have picked a more exciting law.

view this post on Zulip John Baez (May 09 2023 at 19:12):

dan pittman said:

Proving something about a formal system in that formal system feels very paradoxical to me.

If you don't like it, use some other formal system.

But it's really quite important that formal systems can "talk about themselves". When we take something like the Liar Paradox and try to express it precisely using the ability of formal systems to refer to themselves, we don't get a paradox: instead, we get important results like Tarski's undefinability theorem: very roughly, it's impossible to define arithmetical truth in arithmetic.

view this post on Zulip dan pittman (May 09 2023 at 20:09):

John Baez said:

If you don't like it, use some other formal system.

But it's really quite important that formal systems can "talk about themselves". When we take something like the Liar Paradox and try to express it precisely using the ability of formal systems to refer to themselves, we don't get a paradox: instead, we get important results like Tarski's undefinability theorem: very roughly, it's impossible to define arithmetical truth in arithmetic.

Yes, yes. I agree! I think the sentiment you're expressing via Tarski's theorem is all I mean when I say it "feels paradoxical". Secondarily, I agree that it's important, if not straight up compulsory for systems to be able to talk about themselves. Going back to Mike's point about English in English—because there isn't a metatheory, really, of English, we have no other choice!

view this post on Zulip John Baez (May 09 2023 at 20:13):

Okay, sorry - I thought the "feeling paradoxical" business was intended as an objection. If you said "it feels delightfully paradoxical" I would just have agreed. :upside_down:

view this post on Zulip Damiano Mazza (May 09 2023 at 21:23):

Morgan Rogers (he/him) said:

The proof tree syntax was exactly our complaint about natural deduction.

Let me add something about this. You may not like it, but the proof tree syntax is what makes natural deduction "natural". When you write down a proof in "real life" (for example, in a journal paper), you usually proceed by successively building statements from hypotheses or from previously proved statements. Natural deduction formalizes this by saying that you build a proof tree by always working at the root of the tree. In its proof-tree formulation, the rules of natural deduction do precisely that: half of them (the introduction rules) tell you how to get a bigger formula at the root from smaller ones, the other half (the elimination rules) tell you how to get a (potentially) smaller formula at the root from a bigger one.

The rules that "discard" hypotheses do not really break the idea that you only work at the root, because they are not doing anything with those hypotheses, they are just marking them as "globally unnecessary", that is, those hypotheses are used locally in the proof but they are not global hypotheses of the final theorem. Also (but you probably already know this), via the Curry-Howard correspondence, discarding hypotheses is exactly binding variables: discarding occurrences of A in a proof t of B in order to prove A\to B is the same as forming the term \lambda x.t, in which we are binding certain free variable occurrences (those called x) in the term t. The variable x becomes "local" to t, it is invisible from the outside. Maybe the proof-tree syntax looks weird, but if you see it as a term, it's not so strange after all. The "sequent" formulation of natural deduction mentioned by @Mike Shulman makes explicit which variables are free/which hypotheses have not been discharged, but does not change the idea that you always work at the root, as you "naturally" do in real life.

In order to have only introduction rules (that is, rules that build bigger formulas from smaller ones), sequent calculus has one half of its rules (the right rules) which are exactly the same as the introduction rules of natural deduction, but the other half (the left rules) act on the leaves of the proof tree, rather than the root. This is quite unnatural: you never do this when writing down a proof.

The example given by @Mike Shulman illustrates this very well. Your goal is C, and so far you have proved A∨B. To finish your proof, you assume A and prove C, then assume B and prove C, and you're happy, QED. This is the elimination rule for disjunction in natural deduction. Notice that the hypotheses A and B are "local": they are used in those two sub-proofs (you could call them "lemmas"), but they are not hypotheses of your main theorem, which is C and which only uses whatever you used to get A∨B. That's why you need to discharge them. I find this extremely "natural" :big_smile:
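Under Curry–Howard this elimination rule is just case analysis. A hypothetical Python sketch (my own encoding, using tagged pairs for A∨B): given evidence for A∨B and proofs of C from each hypothesis, you get a proof of C, with each hypothesis local to its branch.

```python
# Disjunction elimination as case analysis on a tagged union.
# Evidence for A ∨ B is ("inl", a) or ("inr", b).
def or_elim(proof_ab, from_a, from_b):
    tag, value = proof_ab
    if tag == "inl":
        return from_a(value)  # hypothesis A is local to this branch
    else:
        return from_b(value)  # hypothesis B is local to this branch

# Example: from int ∨ str, produce a str either way.
assert or_elim(("inl", 3), lambda n: str(n), lambda s: s) == "3"
assert or_elim(("inr", "hi"), lambda n: str(n), lambda s: s) == "hi"
```

The two lambdas passed in are precisely the two discharged sub-proofs: their bound variables never escape to the caller, just as the hypotheses A and B never reach the final theorem.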

By contrast, sequent calculus does this weird thing where, at the point where you would have written QED above, you instead take your two proofs of C, one using the hypothesis A and the other using the hypothesis B, and observe that you therefore have a proof of C using the hypothesis A∨B. Only then may you conclude C, using a cut rule with your previous proof of A∨B. Of course this is perfectly sound, but you would never write things this way!

That being said, sequent calculus is probably the best system for proof theory, because this idea that you only build bigger formulas from smaller ones is extremely powerful and, in particular, matches directly that notion of analytic proof that I mentioned before: every rule of first-order sequent calculus, except cut, has the subformula property, that is, the formulas appearing in the premises are subformulas of the formulas appearing in the conclusion. Since you don't need cut (as mentioned by Mike), you can prove any provable formula A by only using the subformulas of A! This has lots of important consequences (for example, the consistency of sequent calculus itself).
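For instance, here is a small cut-free derivation of A∨B ⊢ B∨A (my own example, in standard sequent-calculus notation): reading bottom-up, every rule only takes formulas apart, so every formula in the tree is a subformula of the end-sequent.

```latex
% A cut-free sequent calculus proof of A \lor B \vdash B \lor A.
% Every formula occurring is a subformula of the conclusion.
\[
\dfrac{
  \dfrac{A \vdash A}{A \vdash B \lor A}\ (\lor R_2)
  \qquad
  \dfrac{B \vdash B}{B \vdash B \lor A}\ (\lor R_1)
}{A \lor B \vdash B \lor A}\ (\lor L)
\]
```

This is the sense in which cut-free proofs are analytic: a proof search never needs to invent a formula that isn't already sitting inside the goal.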

view this post on Zulip Ralph Sarkis (May 10 2023 at 00:19):

I just took the time to watch the replay and it was great. I think it's a great format to invite someone to explain something to you and you try to understand it. It shows the process of learning, and it was apparently fun for everyone involved.

view this post on Zulip Valeria de Paiva (May 10 2023 at 17:10):

the difference between "natural deduction" and "sequent calculus" lies in the treatment of contexts and cut vs substitution, rather than in how you write the rules on the page.

Agreed! Actually, I was always told that writing natural deduction with Gammas and Deltas is called "ND in Martin-Löf style".

view this post on Zulip Morgan Rogers (he/him) (Jun 10 2023 at 19:14):

It's been a month, but tomorrow at 9am Paris time we'll be back!
See you there.

view this post on Zulip Morgan Rogers (he/him) (Jun 11 2023 at 07:09):

Have had to cancel due to a major miscommunication about location :sweat_smile:
Sorry folks (pinging @Ralph Sarkis )

view this post on Zulip Ralph Sarkis (Jun 11 2023 at 07:10):

:cry: thanks!

view this post on Zulip Morgan Rogers (he/him) (Jun 11 2023 at 07:10):

There is a small chance we will be back later today instead, I'll try to give advance notice if so. (This did not happen)

view this post on Zulip Morgan Rogers (he/him) (Jun 17 2023 at 12:02):

Assuming @William Troiani remembers, we will be streaming in two hours, at 4pm Paris time. Link as above :)

view this post on Zulip Morgan Rogers (he/him) (Jun 17 2023 at 14:17):

[Technical issues are happening, namely that Will's laptop is not switching on, which is concerning beyond streaming purposes]

view this post on Zulip Morgan Rogers (he/him) (Jun 17 2023 at 14:25):

Solved!

view this post on Zulip Morgan Rogers (he/him) (Jun 17 2023 at 16:07):

This week we talked about Ramsey theory and some related combinatorics (we applied Hall's theorem in the context of Ramsey theory!). There was a stream freeze for 10 mins or more at one point, but hopefully there is still enough audio for it to be entertaining to watch beyond that point.

view this post on Zulip Morgan Rogers (he/him) (Jan 18 2024 at 23:10):

Hello!
After I mentioned the stream in my update the other day, @William Troiani was keen to bring it back.
We're planning a stream this Tuesday 23rd January, starting at or after 8am CET so that the time zones overlap. It's not prime time, but it's research, so maybe you can get away with joining at work!

This week we will be discussing reading maths papers: what our approaches are and how to do it efficiently. We'll be reading a paper of Lurie, but the main focus will be on the "how" rather than the "what"; I'll try to keep the discussion to paper-reading in general. I hope to see some of you there!

view this post on Zulip Morgan Rogers (he/him) (Jan 23 2024 at 07:00):

Starting shortly!

view this post on Zulip David Egolf (Jan 23 2024 at 18:41):

I watched part of the recording, and it was great! (I hope to watch more of it when I'm feeling better, health-wise). Thanks for discussing several fairly specific points regarding strategies for reading papers (or books). I'll be interested to try and make use of these. I feel that such strategies are especially relevant for someone like me who is trying to learn math largely through self-directed study.

To take one example, I appreciated the emphasis on trying to understand the "story" a paper is trying to tell, and to understand at least somewhat its claimed justification for existence. Amusingly, this reminds me of my recent efforts to puzzle out a mystery novel I've been working through: if one can understand the big picture motivation and goals of the characters (or narrative), it becomes significantly easier to reason about specific actions that the characters take. I suppose it's not too unreasonable to view a math paper as a sort of mystery that one is attempting to puzzle out...

One other point also comes to mind - and I don't know if it was addressed in the stream because I haven't watched the whole recording yet. But in the context of self-study, I find it important to be able to form an estimate of how accessible a paper or book is to me at the current time. One of my current self-learning strategies is: try to primarily use the least difficult paper or book that still gets me excited about a given topic, while continuing to cross-reference more difficult (and less difficult) papers/books.

Of course, this may not be an appropriate approach if one's goal is to learn a lot about a very specific thing, as opposed to reading about related things. I'd be curious what strategies or techniques you have found helpful in this direction, or more broadly the criteria you make use of for selecting a book or paper to focus on.

view this post on Zulip David Egolf (Jan 23 2024 at 19:02):

Just watched a bit more and wanted to note a couple more interesting strategies discussed in the stream:

Great stuff!

view this post on Zulip Morgan Rogers (he/him) (Jan 23 2024 at 19:20):

Thanks so much for watching @David Egolf , it feels great to hear that those ideas landed, and it's also really useful for me to think about the contexts in which people might be attempting to read academic papers.

view this post on Zulip John Baez (Jan 23 2024 at 23:50):

I guess it's an obvious thought, and people in the video may have discussed it, but I always try to become intimately familiar with some examples of the concepts that a paper is proving things about.

It's not necessary to have lots of examples, but it's important to have enough that it's unlikely some property will hold for all these examples but not in general. In other words, you shouldn't only have "trivial" or "degenerate" examples - the examples should be "generic".

Then, if you know these examples really intimately, you can often just see (in your mind's eye) whether a statement holds for all these examples, and guess that if so, it holds in general.

view this post on Zulip John Baez (Jan 23 2024 at 23:52):

This technique gives you insight, which is like a second way of knowing, less certain than deductive reasoning, but often broader in scope, and lightning-fast.

Insight also helps you read papers! You evaluate each thing you read against your stock of examples and see how it works in those cases.

view this post on Zulip Morgan Rogers (he/him) (Jan 29 2024 at 22:54):

Hi folks, I'm not available but @William Troiani will be doing a Unary Operation stream from 8am CET tomorrow (Tuesday 30/01).
He's going to be reading some notes entitled "Wadler's Theorems for Free", so tune in if you want some more direct insight into his paper reading process!

I'll be back next time to discuss more general article-reading skills. :innocent:

view this post on Zulip Morgan Rogers (he/him) (Feb 05 2024 at 14:26):

Hi folks. @William Troiani and I are planning another episode of Binary Operation tomorrow. We'll be continuing our "article reading" discussion, focussing on how you can extract (a first approximation of) what you want to learn from a paper in a reasonable amount of time, and then how to efficiently close-read (or at least our approach to that).
Please do tune in tomorrow from 8am CET if you have time!

view this post on Zulip Ralph Sarkis (Feb 05 2024 at 14:30):

I won't be able to join :cry:, but have fun! I wanted to suggest trying some AI tools to help reading a paper. I remember this one was very good at paraphrasing (even with papers using weird symbols), but it did not really make things clearer. Curious what you think about it.

view this post on Zulip Morgan Rogers (he/him) (Feb 06 2024 at 07:42):

In case anyone was considering joining but forgot, we've been chatting about some algebra until now and are only just getting onto paper reading, so it's a good time to hop in :)

view this post on Zulip Morgan Rogers (he/him) (Feb 06 2024 at 08:51):

For anyone watching the stream recording later, we start with me explaining the contents of a paper of mine that got accepted last week. That's more technical than I would have liked to open the stream, so I would recommend you skip to the cute algebra puzzle starting at 18 mins. After that, skip to 42 mins in for the start of the reading discussion, and to 1 hour 10 mins for us playing around with the AI summarizing tool that @Ralph Sarkis suggested. Spoilers: the AI wasn't very good.

view this post on Zulip David Egolf (Feb 06 2024 at 18:16):

Quite enjoyed the algebra puzzle, and am hoping to watch more when I have more energy! These streams are a lot of fun, and educational too. :smile:

view this post on Zulip David Egolf (Feb 06 2024 at 18:54):

On the topic of studying a paper or a book, there was a brief mention (during the stream) of using LaTeX to create notes. I wanted to mention that my favorite environment for typing up notes that include LaTeX these days is Obsidian.

I like it because:

view this post on Zulip David Egolf (Feb 06 2024 at 19:07):

My work environment ends up looking like this:
obsidian demo

view this post on Zulip Morgan Rogers (he/him) (Feb 13 2024 at 06:54):

Hi folks! Short notice, but we will start streaming in around 15 mins. See you there ;)

view this post on Zulip Morgan Rogers (he/him) (Feb 13 2024 at 09:09):

This week we talked about a surprising connection between axioms of set theory and functors for about 40 minutes before getting onto what may be our last reading discussion. We're now wondering what to do next:

One possibility is discussing preparing a talk. There's lots to cover there, from structure to slide layout to time-saving tricks to performance advice, so it seems promising.
On the technical side, someone expressed an interest in topos theory. There was also another suggestion, I think, although I seem to have forgotten it :stuck_out_tongue_wink:

What would you like to see us do? Ideally we would like to combine suggestions for form and content, as we have done with the reading streams, where we examined how to read effectively and efficiently while also being interested in the maths that we were reading (it doesn't have to be research level, by the way). Do let us know what you would like to see!

view this post on Zulip Morgan Rogers (he/him) (Feb 20 2024 at 07:16):

Hi folks, it's Tuesday morning again and in spite of a late start we'll be streaming less than 10 mins from now ;)
Hope to see you there!

view this post on Zulip Morgan Rogers (he/him) (Feb 20 2024 at 09:03):

Today we discussed a paper of mine (in collaboration with @Marie Kerjean and @Valentin Maestracci ) that is being submitted to FSCD, then had a more general discussion about the risks of insularity in maths. For me, this discussion was motivated by my perpetual feeling of ignorance regarding the broader motivations in logic. If you have some preference on what we do/discuss next week, let us know :)

view this post on Zulip Morgan Rogers (he/him) (Feb 20 2024 at 10:38):

disclaimer: please do not take our reported speech of other mathematicians as authoritative, we did not consult anyone ahead of making this stream :sweat_smile: hopefully we can have a more experienced logician tell us about their motivations on the stream in the future?

view this post on Zulip Morgan Rogers (he/him) (Feb 26 2024 at 11:45):

Tomorrow (Tuesday) morning from 8am CET we will be discussing how to prepare a talk. We will select a result from a Category Theory textbook as material that we want to present to an audience, and see what is involved in the process from there.
We hope to see you there!

view this post on Zulip Morgan Rogers (he/him) (Feb 27 2024 at 07:03):

(Starting now!)

view this post on Zulip Morgan Rogers (he/him) (Feb 27 2024 at 10:18):

We discussed the value of regular research group meetings for PhD students for the first twenty minutes, then moved on to a series of slides on 'How to prepare a talk' by Marjorie Batchelor for around fifty minutes, and finally we spent the last ten minutes or so selecting a result from Emily Riehl's Category Theory in Context that we will go through the process of preparing a talk about next time (unless another topic takes our interest in the meantime).

Speaking of next time, we need to change the day/time that we stream due to Will's teaching commitments. This will probably mean no stream next Tuesday in a first instance. I'll post here when we decide when to stream next!

view this post on Zulip Valeria de Paiva (Mar 05 2024 at 22:21):

moved on to a series of slides on 'How to prepare a talk' by Marjorie Batchelor for around fifty minutes,

thanks for this, I know Marj, but didn't know about the notes!